MULTIMESON PRODUCTION IN pp INTERACTIONS AS A BACKGROUND FOR η AND η′ DECAYS

A. Kupsc (Uppsala University, Uppsala, Sweden), P. Moskal (Jagiellonian University, Cracow, Poland), M. Zieliński (Jagiellonian University, Cracow, Poland)

IKP, Forschungszentrum Jülich, Germany, 2007
Multimeson production in pp interactions comprises important background for η, ω and η ′ mesons production experiments and for the studies of their decays planned with WASA detector at COSY. The available information about the reactions is summarized and the need for efforts to describe the processes is stressed.
Multimeson production
Direct production of three or more pions in proton-proton interactions has not received proper attention, either experimentally or theoretically, despite the fact that it comprises the main background for η, η′ and ω production experiments. With a 4π facility such as WASA, aiming for measurements of decays of η and η′ produced in pp interactions [1], the understanding of the pp → ppπππ reactions becomes very important, as they constitute a severe background for studies of η and η′ decays into three pions. Those decays provide key ingredients for the determination of the ratios of the light quark masses [2,3], since the partial decay widths are proportional to the square of the d and u quark mass difference. In addition, precise studies of η′ decays require knowledge of the pp → ppππππ, pp → ppπππππ, pp → ppπη and pp → ppππη reactions for beam energies around the η′ production threshold. For more than forty years only three experimental points were available for the cross sections of the pp → ppπ⁺π⁻π⁰ and pp → pnπ⁺π⁺π⁻ reactions, all coming from bubble chamber experiments [4-6]. Only recently has the data base been extended by measurements of the pp → ppπ⁺π⁻π⁰ and pp → ppπ⁰π⁰π⁰ reaction cross sections near threshold by the CELSIUS/WASA collaboration [7]. For the remaining reactions there are no data in that energy region.
The direct production should proceed via excitation of one or two baryon resonances followed by their subsequent decays [8]. For example, in the case of three-pion production in the low-energy region, a mechanism with simultaneous excitation of the N* and Δ(1232)P₃₃ resonances is expected to dominate. The N* involved has to decay into Nππ, and therefore the lowest lying Roper (N(1440)P₁₁) and N(1520)D₁₃ resonances could be considered.
The influence of the resonances can be studied in the invariant mass distributions of subsystems of the outgoing protons and pions. Such studies were done for the pp → ppπ⁺π⁻π⁰ and pp → pnπ⁺π⁺π⁻ reactions in bubble chamber experiments performed at higher energies (beam kinetic energies of 4.15 GeV and 9.11 GeV) with up to a thousand events [9-11]. However, close to threshold the analysis is not conclusive, since the widths of the involved resonances are comparable with the available excess energy (Q). Therefore, in this case one expects that a phase space distribution together with the final state interaction among the outgoing nucleons provides a reasonable description of the observed cross sections. The near-threshold cross sections for single meson production via the nucleon-nucleon interaction can indeed be quite satisfactorily described by such an ansatz. For the production of multiple mesons the assumption should hold even for higher excess energies, since on average the energy available to the pairs of outgoing particles will be lower. That is consistent with the CELSIUS/WASA results on the pp → ppπ⁺π⁻π⁰ and pp → ppπ⁰π⁰π⁰ reactions studied at Q ≈ 100 MeV.
The production mechanism could instead be studied by measuring ratios of the cross sections for the different charge states. In the absence of further information the simplest assumption is the Fermi model [12], in which the amplitudes for all isospin states are set equal. This assumption leads to definite predictions for the ratios between different charge states of the reactions, for example σ(pp → ppπ⁺π⁻π⁰) : σ(pp → ppπ⁰π⁰π⁰) : σ(pp → pnπ⁺π⁺π⁻) ≡ σ₁ : σ₂ : σ₃ = 8 : 1 : 10. Resonances in the intermediate state will modify these ratios. The effect can be illustrated in the Isobar Model [8], where the ratio σ₁ : σ₂ : σ₃ is 7 : 1 : 25 (assuming a ΔN* intermediate state) or 5 : 2 : 10 (assuming N*₁N*₂). Experimentally, σ₁ : σ₃ was measured at a beam kinetic energy of 2.0 GeV (1 : 2.53 ± 0.46) [4] and at 2.85 GeV (1 : 1.59 ± 0.27) [5]. The ratio σ₁ : σ₂ = 5.2 ± 0.8 : 1 was measured at a lower energy, 1.36 GeV, in a recent CELSIUS/WASA experiment [7]. One trivial modification to the predicted ratios close to threshold comes from the difference between the phase space volumes due to m_π⁺ ≠ m_π⁰; it amounts to 18% at 1.36 GeV.
Due to the lack of microscopic model calculations of the same kind as those for single meson or double pion production, the semiclassical Isobar Model is at present the only option to describe more complicated reactions below √s ≈ 5 GeV. The modern version of the Isobar Model is used as input to relativistic ion collision calculations based on transport equations [13,14]. The reliability of the calculations can be tested in simpler cases by comparison with existing calculations for double pion production [15]. The implementation of the resonances in the Isobar Model, their production cross sections and decay branching ratios can be evaluated by studying exclusive meson production reactions. Multimeson production in proton-proton interactions provides a very sensitive test of the parameters. The existing calculations within this framework have so far focused on the production of dileptons in proton-proton interactions, with the aim of understanding the background for probes of high density nuclear matter [16]. A byproduct of such studies were calculations of the background for η and η′ decays involving dileptons (see for example ref. [17]). There is a need to extend the calculations to obtain predictions also for multimeson processes.
2 Background for η, η′ decays

A feature of η(η′) detection from pp → ppη(η′) in the WASA detector is the precise tagging by the missing mass technique, with a resolution of a few MeV/c². The resolution of the invariant mass of the decay system is typically considerably worse. Therefore a figure of merit describing the background from a direct production process leading to the identical final state as an η(η′) decay is ρ_B ≡ dσ_B/dμ|_{μ=m_η(η′)}, the differential cross section for the background at the η(η′) peak in the pp missing mass μ. Figure 1 shows inclusive ρ_B values for η′ derived from the COSY-11 measurements [18-20]. When estimating the background for a given decay channel, the quantity ρ_B Δμ (where Δμ is the resolution in the missing mass) should be compared to σ_η(η′) BR_i (the production cross section times the branching ratio for the decay mode). The ρ_B value depends on the total cross section and on the reaction mechanism. For η(η′) production close to threshold the background distributions at the edge of the phase space are relevant. For both the signal and the multimeson production this region of the phase space is strongly influenced by the pp final state interaction. When approaching the threshold, ρ_B decreases quickly and in addition the missing mass resolution improves (since it is constrained more by the beam momentum resolution), increasing the signal to background ratio. The price is, however, a lower production cross section σ_η(η′).
Detailed studies of η → πππ decays in the pp → ppη reaction at a beam energy of 1.36 GeV by the CELSIUS/WASA collaboration show that the background from direct three pion production is 10-20% for the π⁰π⁺π⁻ channel and 5% for the π⁰π⁰π⁰ channel in the final data sample [7,22]. This allows for a precise study of the η decays, provided a large number of events is collected. For η′ decays the situation is quite different: the three pion decays have branching ratios at the percent or permil level [23]. In comparison to the η meson, the η′ production cross section at similar excess energies is about 30 times lower [18-20,24-26]. Finally, the total cross section for multipion reactions increases strongly when going from the η to the η′ production threshold region, e.g. by a factor of 50-100 for the pp → ppπππ reaction. In conclusion, embarking on the η′ decay program in pp interactions requires a much better understanding of multimeson production reactions.

Figure 1: Inclusive differential cross section ρ_B for multipion production derived from the COSY-11 data [18-20]. The line is the parametrization ρ_B = α(Q/(1 MeV))^β fitted to the data: α = 0.64 ± 0.14 nb/MeV and β = 1.66 ± 0.08 [21].
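The fitted parametrization in Figure 1 can be turned into a quick signal-to-background estimate of the kind described above. The sketch below is not from the paper: it evaluates ρ_B = α(Q/1 MeV)^β with the fitted central values α = 0.64 nb/MeV and β = 1.66 from the caption, while the excess energy, missing-mass resolution, η′ cross section and branching ratio used in the comparison are hypothetical placeholder numbers.

```python
# Figure-of-merit estimate for a hypothetical eta' decay channel, using the
# COSY-11 parametrization of the multipion background from Figure 1:
#   rho_B = alpha * (Q / 1 MeV)**beta,  alpha = 0.64 nb/MeV, beta = 1.66.

alpha = 0.64   # nb/MeV (central fitted value from Figure 1)
beta = 1.66    # (central fitted value from Figure 1)

def rho_b(q_mev):
    """Inclusive differential background cross section [nb/MeV] at excess energy Q."""
    return alpha * q_mev ** beta

# Illustrative placeholder numbers only (NOT from the text):
q = 60.0            # MeV, hypothetical excess energy
dmu = 2.0           # MeV/c^2, hypothetical missing-mass resolution
sigma_etap = 250.0  # nb, hypothetical eta' production cross section
br = 0.01           # hypothetical percent-level branching ratio

background = rho_b(q) * dmu  # nb of background under the eta' peak
signal = sigma_etap * br     # nb of signal
print(f"background ~ {background:.1f} nb, signal ~ {signal:.1f} nb, "
      f"S/B ~ {signal / background:.3f}")
```

The steep growth of ρ_B with Q (β ≈ 1.66) is what drives the trade-off described in the text: closer to threshold the background drops quickly, at the price of a smaller production cross section.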
[1] H. H. Adam et al. [WASA-at-COSY Collab.], arXiv:nucl-ex/0411038.
[2] D. J. Gross, S. B. Treiman, F. Wilczek, Phys. Rev. D19, 2188 (1979).
[3] H. Leutwyler, Phys. Lett. B378, 313 (1996).
[4] E. Pickup, D. K. Robinson, E. O. Salant, Phys. Rev. 125, 2091 (1962).
[5] E. L. Hart et al., Phys. Rev. 126, 747 (1962).
[6] A. M. Eisner et al., Phys. Rev. 138, B670 (1965).
[7] C. Pauly et al., Phys. Lett. B649, 122 (2007).
[8] R. M. Sternheimer, S. J. Lindenbaum, Phys. Rev. 123, 333 (1961).
[9] G. Alexander et al., Phys. Rev. 154, 1284 (1967).
[10] A. P. Colleraine, U. Nauenberg, Phys. Rev. 161, 1387 (1967).
[11] S. P. Almeida et al., Phys. Rev. 174, 1638 (1968).
[12] E. Fermi, Prog. Theor. Phys. 5, 570 (1950).
[13] S. Teis et al., Z. Phys. A356, 421 (1997).
[14] S. A. Bass et al., Prog. Part. Nucl. Phys. 41, 255 (1998).
[15] L. Alvarez-Ruso, E. Oset, E. Hernandez, Nucl. Phys. A633, 519 (1998).
[16] A. Faessler et al., J. Phys. G29, 603 (2003).
[17] A. Faessler et al., Phys. Rev. C69, 037001 (2004).
[18] P. Moskal et al., Phys. Rev. Lett. 80, 3202 (1998).
[19] P. Moskal et al., Phys. Lett. B474, 416 (2000).
[20] A. Khoukaz et al., Eur. Phys. J. A20, 345 (2004).
[21] M. Zieliński, Diploma thesis, Jagellonian University, in preparation.
[22] M. Bashkanov et al., Phys. Rev. C76, 048201 (2007).
[23] W. M. Yao et al., J. Phys. G33, 1 (2006).
[24] A. M. Bergdolt et al., Phys. Rev. D48, R2969 (1993); E. Chiavasa et al., Phys. Lett. B322, 270 (1994); H. Calen et al., Phys. Lett. B366, 39 (1996); H. Calen et al., Phys. Rev. Lett. 79, 2642 (1997); J. Smyrski et al., Phys. Lett. B474, 182 (2000).
[25] F. Hibou et al., Phys. Lett. B438, 41 (1998).
[26] F. Balestra et al., Phys. Lett. B491, 29 (2000).
An isoperimetric inequality for antipodal subsets of the discrete cube

David Ellis and Imre Leader

14th September 2016 (this version: 11 Nov 2017)

arXiv:1609.04270v3 [math.CO]
We say a family of subsets of {1, 2, . . . , n} is antipodal if it is closed under taking complements. We prove a best-possible isoperimetric inequality for antipodal families of subsets of {1, 2, . . . , n} (of any size). Our inequality implies that for any k ∈ N, among all such families of size 2^k, a family consisting of the union of two antipodal (k − 1)-dimensional subcubes has the smallest possible edge boundary.
Introduction
Isoperimetric questions are classical objects of study in mathematics. In general, they ask for the minimum possible 'boundary-size' of a set of a given 'size', where the exact meaning of these words varies according to the problem.
The classical isoperimetric problem in the plane asks for the minimum possible perimeter of a shape in the plane with area 1. The answer, that it is best to take a circle, was 'known' to the ancient Greeks, but it was not until the 19th century that this was proved rigorously, by Weierstrass in a series of lectures in the 1870s in Berlin.
The isoperimetric problem has been solved for n-dimensional Euclidean space E^n, for the n-dimensional unit sphere S^n := {x ∈ R^{n+1} : x_1^2 + · · · + x_{n+1}^2 = 1}, and for n-dimensional hyperbolic space H^n (for all n), with the natural notion of boundary in each case, corresponding to surface area for sufficiently 'nice' sets. (For background on isoperimetric problems, we refer the reader to the book of Burago and Zalgaller [2], the surveys of Osserman [6] and of Ros [8], and the references therein.) One of the most well-known open problems in the area is to solve the isoperimetric problem for n-dimensional real projective space RP^n, or equivalently for antipodal subsets of the n-dimensional sphere S^n. (We say a subset A ⊆ S^n is antipodal if A = −A.) The conjecture can be stated as follows.
Conjecture 1. Let n ∈ N with n ≥ 2, and let μ denote the n-dimensional Hausdorff measure on S^n. Let A ⊆ S^n be open and antipodal. Then there exists a set B ⊆ S^n such that μ(B) = μ(A), σ(B) ≤ σ(A), and

B = {x ∈ S^n : x_1^2 + · · · + x_r^2 > a}

for some r ∈ [n] and some a ∈ R.
Here, if A ⊆ S n is an open set, then σ(A) denotes the surface area of A, i.e. the (n − 1)-dimensional Hausdorff measure of the topological boundary of A.
Only the cases n = 2 and n = 3 of Conjecture 1 are known, the former being 'folklore' and the latter being due to Ritoré and Ros [7]. In this paper, we prove a discrete analogue of Conjecture 1.
First, some definitions and notation. If X is a set, we write P(X) for the power-set of X. For n ∈ N, we write [n] := {1, 2, . . . , n}, and we let Q_n denote the graph of the n-dimensional discrete cube, i.e. the graph with vertex-set P([n]), where x and y are joined by an edge if |x∆y| = 1. If A ⊆ P([n]), we write ∂A for the edge-boundary of A in the discrete cube Q_n, i.e. ∂A is the set of edges of Q_n which join a vertex in A to a vertex outside A. We write e(A) for the number of edges of Q_n which have both end-vertices in A. We say that two families A, B ⊆ P([n]) are isomorphic if there exists an automorphism σ of Q_n such that B = σ(A). Clearly, if A and B are isomorphic, then |∂A| = |∂B|.
The binary ordering on P([n]) is defined by x < y iff max(x∆y) ∈ y. An initial segment of the binary ordering on P([n]) is the set of the first k (smallest) elements of P([n]) in the binary ordering, for some k ≤ 2 n . For any k ≤ 2 n , we write I n,k for the initial segment of the binary ordering on P([n]) with size k.
Harper [3], Lindsey [5], Bernstein [1] and Hart [4] solved the edge isoperimetric problem for Q_n, showing that among all subsets of P([n]) of given size, initial segments of the binary ordering on P([n]) have the smallest possible edge-boundary.
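This edge-isoperimetric theorem can be checked by brute force for small n. The sketch below is not part of the paper: it encodes subsets of [n] as n-bit integers, under which the binary ordering defined above is simply the usual order on 0, …, 2^n − 1, so an initial segment I_{n,k} is just range(k).

```python
from itertools import combinations

def edge_boundary(n, fam):
    """|∂A| in Q_n: edges with exactly one endpoint in the family.
    Subsets of [n] are encoded as n-bit integers; flipping bit i moves
    along the edge in direction i+1."""
    s = set(fam)
    return sum((x ^ (1 << i)) not in s for x in s for i in range(n))

def min_boundary(n, k):
    """Brute-force minimum of |∂A| over all k-element families in Q_n."""
    return min(edge_boundary(n, c) for c in combinations(range(2 ** n), k))

# The first 4 sets in the binary ordering on P([3]) form a 2-dimensional
# subcube, whose edge boundary in Q_3 has size 4:
print(edge_boundary(3, range(4)))  # → 4
```

For n = 3, comparing edge_boundary(3, range(k)) with min_boundary(3, k) for every k confirms that initial segments attain the minimum, in agreement with the theorem.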
In this paper, we consider the edge isoperimetric problem for antipodal sets in Q_n. If x ⊆ [n], we define x̄ := [n] \ x, and if A ⊆ P([n]), we define Ā := {x̄ : x ∈ A}. We say a family A ⊆ P([n]) is antipodal if Ā = A. This notion is of course the natural analogue in the discrete cube of antipodality in S^n; indeed, identifying P([n]) with {−1, 1}^n ⊆ √n · S^{n−1} ⊆ R^n in the natural way, x ↦ x̄ corresponds to the antipodal map v ↦ −v.
We prove the following best-possible edge isoperimetric inequality for antipodal families.
Theorem 2. Let A ⊆ P([n]) be antipodal. Then
|∂A| ≥ |∂(I_{n,|A|/2} ∪ Ī_{n,|A|/2})|.
We remark that Theorem 2 implies that if A ⊆ P([n]) is antipodal with |A| = 2^k for some k ∈ [n − 1], then |∂A| ≥ |∂(S_{k−1} ∪ S̄_{k−1})|, where S_{k−1} := I_{n,2^{k−1}} = {x ⊆ [n] : x ⊆ [k − 1]} is a (k − 1)-dimensional subcube. In other words, a union of two antipodal subcubes has the smallest possible edge-boundary, over all antipodal sets of the same size.
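Theorem 2 itself can be verified exhaustively in small dimension. The following sketch (not from the paper) enumerates all antipodal families in Q_4 — a family is antipodal exactly when it is a union of complementary pairs {x, x̄} — and compares each edge boundary with that of I_{n,|A|/2} together with its pointwise complement.

```python
def edge_boundary(n, fam):
    """|∂A| in Q_n, with subsets of [n] encoded as n-bit integers."""
    s = set(fam)
    return sum((x ^ (1 << i)) not in s for x in s for i in range(n))

def bar(x, n):
    """The complement [n] \\ x in the bitmask encoding."""
    return ((1 << n) - 1) ^ x

def antipodal_families(n):
    """All antipodal families in P([n]): for each pair {x, x̄}, include both or neither."""
    reps = [x for x in range(2 ** n) if x < bar(x, n)]  # one representative per pair
    for mask in range(2 ** len(reps)):
        fam = set()
        for p, x in enumerate(reps):
            if (mask >> p) & 1:
                fam |= {x, bar(x, n)}
        yield fam

def candidate(n, size):
    """The conjectured extremal family: I_{n,size/2} together with its complement image."""
    init = set(range(size // 2))
    return init | {bar(x, n) for x in init}

# A union of two antipodal 1-dimensional subcubes in Q_4 ({0,1} and {14,15}):
print(edge_boundary(4, candidate(4, 4)))  # → 12
```

Checking every antipodal family in Q_4 against the candidate of the same size reproduces the inequality of Theorem 2 (and shows the candidate attains the minimum for each size).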
To prove Theorem 2, it will be helpful for us to rephrase it slightly. Firstly, observe that for any A ⊆ P([n]), we have ∂(A^c) = ∂A, and that for any k ≤ 2^{n−1}, the family (I_{n,k} ∪ Ī_{n,k})^c is isomorphic to the family I_{n,2^{n−1}−k} ∪ Ī_{n,2^{n−1}−k}, via the isomorphism x ↦ x∆{n}. Hence, by taking complements, it suffices to prove Theorem 2 in the case |A| ≤ 2^{n−1}.
Secondly, for any family A ⊆ P([n]), we have

2e(A) + |∂A| = n|A|.    (1)

If k, n ∈ N with k ≤ 2^n, we write F(k) := e(I_{n,k}). (It is easy to see that F(k) is independent of n.) Putting all this together, we see that Theorem 2 is equivalent to the following:

e(A) ≤ 2F(|A|/2)  ∀ A ⊆ P([n]) : |A| ≤ 2^{n−1}, A antipodal.    (2)
Now for a few words about our proof. In the special cases of |A| = 2 n−1 and |A| = 2 n−2 , Theorem 2 can be proved by an easy Fourier-analytic argument, but it is fairly obvious that this argument has no hope of proving the theorem for general set-sizes. Our proof of Theorem 2 is purely combinatorial; we prove a stronger statement by induction on n. Our aim is to do induction on n in the usual way: namely, by choosing some i ∈ [n] and considering the upper and lower i-sections of A, defined respectively by
A⁺ᵢ := {x ∈ P([n] \ {i}) : x ∪ {i} ∈ A},  A⁻ᵢ := {x ∈ P([n] \ {i}) : x ∈ A}.
However, a moment's thought shows that an i-section of an antipodal family need not be antipodal. For example, if A = S_{k−1} ∪ S̄_{k−1} (a union of two antipodal (k − 1)-dimensional subcubes), then for any i ≥ k, the i-section A⁻ᵢ consists of a single (k − 1)-dimensional subcube, which is not an antipodal family. This rules out an inductive hypothesis involving antipodal families. Hence, we seek a stronger statement, about arbitrary subsets of P([n]); one which we can prove by induction on n, and which will imply Theorem 2. It turns out that the right statement is Theorem 3 below: to prove Theorem 2, it suffices to prove it.
Indeed, assume that Theorem 3 holds. Let A ⊆ P([n]) be antipodal with |A| ≤ 2^{n−1}. We have A⁻ₙ = Ā⁺ₙ and |A⁺ₙ| = |A|/2 ≤ 2^{n−2}, and so

e(A) = e(A⁺ₙ) + e(A⁻ₙ) + |A⁺ₙ ∩ A⁻ₙ| = e(A⁺ₙ) + e(Ā⁺ₙ) + |A⁺ₙ ∩ Ā⁺ₙ| = 2e(A⁺ₙ) + |A⁺ₙ ∩ Ā⁺ₙ| = f(A⁺ₙ) ≤ 2F(|A⁺ₙ|) = 2F(|A|/2),
implying (2) and so proving Theorem 2.
Note that the function f takes the same value (namely, k·2^k) when A is a k-dimensional subcube as when A is the union of two antipodal (k − 1)-dimensional subcubes. This is certainly needed in order for our inductive approach to work, by our above remark about the i-sections of the union of two antipodal subcubes.
We prove Theorem 3 in the next section; in the rest of this section, we gather some additional facts we will use in our proof.
We will use the following lemma of Hart from [4].
Lemma 4. For any x, y ∈ N ∪ {0}, we have

F(x + y) − F(x) − F(y) ≥ min{x, y}.
Equality holds if y is a power of 2 and x ≤ y.
We will also use the following easy consequence of Lemma 4.
Lemma 5. Let x, y ∈ N ∪ {0} and let n ∈ N such that x + y ≤ 2 n , y ≥ 2 n−1 and y ≤ 2 n−1 + x. Then
F (x + y) − F (y) − F (x) − y + 2 n−1 ≥ x.
Proof. Write z := y − 2 n−1 ; then z ≤ x and x + z ≤ 2 n−1 . We therefore have
F (x + y) − F (y) − F (x) − y + 2 n−1 = F (2 n−1 + x + z) − F (2 n−1 + z) − F (x) − z = F (2 n−1 ) + F (x + z) + x + z − F (2 n−1 ) − F (z) − z − F (x) − z = F (x + z) − F (x) − F (z) + x − z ≥ min{x, z} + x − z = x,
where the last inequality uses Lemma 4.
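Lemmas 4 and 5 are concrete enough to be verified numerically over a small range. The sketch below is not from the paper; it computes F(k) directly as the number of cube edges inside the initial segment {0, 1, …, k−1} of the binary ordering, using the bitmask encoding of subsets.

```python
def F(k):
    """F(k) = e(I_{n,k}): edges of the discrete cube with both endpoints in
    the first k sets of the binary ordering (encoded as integers 0..k-1)."""
    count = 0
    for x in range(k):
        i = 0
        while (1 << i) < k:
            y = x | (1 << i)           # neighbour obtained by adding element i+1
            if y != x and y < k:       # count each such edge once, from its lower endpoint
                count += 1
            i += 1
    return count

# A 2^j-element initial segment is a j-dimensional subcube with j*2^(j-1) edges:
print([F(2 ** j) for j in range(5)])  # → [0, 1, 4, 12, 32]
```

Looping over small x and y confirms the inequality of Lemma 4 (with equality when y is a power of 2 and x ≤ y), and looping over the admissible triples (x, y, n) confirms the inequality of Lemma 5.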
We also need the following lemma, which says that for any family A ⊆ P([n]), one coordinate from every pair of coordinates is such that the upper and lower sections of A corresponding to that coordinate are 'somewhat' close in size.

Lemma 6. Let n ∈ N with n ≥ 2, and let A ⊆ P([n]). Then for any 1 ≤ i < j ≤ n, we have

min{ ||A⁺ᵢ| − |A⁻ᵢ||, ||A⁺ⱼ| − |A⁻ⱼ|| } ≤ 2^{n−2}.
Proof. Without loss of generality, by considering {x∆S : x ∈ A} for some S ⊆ {i, j}, we may assume that |A⁺ᵢ| ≤ |A⁻ᵢ| and that |A⁺ⱼ| ≤ |A⁻ⱼ|. Interchanging i and j if necessary, we may assume that |(A⁻ᵢ)⁺ⱼ| ≥ |(A⁺ᵢ)⁻ⱼ|. Then we have

0 ≤ |A⁻ⱼ| − |A⁺ⱼ| = |(A⁻ᵢ)⁻ⱼ| + |(A⁺ᵢ)⁻ⱼ| − |(A⁻ᵢ)⁺ⱼ| − |(A⁺ᵢ)⁺ⱼ| ≤ |(A⁻ᵢ)⁻ⱼ| − |(A⁺ᵢ)⁺ⱼ| ≤ |(A⁻ᵢ)⁻ⱼ| ≤ 2^{n−2},
proving the lemma.
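Lemma 6 can likewise be verified exhaustively for small n. The sketch below (not from the paper) again encodes subsets of [n] as bitmasks and checks every family in Q_3 and every pair of coordinates.

```python
def section_gap(fam, i):
    """| |A_i^+| - |A_i^-| |: the size difference between the upper and lower
    i-sections of a family of bitmask-encoded sets (bit i <-> element i+1)."""
    upper = sum(1 for x in fam if (x >> i) & 1)
    return abs(len(fam) - 2 * upper)  # lower section has len(fam) - upper elements

example = [0b000, 0b001, 0b011]
print(section_gap(example, 0))  # → 1
```

Running over all 2^8 families in P([3]) and all pairs 1 ≤ i < j ≤ 3 confirms that at least one coordinate of each pair has section gap at most 2^{n−2}.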
We also need the following.
Moreover, using (1) and the fact that (I_{n,k})^c is isomorphic to I_{n,2^n−k}, we have

2F(k) − 2F(2^n − k) = kn − |∂(I_{n,k})| − ((2^n − k)n − |∂((I_{n,k})^c)|) = kn − |∂(I_{n,k})| − ((2^n − k)n − |∂(I_{n,k})|) = (2k − 2^n)n    (6)

for any k ≤ 2^n. It follows from (5) and (6), by taking complements, that Theorem 3 is equivalent to the inequality

f(A) ≤ 2F(|A|) + 2|A| − 2^n  ∀ A ⊆ P([n]) : |A| ≥ 2^{n−1}.    (7)
2 Proof of Theorem 3
Our proof is by induction on n. The base case n = 1 of Theorem 3 is easily checked. We turn to the induction step. Let n ≥ 2, and assume that Theorem 3 holds when n is replaced by n − 1. Let A ⊆ P([n]) with |A| ≤ 2 n−1 . Observe that for any i ∈ [n], we have
f(A) = 2e(A) + |A ∩ Ā| = 2e(A⁺ᵢ) + 2e(A⁻ᵢ) + 2|A⁺ᵢ ∩ A⁻ᵢ| + |A⁺ᵢ ∩ Ā⁻ᵢ| + |A⁻ᵢ ∩ Ā⁺ᵢ| = 2e(A⁺ᵢ) + 2e(A⁻ᵢ) + 2|A⁺ᵢ ∩ A⁻ᵢ| + 2|A⁺ᵢ ∩ Ā⁻ᵢ|.    (8)
We now split into two cases. Case 1. Firstly, suppose that there exists i ∈ [n] such that max{|A⁺ᵢ|, |A⁻ᵢ|} ≤ 2^{n−2}. Without loss of generality, we may assume that this holds for i = n, i.e. that max{|A⁺ₙ|, |A⁻ₙ|} ≤ 2^{n−2}. We may also assume that |A⁺ₙ| ≤ |A⁻ₙ|.
We now apply the induction hypothesis to C and D. Since |C| ≤ |D| ≤ 2^{n−2}, we may apply (3) to each, obtaining f(C) ≤ 2F(|C|) and f(D) ≤ 2F(|D|); in the resulting chain of inequalities, the second inequality follows from Lemma 4, and the third inequality follows from Lemma 7. This completes the induction step in Case 1. Case 2. Secondly, suppose that Case 1 does not occur, i.e. that max{|A⁺ⱼ|, |A⁻ⱼ|} > 2^{n−2} for all j ∈ [n]. By Lemma 6, there exists i ∈ [n] such that ||A⁺ᵢ| − |A⁻ᵢ|| ≤ 2^{n−2}, and therefore

2^{n−2} < max{|A⁺ᵢ|, |A⁻ᵢ|} ≤ min{|A⁺ᵢ|, |A⁻ᵢ|} + 2^{n−2}.
Without loss of generality, we may assume that this holds for i = n, and that |A⁺ₙ| ≤ |A⁻ₙ|, so that 2^{n−2} < |A⁻ₙ| ≤ 2^{n−2} + |A⁺ₙ|. In the resulting chain of inequalities, the second inequality uses Lemma 5, applied with x = |C| and y = |D| and with n − 1 in place of n, and the third inequality uses Lemma 7. This completes the induction step in Case 2, proving the theorem.
Conclusion
We feel that our proof of Theorem 3 (and therefore of Theorem 2) is somewhat delicate, as it relies on the fact that, in the inductive step, the terms involving F can be dealt with using the fortunate properties of the function F (in Lemmas 4 and 5), and the other terms can be dealt with using the elementary inequality in Lemma 7. We also note that there is a nested sequence of families (with one family of every possible size), each of which is extremal for Theorem 2. In contrast, the (conjectural) extremal families in Conjecture 1 do not have this 'nested' property. Hence, perhaps unfortunately, we feel that Theorem 2, and our proof thereof, may shed only a limited amount of light on Conjecture 1.
so Theorem 2 is equivalent to the statement that if A ⊆ P([n]) is antipodal, then e(A) ≤ e(I_{n,|A|/2} ∪ Ī_{n,|A|/2}). Note also that if B is an initial segment of the binary ordering on P([n]) with |B| ≤ 2^{n−2}, then B ⊆ {x ⊆ [n] : x ∩ {n − 1, n} = ∅} and B̄ ⊆ {x ⊆ [n] : {n − 1, n} ⊆ x}, so B ∩ B̄ = ∅ and e(B, B̄) = 0. Moreover, it is easy to see that B̄ is isomorphic to B, and therefore e(B̄) = e(B). Hence, e(B ∪ B̄) = e(B) + e(B̄) = 2e(B).
For any A ⊆ P([n]) (not necessarily antipodal), we define f(A) := 2e(A) + |A ∩ Ā|.
Theorem 3. For any n ∈ N and any A ⊆ P([n]) with |A| ≤ 2^{n−1}, we have

f(A) ≤ 2F(|A|).    (3)
Lemma 7. Let n ∈ N and let C, D ⊆ P([n]). Then

2|C ∩ D| + 2|C ∩ D̄| ≤ |C ∩ C̄| + |D ∩ D̄| + 2 min{|C|, |D|}.    (4)

Proof. Note that both sides of the above inequality are invariant under interchanging C and D, so it suffices to prove the lemma in the case |C| ≤ |D|. By inclusion-exclusion, we have

2|C ∩ D| + 2|C ∩ D̄| = 2|C ∩ (D ∪ D̄)| + 2|C ∩ (D ∩ D̄)| ≤ 2|C| + 2|C ∩ (D ∩ D̄)|,

so it suffices to prove that 2|C ∩ (D ∩ D̄)| ≤ |C ∩ C̄| + |D ∩ D̄|. Writing E = D ∩ D̄, it suffices to prove that for any antipodal set E ⊆ P([n]), and any set C ⊆ P([n]), we have 2|C ∩ E| ≤ |C ∩ C̄| + |E|. This follows immediately from inclusion-exclusion again; indeed, we have

2|C ∩ E| = |C ∩ E| + |C̄ ∩ E| = |(C ∩ C̄) ∩ E| + |(C ∪ C̄) ∩ E| ≤ |C ∩ C̄| + |E|,

whenever E is antipodal. Finally, we note that for any A ⊆ P([n]), we have

f(A^c) = 2e(A^c) + |A^c ∩ Ā^c| = n|A^c| − |∂(A^c)| + |A^c ∩ Ā^c| = n|A| + n(|A^c| − |A|) − |∂A| + |(A ∪ Ā)^c| = n|A| − |∂A| + n(2^n − 2|A|) + 2^n − |A ∪ Ā| = 2e(A) + n(2^n − 2|A|) + 2^n − 2|A| + |A ∩ Ā| = f(A) + 2(n + 1)(2^{n−1} − |A|).    (5)
Then, defining C := A⁺ₙ ⊆ P([n − 1]) and D := A⁻ₙ ⊆ P([n − 1]), and invoking (8) with i = n, we have

f(A) = 2e(C) + 2e(D) + 2|C ∩ D| + 2|C ∩ D̄| = f(C) + f(D) − |C ∩ C̄| − |D ∩ D̄| + 2|C ∩ D| + 2|C ∩ D̄|.    (9)
obtaining f(C) ≤ 2F(|C|) and f(D) ≤ 2F(|D|). Substituting the last two inequalities into (9), we obtain

f(A) ≤ 2F(|C|) + 2F(|D|) + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄|
= 2F(|C| + |D|) − (2F(|C| + |D|) − 2F(|C|) − 2F(|D|)) + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄|
≤ 2F(|A|) + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄| − 2 min{|C|, |D|}
= 2F(|A|) + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄| − 2|C|
≤ 2F(|A|),
Defining C := A⁺ₙ ⊆ P([n − 1]) and D := A⁻ₙ ⊆ P([n − 1]) as before, and invoking (8) with i = n, we have

f(A) = 2e(C) + 2e(D) + 2|C ∩ D| + 2|C ∩ D̄| = f(C) + f(D) − |C ∩ C̄| − |D ∩ D̄| + 2|C ∩ D| + 2|C ∩ D̄|.    (10)

Now, since |C| ≤ |D|, we have 2|C| ≤ |C| + |D| = |A| ≤ 2^{n−1} and therefore |C| ≤ 2^{n−2}. On the other hand, we have |D| > 2^{n−2}. Applying the induction hypothesis to C and D (using (3) for C and (7) for D), we obtain f(C) ≤ 2F(|C|) and f(D) ≤ 2F(|D|) + 2|D| − 2^{n−1}; substituting these two inequalities into (10) yields

f(A) ≤ 2F(|C|) + 2F(|D|) + 2|D| − 2^{n−1} + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄|
= 2F(|C| + |D|) − (2F(|C| + |D|) − 2F(|C|) − 2F(|D|) − 2|D| + 2 · 2^{n−2}) + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄|
≤ 2F(|A|) + 2|C ∩ D| + 2|C ∩ D̄| − |C ∩ C̄| − |D ∩ D̄| − 2|C|
≤ 2F(|A|),
Acknowledgements

We would like to thank an anonymous referee for suggesting the above proof of Lemma 6, which is simpler than our original argument.
[1] A. J. Bernstein, Maximally connected arrays on the n-cube, SIAM J. Appl. Math. 15 (1967), 1485-1489.
[2] Y. D. Burago and V. A. Zalgaller, Geometric inequalities, Springer-Verlag, Berlin (1988).
[3] L. H. Harper, Optimal assignment of numbers to vertices, SIAM J. Appl. Math. 12 (1964), 131-135.
[4] S. Hart, A note on the edges of the n-cube, Discrete Math. 14 (1976), 157-161.
[5] J. H. Lindsey II, Assignment of numbers to vertices, Amer. Math. Monthly 71 (1964), 508-516.
[6] R. Osserman, The isoperimetric inequality, Bull. Amer. Math. Soc. 84 (1978), 1182-1238.
[7] M. Ritoré and A. Ros, Stable mean constant curvature tori and the isoperimetric problem in three space forms, Comment. Math. Helvet. 67 (1992), 293-305.
[8] A. Ros, The Isoperimetric Problem, Lecture notes from the Clay Mathematics Institute Summer School on the Global Theory of Minimal Surfaces, June-July 2001, Mathematical Sciences Research Institute, Berkeley, California. Available at http://www.ugr.es/ aros/isoper.pdf.
Shortcuts to adiabaticity in the infinite-range Ising model by mean-field counter-diabatic driving
9 May 2017 (Dated: May 10, 2017)
Takuya Hatomura
Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo, Japan
PACS numbers: 64.60.Ht, 05.30.Rt, 64.70.Tg, 75.40.Mg
The strategy of shortcuts to adiabaticity enables us to realize adiabatic dynamics in finite time. In the counter-diabatic driving approach, an auxiliary Hamiltonian, which is called the counter-diabatic Hamiltonian, is appended to an original Hamiltonian to cancel out diabatic transitions. The counter-diabatic Hamiltonian is constructed by using the eigenstates of the original Hamiltonian. Therefore, it is in general difficult to construct the counter-diabatic Hamiltonian for quantum many-body systems. Even if the counter-diabatic Hamiltonian for quantum many-body systems is obtained, it is generally non-local and even diverges at critical points. We construct an approximated counter-diabatic Hamiltonian for the infinite-range Ising model by making use of the mean-field approximation. An advantage of this method is that the mean-field counter-diabatic Hamiltonian is constructed from local operators only. We numerically demonstrate the effectiveness of this method through quantum annealing processes going to the vicinity of the critical point. It is also confirmed that the mean-field counter-diabatic Hamiltonian is still well-defined in the limit to the critical point. The present method can take higher order contributions into account and is consistent with the variational approach for local counter-diabatic driving.
I. INTRODUCTION
"Shortcuts to adiabaticity" is the collective name for methods which enable us to realize adiabatic dynamics in finite time [1-6]. One of the best-known methods is the counter-diabatic driving approach [1-4] (in other words, the transitionless quantum driving approach), in which an additional auxiliary Hamiltonian is introduced so that the system remains adiabatic in the original Hamiltonian frame. In contrast, in the invariant-based inverse engineering approach [5], the variables of a Hamiltonian are parametrized according to schedules of a dynamical invariant [7]. Initially, these methods were applied to simple systems with few degrees of freedom [6].
Application of these methods to quantum many-body systems is an intriguing problem and has recently been investigated as a central issue [8-14]. In quantum many-body systems, quantum phase transitions drastically change the properties of systems [15,16]. In particular, when we consider quenched dynamics, the critical points of systems play an important role: the correlation length and the relaxation time diverge at critical points, which leads to effectively frozen dynamics [17-23]. This means that adiabatic dynamics breaks down around critical points. Associated with the divergence of the correlation length, the counter-diabatic Hamiltonian for quantum many-body systems becomes non-local and even diverges at critical points. In addition, the counter-diabatic driving approach requires the eigenstates of the Hamiltonian. Therefore, application to general quantum many-body systems is quite restricted.

* [email protected]
For infinite-dimensional systems, the infinite-range Ising model, which is also known as the Lipkin-Meshkov-Glick model, has been studied [9,12,14]. One way to treat this model is to take the Holstein-Primakoff transformation in the thermodynamic limit [9,12]. This model is then mapped to a harmonic oscillator and an approximated counter-diabatic Hamiltonian can be obtained. In combination with the optimization approach, quasi-adiabatic dynamics was realized across the critical point [12]. This model was also investigated in the invariant-based inverse engineering approach [14]. Although dynamical invariants of this model generally contain infinitely non-local operators, a local dynamical invariant was constructed by making use of the mean-field approximation. A similar idea should also be explored in the counter-diabatic driving approach.
In this paper, we construct a local counter-diabatic Hamiltonian for the infinite-range Ising model by making use of the mean-field approximation. The counter-diabatic Hamiltonian can then be expressed in terms of local operators only, although there is an auxiliary self-consistent equation. Following the previous work [14], we numerically demonstrate the usefulness of this method through quantum annealing processes. We also show the non-divergence of the mean-field counter-diabatic Hamiltonian in the limit to the critical point for certain situations. The relation to the variational approach for local counter-diabatic driving [13] is also of interest. We confirm that the present mean-field counter-diabatic driving approach includes higher order contributions which cannot be taken into account if we naively adopt the variational approach.
This article is organized as follows. Section II A is devoted to a brief review of shortcuts to adiabaticity by the counter-diabatic driving approach, and the simplest example is shown as a preliminary in Sec. II B. In Sec. II C, the counter-diabatic Hamiltonian for the infinite-range Ising model is constructed by making use of the mean-field approximation. The infinite-range Ising model can easily be treated in numerical simulations because it is equivalent to a uniaxial single-spin system, as seen in Sec. II D. The usefulness of the mean-field counter-diabatic driving is demonstrated through quantum annealing processes in Sec. III A. The non-divergence of the mean-field counter-diabatic Hamiltonian in the limit to the critical point is shown for a certain class of schedules of the transverse field in Sec. III B. The relation to the variational approach for local counter-diabatic driving is discussed in Sec. III C. We summarize in Sec. IV.
II. METHOD
A. Counter-diabatic driving
We consider a time-dependent Hamiltonian H 0 (t) with the eigenenergies and the eigenstates
$$H_0(t)\,|\psi_n(t)\rangle = E_n(t)\,|\psi_n(t)\rangle. \tag{1}$$
The time-evolution operator of adiabatic dynamics is defined by
$$U(t) = \sum_n e^{i\alpha_n(t)}\,|\psi_n(t)\rangle\langle\psi_n(0)|, \tag{2}$$
where α n (t) is the dynamical phase
$$\alpha_n(t) = -\frac{1}{\hbar}\int_0^t dt'\,E_n(t') + i\int_0^t dt'\,\langle\psi_n(t')|\partial_{t'}\psi_n(t')\rangle. \tag{3}$$
We construct the assisted Hamiltonian $H(t)$ so that the adiabatic dynamics $|\Psi(t)\rangle = U(t)|\Psi(0)\rangle$ becomes the solution of the Schrödinger equation
$$i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle = H(t)\,|\Psi(t)\rangle, \tag{4}$$
$$|\Psi(t)\rangle = \sum_n c_n\,e^{i\alpha_n(t)}\,|\psi_n(t)\rangle, \tag{5}$$
where $c_n$ is the coefficient of the initial state, $|\Psi(0)\rangle = \sum_n c_n|\psi_n(0)\rangle$. By inversely solving Eq. (4), we find that the assisted Hamiltonian is constructed as
$$H(t) = H_0(t) + H_{\mathrm{cd}}(t), \tag{6}$$
$$H_{\mathrm{cd}}(t) = i\hbar\sum_{n\neq m}\langle\psi_n(t)|\partial_t\psi_m(t)\rangle\,|\psi_n(t)\rangle\langle\psi_m(t)|, \tag{7}$$
where $H_{\mathrm{cd}}(t)$ is called the counter-diabatic Hamiltonian. In order to construct the counter-diabatic Hamiltonian, the eigenstates of the Hamiltonian $H_0(t)$ are required, as seen in Eq. (7). However, in general, it is difficult to explicitly obtain the eigenstates of quantum many-body systems. Even if the eigenstates are obtained, the counter-diabatic Hamiltonian becomes highly non-local and even diverges at critical points.
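For reference, Eq. (7) can be evaluated numerically through the equivalent matrix-element form $\langle\psi_n|\partial_t\psi_m\rangle = \langle\psi_n|\partial_t H_0|\psi_m\rangle/(E_m - E_n)$ for $n\neq m$, which avoids differentiating eigenvectors. The following sketch is our own illustration (not code from the paper; we set $\hbar = 1$); it checks the construction against the analytic two-level result of Sec. II B:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def counterdiabatic(H, dH):
    """H_cd = i sum_{n!=m} |n><n| dH |m><m| / (E_m - E_n),
    the matrix-element form of Eq. (7), with hbar = 1."""
    E, V = np.linalg.eigh(H)
    Hcd = np.zeros_like(H)
    for n in range(len(E)):
        for m in range(len(E)):
            if n == m:
                continue
            Pn = np.outer(V[:, n], V[:, n].conj())
            Pm = np.outer(V[:, m], V[:, m].conj())
            Hcd += 1j * (Pn @ dH @ Pm) / (E[m] - E[n])
    return Hcd

# two-level snapshot H0 = -Gamma sx - h sz with some instantaneous derivatives
Gamma, h = 0.7, 0.3
dGamma, dh = 0.2, -0.1
H = -Gamma * sx - h * sz
dH = -dGamma * sx - dh * sz
Hcd = counterdiabatic(H, dH)

# analytic result of Sec. II B: H_cd = theta_dot * sy, Eqs. (13)-(14)
theta_dot = 0.5 * (h * dGamma - dh * Gamma) / (h**2 + Gamma**2)
assert np.allclose(Hcd, theta_dot * sy)
```

The agreement with Eqs. (13)-(14) is exact for this two-level case; for larger systems the double loop simply runs over all pairs of eigenstates.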
B. Preliminary: Two-level system
When we adopt the mean-field approximation, the problem of the infinite-range Ising model is reduced to that of a two-level system. Therefore, as a preliminary, we consider a two-level system
$$H_0(t) = -\Gamma(t)\,\sigma^x - h(t)\,\sigma^z, \tag{8}$$
where $\sigma^\alpha$, $\alpha = x, y, z$, are the Pauli matrices, $\Gamma(t)$ is a transverse field, and $h(t)$ is a longitudinal field. The eigenenergies and the eigenstates are given by
$$E_\pm(t) = \pm\sqrt{h(t)^2 + \Gamma(t)^2}, \tag{9}$$
$$|\psi_-(t)\rangle = \begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix}, \qquad |\psi_+(t)\rangle = \begin{pmatrix}-\sin\theta\\ \cos\theta\end{pmatrix}, \tag{10}$$
with
$$\sin 2\theta = \frac{\Gamma(t)}{\sqrt{h(t)^2 + \Gamma(t)^2}}, \tag{11}$$
$$\cos 2\theta = \frac{h(t)}{\sqrt{h(t)^2 + \Gamma(t)^2}}. \tag{12}$$
The counter-diabatic Hamiltonian is given by
$$H_{\mathrm{cd}}(t) = \dot\theta(t)\,\sigma^y, \tag{13}$$
$$\dot\theta(t) = \frac{1}{2}\,\frac{h(t)\dot\Gamma(t) - \dot h(t)\Gamma(t)}{h^2(t) + \Gamma^2(t)}. \tag{14}$$
Here and hereafter, $\hbar = 1$.
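Transitionless driving for this two-level system is easy to verify numerically. The sketch below is our own illustration (the linear ramp of $\Gamma(t)$ is an arbitrary choice, not a schedule from the paper): integrating the Schrödinger equation under $H_0(t) + \dot\theta(t)\sigma^y$ keeps the state on the instantaneous ground state up to integration error.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# illustrative schedules: fixed h, linear ramp of Gamma on t in [0, 1]
h = 0.5
Gamma = lambda t: 2.0 - 1.5 * t
dGamma = lambda t: -1.5
# Eq. (14) with h constant (so hdot = 0)
theta_dot = lambda t: 0.5 * h * dGamma(t) / (h**2 + Gamma(t)**2)

def H(t):
    return -Gamma(t) * sx - h * sz + theta_dot(t) * sy  # H0 + Hcd, Eqs. (8), (13)

def ground_state(t):
    return np.linalg.eigh(-Gamma(t) * sx - h * sz)[1][:, 0]

def rhs(t, p):
    return -1j * (H(t) @ p)

# RK4 integration of the Schroedinger equation from t = 0 to t = 1
psi, dt, steps = ground_state(0.0).astype(complex), 1e-4, 10000
for k in range(steps):
    t = k * dt
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# the counter-diabatic term keeps the state on the instantaneous ground state
fidelity = abs(np.vdot(ground_state(1.0), psi)) ** 2
assert fidelity > 0.999999
```

Dropping the `theta_dot(t) * sy` term and repeating the run shows ordinary diabatic losses for fast sweeps.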
C. Mean-field counter-diabatic driving for the infinite-range Ising model
The Hamiltonian of the infinite-range Ising model is given by
$$H_0(t) = -\frac{J}{2N}\sum_{i,j}\sigma_i^z\sigma_j^z - \Gamma(t)\sum_i\sigma_i^x - h\sum_i\sigma_i^z, \tag{15}$$
where $J$ is the coupling constant and $N$ is the number of Ising spins. Throughout this paper, we consider a time-dependent transverse field $\Gamma(t)$ and a fixed longitudinal field $h$. There exists a critical point at transverse field $\Gamma(t) = J$ and longitudinal field $h = 0$. The system is in the ferromagnetic phase for $\Gamma(t) < J$, and in the paramagnetic phase for $\Gamma(t) > J$.
Owing to the long-range nature of the interactions, the mean-field approximation is valid for static properties of large N systems. By using the mean-field approximation, we obtain the following mean-field Hamiltonian
$$H_0^{\mathrm{MF}}(t) = \frac{JN}{2}(m_z(t))^2 - (Jm_z(t) + h)\sum_i\sigma_i^z - \Gamma(t)\sum_i\sigma_i^x, \tag{16}$$
where $m_z(t)$ is given by the expectation value of $\sigma_i^z$ and is determined later. In the mean-field Hamiltonian (16), the $N$ spins are decoupled from each other. Therefore, the eigenstates are products of the eigenstates of the two-level system (10), and the eigenenergies are sums of the eigenenergies of the two-level system (9). We remark that the longitudinal field is modulated as $h(t) \to Jm_z(t) + h$ due to the existence of the mean field. The counter-diabatic Hamiltonian derived by the mean-field approximation is given by
$$H_{\mathrm{cd}}^{\mathrm{MF}}(t) = \dot\theta(t)\sum_i\sigma_i^y, \tag{17}$$
$$\dot\theta(t) = \frac{1}{2}\,\frac{(Jm_z(t) + h)\dot\Gamma(t) - J\dot m_z(t)\Gamma(t)}{(Jm_z(t) + h)^2 + \Gamma^2(t)}. \tag{18}$$
In this paper, we only consider the ground state tracking. Therefore, we impose the following self-consistent equation
$$m_z(t) = \langle\psi_-(t)|\sigma^z|\psi_-(t)\rangle = \frac{Jm_z(t) + h}{\sqrt{(Jm_z(t) + h)^2 + \Gamma^2(t)}}, \tag{19}$$
which can be rewritten as
$$0 = J^2(m_z(t))^4 + 2Jh(m_z(t))^3 - (J^2 - h^2 - \Gamma^2(t))(m_z(t))^2 - 2Jhm_z(t) - h^2. \tag{20}$$
Here, this equation is a quartic equation with respect to the mean field $m_z(t)$. Therefore, we can in principle solve it, although the expression for $m_z(t)$ is quite complicated. By differentiating Eq. (20) with respect to time $t$, we obtain the time derivative of the magnetization
$$\dot m_z(t) = -\Gamma(t)\dot\Gamma(t)(m_z(t))^2\,\big\{2J^2(m_z(t))^3 + 3Jh(m_z(t))^2 - (J^2 - h^2 - \Gamma^2(t))m_z(t) - Jh\big\}^{-1}. \tag{21}$$
In combination with Eqs. (20) and (21), we can calculate the mean-field counter-diabatic field (18).
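A minimal numerical sketch of this procedure (our own illustration; the root-selection heuristic below is an assumption, not part of the paper) solves the quartic (20) with `numpy.roots`, filters the roots against the self-consistent equation (19), and then evaluates Eqs. (21) and (18):

```python
import numpy as np

def mz_ground(J, h, Gamma):
    """Ground-state branch of the quartic self-consistent equation (20)."""
    coeffs = [J**2, 2*J*h, -(J**2 - h**2 - Gamma**2), -2*J*h, -h**2]
    cand = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
    # squaring Eq. (19) introduced spurious sign-flipped roots: filter them out
    cand = [m for m in cand
            if abs(m - (J*m + h) / np.hypot(J*m + h, Gamma)) < 1e-6]
    return max(cand)  # heuristic: for h > 0 take the largest (aligned) branch

def theta_dot_mf(J, h, Gamma, dGamma):
    """Mean-field counter-diabatic field, Eqs. (18), (20) and (21)."""
    m = mz_ground(J, h, Gamma)
    dm = (-Gamma * dGamma * m**2
          / (2*J**2*m**3 + 3*J*h*m**2 - (J**2 - h**2 - Gamma**2)*m - J*h))
    b = J*m + h
    return 0.5 * (b*dGamma - J*dm*Gamma) / (b**2 + Gamma**2)
```

As a sanity check, for $J = 1$, $h = 10^{-3}$, $\Gamma = 0.6$, and $\dot\Gamma = 1$ this gives $\dot\theta \approx 0.62$, close to the $h \to +0$ value $\dot\Gamma/(2\sqrt{J^2-\Gamma^2}) = 0.625$ of Sec. III B, while deep in the paramagnetic phase the field is nearly zero.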
D. Mapping to the single-spin system
In order to confirm the availability of the mean-field counter-diabatic Hamiltonian (17), we perform numerical simulations. The total assisted Hamiltonian is given by the sum of (15) and (17),

$$H^{\mathrm{MF}}(t) = H_0(t) + H_{\mathrm{cd}}^{\mathrm{MF}}(t) = -\frac{J}{2N}\sum_{i,j}\sigma_i^z\sigma_j^z - \Gamma(t)\sum_i\sigma_i^x - h\sum_i\sigma_i^z + \dot\theta(t)\sum_i\sigma_i^y. \tag{22}$$
Because this assisted Hamiltonian commutes with the square of the total spin $(\sum_i\boldsymbol{\sigma}_i)^2$, Eq. (22) can be block-diagonalized. Therefore, for our choice of the initial state, i.e. the ground state, the dynamics driven by Eq. (22) is equivalent to the dynamics driven by the Hamiltonian
$$H^{\mathrm{MF}}(t) = -\frac{J}{S}(S^z)^2 - 2\Gamma(t)S^x - 2hS^z + 2\dot\theta(t)S^y, \tag{23}$$
where $S^\alpha$, $\alpha = x, y, z$, are the quantum spin operators with spin size $S = N/2$. Now, we consider the Schrödinger dynamics of this Hamiltonian,
$$i\frac{\partial}{\partial t}|\Psi^{\mathrm{MF}}(t)\rangle = H^{\mathrm{MF}}(t)\,|\Psi^{\mathrm{MF}}(t)\rangle. \tag{24}$$
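For completeness, here is a sketch (ours, not from the paper) of the spin-$S$ representation used above: the collective operators are built from the ladder operator in the $|S, m\rangle$ basis (the descending ordering $m = S, \dots, -S$ is a convention of this sketch), and Eq. (23) becomes an $(N+1)$-dimensional matrix. We check the su(2) algebra and a simple limiting case:

```python
import numpy as np

def spin_ops(S):
    """Matrices for S^x, S^y, S^z in the |S, m> basis with m = S, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    sz = np.diag(m)
    # ladder operator: S^+ |S, m> = sqrt(S(S+1) - m(m+1)) |S, m+1>
    sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
    sx = (sp + sp.T) / 2
    sy = (sp - sp.T) / 2j
    return sx, sy, sz

def H_mf(S, J, Gamma, h, theta_dot):
    sx, sy, sz = spin_ops(S)
    return -(J / S) * sz @ sz - 2 * Gamma * sx - 2 * h * sz + 2 * theta_dot * sy  # Eq. (23)

S = 10  # i.e. N = 2S = 20 spins
sx, sy, sz = spin_ops(S)
# su(2) algebra: [S^x, S^y] = i S^z
assert np.allclose(sx @ sy - sy @ sx, 1j * sz)

# at Gamma = 0 and h > 0 the ground state is fully polarized along +z
E, V = np.linalg.eigh(H_mf(S, J=1.0, Gamma=0.0, h=0.1, theta_dot=0.0))
g = V[:, 0]
assert abs(np.vdot(g, sz @ g).real - S) < 1e-10
```

Feeding these matrices into any standard ODE integrator then solves Eq. (24) directly, at a cost linear in $N$ per matrix dimension rather than exponential.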
III. RESULTS
A. Quantum annealing process
We consider quantum annealing processes going to the vicinity of the critical point: we sweep the transverse field $\Gamma(t)$ from a certain positive value to zero and fix the longitudinal field $h$ to be infinitesimal [24,25]. Here, we adopt a polynomial schedule

$$\Gamma(t) = J\left[48(t/t_f)^5 - 120(t/t_f)^4 + 100(t/t_f)^3 - 30(t/t_f)^2 + 2\right] = J\left[48s^5 - 120s^4 + 100s^3 - 30s^2 + 2\right], \tag{25}$$

with $s = t/t_f$, where $t_f$ is the operation time.

Figure 2 represents the quantum annealing processes driven by (left) only the original Hamiltonian (15) and (right) the assisted Hamiltonian (22); the number of spins is $N = 2S = 1000$. With the mean-field counter-diabatic Hamiltonian, the magnetization along the $z$ axis reaches $S^z/S \simeq 0.998$ for the fast operation $t_f = 1$, while $S^z/S \simeq 1.9\times 10^{-3}$ without it. It is evident that the mean-field counter-diabatic Hamiltonian greatly improves adiabaticity. We remark that adiabaticity is not as good for slow operations because errors due to the mean-field approximation accumulate over long times. We also check the other components of the magnetization: in Fig. 3 we compare the assisted adiabatic magnetization dynamics with the exact adiabatic case, and the other components of the magnetization also agree well with the exact adiabatic dynamics.

Now, we discuss how adiabatic the system really is, because we cannot conclude that the system is almost adiabatic just by looking at the magnetization dynamics. Figure 4 represents the fidelity of the assisted adiabatic dynamics, which is the absolute square of the inner product between the assisted state $|\Psi^{\mathrm{MF}}(t)\rangle$ and the exact adiabatic state $|\psi_0(t)\rangle$, i.e. $|\langle\psi_0(t)|\Psi^{\mathrm{MF}}(t)\rangle|^2$. Although there exist deviations, in particular around the critical point, a strong deviation is removed for large $N$ (Fig. 4, left). It should be pointed out that the fidelity of the final state is almost independent of $N$. This is rather surprising because, in general, it is very hard to maintain a large fidelity for a large number of spins. For example, if we consider the absolute square of the inner product between a fully directed state $|\Psi_1\rangle = \otimes_{i=1}^N|\uparrow\rangle_i$ and a slightly deviated state $|\Psi_2\rangle = \otimes_{i=1}^N\left(\sqrt{1-\epsilon^2}\,|\uparrow\rangle_i + \epsilon\,|\downarrow\rangle_i\right)$ with $\epsilon \ll 1$, where $|\uparrow\rangle$ and $|\downarrow\rangle$ are up and down spin states, the fidelity is suppressed as $|\langle\Psi_1|\Psi_2\rangle|^2 = (1-\epsilon^2)^N \to 0$ when $N \to \infty$. Therefore, this result strongly supports the effectiveness of our method in the present model.
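The suppression argument above is elementary to check numerically (our own sketch): building the product states explicitly for small $N$ reproduces the closed form $(1-\epsilon^2)^N$, which decays exponentially with $N$.

```python
import numpy as np

def product_state(single, N):
    """|single>^{(x) N} built with Kronecker products."""
    psi = np.array([1.0])
    for _ in range(N):
        psi = np.kron(psi, single)
    return psi

eps, N = 0.1, 12
up = np.array([1.0, 0.0])
tilted = np.array([np.sqrt(1 - eps**2), eps])

fidelity = abs(np.vdot(product_state(up, N), product_state(tilted, N))) ** 2
# matches the closed form (1 - eps^2)^N ...
assert abs(fidelity - (1 - eps**2) ** N) < 1e-12
# ... which is exponentially suppressed in N
assert (1 - eps**2) ** 1000 < 5e-5
```

This makes the size-independence of the final-state fidelity in Fig. 4 all the more striking.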
We conclude that the mean-field counter-diabatic driving approach, which is expected to realize quasi-adiabatic dynamics, works well and that the dynamics is almost adiabatic for fast operations.

B. Non-divergence of the mean-field counter-diabatic field in the limit to the critical point
In the above calculations, we considered the dynamics near the critical point through quantum annealing processes. Now, we show that the mean-field counter-diabatic field (18) is still well-defined in the limit $h \to +0$. We take the limit $h \to +0$ in Eq. (19). Then, the self-consistent equation (19) can easily be solved as
$$m_z(t) = 0, \qquad \Gamma(t) > J, \tag{26}$$
$$m_z(t) = \sqrt{1 - \frac{\Gamma^2(t)}{J^2}}, \qquad \Gamma(t) \le J. \tag{27}$$
Then, substituting Eqs. (26) and (27) into Eq. (18), we obtain the approximated expression for the mean-field counter-diabatic field
$$\dot\theta(t) = 0, \qquad \Gamma(t) > J, \tag{28}$$
$$\dot\theta(t) = \frac{\dot\Gamma(t)}{2\sqrt{J^2 - \Gamma^2(t)}}, \qquad \Gamma(t) \le J, \tag{29}$$
for $h \to +0$. Here, a divergence appears in the mean-field counter-diabatic field $\dot\theta(t)$ at the critical point $\Gamma(t) = J$. However, this divergence can be removed by a natural choice of the schedule of the transverse field $\Gamma(t)$, for example, Eq. (25). From Eq. (25), we find that the divergence of the mean-field counter-diabatic field comes from the factor
$$\frac{1}{\sqrt{J^2 - \Gamma^2(t)}} \propto \frac{1}{(s - 1/2)^{3/2}}, \tag{30}$$
while the time derivative of the transverse field produces
$$\dot\Gamma(t) \propto (s - 1/2)^2. \tag{31}$$
Therefore, the divergence of the mean-field counter-diabatic field is removed as
$$\dot\theta(t) \propto (s - 1/2)^{1/2} \tag{32}$$
at the critical point s = 1/2.
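These scalings are easy to confirm numerically (our own sketch, using the schedule (25) with $J = t_f = 1$): the limiting field of Eqs. (28)-(29) respects the stated endpoint conditions and scales as $(s-1/2)^{1/2}$ just past the critical point.

```python
import numpy as np

J, tf = 1.0, 1.0
Gamma  = lambda s: J * (48*s**5 - 120*s**4 + 100*s**3 - 30*s**2 + 2)      # Eq. (25)
dGamma = lambda s: J * (240*s**4 - 480*s**3 + 300*s**2 - 60*s) / tf

def theta_dot_limit(s):
    """Eqs. (28)-(29): the h -> +0 limit of the mean-field CD field."""
    G = Gamma(s)
    return 0.0 if G > J else dGamma(s) / (2 * np.sqrt(J**2 - G**2))

# endpoints and critical point of the schedule, as stated in the text
assert abs(Gamma(0.0) - 2 * J) < 1e-12 and abs(Gamma(1.0)) < 1e-12
assert abs(Gamma(0.5) - J) < 1e-12 and abs(dGamma(0.5)) < 1e-12

# just past the critical point, theta_dot ~ (s - 1/2)^(1/2), Eq. (32):
ratio = theta_dot_limit(0.5 + 4e-4) / theta_dot_limit(0.5 + 1e-4)
assert abs(ratio - (4e-4 / 1e-4) ** 0.5) < 0.05
```

The paramagnetic branch ($\Gamma > J$) returns zero identically, so the field switches on smoothly at $s = 1/2$.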
C. Relation to the variational approach for local counter-diabatic driving

Sels and Polkovnikov developed a variational approach to construct local counter-diabatic Hamiltonians [13]. Their method can be applied to a wide range of quantum and classical systems, including many-body systems. It is also an advantage of their method that the eigenstates of the original Hamiltonian are not necessary to construct the counter-diabatic Hamiltonian. In this subsection, we explain the relationship between their method and ours.
Regarding our Hamiltonian (15), we require the following form of the gauge potential
$$A^*(t) = \alpha(t)\sum_i\sigma_i^y, \tag{33}$$
where $\alpha(t)$ is nothing but the counter-diabatic field $\dot\theta(t)$ in the present paper. From Eqs. (15) and (33), we obtain the following function:
$$G(t) \equiv \partial_t H_0(t) + i[A^*(t), H_0(t)] = -(\dot\Gamma(t) - 2h\alpha(t))\sum_i\sigma_i^x + \frac{J\alpha(t)}{N}\sum_{i,j}(\sigma_i^z\sigma_j^x + \sigma_i^x\sigma_j^z) - 2\Gamma(t)\alpha(t)\sum_i\sigma_i^z, \tag{34}$$
or equivalently
$$G(t) = -2(\dot\Gamma(t) - 2h\alpha(t))S^x + \frac{2J\alpha(t)}{S}(S^zS^x + S^xS^z) - 4\Gamma(t)\alpha(t)S^z, \tag{35}$$
as discussed in Sec. II D. We calculate the Hilbert-Schmidt norm of this function and obtain

$$\mathrm{Tr}\,G^2(t) = \frac{4}{3}(\dot\Gamma(t) - 2h\alpha(t))^2 S(S+1)(2S+1) + \frac{1}{15}\left(\frac{2J\alpha(t)}{S}\right)^2 S(S+1)(2S+1)(2S-1)(2S+3) + \frac{1}{3}(4\Gamma(t)\alpha(t))^2 S(S+1)(2S+1). \tag{36}$$
By minimizing this norm with respect to α(t), we obtain the local counter-diabatic field
$$\alpha(t) = \frac{1}{2}\,\frac{h\dot\Gamma(t)}{h^2 + \Gamma^2(t) + (J/S)^2(2S-1)(2S+3)/20}, \tag{37}$$
where the correction factor $(J/S)^2(2S-1)(2S+3)/20$ vanishes for $J = 0$ or $S = 1/2$, for which the long-range Ising model is equivalent to $N$ independent two-level spins.
It is obvious that this local counter-diabatic field is not equal to the mean-field counter-diabatic field (18). If we adopt the mean-field Hamiltonian (16) instead of (15), we can obtain the mean-field counter-diabatic field (18) as the local counter-diabatic field, although then the advantage of the variational approach, namely that the eigenstates are not needed, is lost. This result suggests that our mean-field counter-diabatic driving approach takes higher order contributions into account and is consistent with the variational approach for local counter-diabatic driving.
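Equation (37) can be checked directly (our own sketch): $\mathrm{Tr}\,G^2(t)$ built from Eq. (35) is quadratic in $\alpha$, so its minimizer can be recovered from three samples and compared with the closed form. The parameter values below are arbitrary.

```python
import numpy as np

def spin_ops(S):
    m = np.arange(S, -S - 1, -1)
    sz = np.diag(m)
    sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, sz

S, J, h = 6.0, 1.0, 0.4
Gamma, dGamma = 0.8, -1.3
sx, sy, sz = spin_ops(S)

def trG2(alpha):
    G = (-2 * (dGamma - 2 * h * alpha) * sx
         + (2 * J * alpha / S) * (sz @ sx + sx @ sz)
         - 4 * Gamma * alpha * sz)                  # Eq. (35)
    return np.trace(G @ G).real

# Tr G^2 is quadratic in alpha; recover its minimizer from three samples
c = np.polyfit([-1.0, 0.0, 1.0], [trG2(-1.0), trG2(0.0), trG2(1.0)], 2)
alpha_num = -c[1] / (2 * c[0])

# closed form, Eq. (37)
alpha_37 = 0.5 * h * dGamma / (h**2 + Gamma**2 + (J / S)**2 * (2 * S - 1) * (2 * S + 3) / 20)
assert abs(alpha_num - alpha_37) < 1e-8
```

The numerically minimized gauge potential coincides with Eq. (37), including the $(J/S)^2$ correction factor.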
IV. SUMMARY
In this article, we constructed the counter-diabatic Hamiltonian for the infinite-range Ising model (15) by making use of the mean-field approximation. This mean-field counter-diabatic Hamiltonian (17) is quite useful because it is constructed from local operators only. However, owing to the mean-field approximation, we had to check whether this mean-field counter-diabatic Hamiltonian can assist the adiabatic dynamics of the original Hamiltonian.
First, we tested mean-field counter-diabatic driving in quantum annealing processes. If we adopt the polynomial schedule (25), the mean-field counter-diabatic field (18) takes a two-pulse-like shape, as seen in Fig. 1. It was found that quasi-adiabatic dynamics is realized under the mean-field counter-diabatic field (18), as seen in Figs. 2, 3, and 4. In particular, it is a surprising result that the fidelity of the final state is almost independent of the system size $N$. We also confirmed the non-divergence of the mean-field counter-diabatic field at the critical point, although we require an infinitesimal longitudinal field $h = +0$ in order to break the symmetry.
We also investigated the relation to the variational approach for local counter-diabatic driving [13]. We found that our mean-field counter-diabatic driving approach takes higher order contributions into account. These contributions cannot be taken into account if we naively adopt the variational approach. If we apply the mean-field approximation to the variational approach, we can obtain the same local counter-diabatic field as in the present paper. However, this treatment spoils the advantage of the variational approach that the eigenstates are not needed.
Finally, we discuss applications of the mean-field counter-diabatic driving approach. One application to which we can immediately apply it is fast magnetization switching. When we consider magnetization reversal of uniaxial single-spin magnets under finite-rate magnetic field sweeping, the systems are inevitably excited, associated with the formation of the hysteresis caused by first order phase transitions [26,27]. The mean-field counter-diabatic driving approach can avoid such excitations. On the other hand, there remain gaps between the quantum annealing processes demonstrated in this paper and their experimental realization. We use the information of the ground state to calculate the mean field; however, the purpose of quantum annealing is precisely to obtain the information of the ground state. Therefore, this method cannot be applied to quantum annealing in the present formalism. Further developments toward application to quantum annealing should be investigated in the future.
FIG. 1. Schedule of the transverse field and the mean-field counter-diabatic field. The horizontal axis is the normalized time $s = t/t_f$ and the vertical axis is the strength of the fields $\Gamma(t)$ and $\dot\theta(t)$. The purple curve represents $\Gamma(t)$ and the green curve $\dot\theta(t)$.
FIG. 2. Operation-time dependence of the magnetization processes (left) without and (right) with the mean-field counter-diabatic Hamiltonian. The horizontal axis is the normalized time $s = t/t_f$ and the vertical axis is the normalized magnetization $S^z/S$. The dotted line represents the exact adiabatic magnetization dynamics of the original Hamiltonian. The number of spins is $N = 2S = 1000$.

Here $t_f$ is the operation time and we denote $s = t/t_f$. At the initial and final times, $\Gamma(0) = 2J$, $\Gamma(t_f) = 0$, and $\dot\Gamma(0) = \dot\Gamma(t_f) = 0$, and this system smoothly reaches the critical point at $t = t_f/2$, with $\Gamma(t_f/2) = J$ and $\dot\Gamma(t_f/2) = 0$. The schedule of the transverse field (25) and the consequent mean-field counter-diabatic field (18) are depicted in Fig. 1 for $J = 1$, $h = 10^{-3}$, and $t_f = 1$. Hereafter, $J = 1$ for all numerical calculations and $h = 10^{-3}$ in this subsection.
FIG. 3. All the components of the magnetization dynamics. The horizontal axis is the normalized time $s = t/t_f$ and the vertical axis is the normalized magnetization $S^\alpha/S$, $\alpha = x, y, z$. The solid curves represent the magnetization processes assisted by the mean-field counter-diabatic Hamiltonian and the dashed curves represent the exact adiabatic magnetization dynamics. Here, $N = 2S = 1000$ and $t_f = 1$.
FIG. 4. Fidelity of the adiabatic dynamics driven by the mean-field counter-diabatic field. (left) The system-size dependence of the fidelity for $t_f = 1$ and (right) the operation-time dependence of the fidelity for $N = 2S = 1000$ are depicted. The horizontal axis is the normalized time $s = t/t_f$ and the vertical axis is the fidelity, which is the absolute square of the inner product between the exact adiabatic state $|\psi_0(t)\rangle$ and the state driven by the mean-field counter-diabatic Hamiltonian, $|\Psi^{\mathrm{MF}}(t)\rangle$.
ACKNOWLEDGMENTS

The author is grateful to Dr. Takashi Mori and Prof. Seiji Miyashita for fruitful comments and also thanks the anonymous referee for useful comments. This work is supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan through the Elements Strategy Initiative Center for Magnetic Materials. The author is supported by the Japan Society for the Promotion of Science (JSPS) through the Program for Leading Graduate Schools: Material Education program for the future leaders in Research, Industry, and Technology (MERIT) of the University of Tokyo.
[1] M. Demirplak and S. A. Rice, J. Phys. Chem. A 107, 9937 (2003).
[2] M. Demirplak and S. A. Rice, J. Phys. Chem. B 109, 6838 (2005).
[3] M. Demirplak and S. A. Rice, J. Chem. Phys. 129, 154111 (2008).
[4] M. V. Berry, J. Phys. A: Math. Theor. 42, 365303 (2009).
[5] X. Chen, A. Ruschhaupt, S. Schmidt, A. del Campo, D. Guéry-Odelin, and J. G. Muga, Phys. Rev. Lett. 104, 063002 (2010).
[6] E. Torrontegui, S. Ibáñez, S. Martínez-Garaot, M. Modugno, A. del Campo, D. Guéry-Odelin, A. Ruschhaupt, X. Chen, and J. G. Muga, Adv. At. Mol. Opt. Phys. 62, 117 (2013).
[7] H. R. Lewis and W. B. Riesenfeld, J. Math. Phys. 10, 1458 (1969).
[8] A. del Campo, M. M. Rams, and W. H. Zurek, Phys. Rev. Lett. 109, 115703 (2012).
[9] K. Takahashi, Phys. Rev. E 87, 062117 (2013).
[10] H. Saberi, T. Opatrný, K. Mølmer, and A. del Campo, Phys. Rev. A 90, 060301 (2014).
[11] B. Damski, J. Stat. Mech.: Theor. Exp. 2014, P12019 (2014).
[12] S. Campbell, G. De Chiara, M. Paternostro, G. M. Palma, and R. Fazio, Phys. Rev. Lett. 114, 177206 (2015).
[13] D. Sels and A. Polkovnikov, "Minimizing irreversible losses in quantum systems by local counter-diabatic driving" (2016), arXiv:1607.05687.
[14] K. Takahashi, Phys. Rev. A 95, 012309 (2017).
[15] S. Sachdev, Quantum Phase Transitions, 2nd ed. (Cambridge University Press, 2011).
[16] H. Nishimori and G. Ortiz, Elements of Phase Transitions and Critical Phenomena (Oxford University Press, 2011).
[17] T. W. B. Kibble, J. Phys. A: Math. Gen. 9, 1387 (1976).
[18] T. W. B. Kibble, Phys. Rep. 67, 183 (1980).
[19] W. H. Zurek, Nature 317, 505 (1985).
[20] W. H. Zurek, Acta Phys. Pol. B 24, 1301 (1993).
[21] J. Dziarmaga, Adv. Phys. 59, 1063 (2010).
[22] A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys. 83, 863 (2011).
[23] A. del Campo and W. H. Zurek, Int. J. Mod. Phys. A 29, 1430018 (2014).
[24] T. Kadowaki and H. Nishimori, Phys. Rev. E 58, 5355 (1998).
[25] S. Suzuki, J. Inoue, and B. K. Chakrabarti, Quantum Ising Phases and Transitions in Transverse Ising Models, 2nd ed., Lecture Notes in Physics Vol. 862 (Springer-Verlag, Berlin Heidelberg, 2013).
[26] T. Hatomura, B. Barbara, and S. Miyashita, Phys. Rev. Lett. 116, 037203 (2016).
[27] T. Hatomura, B. Barbara, and S. Miyashita, "Distribution of eigenstate populations and dissipative beating dynamics in uniaxial single-spin magnets" (2017), arXiv:1704.06466.
ON THE CURVATURE OF METRIC CONTACT PAIRS
28 Oct 2011
Gianluca Bande
David E. Blair
Amine Hadjar
We consider manifolds endowed with metric contact pairs for which the two characteristic foliations are orthogonal. We give some properties of the curvature tensor and in particular a formula for the Ricci curvature in the direction of the sum of the two Reeb vector fields. This shows that metrics associated to normal contact pairs cannot be flat. Therefore flat non-Kähler Vaisman manifolds do not exist. Furthermore we give a local classification of metric contact pair manifolds whose curvature vanishes on the vertical subbundle. As a corollary we have that flat associated metrics can only exist if the leaves of the characteristic foliations are at most three-dimensional.
Introduction
A contact pair on a smooth even-dimensional manifold M is a pair of one-forms α 1 and α 2 of constant and complementary classes, for which α 1 restricted to the leaves of the characteristic foliation of α 2 is a contact form and vice versa [2,5]. The Reeb vector fields on these contact leaves determine two global vector fields Z 1 and Z 2 called the Reeb vector fields of the pair. This notion was first introduced by Blair, Ludden and Yano [14] under the name bicontact in the context of Hermitian geometry, and further studied by Abe [1].
In [6,8] the first and the third authors constructed metrics adapted to contact pairs as in metric contact geometry. More precisely, a metric contact pair on an even dimensional manifold is a triple (α 1 , α 2 , g), where (α 1 , α 2 ) is a contact pair with Reeb vector fields Z 1 , Z 2 , and g is an associated metric, i.e. a Riemannian metric such that g(X, Z i ) = α i (X), for i = 1, 2, and for which the endomorphism field φ, uniquely defined by g(X, φY ) = (dα 1 + dα 2 )(X, Y ), satisfies
$$\phi^2 = -\mathrm{Id} + \alpha_1\otimes Z_1 + \alpha_2\otimes Z_2.$$
Contact pairs always admit associated metrics for which the two characteristic foliations are orthogonal [6] or, equivalently, whose structure tensor φ is decomposable (i.e. φ preserves the characteristic distributions of α 1 and α 2 ).
In this paper we prove the following classification theorem which is analogous to that of the second author [12] concerning metric contact manifolds with curvature vanishing on the vertical subbundle:
Main Theorem. Let M be a (2h + 2k + 2)-dimensional manifold endowed with a metric contact pair (α 1 , α 2 , φ, g) of type (h, k) (with h ≥ 1) and decomposable φ. If the curvature R of the metric g satisfies R XY Z i = 0 (i = 1, 2), then M is locally isometric to
E h+1 × S h (4) × E k+1 × S k (4) if k ≥ 1 or E h+1 × S h (4) × E 1 if k = 0.
If the manifold is complete, then its Riemannian universal covering is globally isometric to E h+1 × S h (4) × E k+1 × S k (4) if k ≥ 1 or E h+1 × S h (4) × E 1 if k = 0. In this statement we will understand that when h (or k) is equal to 1, the S h (4) factor will just contribute another line to the Euclidean factor.
As a corollary we obtain that the only manifolds which can carry flat metric contact pairs are either four- or six-dimensional, with metric contact pairs of type (1, 0) or (1, 1) respectively. We also prove several formulas concerning the curvature tensor and the Ricci curvature of a metric associated to a contact pair with decomposable $\phi$. In particular, we show that, on a 2n-dimensional manifold endowed with such a structure, the Ricci curvature of the associated metric in the direction of the vector field $Z = Z_1 + Z_2$ is $n - 1 - \frac{1}{2}\mathrm{Tr}\,h^2$, where $h = \frac{1}{2}\mathcal{L}_Z\phi$ and $\mathcal{L}_Z$ is the Lie derivative along Z. An immediate consequence is the non-existence of flat metrics associated to normal contact pairs with decomposable endomorphism. This implies that the metric of a non-Kähler Vaisman structure on a smooth manifold cannot be flat. This is interesting since the property is local; until now this result was well known only for closed manifolds (see [19] and [16, Proposition 2.5]).
In the sequel we denote by Γ(B) the space of sections of a vector bundle B, by Tr the trace of an endomorphism field, and by ∇ the Levi-Civita connection of a given metric. All the differential objects considered are assumed to be smooth.
Preliminaries on metric contact pairs
In this section we gather the notions concerning contact pairs that will be needed in the sequel. We refer the reader to [3, 4, 5, 6, 7, 8, 9] for further information and several examples of such structures.
2.1. Contact pairs. A pair (α 1 , α 2 ) of 1-forms on a manifold is said to be a contact pair of type (h, k) if (see [2,5]):
α 1 ∧ (dα 1 ) h ∧ α 2 ∧ (dα 2 ) k is a volume form, (dα 1 ) h+1 = 0 and (dα 2 ) k+1 = 0.
Since the form α 1 (respectively α 2 ) has constant class 2h + 1 (respectively 2k + 1), the distribution Ker α 1 ∩ Ker dα 1 (respectively Ker α 2 ∩ Ker dα 2 ) is completely integrable and then it determines the so-called characteristic foliation F 1 (respectively F 2 ) whose leaves are endowed with a contact form induced by α 2 (respectively α 1 ). The equations
α 1 (Z 1 ) = α 2 (Z 2 ) = 1, α 1 (Z 2 ) = α 2 (Z 1 ) = 0 , i Z 1 dα 1 = i Z 1 dα 2 = i Z 2 dα 1 = i Z 2 dα 2 = 0 ,
where i X is the contraction with the vector field X, determine uniquely the two vector fields Z 1 and Z 2 , called Reeb vector fields. Since they commute [2,5], they give rise to a locally free R 2 -action, called the Reeb action.
The tangent bundle of a manifold M endowed with a contact pair can be split in different ways. For i = 1, 2, let T F i be the subbundle determined by the characteristic foliation of α i , T G i the subbundle of T M whose fibers are given by ker dα i ∩ ker α 1 ∩ ker α 2 and RZ 1 , RZ 2 the line bundles determined by the Reeb vector fields. Then we have the following splittings:
T M = T F 1 ⊕ T F 2 = T G 1 ⊕ T G 2 ⊕ V, where V = RZ 1 ⊕ RZ 2 . Moreover we have T F 1 = T G 1 ⊕ RZ 2 and T F 2 = T G 2 ⊕ RZ 1 .
Definition 2.1. We say that a vector field is vertical if it is a section of V and horizontal if it is a section of T G 1 ⊕ T G 2 . The subbundles V and T G 1 ⊕ T G 2 will be called vertical and horizontal respectively.
Notice that dα 1 (respectively dα 2 ) is symplectic on the vector bundle T G 2 (respectively T G 1 ).
Example 2.2. Take (R 2h+2k+2 , α 1 , α 2 ) where α 1 , α 2 are the Darboux contact forms on R 2h+1 and R 2k+1 respectively. This is also a local model for all contact pairs of type (h, k). Hence a contact pair manifold is locally the product of two contact manifolds [2,5].
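For concreteness, one may write out such Darboux forms explicitly (the coordinate names below are our choice); a direct check against the defining equations above then identifies the Reeb vector fields:

```latex
% On R^{2h+2k+2} with coordinates (x_i, y_i, z_1, u_j, v_j, z_2):
\alpha_1 = dz_1 + \sum_{i=1}^{h} x_i\, dy_i ,\qquad
\alpha_2 = dz_2 + \sum_{j=1}^{k} u_j\, dv_j ,
% so that d\alpha_1 = \sum_i dx_i\wedge dy_i and d\alpha_2 = \sum_j du_j\wedge dv_j,
% and the Reeb vector fields are
Z_1 = \frac{\partial}{\partial z_1} ,\qquad
Z_2 = \frac{\partial}{\partial z_2} .
```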
Contact pair structures.
We recall now the notion of contact pair structure studied in [6,7,8].
Definition 2.3 ([6]). A contact pair structure on a manifold M is a triple (α 1 , α 2 , φ), where (α 1 , α 2 )
is a contact pair and φ a tensor field of type (1, 1) such that:
(1) φ 2 = −Id + α 1 ⊗ Z 1 + α 2 ⊗ Z 2 , φZ 1 = φZ 2 = 0
where Z 1 and Z 2 are the Reeb vector fields of (α 1 , α 2 ).
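From (1) and the linear independence of Z 1 and Z 2 one obtains α i ◦ φ = 0; a short verification:

```latex
% Apply (1) to \varphi X, and apply \varphi to (1):
\varphi^{2}(\varphi X) = -\varphi X + \alpha_1(\varphi X)Z_1 + \alpha_2(\varphi X)Z_2 ,
\qquad
\varphi(\varphi^{2} X) = \varphi\bigl(-X + \alpha_1(X)Z_1 + \alpha_2(X)Z_2\bigr) = -\varphi X .
% Comparing the two expressions gives
% \alpha_1(\varphi X)Z_1 + \alpha_2(\varphi X)Z_2 = 0,
% hence \alpha_i(\varphi X) = 0 for all X, since Z_1, Z_2 are independent.
```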
One can see that α i • φ = 0 for i = 1, 2 and that the rank of φ is equal to dim M − 2 . Since we are also interested in the induced structures, we recall the following:
Definition 2.4 ([6]). The endomorphism φ is said to be decomposable if φ(T F i ) ⊂ T F i , for i = 1, 2.
The condition for φ to be decomposable is equivalent to φ(T G i ) = T G i for i = 1, 2. If φ is decomposable, then (α 1 , Z 1 , φ) (respectively (α 2 , Z 2 , φ)) induces, on every leaf of F 2 (respectively F 1 ), an almost contact structure (see e.g. [13]) consisting of a contact form, its Reeb vector field and a structure tensor, the restriction of φ to the leaf.
On a manifold M endowed with a contact pair, there always exists a decomposable endomorphism field φ satisfying (1) (see [6]).
As a trivial example one can take two contact manifolds M i , i = 1, 2 with structure tensors (α i , φ i ), and consider the contact pair structure (α 1 , α 2 , φ 1 ⊕ φ 2 ) on M 1 × M 2 . In [7] we gave examples of contact pair structures with decomposable endomorphism which are not locally products.
In what follows, on a manifold M endowed with a contact pair structure (α 1 , α 2 , φ), we will consider the tensor fields defined by:
N 1 (X, Y ) =[φ, φ](X, Y ) + 2dα 1 (X, Y )Z 1 + 2dα 2 (X, Y )Z 2 , N 2 i (X, Y ) =(L φX α i )(Y ) − (L φY α i )(X), i = 1, 2, h = 1 2 L Z φ ,
for all X, Y ∈ Γ(T M), where Z = Z 1 + Z 2 and [φ, φ] is the Nijenhuis tensor of φ. The vanishing of N 1 gives exactly the normality of the pair [7], that is the integrability of the two almost complex structures φ ± (α 1 ⊗ Z 2 − α 2 ⊗ Z 1 ). In this case, by [7, Equation (3.5)] we have the following:
Proposition 2.5. If a contact pair structure (α 1 , α 2 , φ) with Reeb vector fields Z 1 and Z 2 is normal, we have N 2 1 = N 2 2 = 0 , L Z 1 φ = L Z 2 φ = 0 and then h = 0 .

2.3. Metric contact pairs. On manifolds endowed with contact pair structures it is natural to consider the following metrics:

Definition 2.6 ([6]). Let (α 1 , α 2 , φ) be a contact pair structure on a manifold M, with Reeb vector fields Z 1 and Z 2 . A Riemannian metric g on M is said to be:
i) compatible if g(φX, φY ) = g(X, Y ) − α 1 (X)α 1 (Y ) − α 2 (X)α 2 (Y ) for all X, Y ∈ Γ(T M),
ii) associated if g(X, φY ) = (dα 1 + dα 2 )(X, Y ) and g(X, Z i ) = α i (X), for i = 1, 2 and for all X, Y ∈ Γ(T M).
An associated metric is compatible, but the converse is not true.

Definition 2.7 ([6]). A metric contact pair (MCP) on a manifold M is a four-tuple (α 1 , α 2 , φ, g) where (α 1 , α 2 , φ) is a contact pair structure and g an associated metric with respect to it.
The manifold M will be called an MCP manifold or simply an MCP.
For an MCP (α 1 , α 2 , φ, g) the endomorphism field φ is decomposable if and only if the characteristic foliations F 1 , F 2 are orthogonal [6]. In this case (α i , φ, g) induces a metric contact structure on the leaves of F j , for j ≠ i .
Using a standard polarization on the symplectic vector bundles T G i , one can see that for a given contact pair (α 1 , α 2 ) there always exist a decomposable endomorphism field φ and a metric g such that (α 1 , α 2 , φ, g) is an MCP (see [6]). Moreover we have:

Proposition 2.8. Let (α 1 , α 2 , φ, g) be an MCP with decomposable φ. Then we have:
(2) N 2 1 = N 2 2 = 0.

Proof. Since φ is decomposable, if X, Y are vector fields tangent to different foliations, we have dα i (φX, Y ) = dα i (φY, X) = 0, i = 1, 2. If X, Y are tangent to the same foliation F i , because φ preserves the foliation, we have N 2 i (X, Y ) = 0. Moreover, for j ≠ i, the triple (α j , φ, g) restricted to the leaves of F i is a metric contact structure and then it satisfies (2), which is a well-known fact in metric contact geometry.

Some other properties of MCP's are given by the following results:
Theorem 2.9 ([6]). Let M be a manifold endowed with a contact pair structure (α 1 , α 2 , φ), with Reeb vector fields Z 1 , Z 2 . Let g be a metric compatible with the structure. Then we have:
(1) g(Z i , X) = α i (X) for i = 1, 2 and for every X ∈ Γ(T M);
(2) g(Z i , Z j ) = δ ij for i, j = 1, 2;
(3) ∇ Z i Z j = 0 for i, j = 1, 2.

Corollary 2.10. For a normal MCP the Reeb vector fields Z 1 and Z 2 are Killing.

Proposition 2.11. For an MCP (α 1 , α 2 , φ, g) the vector field Z = Z 1 + Z 2 is Killing if and only if h = 0.

We end this section with a result from [8]:

Theorem 2.12 ([8]). On an MCP manifold (M, α 1 , α 2 , φ, g) with decomposable φ the leaves of the characteristic foliations of the contact pair are orthogonal and minimal.
As example, one can simply take the product of two metric contact manifolds. Here is an interesting example from [8] which shows that an MCP manifold is not always locally the product of two metric contact manifolds:
Example 2.13. Let us consider the simply connected 6-dimensional nilpotent Lie group G with structure equations:
dω 3 = dω 6 = 0 , dω 2 = ω 5 ∧ ω 6 , dω 1 = ω 3 ∧ ω 4 , dω 4 = ω 3 ∧ ω 5 , dω 5 = ω 3 ∧ ω 6 ,
where the ω i 's form a basis for the cotangent space of G at the identity. Then (ω 1 , ω 2 ) together with the metric
g = ω 2 1 + ω 2 2 + 1 2 6 i=3 ω 2 i is a left invariant MCP of type (1, 1) on G.
Note that the two characteristic foliations are orthogonal, and that their leaves, although minimal, are not totally geodesic. So the metric g is not locally a product. Since the structure constants of the group are rational, there exist lattices Γ such that G/Γ is compact. This MCP descends to all quotients G/Γ, and we obtain closed nilmanifolds carrying the same type of structure. Moreover one can see that these MCP structures are not normal, their Reeb vector fields are however Killing and hence h = 0.
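The contact pair condition for (ω 1 , ω 2 ) can be verified directly from the structure equations:

```latex
\omega_1\wedge d\omega_1\wedge \omega_2\wedge d\omega_2
  = \omega_1\wedge(\omega_3\wedge\omega_4)\wedge
    \omega_2\wedge(\omega_5\wedge\omega_6) \neq 0 ,
\qquad
(d\omega_1)^{2} = (d\omega_2)^{2} = 0 ,
```

so (ω 1 , ω 2 ) is indeed a contact pair of type (1, 1).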
The tensor h and the Levi-Civita connection for MCP's
Here we show some properties of the tensor field h for MCP manifolds. We also prove some formulae concerning the Levi-Civita connection ∇ for a metric associated to a contact pair.
3.1. The covariant derivative of φ.

Proposition 3.1. Let (α 1 , α 2 , φ) be a contact pair structure together with a compatible metric g, and Φ the two-form defined by Φ(X, Y ) = g(φX, Y ). Then the covariant derivative of φ is given by
2g ((∇ X φ)Y, W ) = 3dΦ(X, Y, W ) − 3dΦ(X, φY, φW ) + g N 1 (Y, W ), φX +2 2 i=1 dα i (φY, X)α i (W ) − dα i (φW, X)α i (Y ) + 2 i=1 α i (X)N 2 i (Y, W ).(3)
Proof. Applying the definition of the Levi-Civita connection to 2g(∇ X Y, W ) and using the formula for the exterior derivative of Φ in terms of Lie brackets, we have:
2g((∇ X φ)Y, W ) = 2g(∇ X (φY ), W ) + 2g(∇ X Y, φW ) = XΦ(Y, W ) + φY Φ(X, φW ) + 2 i=1 α i (X)α i (W ) + W Φ(X, Y ) + Φ([X, φY ], φW ) + 2 i=1 α i ([X, φY ])α i (W ) − Φ([W, X], Y ) − g(φ[φY, W ], φX) − 2 i=1 α i ([φY, W ])α i (X) − XΦ(φY, φW ) + Y Φ(W, X) − φW Φ(X, φY ) + 2 i=1 α i (X)α i (Y ) − Φ([X, Y ], W ) + Φ([φW, X], φY ) + 2 i=1 α i ([φW, X])α i (Y ) − g(φ[Y, φW ], φX) + 2 i=1 α i (X)α i ([φW, Y ]) − g([Y, W ], φX) − Φ([Y, W ], X) + g([φY, φW ], φX) + Φ([φY, φW ], X) + g (2dα 1 (Y, W )Z 1 , φX) + g(2dα 2 (Y, W )Z 2 , φX) = 3dΦ(X, Y, W ) − 3dΦ(X, φY, φW ) + g N 1 (Y, W ), φX + 2 2 i=1 dα i (φY, X)α i (W ) − dα i (φW, X)α i (Y ) + 2 i=1 α i (X)N 2 i (Y, W ).
Applying Proposition 3.1 to an MCP with decomposable φ, we obtain:
Corollary 3.2.
For an MCP (α 1 , α 2 , φ, g) with decomposable φ, the covariant derivative of φ is given by
(4) 2g((∇ X φ)Y, W ) = g N 1 (Y, W ), φX + 2 2 i=1 dα i (φY, X)α i (W ) − dα i (φW, X)α i (Y ) .
Proof. For an MCP with decomposable φ we have N 2 1 = N 2 2 = 0 by Proposition 2.8. Moreover −Φ = dα 1 + dα 2 . Then (3) reduces to (4).
Corollary 3.3.
For an MCP with decomposable φ and Reeb vector fields Z 1 , Z 2 we have:
∇ Z 1 φ = ∇ Z 2 φ = 0.
Proof. In (4), we put X = Z i for i = 1, 2, and we obtain g((
∇ Z i φ)Y, W ) = 0.
3.2. The tensor field h. When φ is decomposable, so is the tensor field h, because for every
X ∈ Γ(T F i ) we have [Z j , X] ∈ Γ(T F i ) for i, j = 1, 2.
In this case we have:
h = h 1 ⊕ h 2 and φ = φ 1 ⊕ φ 2 ,
where h 1 (respectively φ 1 ) is the endomorphism of T F 2 induced by h (respectively by φ) and vice-versa. We can now state the following results:
Theorem 3.4. Let (α 1 , α 2 , φ, g) be an MCP with decomposable φ on a manifold M. Let Z 1 , Z 2 be the Reeb vector fields of (α 1 , α 2 ) and Z = Z 1 + Z 2 . Then we have:
(a) L Z 1 φ , L Z 2 φ , h , h 1 and h 2 are symmetric operators;
(b) ∇ X Z = −φX − φ h X for every X ∈ Γ(T M);
(c) h •φ + φ • h = 0 and h i •φ i + φ i • h i = 0 for i = 1, 2;
(d) Tr h = Tr h 1 = Tr h 2 = 0;
(e) α i • h = α i • h j = 0 for every i, j = 1, 2.
To prove this we need the following:
Lemma 3.5. Let (α 1 , α 2 , φ, g)
be an MCP on a manifold M with Reeb vector fields Z 1 and Z 2 . Then for every X ∈ Γ(T M), ∇ X Z 1 and ∇ X Z 2 are both tangent to the kernels of α 1 and α 2 .
Proof. Since ∇ Z i Z j = 0, it is enough to take X horizontal. Note also that α 1 (∇ X Z 2 ) = −α 2 (∇ X Z 1 ) and α 1 (∇ X Z 1 ) = α 2 (∇ X Z 2 ) = 0. Now α 1 (∇ X Z 2 ) = g(∇ Z 2 X + [X, Z 2 ], Z 1 ) = α 1 ([X, Z 2 ]) + Z 2 α 1 (X) − g(X, ∇ Z 2 Z 1 ) = 0 since dα 1 (X, Z 2 ) = 0.

Proof of Theorem 3.4. (a) We want to show that g(X, (L Z j φ)Y ) = g((L Z j φ)X, Y ) for j = 1, 2.
We prove the property for j = 1, since the other case is similar.
For X = Z i , i = 1, 2 we have g ((L Z 1 φ)Z i , Y ) = 0 and g (Z i , (L Z 1 φ)Y ) = 0. The same holds for Y = Z i , i = 1, 2.
Then we have to prove the symmetry of L Z 1 φ for X, Y tangent to ker α 1 ∩ ker α 2 . By Corollary 3.3 we have ∇ Z i φ = 0 for i = 1, 2. For X, Y ∈ ker α 1 ∩ ker α 2 , we have:
g ((L Z 1 φ)X, Y ) =g (−∇ φX Z 1 + φ(∇ X Z 1 ), Y ) =g(Z 1 , ∇ φX Y ) − g(∇ X Z 1 , φY ) =α 1 (∇ φX Y ) + α 1 (∇ X φY ) =α 1 (∇ Y φX) + α 1 (∇ φY X) =g (X, (L Z 1 φ)Y ) ,
where we have used that Z 1 is orthogonal to X, Y ∈ ker α 1 ∩ ker α 2 and that for an MCP the tensors N 2 1 and N 2 2 vanish. It is clear that h is symmetric as well and after restriction this is also true for h 1 and h 2 . (b) By Corollary 3.2, for i = 1, 2, and for every X, Y ∈ Γ(T M), we have:
2g ((∇ X φ)Z i , Y ) =g N 1 (Z i , Y ), φX − 2dα i (φY, X) =g φ 2 [Z i , Y ] − φ[Z i , φY ], φX − 2dα i (φY, X) = − g (φ(L Z i φ)Y ), φX) − 2dα i (φY, X) = − g ((L Z i φ)Y ), X) + 2 j=1 α j ((L Z i φ)Y ) α j (X) − 2dα i (φY, X) = − g ((L Z i φ)Y, X) − 2dα i (φY, X).
Then we obtain:
2 i=1 2g ((∇ X φ)Z i , Y ) = 2 i=1 −2g ((L Z i φ)Y, X) − 2dα i (φY, X) = − 2 2 i=1 g ((L Z i φ)Y, X) − 2g(φY, φX) = − 2 2 i=1 g ((L Z i φ)Y, X) − 2g(Y, X) + 2 2 i=1 α i (X)α i (Y )
and then
g((∇ X φ)Z, Y ) = − g(h Y, X) − g(X, Y ) + 2 i=1 α i (X)α i (Y ) = − g(h Y, X) − g(X, Y ) + 2 i=1 g (α i (X)Z i , Y ) .
Since the last equation is true for every X, Y ∈ Γ(T M), this implies
−φ∇ X Z = (∇ X φ)Z = − h X − X + α 1 (X)Z 1 + α 2 (X)Z 2 .
Applying φ to the last equation and using Lemma 3.5 gives:
∇ X Z = −φX − φ h X + α 1 (∇ X Z)Z 1 + α 2 (∇ X Z)Z 2 = −φX − φ h X.
(c) For X, Y ∈ Γ(T M), by the symmetry of h and the formula ∇ X Z = −φX − φ h X, we have:
2g(X, φY ) =2(dα 1 + dα 2 )(X, Y ) = 2 i=1 g(∇ X Z i , Y ) − g(∇ Y Z i , X) =g(∇ X Z, Y ) − g(∇ Y Z, X) = − g(φX, Y ) + g(φY, X) − g(φ h X, Y ) + g(φ h Y, X) = − g(φ h X, Y ) + g(φ h Y, X) + 2g(X, φY ) =g(h φY + φ h Y, X) + 2g(X, φY ),

which implies that h •φ + φ • h = 0. After restriction of h and φ to T F i for i = 1, 2, we obtain h i •φ i + φ i • h i = 0.

(d) Since the endomorphism h is symmetric, at every point p ∈ M there exists an eigenbasis of T p M. Let V be an eigenvector relative to the eigenvalue λ. Then, by (c), we have:
h p (φ p V ) = −λ(φ p V ),
which means that −λ is also an eigenvalue, relative to the eigenvector φ p V , and then the trace of h p vanishes for every p ∈ M. Similarly we have Tr h 1 = Tr h 2 = 0. (e) The last property follows easily from (c).
Corollary 3.6. Let (α 1 , α 2 , φ, g) be an MCP with decomposable φ and Reeb vector fields Z 1 , Z 2 . If the vector field Z = Z 1 + Z 2 is Killing, then we have
∇ X Z = −φX.
Proof. By Proposition 2.11, the vector field Z is Killing if and only if h = 0. Applying this to Theorem 3.4-(b), we get ∇ X Z = −φX.
A special case is given when both Reeb vector fields are Killing. A first example of the latter situation concerns the non-normal MCP on the nilpotent Lie group G and its closed nilmanifolds G/Γ described in Example 2.13.
One can also have Z i Killing by choosing normal structures (see Corollary 2.10). Then here is a second example, with a normal MCP but where the manifold is not a product of two metric contact manifolds: Example 3.7. Let M = SL 2 be the universal covering of the identity component of the isometry group of the hyperbolic plane H 2 endowed with an invariant Sasakian structure (α, φ, g) (see [17]) and N = M × M. It is well known that N admits cocompact irreducible lattices Γ (see [15]). This means that Γ does not admit any subgroup of finite index which is a product of two lattices of M. The manifold N can be endowed with the product MCP structure and by the invariance of the structure by Γ, the MCP descends to the quotient and is normal. Even if the local structure is like a product, globally the two characteristic foliations can be very interesting in the sense that both could have dense leaves.
Some curvature properties
In this section, for a manifold M carrying an MCP (α 1 , α 2 , φ, g) with decomposable φ, we set Z = Z 1 + Z 2 where Z 1 , Z 2 are the Reeb vector fields. We prove some properties of the curvature tensor and the Ricci curvature, which are analogous to those of metric contact structures (see e.g. [13]). As a consequence we prove the non-flatness of metrics associated to normal MCP's. This implies the non-existence of flat non-Kähler Vaisman manifolds.
4.1. The curvature. We denote by R the curvature tensor of the metric g, and by Ric its Ricci curvature.
Proposition 4.1. Let (α 1 , α 2 , φ, g) be an MCP with decomposable φ on a manifold M. Then:
(∇ Z h)X = φX − h 2 φX − φ(R XZ Z) , (5)
1 2 (R ZX Z − φ(R ZφX Z)) = φ 2 X + h 2 X . (6)
Proof. Corollary 3.3 implies ∇ Z (φX) = φ(∇ Z X). Using this and Theorems 2.9 and 3.4, we apply φ to:
R ZX Z = ∇ Z ∇ X Z − ∇ X ∇ Z Z − ∇ [Z,X] Z = ∇ Z (−φX − φ h X) + φ[Z, X] + φ h[Z, X],
and we obtain:
φ(R ZX Z) =∇ Z (X + h X) − α 1 ∇ Z (X + h X) Z 1 − α 2 ∇ Z (X + h X) Z 2 − [Z, X] +α 1 ([Z, X])Z 1 + α 2 ([Z, X])Z 2 − h[Z, X] + α 1 (h[Z, X])Z 1 + α 2 (h[Z, X])Z 2 =(∇ Z h)X + ∇ X Z + h ∇ X Z =(∇ Z h)X − φX − φ h X + h(−φX − φ h X) =(∇ Z h)X − φX + h 2 φX,
which gives (5).
To prove (6) we first remark that R ZX Z is tangent to the kernels of α 1 and α 2 . Then we have:
R ZX Z = −φ 2 R ZX Z = φ 2 X − φ h 2 φX − φ (∇ Z h)X = φ 2 X + h 2 X − φ (∇ Z h)X .
Using the previous expression and taking the difference R ZX Z − φ(R ZφX Z) gives (6).
Theorem 4.2. Let (α 1 , α 2 , φ, g) be an MCP of type (h, k) with decomposable φ on a (2h + 2k + 2)-dimensional manifold M. Then we have:
(7) Ric(Z) = h + k − 1 2 Tr h 2 .
Moreover Ric(Z) = h + k if and only if Z is Killing.
Proof. Denote by K(Z, X) the sectional curvature of the plane determined by {Z, X}. By using (6) for X of unit length and orthogonal to Z 1 and Z 2 , and recalling that g(Z, Z) = 2, we obtain
K(Z, X) + K(Z, φX) = − 1 2 g(R ZX Z − φ(R ZφX Z), X) = − g(φ 2 X + h 2 X, X) = 1 − g(h 2 X, X).
Let {Z 1 , Z 2 , X 1 , · · · , X 2h+2k } be a local φ-basis, that is an orthogonal basis for which the X i have unit length and X 2i = φX 2i−1 . Then, since K(Z, Z 1 − Z 2 ) = 0, summing the previous identity over the basis vectors orthogonal to Z 1 and Z 2 gives formula (7). Finally, by (7), Ric(Z) = h + k if and only if Tr h 2 = 0 which, h being symmetric, is equivalent to h = 0, that is to Z being Killing by Proposition 2.11.

Corollary 4.3. Let (α 1 , α 2 , φ, g) be an MCP with decomposable φ and Z = Z 1 + Z 2 . Then h = 0 if and only if, for every X orthogonal to Z 1 and Z 2 , the value of the sectional curvature K(X, Z) is 1/2. Moreover in this case, for every Y we have:
(8) R Y Z Z = Y − α 1 (Y )Z 1 − α 2 (Y )Z 2 .
Proof. If for all plane sections (Z, X) with X orthogonal to Z 1 and Z 2 we have K(X, Z) = 1/2, then Ric(Z) = h + k, and h = 0 by Theorem 4.2.
Conversely, let h = 0. Using ∇ X Z = −φX, for X of unit length and orthogonal to Z 1 and Z 2 , and recalling that ∇ Z Z = 0, we have 2 K(Z, X) = − g(R ZX Z, X)
= − g(∇ Z ∇ X Z − ∇ [Z,X] Z, X) =g(∇ Z φX − φ[Z, X], X) =g(φ(∇ Z X) − φ[Z, X], X) =g(φ(∇ X Z + [Z, X]) − φ[Z, X], X) =g(φ(∇ X Z), X) =g(−φ 2 X, X) =1.
To obtain (8), we have just to set h = 0 in (5), apply φ on both sides and observe that an easy computation gives g(R Y Z Z, Z i ) = 0.
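As a side remark anticipating Section 5 (our observation, not in the original text): if the curvature vanishes on the vertical subbundle, then (6) gives h 2 = −φ 2 , and (7) becomes:

```latex
\operatorname{Ric}(Z)
  = h + k - \tfrac{1}{2}\operatorname{Tr}h^{2}
  = h + k - \tfrac{1}{2}\operatorname{Tr}(-\varphi^{2})
  = h + k - \tfrac{1}{2}(2h+2k) = 0 ,
```

consistent with the hypothesis R XY Z i = 0, since Tr(−φ 2 ) = dim M − 2 = 2h + 2k.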
4.2. Normal MCP's and Vaisman structures. By Proposition 2.5, for a normal contact pair the tensor h vanishes necessarily. Thus, by (7), we have:
Corollary 4.4. A metric associated to a normal contact pair with decomposable endomorphism cannot be flat.
In particular this is true for normal MCP's of type (h, 0) which are nothing but non-Kähler Vaisman structures modulo constant rescaling of the metric [10]. For this case the previous Corollary can be stated as:
Theorem 4.5. The metric of a non-Kähler Vaisman manifold cannot be flat.
Note that compactness is not needed here, while until now this result was known only for closed manifolds. Indeed a Vaisman structure is a particular locally conformally Kähler (lcK) structure. According to [19] (see also [16, Proposition 2.5]) a closed lcK manifold of constant curvature is necessarily Kähler. Hence flat non-Kähler Vaisman structures do not exist on closed manifolds.
4.3. In complete analogy to the case of contact metric manifolds, we want to define two tensor fields that are useful for the calculations in the problem of finding metric contact pairs with curvature vanishing on the vertical subbundle. First observe that for a metric contact pair with decomposable φ, we have:
2g((∇ X φ)W, φY ) − 2g((∇ X φ)φW, Y ) =g(N 1 (W, φY ) − N 1 (φW, Y ), φX) − 2dα 1 (φ 2 Y, X)α 1 (W ) − 2dα 2 (φ 2 Y, X)α 2 (W ) − 2dα 1 (φ 2 W, X)α 1 (Y ) − 2dα 2 (φ 2 W, X)α 2 (Y ) =α 1 (Y )g([φW, Z 1 ] − φ[W, Z 1 ], φX) + α 2 (Y )g([φW, Z 2 ] − φ[W, Z 2 ], φX) + α 1 (W )g([φY, Z 1 ] − φ[Y, Z 1 ], φX) + α 2 (W )g([φY, Z 2 ] − φ[Y, Z 2 ], φX) + 2dα 1 (Y, X)α 1 (W ) + 2dα 2 (Y, X)α 2 (W ) + 2dα 1 (W, X)α 1 (Y ) + 2dα 2 (W, X)α 2 (Y ) .(9)
Replacing W with φW in (9), we obtain the following:
2g((∇ X φ)φW, φY ) + 2g((∇ X φ)W, Y ) − 2g(Y, (∇ X φ)(α 1 (W )Z 1 + α 2 (W )Z 2 )) =α 1 (Y )g(−[W, Z 1 ] − φ[φW, Z 1 ], φX) + α 2 (Y )g(−[W, Z 2 ] − φ[φW, Z 2 ], φX) + 2dα 1 (φW, X)α 1 (Y ) + 2dα 2 (φW, X)α 2 (Y ) .(10)
Taking X, Y, W horizontal in (9) and in (10), we have: (11) g((∇ X φ)W, φY ) = g((∇ X φ)φW, Y ) .
(12) g((∇ X φ)φW, φY ) = −g((∇ X φ)W, Y ) .
Using (4) with X, Y, W horizontal we get
(13) g((∇ X φ)Y, W ) + g((∇ φX φ)φY, W ) = 0 .
For the curvature operator we have:
R XY Z = −∇ X (φY + φ h Y ) + ∇ Y (φX + φ h X) + φ[X, Y ] + φ h[X, Y ] = −(∇ X φ)Y + (∇ Y φ)X − (∇ X φ h)Y + (∇ Y φ h)X , (14)
which gives:
(15) g(R ZW X, Y ) = −g(X, (∇ W φ)Y ) − g(W, (∇ X φ h)Y ) + g(W, (∇ Y φ h)X) ,
or equivalently
(16) g(R ZX Y, W ) = −g(Y, (∇ X φ)W ) − g(X, (∇ Y φ h)W ) + g(X, (∇ W φ h)Y ) .
Now we define the following tensors:
Definition 4.6. For X, Y, W ∈ Γ(T M), set A(X, Y, W ) = − g(Y, (∇ X φ)W ) + g(φY, (∇ X φ)φW ) − g(Y, (∇ φX φ)φW ) − g(φY, (∇ φX φ)W ) ,(17)B(X, Y, W ) = − g(X, (∇ Y φ h)W ) + g(X, (∇ φY φ h)φW ) − g(φX, (∇ Y φ h)φW ) − g(φX, (∇ φY φ h)W ) .(18)
Taking X, Y, W horizontal and using (16), we obtain the following relation:
A(X, Y, W ) + B(X, Y, W ) − B(X, W, Y ) =g(R ZX Y, W ) − g(R ZX φY, φW ) + g(R ZφX Y, φW ) + g(R ZφX φY, W ) .(19)
The following lemma will be useful in the sequel:
Lemma 4.7.
For every X, Y, W horizontal, we have:
A(X, Y, W ) + B(X, Y, W ) − B(X, W, Y ) = −2g((∇ h X φ)Y, W )
Proof. For X, Y, W horizontal, using (11), (12) and (13) we have:
A(X, Y, W ) = − 2g(Y, (∇ X φ)W ) − 2g(Y, (∇ φX φ)φW ) = 0 .(20)
Also for X, Y, W horizontal, we calculate
B(X, Y, W ) = − g(X, (∇ Y φ) h W + φ(∇ Y h)W ) + g(X, (∇ φY h)W − φ h(∇ φY φ)W ) − g(φX, (∇ Y h)W − φ h(∇ Y φ)W ) − g(φX, (∇ φY φ) h W + φ(∇ φY h)W ) = − g(X, (∇ Y φ) h W ) + g(X, h φ(∇ φY φ)W ) + g(φX, φ h(∇ Y φ)W ) − g(φX, (∇ φY φ) h W ) . (21)

Now, we have

−g(φ h X, (∇ φY φ)W ) =g((∇ φY φ)φ h X, W ) = − g((∇ Y φ) h X, W ) =g(h X, (∇ Y φ)W ) , (22)
where in the second line we used (13), and furthermore we also have
−g(φX, (∇ φY φ) h W ) =g((∇ φY φ)φX, h W ) = − g((∇ Y φ)X, h W ) =g(X, (∇ Y φ) h W ) ,(23)
again using (13). This in turn gives
B(X, Y, W ) = 2g(h X, (∇ Y φ)W ) ,
and putting all this together, we obtain
A(X, Y, W ) + B(X, Y, W ) − B(X, W, Y ) =2g(h X, (∇ Y φ)W ) − 2g(h X, (∇ W φ)Y ) = − 2g((∇ h X φ)Y, W ).(24)
Corollary 4.8. If the curvature of an MCP with decomposable φ satisfies R XY Z 1 = R XY Z 2 = 0 for all X, Y , then for all horizontal vector fields
g((∇ h X φ)Y, W ) = 0.
Proof. The left hand side of the equation of Lemma 4.7 vanishes by the assumption on the curvature and by (19).
Curvature vanishing on the vertical subbundle
In this section we prove for MCP's the analogues of the results of the second named author on metric contact manifolds [11, 12]. Recall that for a contact pair we defined the vertical subbundle as the subbundle V spanned by the Reeb vector fields Z 1 , Z 2 . We will say that an MCP has curvature vanishing on the vertical subbundle if the following condition is satisfied for all vector fields X, Y :

R XY Z i = 0 , for i = 1, 2 . (25)

The standard example of such a situation is the product of the unit tangent bundles of E h+1 and E k+1 , since each of them is endowed with a metric contact structure with curvature vanishing along the corresponding Reeb vector field (see [12]). Actually Theorem 5.1 below says exactly that locally this is the only possibility.
Before the statement of the theorem, we make some remarks. First, observe that a contact pair of type (0, 0) is in some sense trivial, because we would like to have an induced contact form on the leaves of at least one of the characteristic foliations. Moreover, if the manifold is endowed with an associated metric then it is flat and then locally isometric to E 2 . Since the forms composing the contact pair play a symmetric role, to exclude the former trivial case we only consider contact pairs of type (h, k) with h ≥ 1.
Theorem 5.1. Let M be a (2h + 2k + 2)-dimensional manifold endowed with a metric contact pair (α 1 , α 2 , φ, g) of type (h, k) (with h ≥ 1) and decomposable φ. If the curvature of g vanishes on the vertical subbundle, then M is locally isometric to
E h+1 ×S h (4)×E k+1 ×S k (4) if k ≥ 1 or to E h+1 × S h (4) × E 1 if k = 0.
Proof. We split the proof into several steps.

a) We have seen that the decomposability of φ implies that h is also decomposable, and we set φ = φ 1 ⊕ φ 2 and h = h 1 ⊕ h 2 as in Section 3.2. If the curvature tensor R vanishes on the vertical subbundle, by (6) we have h 2 = −φ 2 and then, since h is symmetric, for its rank we have rk h = rk h 2 = rk φ 2 = rk φ = 2h + 2k. If X is an eigenvector of h corresponding to a non-zero eigenvalue λ then it is orthogonal to the Reeb vector fields (which are 0-eigenvectors) and we have:
λ 2 g(X, X) = g(λX, λX) = g(h X, h X) = g(h 2 X, X) = −g(φ 2 X, X) = g(X, X).
Thus the non-zero eigenvalues of h are ±1. By restriction the same is true for the eigenvalues of h 1 and h 2 . Moreover the (±1)-eigenspaces of h are direct sums of the (±1)-eigenspaces of h 1 and h 2 at every point. Also observe that φ (respectively φ j ) intertwines the eigenspaces corresponding to +1 and −1, because it anticommutes with h (respectively h j ).
Denote by [+1] and [−1] the eigendistributions of h relative to the eigenvalues +1 and −1, and by [±1] j those of h j . For X, Y sections of [−1] (respectively [−1] j ), we have ∇ X Z = −φX − φ h X = 0 and then

0 = R XY Z = −∇ [X,Y ] Z = φ[X, Y ] + φ h[X, Y ] ,

which implies h φ[X, Y ] = φ[X, Y ] and then φ[X, Y ] ∈ [+1] (respectively [+1] j ). Applying φ to both sides of the equation gives φ 2 [X, Y ] ∈ [−1] (respectively [−1] j ). Calculating further we have

φ 2 [X, Y ] = −[X, Y ] + dα 1 (X, Y )Z 1 + dα 2 (X, Y )Z 2 = −[X, Y ] .
The last equation is clear if X, Y are tangent to different foliations, since in this case the dα i vanish, and is easily deduced from the following observation when X, Y are tangent to the same foliation. In fact if X, Y are tangent to the same foliation, say F 1 for example, we have dα 2 (X, Y ) = dα 1 (X, Y ) + dα 2 (X, Y ) = g(X, φY ) = 0 since X ∈ [−1] and φ intertwines the (±1)-eigenspaces.
The same calculations give [X,
Z i ] ∈ [−1] (respectively [−1] j ) for every X ∈ [−1] (re- spectively [−1] j ) and [φY, Z i ] ∈ [−1] (respectively [−1] j ) for every Y ∈ [+1] (respectively [+1] j ). In particular this implies that the distributions [−1] j , [−1], [−1] j ⊕ RZ i , [−1] ⊕ RZ i , [−1] j ⊕ RZ 1 ⊕ RZ 2 and [−1] ⊕ RZ 1 ⊕ RZ 2 are integrable.
b) According to the local model for contact pairs of type (h, k), around every point there exist local coordinates (u 0 , · · · , u 2k , v 0 , · · · , v 2h ) such that ∂ ∂u 0 , · · · , ∂ ∂u 2k span Ker α 1 ∩ Ker dα 1 and ∂ ∂v 0 , · · · , ∂ ∂v 2h span Ker α 2 ∩ Ker dα 2 . By the integrability of [−1] 2 ⊕ RZ 2 (respectively [−1] 1 ⊕ RZ 1 ), these local coordinates can be chosen so that, in addition, ∂ ∂u 0 , · · · , ∂ ∂u k span [−1] 2 ⊕ RZ 2 and ∂ ∂v 0 , · · · , ∂ ∂v h span [−1] 1 ⊕ RZ 1 . Let us define the vector fields
X i = ∂ ∂u k+i + k p=0 f ip ∂ ∂up , for 1 ≤ i ≤ k and Y s = ∂ ∂v h+s + h q=0f sq ∂ ∂vq for 1 ≤ s ≤ h,
where the functions f ip ,f sq are chosen in such a way that X i ∈ [+1] 2 and Y s ∈ [+1] 1 . In general those functions depend on all coordinates. It is clear that at every point of M, the X i 's and Y s 's form a basis for [+1] 2 and [+1] 1 respectively.
A direct calculation with local coordinates gives:
∂ ∂u p , X i ∈ [−1] 2 ⊕ RZ 2 , 0 ≤ p ≤ k , 0 ≤ i ≤ k , ∂ ∂v q , Y s ∈ [−1] 1 ⊕ RZ 1 , 0 ≤ q ≤ h , 0 ≤ s ≤ h , ∂ ∂u p , Y s ∈ [−1] 1 ⊕ RZ 1 , 0 ≤ p ≤ k , 0 ≤ s ≤ h , ∂ ∂v q , X i ∈ [−1] 2 ⊕ RZ 2 , 0 ≤ q ≤ h , 0 ≤ i ≤ k , and [X i , Y s ] ∈ [−1] ⊕ RZ 1 ⊕ RZ 2 . Then we have ∇ [ ∂ ∂up ,X i ] Z = ∇ [ ∂ ∂up ,Ys] Z = ∇ [ ∂ ∂vq ,X i ] Z = ∇ [ ∂ ∂vq ,Ys] Z = ∇ [X i ,Ys] Z = 0 .
The assumption on the curvature implies R ∂ ∂up X i Z = 0, 0 ≤ p ≤ k, and we obtain (26)
0 = ∇ ∂ ∂up ∇ X i Z − ∇ X i ∇ ∂ ∂up Z = −2∇ ∂ ∂up φX i .
Since the ∂ ∂up 's span [−1] 2 ⊕ RZ 2 and the connection is tensorial in the first argument, we have:
(27) ∇ φX j φX i = 0 , ∀i, j .
In a similar way we obtain the following formulae: ∇ φYr φY s = 0 , ∀r, s , ∇ φX i φY r = 0 , ∀i, r , ∇ φYr φX i = 0 , ∀i, r .
These imply that the integral submanifolds of [−1] ⊕ RZ 1 ⊕ RZ 2 are totally geodesic and flat.
A direct calculation with local coordinates shows that [X i , X j ] is in [−1] 2 ⊕ RZ 2 . Differentiating g(X i , Z) = 0 along X j we obtain g(∇ X j X i , Z) = 0. Interchanging i and j and taking the difference we get 0
= g([X i , X j ], Z) = g([X i , X j ], Z 2 ) since [X i , X j ] is orthogonal to Z 1 . This actually means that [X i , X j ] is in [−1] 2 .
Then we have
0 = R X i X j Z = −2∇ X i φX j + 2∇ X j φX i , or equivalently (29) ∇ X i φX j = ∇ X j φX i .
Similarly we obtain (30) ∇ Yr φY s = ∇ Ys φY r .
With similar calculations, using the fact that
[X i , Y r ] ∈ [−1] ⊕ RZ 1 ⊕ RZ 2 , we obtain (31) ∇ X i φY r = ∇ Yr φX i .
Equations (29)-(31) can also be written as follows:
φ[X i , X j ] = −(∇ X i φ)X j + (∇ X j φ)X i , φ[Y r , Y s ] = −(∇ Yr φ)Y s + (∇ Ys φ)Y r , φ[X i , Y r ] = −(∇ X i φ)Y r + (∇ Yr φ)X i .(32)
Using (25) and (27) we obtain
0 = R X i φX j Z = −∇ [X i ,φX j ] Z or equivalently φ[X i , φX j ] + φ h[X i , φX j ] = 0 .
Applying φ and recalling that h[X i , φX j ] is in the kernels of α 1 and α 2 , we get
−[X i , φX j ] + µZ 1 + νZ 2 = h[X i , φX j ] ,
for some functions µ, ν.
Taking the scalar product with X l and using the symmetry of h, we obtain
−g([X i , φX j ], X l ) = g(h[X i , φX j ], X l ) = g([X i , φX j ], h X l ) = g([X i , φX j ], X l ) ,
and finally
(33) g([X i , φX j ], X l ) = 0 .
Similarly we have
g([X i , φX j ], Y s ) = g([Y r , φY s ], X l ) = 0 , g([Y r , φX j ], X i ) = g([Y r , φX j ], Y s ) = 0 , g([φY r , X i ], X l ) = g([Y r , φY s ], Y t ) = g([X i , φY r ], Y s ) = 0 .(34)
c) Now we want to show that ∇ X i X j ∈ [+1] ⊕ RZ 1 ⊕ RZ 2 . In fact, using Corollary 4.8 and (27) we have
0 = g ((∇ φX i φ)X j , φX l ) = −g (φ(∇ φX i X j ), φX l ) = −g (∇ φX i X j , X l ) ,
where we have used the fact that X l ∈ ker α 1 ∩ ker α 2 . Since g(X j , φX l ) = 0, we obtain:
g (∇ X i X j , φX l ) = − g (X j , ∇ X i φX l ) = − g (X j , ∇ φX l X i + [X i , φX l ]) = − g(X j , [X i , φX l ]) =0 ,
where in the last equality we have used (33).
Similarly we obtain g(∇ φYr X i , X j ) = 0 and in turn g(∇ X i X j , φY r ) = 0. Therefore ∇ X i X j ∈ [+1] ⊕ RZ 1 ⊕ RZ 2 as desired.

d) We want to show that [+1] is integrable. For X, Y, W ∈ [+1] we have:
0 =R XY Z = ∇ X (−2φY ) + ∇ Y (2φX) + φ[X, Y ] + φ h[X, Y ] = − 2(∇ X φ)Y + 2(∇ Y φ)X − φ[X, Y ] − h φ[X, Y ] . (35)

Moreover we also have

0 =dα i (X, φY ) = − dα i (φX, Y ) = 1 2 α i ([φX, Y ]) , (37)

and, for X ∈ [−1] and Y ∈ [+1],

0 =R XY Z = −2∇ X φY + φ[X, Y ] + φ h[X, Y ] = − 2(∇ X φ)Y − φ∇ X Y − φ∇ Y X − h φ∇ X Y + h φ∇ Y X . (38)
Taking the scalar product with W ∈ [−1], we obtain
0 = − 2g((∇ X φ)Y, W ) − g(φ∇ X Y, W ) − g(φ∇ Y X, W ) − g(φ∇ X Y, h W ) + g(φ∇ Y X, h W ) = − 2g((∇ X φ)Y, W ) − 2g(φ∇ Y X, W ) , (39)

which implies g((∇ X φ)Y, W ) = −g(φ∇ Y X, W ) = g(∇ Y X, φW ) . Since g((∇ X φ)Y, W ) = 0 by Corollary 4.8, we have ∇ Y X ∈ [−1] ⊕ V.
Next observe that for X, Y horizontal, we have:
2g((∇ X φ)Z i , Y ) =g(N 1 (Z i , Y ), φX) − 2dα i (φY, X) =g(φ 2 [Z i , Y ] − φ[Z i , φY ], φX) − 2dα i (φY, X) =g(−φ(L Z i φ)Y, φX) − 2dα i (φY, X) = − g((L Z i φ)Y, X) − 2dα i (φY, X) , (40)

and then g((∇ X φ)Z i , Y ) is symmetric in X and Y . Now for Y ∈ [+1], X ∈ [−1]
, taking the scalar product of (38) with Z i (i = 1, 2), we get:
0 =g((∇ X φ)Y, Z i ) = g((∇ Y φ)X, Z i ) = − g((∇ Y φ)Z i , X) = g(φ∇ Y Z i , X) = − g(∇ Y Z i , φX) . (41)

Next, using (40), we compute:

2g ((∇ X i φ)X j , Z 2 ) =g([Z 2 , φX j ] − φ[Z 2 , X j ], X i ) + 2g(X j , X i ) =g([Z 2 , X j ], φX i ) + 2g(X j , X i ) =g(∇ Z 2 X j − ∇ X j Z 2 , φX i ) + 2g(X j , X i ) = − g(X j , ∇ Z 2 φX i ) + g(Z 2 , ∇ X j φX i ) + 2g(X j , X i ) =g(Z 2 , ∇ X i φX j ) + 2g(X j , X i ) =g(Z 2 , (∇ X i φ)X j ) + 2g(X j , X i ) (42)
which gives
(43) g ((∇ X i φ)X j , Z 2 ) = 2g(X i , X j ) .
Using (43) we obtain:
g((∇_{X_i}φ)X_j, Z_1) = g(∇_{X_i}φX_j, Z_1) = −g(φX_j, ∇_{X_i}Z_1) = −g(φX_j, ∇_{X_i}Z) + g(φX_j, ∇_{X_i}Z_2) = −g(φX_j, −2φX_i) − g(∇_{X_i}φX_j, Z_2) = 2g(X_j, X_i) − g((∇_{X_i}φ)X_j, Z_2) = 0 .   (44)

(∇_Xφ)Y = 2g(X, Y)Z_2 .   (45)
Moreover (32) implies [X_i, X_j] = 0 and then [+1]_2 is integrable. With a similar argument we see that [+1]_1 is also integrable. g) Set ḣ_2 = (1/2) L_{Z_2}φ_2, let L be a leaf of the foliation F_1 considered as a submanifold and X tangent to L. Let ∇̃ be the induced connection, σ the second fundamental form, A_{Z_1} the Weingarten operator in the direction Z_1 and ∇′ the connection in the normal bundle. Then we have:
∇_X Z = −φX − φhX = −φ_2 X − φ_2 h_2 X
∇_X Z = ∇_X Z_1 + ∇_X Z_2 = −A_{Z_1}X + ∇′_X Z_1 + ∇̃_X Z_2 + σ(X, Z_2) .
Comparing the previous equations we obtain
A Z 1 X = φ 2 (h 2 X −ḣ 2 X) = φ(h X −ḣ 2 X) ∇ ′ X Z 1 = −σ(X, Z 2 )
. Moreover it is clear that we have σ(Z 2 , Z 2 ) = 0 and A Z 1 Z 2 = 0.
We want to prove that h 2 =ḣ 2 and this is equivalent to proving the vanishing of A Z 1 . To do this we will prove the vanishing of g(A Z 1 X, Y ) for X, Y elements of the eigenbasis of h constructed before.
First observe that g(A Z 1 X, Z 2 ) = g(φ(h X −ḣ 2 X), Z 2 ) = 0 , and then we have to prove our statement for X, Y horizontal. Also note that, by direct calculation, we have:
g(A Z 1 φX, Y ) = g(A Z 1 X, φY ) .
Then it remains to prove that g(A Z 1 X i , X j ) and g(A Z 1 φX i , X j ) both vanish. For the first one:
g(A_{Z_1}X_i, X_j) = g(φ(hX_i − ḣ_2 X_i), X_j) = g(φ(X_i − ḣ_2 X_i), X_j) = g(ḣ_2 X_i, φX_j)
  = (1/2) g([Z_2, φ_2 X_i] − φ_2[Z_2, X_i], φ_2 X_j)
  = (1/2) ( g(∇_{Z_2}φ_2 X_i − ∇_{φ_2 X_i}Z_2, φ_2 X_j) − g(φ_2[Z_2, X_i], φ_2 X_j) )
  = (1/2) ( g(∇_{Z_2}φ_2 X_i, φ_2 X_j) − g([Z_2, X_i], X_j) ) = 0 ,   (46)

g(A_{Z_1}X_i, φX_j) = g(φ_2(h_2 X_i − ḣ_2 X_i), φ_2 X_j) = g(X_i − ḣ_2 X_i, X_j) = g(X_i, X_j) − g(ḣ_2 X_i, X_j)
  = g(X_i, X_j) − (1/2) g([Z_2, φ_2 X_i] − φ_2[Z_2, X_i], X_j)
  = g(X_i, X_j) − (1/2) g([Z_2, X_i], φ_2 X_j)
  = g(X_i, X_j) − (1/2) g(∇_{Z_2}X_i − ∇_{X_i}Z_2, φ_2 X_j)
  = g(X_i, X_j) − (1/2) ( −g(X_i, ∇_{Z_2}φ_2 X_j) + g(Z_2, (∇_{X_i}φ_2)X_j) ) = 0 ,   (47)

where at the end we used (26) and (45).
h) To prove that [+1] 2 is also totally geodesic, let us consider the operators H i = 1 2 L Z i φ. Each H i is symmetric by Theorem 3.4. From our observation above that h 2 =ḣ 2 a simple direct computation shows that we have
H 2 X i = X i and similarly H 1 Y r = Y r .
This implies that H 1 X i has no Y r or φY r component. Thus since A Z 1 vanishes we have H 1 X i = 0 and similarly H 2 Y i = 0.
Applying (45), we obtain
2g(X i , X j )Z 2 = ∇ X i φX j − φ∇ X i X j ,
which, differentiating along Y r , gives:
(48) 2(Y r g(X i , X j ))Z 2 + 2g(X i , X j )∇ Yr Z 2 = ∇ Yr ∇ X i φX j − (∇ Yr φ)∇ X i X j − φ∇ Yr ∇ X i X j .
Taking the scalar product with Z 2 , we get (49) 2Y r g(X i , X j ) = g(∇ Yr ∇ X i φX j , Z 2 ) − g((∇ Yr φ)∇ X i X j , Z 2 ) . Now, applying (4), we obtain
−g((∇ Yr φ)∇ X i X j , Z 2 ) =g(∇ X i X j , (∇ Yr φ)Z 2 ) = − g(φH 2 ∇ X i X j , φY r ) − dα 2 (φ∇ X i X j , Y r ) = − g(H 2 ∇ X i X j , Y r ) = − g(∇ X i X j , H 2 Y r ) =0 .(50)
Applying (4) again, we first note that, computing as we have been doing, the above properties of H_1 and H_2 yield g((∇_{Y_r}φ)X_j, Z_1) = g((∇_{Y_r}φ)X_j, Z_2) = 0. Corollary 4.8 then gives (∇_{Y_r}φ)X_j = 0 and therefore, by (4),

g(∇_{X_i}∇_{Y_r}φX_j, Z_2) = g((∇_{X_i}φ)∇_{Y_r}X_j, Z_2) = g(H_2 ∇_{Y_r}X_j, X_i) + dα_2(φ∇_{Y_r}X_j, X_i) = 2g(∇_{Y_r}X_j, X_i) .   (51)

Equation (49) then becomes

2Y_r g(X_i, X_j) = g(∇_{Y_r}∇_{X_i}φX_j, Z_2) = g(R_{Y_r X_i}φX_j, Z_2) + g(∇_{X_i}∇_{Y_r}φX_j, Z_2) = 2g(∇_{Y_r}X_j, X_i) ,   (52)
since [Y r , X j ] = 0 by (32). But, by the compatibility condition of the Levi-Civita connection we also have Y r g(X i , X j ) = g(∇ Yr X i , X j ) + g(X i , ∇ Yr X j ) .
Comparing this with (52), gives g(∇ Yr X i , X j ) = 0 , and in turn, again noting [Y r , X j ] = 0, g(Y r , ∇ X j X i ) = 0.
Since [+1] is totally geodesic, we also have g(∇ X j X i , φY r ) = 0, therefore [+1] 2 is totally geodesic.
i) We rewrite (45) as follows:

2g(X_i, X_j)Z_2 = (∇_{X_i}φ)X_j = ∇_{X_i}φX_j − φ∇_{X_i}X_j ,   (53)

and we want to apply ∇_{X_l} to (53). We first need the following calculations:

2g(X_i, X_j) = g((∇_{X_i}φ)X_j, Z_2) = −g(X_j, (∇_{X_i}φ)Z_2) = −g(X_j, −φ(∇_{X_i}Z_2)) = −g(φX_j, ∇_{X_i}Z_2) ,   (54)

and therefore g(∇_{X_l}Z_2, φX_m) = −2g(X_l, X_m). Applying ∇_{X_l} to (53) we obtain

2(X_l g(X_i, X_j))Z_2 + 2g(X_i, X_j)∇_{X_l}Z_2 = ∇_{X_l}∇_{X_i}φX_j − (∇_{X_l}φ)(∇_{X_i}X_j) − φ(∇_{X_l}∇_{X_i}X_j) .
Taking the scalar product with φX_m we get

−4g(X_i, X_j)g(X_l, X_m) = g(∇_{X_l}∇_{X_i}φX_j − φ(∇_{X_l}∇_{X_i}X_j), φX_m) ,   (55)

since g((∇_{X_l}φ)(∇_{X_i}X_j), φX_m) = 0 by Corollary 4.8. To conclude the proof we exhibit the curvature. To do this, interchange the roles of X_l and X_i in (55) and take the difference. Then we have

−4g(X_i, X_j)g(X_l, X_m) + 4g(X_l, X_j)g(X_i, X_m) = g(R_{X_l X_i}φX_j, φX_m) − g(R_{X_l X_i}X_j, X_m) .
The first term on the right vanishes by the local Riemannian product structure and the second term then gives the desired value of the curvature. If h or k is equal to 1, the corresponding [+1] i subbundle is 1-dimensional and the leaves of the characteristic foliation are flat.
Corollary 5.2. Let M be a (2h + 2k + 2)-dimensional manifold endowed with a metric contact pair (α 1 , α 2 , φ, g) of type (h, k) (with h ≥ 1) and decomposable φ, and such that the curvature of g vanishes on the vertical subbundle. If M is complete, then its Riemannian universal covering is isometric to
E h+1 × S h (4) × E k+1 × S k (4) if k ≥ 1 or E h+1 × S h (4) × E 1 if k = 0.
It is to be understood that when h (or k) is equal to 1, the S h (4) factor will just contribute another line to the Euclidean factor.
Proof. The Riemannian universal covering M̃ is locally isometric to M and then by Theorem 5.1 is locally isometric to E h+1 × S h (4) × E k+1 × S k (4) if k ≥ 1, or to E h+1 × S h (4) × E 1 if k = 0. Then one concludes by applying the de Rham Decomposition Theorem.
Corollary 5.3. Let M be a (2h + 2k + 2)-dimensional manifold endowed with a metric contact pair (α 1 , α 2 , φ, g) of type (h, k) (with h ≥ 1) and decomposable φ. If g is flat then h, k ≤ 1.
Date: October 31, 2011; MSC 2010 classification: primary 53C25; secondary 53B20, 53C12, 53B35. The first author was supported by the Project Start-up giovani ricercatori-2009 of Università degli Studi di Cagliari and by a Visiting Professor fellowship at the Université de Haute Alsace in June 2010 and in June 2011. The second and the third authors were supported by a Visiting Professor fellowship at the Università degli Studi di Cagliari in April 2011 and January 2010 respectively, financed by Regione Autonoma della Sardegna.
Definition 2.7 ([6]). A metric contact pair (MCP) on a manifold M is a four-tuple (α_1, α_2, φ, g) ...

K(Z, X_i) we obtain (7). Now Ric(Z) = h + k if and only if Tr h^2 = 0. Because h is symmetric, the trace of h^2 vanishes if and only if h = 0. Now use Proposition 2.11 to complete the proof. The following result generalizes to MCPs a theorem of Hatakeyama et al. [18]:

Theorem 4.3. Let (α_1, α_2, φ, g) be an MCP with decomposable φ on a (2h + 2k + 2)-dimensional manifold M. Then Z is Killing if and only if for all the plane sections (Z, X) ...

Moreover, if g is an associated metric, then L_{Z_i}φ = 0 if and only if Z_i is Killing.

Proposition 2.11. Let (α_1, α_2, φ, g) be an MCP with Reeb vector fields Z_1 and Z_2. Then h vanishes if and only if Z is Killing.

... j = 1, 2 (in particular the integral curves of the Reeb vector fields are geodesics);
(4) the Reeb action is totally geodesic (i.e. the orbits are totally geodesic two-dimensional submanifolds).

In the normal case, by Proposition 2.5, an immediate consequence is:

Corollary 2.10. If an MCP (α_1, α_2, φ, g) is normal, the Reeb vector fields Z_1 and Z_2 are Killing.

Now using the invariance of the α_i's under the flow of Z = Z_1 + Z_2 one can also prove the following:
Taking the inner product with W and noting that φW is in [−1], we obtain

0 = g(φW, [X, Y]) ,   (36)
which means that [X, Y ] is orthogonal to [−1]. Now let Y ∈ [+1] and X ∈ [−1]. By the
integrability of [−1], we have:
which means that ∇_Y Z_i is orthogonal to [+1]. This implies that [+1] is totally geodesic and then, since [−1] ⊕ RZ_1 ⊕ RZ_2 is integrable with totally geodesic leaves, the manifold splits as a local Riemannian product. f) Using equation (4) and the integrability of [−1] ⊕ RZ_1 ⊕ RZ_2 with totally geodesic leaves we have
Equations (43), (44) and Corollary 4.8 then give for every X, Y ∈ [+1] 2
because [Z_2, X_i] is in [−1]_2 + RZ_2, and where we have used (26) and (27). Now we have:

Note that from Theorem 2.9 we know that the integral submanifolds of V are totally geodesic. Moreover we proved that [+1]_1 and [+1]_2 are integrable and totally geodesic. We also know that [−1]_1 and [−1]_2 are integrable, totally geodesic and flat by (27) and (28). Then the manifold M splits locally as a Riemannian product of integral submanifolds of V, [±1]_1 and [±1]_2.
[1] K. Abe, On a class of Hermitian manifolds, Invent. Math. 51 (1979), 103-121.
[2] G. Bande, Formes de contact généralisé, couples de contact et couples contacto-symplectiques, Thèse de Doctorat, Université de Haute Alsace, Mulhouse, 2000.
[3] G. Bande, Couples contacto-symplectiques, Trans. Amer. Math. Soc. 355 (2003), 1699-1711.
[4] G. Bande, P. Ghiggini and D. Kotschick, Stability theorems for symplectic and contact pairs, Int. Math. Res. Not. 68 (2004), 3673-3688.
[5] G. Bande and A. Hadjar, Contact Pairs, Tohoku Math. J. 57 (2005), 247-260.
[6] G. Bande and A. Hadjar, Contact pair structures and associated metrics, Differential Geometry - Proceedings of the 8th International Colloquium, World Sci. Publ. (2009), 266-275.
[7] G. Bande and A. Hadjar, On normal contact pairs, Internat. J. Math. 21 (2010), 737-754.
[8] G. Bande and A. Hadjar, On the characteristic foliations of metric contact pairs, Harmonic maps and differential geometry, 255-259, Contemp. Math., 542, Amer. Math. Soc., Providence, RI, 2011.
[9] G. Bande and D. Kotschick, The Geometry of Symplectic pairs, Trans. Amer. Math. Soc. 358 (2006), 1643-1655.
[10] G. Bande and D. Kotschick, Contact pairs and locally conformally symplectic structures, Harmonic maps and differential geometry, 85-98, Contemp. Math., 542, Amer. Math. Soc., Providence, RI, 2011.
[11] D. E. Blair, On the non-existence of flat contact metric structures, Tohoku Math. J. 28 (1976), 373-379.
[12] D. E. Blair, Two remarks on contact metric structures, Tohoku Math. J. 29 (1977), 319-324.
[13] D. E. Blair, Riemannian geometry of contact and symplectic manifolds, Progress in Mathematics, 2nd Ed., vol. 203, Birkhäuser, 2010.
[14] D. E. Blair, G. D. Ludden and K. Yano, Geometry of complex manifolds similar to the Calabi-Eckmann manifolds, J. Differential Geom. 9 (1974), 263-274.
[15] A. Borel, Compact Clifford-Klein forms of symmetric spaces, Topology 2 (1963), 111-122.
[16] S. Dragomir and L. Ornea, Locally conformal Kähler geometry, Progress in Mathematics, vol. 155, Birkhäuser, 1998.
[17] H. Geiges, Normal contact structures on 3-manifolds, Tohoku Math. J. 49 (1997), 415-422.
[18] Y. Hatakeyama, Y. Ogawa and S. Tanno, Some properties of manifolds with contact metric structure, Tohoku Math. J. 15 (1963), 42-48.
[19] I. Vaisman, Some curvature properties of locally conformal Kähler manifolds, Trans. Amer. Math. Soc. 259 (1980), 439-447.

Dipartimento di Matematica e Informatica, Università degli Studi di Cagliari, Via Ospedale 72, 09124 Cagliari, Italy. E-mail address: [email protected]
Department of Mathematics, Michigan State University, East Lansing, MI 48824-1027, USA. E-mail address: [email protected]
Laboratoire de Mathématiques, Informatique et Applications, Université de Haute Alsace, 4 Rue des Frères Lumière, 68093 Mulhouse Cedex, France. E-mail address: [email protected]
Generative Adversarial Networks for geometric surfaces prediction in injection molding: Performance analysis with Discrete Modal Decomposition

Pierre Nagorny ([email protected]), Thomas Lacombe ([email protected]), Hugues Favrelière ([email protected]), Maurice Pillet, Eric Pairel
SYMME laboratory, Savoie Mont Blanc University, Annecy, France

Ronan Le Goff ([email protected]), Marlène Wali, Jérôme Loureaux
IPC Technical Center - Centre Technique Industriel de la Plasturgie et des Composites, Bellignat, France

Patrice Kiener ([email protected])
InModelia, Paris, France

DOI: 10.1109/ICIT.2018.8352405. arXiv:1901.10178 (https://arxiv.org/pdf/1901.10178v1.pdf)

Index Terms: Injection molding, Quality prediction, Thermography, Generative Adversarial Networks, Discrete Modal Decomposition

Abstract: Geometrical and appearance quality requirements set the limits of the current industrial performance in injection molding. To guarantee the product's quality, it is necessary to adjust the process settings in a closed loop. Those adjustments cannot rely on the final quality, because a part takes days to become geometrically stable. Thus, the final part geometry must be predicted from measurements on hot parts. In this paper, we use the recent success of Generative Adversarial Networks (GAN), with the pix2pix network architecture, to predict the final part geometry using only thermographic images of hot parts, measured right after production. Our dataset is really small, and the GAN learns to translate thermography to geometry. We first study prediction performance using different image similarity comparison algorithms. Moreover, we introduce the innovative use of the Discrete Modal Decomposition (DMD) to analyze the network predictions. The DMD is a geometrical parameterization technique using a modal space projection to geometrically describe surfaces. We study the GAN performance in retrieving the geometrical parameterization of surfaces.
I. INTRODUCTION
Thermoplastic injection molding can produce complex parts in large quantities. Quality requirements are increasing to guarantee the final product functionality. The final part quality depends on multiple settings and external non-controllable factors, from raw material hygrometry to in-mold pressure. Thus, closed-loop process settings adjustments are needed to achieve optimal quality [1]. Moreover, the final part quality can only be measured on stabilized parts, days after production: molded parts need to cool down while their internal mechanical constraints relax, so it is not possible to measure part quality just after production. To adjust the process settings for the next part in a closed loop, hot parts must be measured and the final part quality must be inferred from these measurements. Measurements and settings adjustment computations must be done within the industrial cycle time (less than thirty seconds). Thermographic imagery is a fast and easy-to-set-up measurement, which can be used on the production line. Inference of the final part quality from hot part measurements can be achieved using regressive models or neural networks: a previously learned model is fast to execute for inference on a recent computer. Convolutional Neural Networks [2] have been used to predict a continuous geometry measurement of the final part from thermographic images of hot parts [3]. However, a unique geometrical value is not meaningful for a human operator, and even more information could be extracted from thermographic images. Recent research on Generative Adversarial Networks [4] used as image translators is applicable to industrial problems. In this paper, we propose the use of a GAN to translate thermographic images into the final surface geometry (Fig. 1). We measure the prediction performance with different image similarity comparison methods.
Furthermore, we introduce the Discrete Modal Decomposition analysis [19] as an innovative tool to characterize the GAN generation performance in retrieving geometric parameterizations. With some enhancements, this method could be proposed as a real-time tool to inspect the future part geometry, even as an augmented-reality real-time overlay for human operators.
II. RELATED WORK
Regulation of injection molding is a productive literature subject, drawing on multiple advanced methods. Much work has been done using in-mold measurements with good success [5], but far fewer studies proposed to measure hot parts just after production. The serious disadvantage of in-mold measurements lies in their need for expensive sensors placed inside the mold, with a custom mold design for wire routing. Besides, robotics is now heavily deployed in industry, so hot part measurements can be done in-situ, in the production line and cycle time. If the measurement process is fast enough, every part can be measured. A robotic arm can move parts between different measurement stands. Thermographic imagery, 3D scanning, weighing or photography are easy to set up. Then, mathematical models must be constructed between hot part measurements and final part quality. Neural network models can benefit from multi-modal sensor fusion [6]. In this study, we work specifically on thermographic images as model input. Generative Adversarial Networks were introduced in 2014 and have since shown success in very specific image processing applications. Recent research used GANs to translate images, after learning a transformation model on paired images (pix2pix), or even with no pairing prior at all (CycleGAN) [7].
III. METHODS
A. Experimental dataset creation
In an industrial setting, it is usually difficult to create a big dataset. This paper aims to use a GAN network architecture to address this limit, since generative networks are known to work with smaller datasets; ours actually has ten times fewer elements than the datasets classically used for GANs. Our study is exploratory and aims to evaluate the performance of GANs under industrial constraints. The dataset was created using a Taguchi orthogonal L12 design of experiments [8], with twelve different trials. We chose to put two parts from each trial in the training dataset, with one exception: the parts of one trial are not in the training dataset at all, only in the validation dataset. These deliberately unlearned parts are used for further validation of the network performance. We used a robotic arm to grab each part right after molding and present it in front of a thermographic camera. A still frame was acquired for each part 10 seconds after its ejection. We then waited a month before the geometrical measurements, to be sure that the parts were geometrically stable. Surfaces were scanned on a confocal optical profilometer (Altimet AltiSurf 520). The scanning takes one hour; this duration is the actual limit on the size of our dataset.
From the nearly two hundred parts we produced and measured on-line, we have so far scanned only 37 parts: 23 parts are used as the training dataset and 14 parts are used for validation.
B. Dataset preprocessing
The robotic arm and the vacuum prehension device were not repeatable enough to guarantee pixel-to-pixel position similarity between measurements. Thus, the thermographic images were stabilized using the Lucas-Kanade optical flow tracking algorithm [9]. Images were normalized on the minimum and the maximum values of all images, then cropped and scaled to 71 by 71 pixels in grayscale, without multiple color channels. Finally, the images are upsampled (bicubic interpolation) to 128x128 to be used as network input. Upsampling from this small definition acts as a 1.8x1.8 pixel Gaussian filter. We voluntarily chose small images to simulate the industrial constraint where multiple parts must be imaged with a single camera, leading to a small image definition per part. After normalization and reduction to 8-bit images, the thermographic images have a resolution of 0.902 degrees Celsius per gray level, and the height map images have a resolution of 1.57 µm per gray level. This is quite low, but it is acceptable to characterize low-frequency geometric form defects. In our case, better results in terms of geometric precision could be obtained with floating-point inputs instead of 8-bit images, but the network size and training time would be far bigger.
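As a rough illustration of this pipeline, the following sketch (our own, not the authors' code) normalizes a temperature frame to 8 bits and upsamples it bicubically; the 71x71 input size and 128x128 output size come from the text, while the 20-250 °C temperature range is a made-up assumption chosen so that the 8-bit quantization step lands near the 0.902 °C resolution quoted above.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(frame, t_min, t_max):
    """Normalize a 71x71 thermographic frame to 8-bit grayscale and
    upsample it to 128x128 with bicubic interpolation (order=3)."""
    norm = np.clip((frame - t_min) / (t_max - t_min), 0.0, 1.0)
    img8 = np.round(norm * 255.0).astype(np.uint8)        # 8-bit quantization
    up = zoom(img8.astype(np.float32), 128.0 / 71.0, order=3)
    return up[:128, :128]

# Hypothetical dataset-wide temperature range (deg C) -- an assumption
t_min, t_max = 20.0, 250.0
frame = np.random.uniform(t_min, t_max, size=(71, 71))    # fake temperature map
out = preprocess(frame, t_min, t_max)
resolution = (t_max - t_min) / 255.0  # deg C per gray level after quantization
```

With this assumed range, the per-level resolution is (250 − 20)/255 ≈ 0.90 °C, matching the order of magnitude reported above.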
C. Pix2pix Generative Adversarial Networks architecture
A "Generative Adversarial Network" is composed of a generator and a discriminator, which are trained sequentially, one after the other. The generator is trained to produce realistic images from a random normal distribution. Then, the discriminator must distinguish the "fake" generated images from the real images. At each training loop, the generator receives the difference between the discriminator's output and the "fake" generated sample, and is trained to increase the error of the discriminator by producing a perfect counterfeit image. The discriminator is trained on real images. The convergence of the network is achieved when the generator and the discriminator reach an equilibrium point between counterfeiting performance and detection. The pix2pix framework [7] uses the deep convolutional GAN (DCGAN) architecture proposed by Radford, Metz and Chintala [10]. Pix2pix also uses conditional GANs [11], which learn their loss function from the observation of the input images. This framework simplifies many previous works and proposes the use of a "U-Net" architecture [12]. The U-Net architecture has skip connections between center-symmetric layers: each skip connection concatenates the channels of layer i with those of layer n − i, with n the total number of layers in the network. Pix2pix also uses a convolutional classifier, which works on small patches of the input image (PatchGAN). Thus, the convolutional layers are only receptive to patterns at the patch-size scale. The patch size choice must be investigated: for our problem, the patch size will certainly change the prediction performance at various geometrical scales. The generator and the discriminator have the same layer architecture: each layer combines a convolution and a LeakyReLU [14] activation function, while the upsampling part of the generator uses Rectified Linear Units [13]. In our study, we do not use any batch training nor batch normalization, because we have a small dataset.
Finally, pix2pix introduces an L1 distance loss between the generated images and the real ground-truth images: the generator must both fool the discriminator and minimize the L1 Manhattan distance to the real image. The complete network architecture is shown in Fig. 2. We have only trained for two hundred epochs for now. We use Xavier layer initialization [15], based on its success in the literature. We envisage hyperparameter tuning as a direct continuation of this research. Other architectures must also be evaluated, particularly the Perceptual Adversarial Network [16], which shows better performance in specific cases.
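The combined generator objective can be sketched as follows. This is a minimal numpy illustration of the loss terms only, not of the networks themselves; the λ = 100 weight is the value used in the original pix2pix paper, not a setting reported here, and the 14x14 patch grid is a hypothetical PatchGAN output size.

```python
import numpy as np

def generator_objective(d_fake, fake, real, lam=100.0):
    """Pix2pix-style generator loss sketch: a non-saturating adversarial term
    that rewards fooling the discriminator, plus the conditional L1 term that
    keeps the generated image close to the ground truth."""
    eps = 1e-7
    adversarial = -np.mean(np.log(d_fake + eps))  # d_fake: PatchGAN sigmoid outputs
    l1 = np.mean(np.abs(real - fake))             # Manhattan distance to ground truth
    return adversarial + lam * l1

d_fake = np.full((14, 14), 0.5)                   # hypothetical patch decisions
fake = np.zeros((128, 128))
real = np.full((128, 128), 0.1)
loss = generator_objective(d_fake, fake, real)
```

With an undecided discriminator (0.5 everywhere) the adversarial term contributes −log 0.5 ≈ 0.69, and the L1 term dominates through its large weight, which is exactly what drives the generated surfaces toward the measured ones.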
D. Pix2pix GAN training
The network was trained using the Adam stochastic gradient descent solver [17]. We kept the image augmentation techniques used in the pix2pix proposition [7]: random jitter and mirroring. The network training takes 3 hours on a Core i7 CPU. We will investigate multiple hyperparameter settings as soon as we have new GPU computing resources. We observe the importance of the conditional L1 loss in the training process (Fig. 3, Fig. 4). Training converged after 180 epochs. The training is thus relatively fast: it can be done overnight for industrial applications.
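For reference, one Adam update step looks like this. It is a generic sketch of the solver in [17]; the learning rate and β values are illustrative defaults (those of the pix2pix paper), not settings reported in this study.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first and second moment estimates
    drive a per-parameter adaptive step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)              # bias correction, step t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, np.array([0.5]), m, v, t=1)
```

On the first step the bias correction makes the update magnitude approximately equal to the learning rate, regardless of the gradient scale, which is what makes Adam forgiving on small, noisy datasets like this one.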
E. Discrete Modal Decomposition
In order to describe the surfaces geometrically and to check whether the GAN generation retrieves the geometrical features of the true images, we introduce here some shape parameterization methods, especially the Discrete Modal Decomposition (DMD). Global shape parameterization techniques rely on the identification of parameters which characterize the shape's geometrical elements as well as possible. In practice, descriptors are found by decomposing the surface into a descriptor space specific to each method. The DMD uses a decomposition space based on a vibration mechanics problem [19], [20]. Among the other widespread parameterization methods in the literature:
i. The Discrete Cosine Transform (DCT) decomposition [21] uses a cosine harmonics space to describe the surface.
ii. The Fourier series decomposition is based on the Discrete Fourier Transform (DFT). This method decomposes a surface into a sine and cosine harmonics space, more general than the DCT one.
iii. The spherical harmonics decomposition describes complex shapes in a spherical harmonics space. This technique is especially used to characterize 3D shapes.
For a plane geometry, the DMD descriptors are the natural vibration eigenmodes of the reference plane geometry (Fig. 5). Le Goïc [18] has shown that this method can characterize global as well as local surface geometries. We propose to use this method to parameterize the geometry of our samples, and to see whether this parameterization can be retrieved through the GAN image generation.
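To make the decomposition idea concrete, here is a minimal sketch using the DCT variant (i) above. It is our own illustration, not the authors' DMD code: the true DMD basis is built from vibration eigenmodes rather than cosine harmonics, and the synthetic surface is invented for the example.

```python
import numpy as np
from scipy.fft import dctn

def modal_spectrum(height_map, n_modes=10):
    """Project a surface height map onto a 2-D cosine-harmonics basis and
    keep the first low-frequency coefficients as global shape descriptors."""
    coeffs = dctn(height_map, norm='ortho')
    return coeffs[:n_modes, :n_modes].ravel()

# Synthetic surface: a low-frequency "bowed" form defect plus fine noise
x = np.linspace(-1.0, 1.0, 71)
surface = 5.0 * np.outer(1 - x**2, 1 - x**2) + 0.05 * np.random.randn(71, 71)
spectrum = modal_spectrum(surface)
```

The first coefficients capture the global bow while truncating the spectrum filters out roughness, which is the behavior that makes such a parameterization suitable for form-defect prediction.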
IV. PREDICTION PERFORMANCE ANALYSIS
We use multiple methods to determine the predictive performance of the network. We first compare the real measured height map images with the images generated by the GAN (Fig. 6). The validation is done on a dedicated dataset of 14 parts (Fig. 7). Simple statistical features and histogram comparisons with different metrics are used. Then, we introduce the Discrete Modal parameters as an innovative prediction performance tool for specific geometrical features.
A. Statistical and Haralick features comparisons
The generated images visually show good similarities with the real images. In order to validate this similarity, we compute different statistical features on each couple of real and generated images: the mean, median, kurtosis, skewness and multiple quantiles of the images. We also compute the Haralick features [22], as they are good textural descriptors, and we study the difference between those values. We compute the p-value to validate the results on this really small population (14 elements). The standard deviation on all the tested results is high and the p-value was above 5% on most of the metrics; thus, we cannot reject the null hypothesis [23], and we must increase the test dataset size to obtain significant results with these textural parameters. The only features with a statistically significant result were Haralick's difference variance, energy, entropy and homogeneity. The energy and homogeneity metrics indicate the uniformity of the image; the entropy measures the presence of random patterns. Thus, our real and generated images are similar in their variance distribution.
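Such co-occurrence-based texture features can be computed as in the following numpy sketch. This is a minimal illustration restricted to horizontal neighbor pairs and three of the features named above, not the full 14-feature Haralick set, and the gradient test image is invented for the example.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Energy, entropy and homogeneity from a gray-level co-occurrence
    matrix (horizontal neighbors only); img is 8-bit grayscale, quantized
    down to `levels` gray levels before counting pairs."""
    q = (img.astype(np.int64) * levels) // 256
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()                      # joint probability of gray pairs
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return energy, entropy, homogeneity

img = np.tile(np.arange(0, 256, 2, dtype=np.uint8), (128, 1))  # smooth gradient
energy, entropy, homogeneity = glcm_features(img)
```

On a smooth gradient like this, neighboring pixels fall in the same or adjacent gray levels, so homogeneity is close to 1; on a noisy texture the co-occurrence mass spreads off the diagonal and homogeneity drops, which is what makes these features useful for real-versus-generated comparison.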
B. Real vs generated image similarity
We avoided pixel-to-pixel comparisons because our generated images can be slightly translated, due to the dataset augmentation methods used. Histogram comparisons are robust to scaling and movements. We used various distance metrics on the histograms (Fig. 8): Bhattacharyya [24], Chi-squared, correlation [25], cosine (similar to the Euclidean L2), Hellinger, Kullback-Leibler [26], Manhattan L1 and Minkowski distances. We used the p-value test for the null hypothesis, and only the cosine and correlation distances were kept. We also compute the Structural Similarity (SSIM) [27] and the Peak Signal-to-Noise Ratio (PSNR), two widespread measures for image comparison. Results on the test dataset are shown in Table I. test06 and test06-bis are images obtained with the same settings of the design of experiments: no part with similar settings was present in the training dataset. The results on test06 and test06-bis are used to study the generalization capability of this predictive method. Although the test06 results are good, test06-bis is not at the same level of prediction quality, based on the cosine and correlation distances on histograms. The median value over the 14 parts shows the overall good predictive performance of the method. Further analysis is done with the Discrete Modal Decomposition.
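The retained histogram measures and the PSNR can be reproduced with a few lines of numpy. The images below are synthetic stand-ins (a random image plus small noise), not data from this study, and SSIM is omitted because it needs windowed local statistics.

```python
import numpy as np

def histogram(img, bins=256):
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h.astype(np.float64)

def cosine_similarity(h1, h2):
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))

def correlation_distance(h1, h2):
    return float(np.corrcoef(h1, h2)[0, 1])   # Pearson correlation of bin counts

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
real = rng.integers(0, 256, (128, 128)).astype(np.uint8)       # stand-in "measured"
noise = rng.integers(-3, 4, (128, 128))
generated = np.clip(real.astype(int) + noise, 0, 255).astype(np.uint8)

cos_sim = cosine_similarity(histogram(real), histogram(generated))
corr = correlation_distance(histogram(real), histogram(generated))
psnr_db = psnr(real, generated)
```

Because histograms discard pixel positions, a small rigid translation of the generated part leaves these scores essentially unchanged, which is exactly the robustness property motivating their use here.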
C. Discrete Modal Decomposition analysis
We introduce here a new performance indicator for the GAN predictions: the DMD analysis. The objective is to study whether the GAN is able to retrieve the surfaces in the DMD geometrical parameterization. DMD analysis is especially interesting in the case of plastic part deformation, since our objective is to anticipate deformation of the real part. The first DMD coefficients give the global part geometry, while the last DMD coefficients concern the roughness and very local deformations.
In plastic molding, we particularly have to anticipate the global part geometry, so the first modal coefficients are especially appropriate. The modal spectra of the real pictures are calculated and compared with those of the generated images (Fig. 9). The similarity between the two spectrum vectors for the 14 pairs of images has been calculated using the correlation coefficient and the cosine similarity (Table II). The results of the spectrum comparison show a very good similarity, with a median R² value of 0.98 and a median cosine similarity of 0.99 (very close to 1). Moreover, the standard deviations are very low for both measures, which shows strong consistency across the 14 tested parts. These measures bring to light the good performance of the GAN in retrieving the geometric parameterization of the parts' surfaces. They also show the opportunity of using DMD analysis as an innovative performance indicator for neural network image generation. This paper illustrates a preliminary study of the use of GANs to predict geometry parameters through DMD. For further developments, we also propose to study these results from "the modes viewpoint": instead of analyzing the correlation between the images' global modal spectra, we want to investigate each mode independently to see which ones are best predicted by the network. Through this work, we hope to show which types of shapes and deformations the network is able to generate best. As explained in III.E, DMD is not the only method to geometrically characterize surfaces; it could also be interesting to apply other types of decomposition and compare them with the DMD analysis. In this study, we have shown the possibility of using the pix2pix Generative Adversarial Network on a very small dataset, to comply with industrial constraints.
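The spectrum comparison step can be sketched as follows. We do not have the authors' DMD basis, so a 2-D discrete cosine transform (a related modal decomposition, cf. [21]) stands in for it here; the similarity metrics are the same ones reported in Table II:

```python
# Sketch of the modal-spectrum comparison between a real surface and a
# GAN-like prediction.  The DCT stands in for the DMD basis (assumption);
# the surfaces are synthetic.
import numpy as np
from scipy.fft import dctn

def modal_spectrum(surface, n_modes=10):
    # magnitudes of the lowest n_modes x n_modes modal coefficients
    coeffs = dctn(surface, norm='ortho')
    return np.abs(coeffs[:n_modes, :n_modes]).ravel()

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 64)
real_surf = np.outer(np.sin(np.pi * x), np.cos(np.pi * x))     # smooth "real" geometry
gen_surf = real_surf + rng.normal(0.0, 0.01, real_surf.shape)  # predicted geometry

s_real, s_gen = modal_spectrum(real_surf), modal_spectrum(gen_surf)
cos_sim = float(np.dot(s_real, s_gen) / (np.linalg.norm(s_real) * np.linalg.norm(s_gen)))
r2 = float(np.corrcoef(s_real, s_gen)[0, 1] ** 2)
print(cos_sim, r2)
```

Restricting the spectrum to the lowest modes mirrors the paper's focus on global form defects rather than roughness.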
Based on the measured similarity between the final image and the generated image, we show that the prediction is acceptable in the context of our very limited learning study. The quantity of information contained in the thermal image seems sufficient to predict the geometry in our specific application case. We introduce the Discrete Modal Decomposition (DMD) to analyze the prediction performance from a geometrical parameterization point of view. The DMD has a clear meaning for analyzing geometrical properties, but it could also be used to analyze the performance of any generative network. In thermoplastic injection molding, we do not need to predict the roughness (high DMD modes) but only geometric form defects (low DMD modes): our problem is to anticipate the global cold-part geometry, so the first modal coefficients are appropriate. Performance evaluation and hyperparameter tuning must be done on a larger dataset. Further studies can also be designed to analyze more precisely the GAN's ability to retrieve independent modes, to show which shapes can be best predicted. Finally, further research is needed on complete 3D model generation based on a thermographic prior, as the literature shows encouraging results in 2D-image-to-3D-shape transformation [28].
Fig. 1. Process adjustment based on thermography and GAN.

Fig. 2. Pix2pix UNet_128 GAN network architecture.

Fig. 3. Training loss over epochs.

Fig. 4. Real/generated image comparison during network training.

Fig. 5. Vector basis of modal descriptors (DMD) for a plane geometry [18].

Fig. 6. Right: real geometry image; center: generated geometry image; left: input thermographic image (images are normalized between 0 and 255).

Fig. 7. Complete test dataset for the trained network after 200 epochs.

Fig. 8. Histogram comparison for test01: left is real, right is generated image.

Fig. 9. DMD modal spectrum comparison between real and generated images.

V. CONCLUSION
TABLE I. IMAGES SIMILARITIES (similarity between real and generated images)

Part name         | Cosine distance | Correlation distance | PSNR  | SSIM
------------------|-----------------|----------------------|-------|-----
test01            | 0.95            | 0.92                 | 14.24 | 0.79
test06            | 0.90            | 0.83                 | 17.91 | 0.84
test06-bis        | 0.60            | 0.40                 | 13.59 | 0.81
MEDIAN (14 parts) | 0.90            | 0.84                 | 18.35 | 0.86
STD (14 parts)    | 0.10            | 0.15                 | 3.95  | 0.07
TABLE II. SPECTRUMS SIMILARITY (similarity between real and generated images' spectrum vectors)

Part name         | Cosine distance | Correlation coefficient R²
------------------|-----------------|---------------------------
test01            | 0.94            | 0.88
test06            | 0.98            | 0.96
test06-bis        | 0.99            | 0.99
MEDIAN (14 parts) | 0.99            | 0.98
STD (14 parts)    | 0.08            | 0.04
[1] S. Johnston, C. McCready, D. Hazen, D. VanDerwalker, and D. Kazmer, "On-line multivariate optimization of injection molding," Polym. Eng. Sci., vol. 55, no. 12, pp. 2743-2750, Jan. 2015.
[2] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[3] P. Nagorny et al., "Quality prediction in injection molding," in 2017 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2017, pp. 141-146.
[4] I. J. Goodfellow et al., "Generative Adversarial Networks," arXiv:1406.2661 [cs, stat], Jun. 2014.
[5] W. Michaeli and A. Schreiber, "Online control of the injection molding process based on process variables," Adv. Polym. Technol., vol. 28, no. 2, pp. 65-76, Jun. 2009.
[6] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, "Multimodal deep learning," presented at the 28th International Conference on Machine Learning (ICML), 2011.
[7] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," arXiv:1703.10593 [cs], Mar. 2017.
[8] G. Taguchi, Introduction to Quality Engineering: Designing Quality into Products and Processes. Tokyo: Quality Resources, 1986.
[9] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence, vol. 2, San Francisco, CA, USA, 1981, pp. 674-679.
[10] A. Radford, L. Metz, and S. Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," arXiv:1511.06434 [cs], Nov. 2015.
[11] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, "Context Encoders: Feature Learning by Inpainting," arXiv:1604.07379 [cs], Apr. 2016.
[12] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," arXiv:1505.04597 [cs], May 2015.
[13] X. Glorot, A. Bordes, and Y. Bengio, "Deep Sparse Rectifier Neural Networks," presented at the International Conference on Artificial Intelligence and Statistics (AISTATS), 2011, pp. 315-323.
[14] A. Maas, A. Hannun, and A. Ng, "Rectifier Nonlinearities Improve Neural Network Acoustic Models," in Proceedings of ICML, Jun. 2013.
[15] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[16] C. Wang, C. Xu, C. Wang, and D. Tao, "Perceptual Adversarial Networks for Image-to-Image Transformation," arXiv:1706.09138 [cs], Jun. 2017.
[17] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," arXiv:1412.6980 [cs], Dec. 2014.
[18] G. L. Goic, H. Favrelière, S. Samper, and F. Formosa, "Multi scale modal decomposition of primary form, waviness and roughness of surfaces," Scanning, vol. 33, no. 5, pp. 332-341, Oct. 2011.
[19] H. Favreliere, "Modal Tolerancing: From metrology to specifications," Oct. 2009.
[20] G. L. Goïc, M. Bigerelle, S. Samper, H. Favrelière, and M. Pillet, "Multiscale roughness analysis of engineering surfaces: A comparison of methods for the investigation of functional correlations," Mech. Syst. Signal Process., vol. 66-67, pp. 437-457, Jan. 2016.
[21] K. R. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications. San Diego, CA, USA: Academic Press Professional, Inc., 1990.
[22] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural Features for Image Classification," IEEE Trans. Syst. Man Cybern., vol. SMC-3, no. 6, pp. 610-621, Nov. 1973.
[23] D. Colquhoun, "An investigation of the false discovery rate and the misinterpretation of p-values," R. Soc. Open Sci., vol. 1, no. 3, p. 140216, Nov. 2014.
[24] A. Bhattacharyya, "On a Measure of Divergence between Two Multinomial Populations," Sankhyā: The Indian Journal of Statistics, vol. 7, no. 4, pp. 401-406, 1946.
[25] G. J. Székely, M. L. Rizzo, and N. K. Bakirov, "Measuring and testing dependence by correlation of distances," Ann. Stat., vol. 35, no. 6, pp. 2769-2794, Dec. 2007.
[26] S. Kullback and R. A. Leibler, "On Information and Sufficiency," Ann. Math. Stat., vol. 22, no. 1, pp. 79-86, Mar. 1951.
[27] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
[28] J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum, "Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling," arXiv:1610.07584 [cs], Oct. 2016.
arXiv:2208.07095v1 (https://export.arxiv.org/pdf/2208.07095v1.pdf)
A Newton-MR algorithm with complexity guarantees for nonconvex smooth unconstrained optimization
August 16, 2022
Yang Liu
Fred Roosta
In this paper, we consider variants of the Newton-MR algorithm, initially proposed in [62], for solving unconstrained, smooth, but non-convex optimization problems. Unlike the overwhelming majority of Newton-type methods, which rely on the conjugate gradient algorithm as the primary workhorse for their respective sub-problems, Newton-MR employs the minimum residual (MINRES) method. Recently, [51] established certain useful monotonicity properties of MINRES as well as its inherent ability to detect non-positive curvature directions as soon as they arise. We leverage these recent results and show that our algorithms come with desirable properties, including competitive first and second-order worst-case complexities. Numerical examples demonstrate the performance of our proposed algorithms.

However, in non-convex problems, where the Hessian matrix can be singular and/or indefinite, the classical Newton's method and its straightforward Newton-CG variant lack any convergence guarantees. To address this, many Newton-type variants have been proposed which aim at extending Newton-CG beyond strongly convex problems to more general settings, e.g., trust-region [21] and cubic regularization [14,15,56]. However, many of these extensions involve sub-problems that are non-trivial to solve, e.g., the sub-problems of trust-region and cubic regularization methods are non-linear and non-convex [76,77]. For instance, the CG-Steihaug method [69] is not guaranteed to solve the trust-region sub-problem to an arbitrary accuracy. This can negatively affect the convergence speed of the trust-region method. Indeed, if either negative curvature or the boundary is encountered too early, the CG-Steihaug method terminates and the resulting step is only slightly, if at all, better than the Cauchy direction [21].

In this light, there have been recent efforts in studying non-convex Newton-type methods whose sub-problems involve elementary linear algebra problems, which are much better understood.
Notably among these is [64], which enhances the classical Newton-CG approach with safeguards to detect non-positive curvature (NPC) directions in the Hessian that may arise during the iterations of CG. By exploiting NPC directions, the method in [64] can be applied to general non-convex settings and enjoys favorable complexity guarantees that have been shown to be optimal in certain settings. Stochastic variants of the method in [64] have also recently been considered in [79]. In fact, beyond CG, all conjugate direction methods such as conjugate residual (CR) [66] involve iterations that allow for ready access to NPC directions. Using this property, [26] studies variants of Newton's algorithm where CR replaces CG as the sub-problem solver and, by employing NPC directions, obtains asymptotic convergence guarantees for non-convex problems. Arguably, however, the iterative method-of-choice when it comes to real symmetric but potentially indefinite matrices is the celebrated Minimum Residual (MINRES) algorithm [19,58]. Inspired by this, and by reformulating the sub-problems as simple symmetric linear least-squares, a new variant of Newton's method, called Newton-MR, is introduced in [50,62], where the classical CG algorithm is replaced with MINRES. However, [50,62] are limited in their scope in that, unlike the Newton-CG alternatives in [64], the proposed Newton-MR algorithms can only be applied to a sub-class of non-convex objectives, known as invex problems [54]. This shortcoming lies in the fact that the algorithms proposed in [50,62] do not have the ability to leverage NPC directions.

Recently, [51] has established several non-trivial properties of MINRES that mimic, and in fact surpass, those found in CG. These include certain useful monotonicity properties as well as an inherent ability to detect NPC directions when they arise. These properties allow MINRES to be considered as an alternative sub-problem solver to CG for many Newton-type non-convex optimization algorithms.
Equipped with the results of [51], in this paper we aim to address the shortcomings of the algorithms in[50,62]and introduce variants of Newton-MR algorithms, which can be applied to general non-convex problems and enjoy highly desirable complexity guarantees. As compared with Newton-CG alternatives in [64], we will show that not only do our proposed Newton-MR variants have advantageous convergence properties, but they also have superior empirical performance in a variety of examples.Contributions and Main Results (Informal)Our contributions and main results are informally summarized below.(1) For optimization of non-convex optimization (1), we propose variants of Newton-MR (Algorithms 4 and 5), which under certain assumptions, provide the following guarantees, which have been shown to be optimal.(1.i) After at most O(ε −3/2 ) iterations, Algorithm 4 guarantees approximate firstorder optimality where ∇f (x) ≤ ε (Theorem 3). Furthermore, this is guar-
Introduction
Consider the unconstrained optimization problem
    min_{x ∈ ℝ^d} f(x),   (1)

where f : ℝ^d → ℝ is sufficiently smooth. Over the last few decades, many optimization algorithms have been developed to solve (1) in a variety of settings. Among them, the class of second-order algorithms plays a prominent role. The canonical example of such algorithms is arguably the classical Newton's method, whose iterations are typically written as x_{k+1} = x_k + α_k d_k with

    H_k d_k = −g_k,   (2)

where x_k, g_k = ∇f(x_k), H_k = ∇²f(x_k), and α_k are, respectively, the current iterate, the gradient, the Hessian matrix, and the step-size, which is often chosen using an Armijo-type line-search to enforce sufficient decrease in f [57]. When f is smooth and strongly convex, it is well known that the local and global convergence rates of the classical Newton's method are, respectively, quadratic and linear [13,55,57]. For such problems, if forming the Hessian matrix explicitly and/or solving (2) exactly is prohibitive, the update direction d_k is obtained by using the conjugate gradient (CG) method to approximately solve (2), resulting in the celebrated Newton-CG method [28,34,57].
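For concreteness, the Newton-CG iteration just described (approximately solving (2) with CG, then an Armijo-type backtracking line-search) can be sketched as follows, on a strongly convex quadratic where every step is well-defined; the example problem and tolerances are our assumptions:

```python
# Minimal sketch of the classical Newton-CG iteration with an Armijo-type
# backtracking line search, on a strongly convex quadratic (so all Hessians
# are positive definite and CG is applicable).
import numpy as np
from scipy.sparse.linalg import cg

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD Hessian of the quadratic
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A

x = np.zeros(2)
for _ in range(10):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d, _ = cg(H, -g, maxiter=50)          # approximately solve H d = -g
    alpha = 1.0
    while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):  # Armijo condition
        alpha *= 0.5
    x = x + alpha * d

x_star = np.linalg.solve(A, b)            # exact minimizer, for reference
print(x, x_star)
```

For a quadratic the full Newton step always satisfies the Armijo condition, so the backtracking loop exits immediately with alpha = 1 here.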
anteed in at mostÕ(ε −3/2 ) gradient and Hessian-vector product evaluations a (Theorem 4). (1.ii) After at most O(ε −3/2 ) iterations, Algorithm 5 guarantees, with probability one, approximate second-order optimality where ∇f (x) ≤ ε and λ min (∇f (x)) ≥ − √ ε (Theorem 5). With high probability, we then obtain second-order operation complexity of orderÕ(ε −7/4 ) (Theorem 6), which is then improved toÕ(ε −3/2 ) for functions with a certain structural property in regions near saddle points (Corollary 4).
(2) Of potential independent interest, we give a few novel properties of MINRES.
(2.i) We build on the results of [51] and provide further refined theoretical guarantees for MINRES in terms of obtaining a NPC direction and a certificate of positive (semi-)definiteness (or lack thereof) for ∇²f(x) (Theorems 1 and 2). (2.ii) We also perform a novel convergence analysis for MINRES, which improves upon the existing bounds in terms of dependence on the spectrum for indefinite matrices (Lemma 4).
a Following [64,65], we mainly consider gradient and Hessian-vector product evaluations as dominant costs in every iteration, which is henceforth referred to as operational complexity.
The rest of this paper is organized as follows. We end this section by introducing some definitions and notation used in this paper, as well as a brief review of the complexity guarantees for some related works. In Section 2, we first discuss descent properties of the directions obtained from MINRES. Subsequently, we study in detail the complexity of MINRES for obtaining such directions. We then introduce our Newton-MR variants in Section 3, accompanied by their convergence guarantees and complexity analysis. We then evaluate the empirical performance of our proposed algorithms in Section 4. Conclusions and further thoughts are gathered in Section 5. To be self-contained, we bring some algorithmic details of MINRES as well as its relevant properties in Appendix A.
Notation and Definitions
Throughout the paper, vectors and matrices are denoted by bold lower-case and bold upper-case letters, respectively, e.g., v and V. We use regular lower-case and upper-case letters to denote scalar constants, e.g., d or L. For a real vector v, its transpose is denoted by vᵀ. For two vectors v, y, their inner product is denoted by ⟨v, y⟩ = vᵀy. For a vector v and a matrix V, ‖v‖ and ‖V‖ denote the vector 2-norm and the matrix spectral norm, respectively. Subscripts denote the iteration counter of the Newton-MR algorithm, while superscripts denote the iterations of MINRES. For example, s_k^{(t)} refers to the t-th iterate of MINRES using ∇f(x_k) and ∇²f(x_k), where x_k denotes the k-th iterate of Newton-MR. For notational simplicity, we use g(x) ≜ ∇f(x) ∈ ℝ^d and H(x) ≜ ∇²f(x) ∈ ℝ^{d×d} for the gradient and the Hessian of f at x, respectively. At times, we may drop their dependence on x and/or the subscript, e.g., g = g_k = g(x_k) and H = H_k = H(x_k). For any t ≥ 1, the Krylov sub-space of degree t generated using g_k and H_k is denoted by K_t(H_k, g_k). We also denote by r_k^{(t)} = −g_k − H_k s_k^{(t)} the residual corresponding to s_k^{(t)}. When the context is clear, we may also drop the dependence of the MINRES iterates on x_k and use s_t in lieu of s_k^{(t)}, and r_t in lieu of r_k^{(t)}. Given two sets A and B, A \ B denotes the set subtraction A − B = A ∩ Bᶜ. The cardinality of a set A is denoted by |A|. The natural logarithm is denoted by log(·). Logarithmic factors in the "big-O" notation are represented by Õ(·).
Clearly, the interplay between H and g is the main underlying factor affecting the overall convergence behavior of iterative methods based on the Krylov sub-space K_t(H, g). This interplay is entirely encapsulated in the notion of the grade of g with respect to H; see [66] for more details.
Definition 1 (Grade of g w.r.t. H). The grade of g with respect to H is the positive integer g ∈ ℕ such that

    dim(K_t(H, g)) = t, if t ≤ g,   and   dim(K_t(H, g)) = g, if t > g.
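Definition 1 can be checked numerically: the grade equals the rank of the Krylov matrix [g, Hg, H²g, ...]. A small sketch (the helper name and example are ours):

```python
# Numerical check of Definition 1: the grade of g w.r.t. H is the rank of
# the Krylov matrix whose columns are H^j g, j = 0, ..., d-1.
import numpy as np

def grade(H, g, tol=1e-10):
    d = len(g)
    K = np.column_stack([np.linalg.matrix_power(H, j) @ g for j in range(d)])
    return np.linalg.matrix_rank(K, tol=tol)

H = np.diag([1.0, 2.0, 3.0])
g_eig = np.array([1.0, 0.0, 0.0])   # an eigenvector of H: grade 1
g_gen = np.array([1.0, 1.0, 1.0])   # components along all 3 eigenvectors: grade 3
print(grade(H, g_eig), grade(H, g_gen))
```

When g is an eigenvector, the Krylov sub-space never grows beyond one dimension, which is the t > g regime of the definition.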
Since the Hessian matrix encodes information about the geometry of the optimization landscape, Definition 2 describes what is often referred to as a NPC direction. This simply amounts to any direction that lies in the eigenspace corresponding to the nonpositive eigenvalues of H. Definition 2 (Nonpositive Curvature direction). Any non-zero z ∈ ℝ^d for which ⟨z, Hz⟩ ≤ 0 is called a nonpositive curvature (NPC) direction.
Under non-convex settings, we often seek to obtain solutions that satisfy certain approximate local optimality. Throughout this paper, we make use of the following popular notion of approximate optimality.
Definition 3 ((ε_g, ε_H)-Optimality). Given (ε_g, ε_H) ∈ (0, 1) × (0, 1), a point x ∈ ℝ^d is an (ε_g, ε_H)-optimal solution to the problem (1) if

    ‖g(x)‖ ≤ ε_g,           (3a)
    λ_min(H(x)) ≥ −ε_H.     (3b)
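A small sketch of how Definition 3 can be verified numerically at a candidate point, using the gradient norm for (3a) and the smallest Hessian eigenvalue for (3b); the helper name and the example function are ours:

```python
# Checking (ε_g, ε_H)-optimality (Definition 3) for a given gradient and
# Hessian; eigvalsh returns eigenvalues of a symmetric matrix in ascending
# order, so index 0 is λ_min.
import numpy as np

def is_approx_optimal(grad_val, hess_val, eps_g, eps_H):
    return bool(np.linalg.norm(grad_val) <= eps_g
                and np.linalg.eigvalsh(hess_val)[0] >= -eps_H)

# f(x, y) = x^2 - y^2 has a saddle at the origin: the gradient vanishes
# there, but λ_min(H) = -2, so (3b) fails for small ε_H.
H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]])
print(is_approx_optimal(np.zeros(2), H_saddle, 1e-2, 1e-2))   # False
# At a strict local minimum (H positive definite) both conditions pass.
H_min = np.array([[2.0, 0.0], [0.0, 2.0]])
print(is_approx_optimal(np.zeros(2), H_min, 1e-2, 1e-2))      # True
```

This is why (3b) matters: a first-order test alone cannot distinguish the saddle from the minimum.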
Complexity for Non-convex Second-order Algorithms
For optimization of (1), and in order to obtain approximate first-order optimality (3a), the iteration complexity of the classical Newton's method, when iterations are well-defined, has shown to be of the same order as that of steepest descent, namely O(ε −2 g ) [16]. In fact, an inexact trust-region method, which ensures at least a Cauchy (steepest-descent-like) decrease at each iteration, is shown to be of the same order [11,18,37,38,39]. Recent non-trivial modifications of the classical trust-region methods have also been proposed which improve upon the iteration complexity to O(ε −3/2 g ) [23,24,25]. These bounds can be shown to be tight [16] in the worst case. From the worst-case complexity point of view, cubic regularization methods have generally a better dependence on ε g compared to the trust region algorithms. More specifically, [56] showed that, under global Lipschitz continuity assumption on the Hessian, if the sub-problem is solved exactly, then the resulting cubic regularization algorithm achieves (3a) with an iteration complexity of O(ε −3/2 g ). These results were extended by the seminal works of [14,15] to an adaptive variant. In particular, the authors showed that the worst case complexity of O(ε −3/2 g ) can be achieved without requiring the knowledge of the Hessian's Lipschitz constant, access to the exact Hessian, or multi-dimensional global optimization of the sub-problem. These results were further refined in [18] where it was shown that, not only, multi-dimensional global minimization of the sub-problem is unnecessary, but also the same complexity can be achieved with mere one or two dimensional search. This O(ε −3/2 g ) bound has been shown to be tight [17]. To achieve approximate second-order optimality (3), iteration complexity bounds in the orders of O(max{ε −1 H ε −2 g , ε −3 H }) and O(max{ε −3 g , ε −3 H }) have been obtained in the context of trust-region methods in [18] and [36], respectively. 
Similar bounds were also given in [39] under probabilistic model. Bounds of this order have shown to be optimal with certain choices of ε g and ε H [18]. For cubic regularization methods, [18] showed that at most O(max{ε −2 g , ε −3 H }) iterations is required to obtain (3). With further assumptions on the inexactness of the sub-problem solution, [15,18] also show that one can achieve the iteration complexity O(max{ε −3/2 g , ε −3 H }), which is shown to be tight for any choice of ε g and ε H [16]. Variants of adaptive cubic regularization under inexact function, gradient, and or Hessian information have been shown to achieve the same convergence rate [5,7,44,74,76,78]. Related algorithms and extensions with similar complexity guarantees have also been studied [4,6,8,9,43,53].
Beyond trust-region and cubic-regularization frameworks, and most related to our work here, several line-search based methods have also recently been shown to achieve similar first and second-order complexity guarantees [63,64,65]. Better dependence on the optimality tolerance can be obtained if one assumes additional structure, such as invexity [50,62]. In addition to iteration complexity, the computational cost of each iteration has also been considered in several recent works, e.g., [2,22,64,65,79]. Taking ε_H² = ε_g = ε, the overall operation complexities in these works achieve the rate of Õ(ε_g^{−7/4}).
MINRES: Newton-MR's Workhorse
Recall that the original variant of Newton-MR, as studied in [62], relies on replacing the sub-problems of Newton-CG, i.e., the linear system in (2), with the ordinary least-squares formulation

min_{s ∈ R^d} ‖H_k s + g_k‖². (4)
In fact, the term "MR" refers to the sub-problems being in the form of Minimum Residual, i.e., least-squares. While there is a plethora of algorithms for approximately solving (4), it turns out that MINRES offers a catalog of beneficial properties that play a central role in optimization of (1). Among them, the most immediately relevant properties are those which guarantee descent for (1) and are discussed in Section 2.1. Further applicable properties are gathered in Appendix A.2.
Descent Properties
For solving (4), recall that the t-th iteration of MINRES, detailed in Algorithm 1, yields a direction s_k^{(t)} which satisfies [32]

s_k^{(t)} = argmin_{s ∈ K_t(H_k, g_k)} ‖H_k s + g_k‖², t = 1, 2, . . . , (5)

where K_t(H_k, g_k) = Span{g_k, H_k g_k, . . . , [H_k]^{t−1} g_k} denotes the Krylov sub-space of order t. Suppose g_k ≠ 0 (we will treat the case of saddle points later). Since 0 ∈ K_t(H_k, g_k), it follows that

⟨s_k^{(t)}, H_k g_k⟩ ≤ −‖H_k s_k^{(t)}‖²/2 < 0, t = 1, 2, . . . ,

as long as s_k^{(t)} ∉ Null(H_k) (which can be guaranteed if, for example, g_k ∉ Null(H_k)). In other words, s_k^{(t)} from MINRES, for all t ≥ 1, is a descent direction for ‖g(x)‖² at x_k. This observation forms the basis of the plain Newton-MR variant considered in [62]. Due to this descent property, once s_k^{(t)} satisfies a certain condition, the iterations of MINRES are terminated, and the update direction d_k = s_k^{(t)} is used within an Armijo-type line-search to obtain a step-size, guaranteeing decrease in ‖g(x)‖².

Algorithm 1 MINRES(H, g, η)
1: Input: Hessian H, gradient g, and inexactness tolerance η > 0
2: φ_0 = β̃_1 = ‖g‖, r_0 = −g, v_1 = r_0/φ_0, v_0 = s_0 = w_0 = w_{−1} = 0,
3: s_0 = 0, c_0 = −1, δ_1^{(1)} = τ_0 = 0, t = 1, D_type = 'SOL',
4: while True do
5:   q_t = H v_t, α_t = ⟨v_t, q_t⟩, q_t = q_t − β̃_t v_{t−1}, q_t = q_t − α_t v_t, β̃_{t+1} = ‖q_t‖
6:   [δ_t^{(2)}  ε_{t+1}; γ_t^{(1)}  δ_{t+1}^{(1)}] = [c_{t−1}  s_{t−1}; s_{t−1}  −c_{t−1}] [δ_t^{(1)}  0; α_t  β̃_{t+1}]
7:   if φ_{t−1} √((γ_t^{(1)})² + (δ_{t+1}^{(1)})²) ≤ η √(φ_0² − φ_{t−1}²) then
8:     D_type = 'SOL'
9:     return s_{t−1}, D_type
10:  end if
11:  if c_{t−1} γ_t^{(1)} ≥ 0 then
12:    D_type = 'NPC'
13:    return r_{t−1}, D_type
14:  end if
15:  γ_t^{(2)} = √((γ_t^{(1)})² + β̃_{t+1}²)
16:  if γ_t^{(2)} ≠ 0 then
17:    c_t = γ_t^{(1)}/γ_t^{(2)}, s_t = β̃_{t+1}/γ_t^{(2)}, τ_t = c_t φ_{t−1}, φ_t = s_t φ_{t−1},
18:    w_t = (v_t − δ_t^{(2)} w_{t−1} − ε_t w_{t−2})/γ_t^{(2)}, s_t = s_{t−1} + τ_t w_t
19:    if β̃_{t+1} ≠ 0 then
20:      v_{t+1} = q_t/β̃_{t+1}, r_t = s_t² r_{t−1} − φ_t c_t v_{t+1}
21:    end if
22:  else
23:    c_t = 0, s_t = 1, τ_t = 0, φ_t = φ_{t−1}, r_t = r_{t−1}, s_t = s_{t−1}
24:  end if
25:  t ← t + 1
26: end while

By its construction, the iterates of such a naïve variant of Newton-MR converge to the zeros of the gradient field, g(x). However, unless the objective function f(x) is invex [54], such points include saddle points or even (local) maxima. To design a new Newton-MR variant for more general non-convex optimization, a natural question perhaps is "when can MINRES generate a descent direction for the objective function, f(x)?". This question has very recently been thoroughly studied in [51]. The answer is based on the inherent ability of MINRES to detect NPC directions, if they exist, and to otherwise provide a certificate for positive semi-definiteness of the Hessian matrix. In fact, MINRES provides two complementary mechanisms to generate descent directions for f(x): one prior to the detection of non-positive curvature and one based on the detected NPC direction.
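The descent inequality associated with (5) is simple to verify numerically. The sketch below is ours (not the paper's implementation): it forms each iterate of (5) explicitly over an orthonormal Krylov basis and checks the guarantee ⟨s_t, Hg⟩ ≤ −‖Hs_t‖²/2 < 0 on a random indefinite Hessian.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                      # symmetric, generically indefinite
g = rng.standard_normal(6)

def krylov_basis(H, g, t):
    # Orthonormal basis of K_t(H, g) via Lanczos with reorthogonalization.
    V = np.zeros((len(g), t))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(1, t):
        w = H @ V[:, i - 1]
        for _ in range(2):
            w -= V[:, :i] @ (V[:, :i].T @ w)
        V[:, i] = w / np.linalg.norm(w)
    return V

ok = []
for t in range(1, 6):
    V = krylov_basis(H, g, t)
    c, *_ = np.linalg.lstsq(H @ V, -g, rcond=None)
    s = V @ c                          # s_t of (5)
    # descent for ||g(x)||^2:  <s_t, H g>  <=  -||H s_t||^2 / 2
    ok.append(s @ (H @ g) <= -0.5 * np.linalg.norm(H @ s) ** 2 + 1e-10)
print(all(ok))
```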
Prior to Detecting NPC
As long as no NPC direction has been detected, any iterate of MINRES is guaranteed to yield descent for f(x) (Lemmas 12 and 13). For such iterates, we have

⟨s_k^{(t)}, g_k⟩ < 0, and ⟨s_k^{(t)}, H_k g_k⟩ < 0.

For example, if H_k ≻ 0, all iterates can be used to obtain descent, not only in the norm of the gradient, but also in the objective function itself. As a result, prior to the detection of any NPC direction, if a certain inexactness condition is met, the MINRES iterations can be terminated, returning an inexact solution as an update direction.
Inexactness Condition
The most widely used termination criterion for the sub-problem is based on the residual, i.e., we terminate when ‖r_k^{(t)}‖ ≤ η for some η > 0. However, in the non-convex setting, if g_k ∉ Range(H_k), then ‖r_k^{(t)}‖ is lower bounded by ‖(I − H_k [H_k]^†) g_k‖, which is a priori unavailable. Hence, the acceptable range for η relies on an unavailable lower bound, an issue that is unfortunately often overlooked or ignored in related works, making this approach rather impractical. In Algorithm 1, at any iteration t, we instead check the inexactness condition
‖H_k r_k^{(t−1)}‖ ≤ η ‖H_k s_k^{(t−1)}‖, (6)
for some η > 0, using the iterate from the previous iteration, and return s_k^{(t−1)} if (6) is satisfied. Note that, in Algorithm 1, ‖H_k r_k^{(t)}‖ decreases to zero while ‖H_k s_k^{(t)}‖ increases monotonically from zero to ‖H_k [H_k]^† g_k‖ [50, Lemma 3.11], and hence for any η > 0, the inexactness condition (6) will always be satisfied at some iteration. Furthermore, (6) can be verified without any additional matrix-vector products; see Step 7 of Algorithm 1 as well as (29d).
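These monotonicity properties are easy to observe numerically. The sketch below is our own illustration; it recomputes each iterate of (5) from scratch over an explicit Krylov basis (rather than using the recurrences of Algorithm 1) and tracks ‖H r_t‖ and ‖H s_t‖ on a random indefinite system.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
H = (A + A.T) / 2                  # symmetric, generically indefinite
g = rng.standard_normal(8)

def krylov_basis(H, g, t):
    # Orthonormal basis of K_t(H, g) via Lanczos with reorthogonalization.
    V = np.zeros((len(g), t))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(1, t):
        w = H @ V[:, i - 1]
        for _ in range(2):
            w -= V[:, :i] @ (V[:, :i].T @ w)
        V[:, i] = w / np.linalg.norm(w)
    return V

hr, hs = [], []
for t in range(1, 9):
    V = krylov_basis(H, g, t)
    c, *_ = np.linalg.lstsq(H @ V, -g, rcond=None)
    s = V @ c
    r = -g - H @ s
    hr.append(np.linalg.norm(H @ r))   # should decrease with t
    hs.append(np.linalg.norm(H @ s))   # should increase with t
print(np.all(np.diff(hr) <= 1e-7), np.all(np.diff(hs) >= -1e-7))
```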
After Detecting NPC
Once a NPC direction arises for the first time, say at iteration t, the subsequent iterates cease to enjoy a descent guarantee for f, and may in fact even yield ascent (although they always provide descent for ‖g(x)‖²). However, it turns out that the detected NPC direction, r_k^{(t−1)}, is itself naturally a descent direction for f(x); see (29b) in Lemma 11. In addition, r_k^{(t−1)} forms an interesting angle with the gradient of ‖g(x)‖² at x_k; see (29a) in Lemma 11. Indeed, with such an NPC direction, we have

⟨r_k^{(t−1)}, g_k⟩ < 0, and { ⟨r_k^{(0)}, H_k g_k⟩ ≤ 0, if NPC is detected at t = 1;  ⟨r_k^{(t−1)}, H_k g_k⟩ = 0, if NPC is detected at t > 1 }.
NPC Condition
To determine whether or not a NPC direction is available at iteration t for MINRES, it suffices to monitor the condition
⟨r_k^{(t−1)}, H_k r_k^{(t−1)}⟩ ≤ 0, (7)

where r_k^{(t−1)} = −g_k − H_k s_k^{(t−1)} is the residual vector of the previous iteration. It also turns out that the NPC condition (7) can be readily verified as part of the MINRES iterations without explicitly computing H_k r_k^{(t−1)}; see Step 11 of Algorithm 1, as well as (30) and Lemma 11. In fact, the existence of a NPC direction for H_k at iteration t is equivalent to the NPC condition (7); see Lemma 12.
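As an illustration (ours, on a hypothetical 4×4 diagonal example), one can monitor (7) along the iterates of (5): on an indefinite matrix visible to g the curvature test eventually fires, and the flagged residual is indeed a descent direction for f since ⟨r, g⟩ < 0.

```python
import numpy as np

H = np.diag([3.0, 2.0, 1.0, -1.0])          # one negative g-relevant eigenvalue
g = np.ones(4)

def krylov_basis(H, g, t):
    # Orthonormal basis of K_t(H, g).
    V = np.zeros((len(g), t))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(1, t):
        w = H @ V[:, i - 1]
        for _ in range(2):
            w -= V[:, :i] @ (V[:, :i].T @ w)
        V[:, i] = w / np.linalg.norm(w)
    return V

detected_at = None
for t in range(1, 5):
    if t == 1:
        r = -g                               # r_0 = -g
    else:
        V = krylov_basis(H, g, t - 1)
        c, *_ = np.linalg.lstsq(H @ V, -g, rcond=None)
        r = -g - H @ (V @ c)                 # r_{t-1}
    if r @ (H @ r) <= 0:                     # NPC condition (7)
        detected_at = t
        break
print(detected_at, r @ g < 0)                # NPC found; r descends for f
```

For this particular H and g, the first residual r_1 already satisfies ⟨r_1, H r_1⟩ = −10/9 < 0, so detection happens at t = 2.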
Remark 1 (Contrast Between NPC Detection in MINRES and the Capped CG of [64]). Within the Capped CG procedure, [64, Algorithm 1], if a NPC direction does not arise naturally, an additional test is employed to extract such a direction. This is done by comparing the observed convergence rate of the residual norms of the iterates with the theoretical rate under the positive definiteness assumption. A slower convergence rate than what is theoretically expected serves as an indication that the underlying matrix has non-positive eigenvalues, which is then exploited to extract a NPC direction. Similar approaches are also used in [22, Lemma 3.1, Lemma 3.2] for the truncated CG solver within the trust-region framework. In contrast, NPC directions always arise naturally within the MINRES iterations, as soon as they become available; see Lemma 12. In this light, monitoring the NPC condition (7) is necessary and sufficient to detect, or to otherwise rule out, the existence of NPC directions.
As discussed above, one can always construct a descent direction within the MINRES iterations. Indeed, if the inexactness condition (6) is satisfied for s_k^{(t−1)} before the NPC condition (7) is detected, we let d_k = s_k^{(t−1)}. Otherwise, once the NPC direction is detected at iteration t, we set d_k = r_k^{(t−1)}. In either case, we are guaranteed that the search direction d_k satisfies

⟨d_k, g_k⟩ < 0, and ⟨d_k, H_k g_k⟩ ≤ 0. (8)
Remark 2 (Contrast Between CG and MINRES Directions). One can make several intriguing observations about the quality of the search directions obtained from MINRES in comparison with those from CG.
• Just as is the case for the classical CG, the output of the Capped CG procedure in [64] can be a direction of ascent for ‖g‖². This could result in a rather chaotic trajectory for the iterates of Newton-CG in terms of the magnitude of the gradient. This is in fact reminiscent of the non-monotonic behavior of the residuals during the iterations of CG for solving a positive definite linear system. In contrast, by constructing a search direction from MINRES as in (8), we obtain a direction that not only implies descent in the function f(x), but also forms a "friendly" angle with the direction of the greatest decrease of ‖g(x)‖².
• Recall that the t-th iteration of CG is equivalent to the formulation

argmin_{s ∈ K_t(H_k, g_k)} ⟨g_k, s⟩ + (1/2) ⟨s, H_k s⟩.

As long as non-positive curvature of H_k has not been encountered, any solution s_k^{(t)} to the above optimization problem is guaranteed to satisfy ⟨g_k, s_k^{(t)}⟩ ≤ −⟨s_k^{(t)}, H_k s_k^{(t)}⟩/2 ≤ 0, and is hence a descent direction. Similarly, as discussed above, any iterate of MINRES prior to the detection of non-positive curvature provides the same guarantee. However, as shown in [51] (see also Lemma 13), for such an iterate we also have ⟨g_k, s_k^{(t)}⟩ ≤ −⟨s_k^{(t)}, H_k s_k^{(t)}⟩ ≤ 0, where, in contrast to CG, the factor multiplying the quadratic term is one instead of 1/2. In other words, the directions from MINRES can yield sharper descent than those from CG.
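Both guarantees can be checked numerically. In the sketch below (our illustration on a random positive definite H), the CG iterate is computed by minimizing the quadratic over the Krylov subspace and the MINRES iterate by minimizing the residual; each then satisfies its respective inequality stated above.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
H = A @ A.T + 6 * np.eye(6)        # positive definite
g = rng.standard_normal(6)

def krylov_basis(H, g, t):
    # Orthonormal basis of K_t(H, g).
    V = np.zeros((len(g), t))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(1, t):
        w = H @ V[:, i - 1]
        for _ in range(2):
            w -= V[:, :i] @ (V[:, :i].T @ w)
        V[:, i] = w / np.linalg.norm(w)
    return V

ok_cg, ok_mr = [], []
for t in range(1, 6):
    V = krylov_basis(H, g, t)
    s_cg = V @ np.linalg.solve(V.T @ H @ V, -V.T @ g)   # CG iterate
    c, *_ = np.linalg.lstsq(H @ V, -g, rcond=None)
    s_mr = V @ c                                        # MINRES iterate, (5)
    # CG guarantee: factor 1/2 on the quadratic term
    ok_cg.append(g @ s_cg <= -0.5 * s_cg @ (H @ s_cg) + 1e-10)
    # MINRES guarantee: factor 1 (the sharper bound)
    ok_mr.append(g @ s_mr <= -1.0 * s_mr @ (H @ s_mr) + 1e-10)
print(all(ok_cg), all(ok_mr))
```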
Complexity Analysis
In this section, we study the complexity of MINRES in obtaining a direction satisfying the conditions (6) and (7). For notational simplicity, we drop the dependence of g(x_k) and H(x_k) on x_k and denote g = g_k = g(x_k) and H = H_k = H(x_k). Subsequently, we use s_t and r_t to denote s_k^{(t)} and r_k^{(t)}, respectively. It turns out that the interplay between H and g is paramount in establishing the complexity of MINRES. For example, if g lies entirely in some eigen-space corresponding to certain eigenvalues of H, the induced Krylov sub-space will never contain any of the remaining eigenvectors of H. In other words, we only need to consider the portion of the spectrum of H that is relevant to g. Let Θ(H) denote the set of distinct eigenvalues of H and, for λ ∈ Θ(H), let E_λ(H) ≜ Null(H − λI) denote the corresponding eigen-space. The set of g-relevant eigenvalues of H is defined as

Ψ(H, g) ≜ {λ ∈ Θ(H) | g ⊥̸ E_λ(H)}.

In other words, Ψ(H, g) is the set of all eigenvalues of H whose associated eigenvectors are not orthogonal to g. Let ψ ≜ |Ψ(H, g)| = ψ_+ + ψ_− + ψ_0, where ψ_+, ψ_−, and ψ_0, respectively, denote the number of positive, negative, and zero g-relevant eigenvalues of H (clearly, ψ_0 ∈ {0, 1}). Note that Θ(H) \ Ψ(H, g) may contain positive, negative, and/or zero eigenvalues. Without loss of generality, we label the g-relevant eigenvalues in Ψ(H, g) such that λ_1 > λ_2 > . . . > λ_{ψ_+} > 0, and 0 > λ_{ψ_++ψ_0+1} > . . . > λ_ψ.
For a given λ_i ∈ Ψ(H, g), 1 ≤ i ≤ ψ, let dim(E_{λ_i}(H)) = m_i, where 1 ≤ m_i ≤ d. If m_i ≥ 2, we can always consider a basis of E_{λ_i}(H) for which g has a non-zero projection onto only one of the basis vectors, i.e., for any orthonormal basis W_i ∈ R^{d×m_i} of E_{λ_i}(H), we can consider the unit eigenvector u_i ≜ W_i W_i^⊤ g / ‖W_i W_i^⊤ g‖ and complete it to an orthonormal basis of E_{λ_i}(H) as

U_i = [u_i  u_i^{2}  · · ·  u_i^{m_i}], i = 1, . . . , ψ. (9a)
Putting all of the above together, we will consider the full eigenvalue decomposition of H as
H = [U  U_⊥] diag(Λ, Λ_⊥) [U  U_⊥]^⊤, (9b)
where
U = [U_+  U_−], Λ = diag(Λ_+, Λ_−), (9c)
and Λ_+ and Λ_− are the diagonal matrices respectively containing the strictly positive and strictly negative g-relevant eigenvalues in Ψ(H, g) (with their multiplicities), with U_+ and U_− being their corresponding eigenvectors as discussed above, i.e.,
U_+ = [U_1  U_2  · · ·  U_{ψ_+}], U_− = [U_{ψ_++ψ_0+1}  U_{ψ_++ψ_0+2}  · · ·  U_ψ], (9d)
where U_i is as in (9a). Also, Λ_⊥ contains all the remaining eigenvalues of H (with multiplicities), with the corresponding eigenvectors collected in U_⊥. It follows that the decomposition of g in the basis [U  U_⊥] is given by
g = [U  U_⊥] [U  U_⊥]^⊤ g = Σ_{i=1}^{ψ} ⟨u_i, g⟩ u_i, (10)
where u_i is as in (9a). Let us also define a few quantities related to (9), which we use in the proofs of this paper. For some 1 ≤ i ≤ ψ_+ and ψ_+ + ψ_0 + 1 ≤ j ≤ ψ, we partition Λ_+, U_+, Λ_−, and U_− in (9) as
Λ_+ = diag(Λ_{i+}, Λ'_{i+}), U_+ = [U_{i+}  U'_{i+}], (11a)
Λ_− = diag(Λ_{j−}, Λ'_{j−}), U_− = [U_{j−}  U'_{j−}]. (11b)
In other words, Λ_{i+} is the diagonal sub-matrix of Λ_+ in (9) associated with the largest positive g-relevant eigenvalues λ_1 ≥ λ_2 ≥ . . . ≥ λ_i, and the corresponding columns of U_+ are gathered in U_{i+} (the remaining blocks are denoted by Λ'_{i+} and U'_{i+}). Similarly, Λ_{j−} is the diagonal sub-matrix of Λ_− in (9) associated with the most negative g-relevant eigenvalues λ_j ≥ λ_{j+1} ≥ . . . ≥ λ_ψ, with U_{j−} containing the corresponding eigenvectors. Lemma 1 establishes a crucial connection between the grade of g with respect to H, defined in Definition 1, and the cardinality of Ψ(H, g).

Lemma 1. We have g = ψ_+ + ψ_− + ψ_0, where g is the grade of g with respect to H, and ψ_+, ψ_−, and ψ_0, respectively, denote the number of positive, negative, and zero g-relevant eigenvalues of H.
Proof. The proof follows a similar line of reasoning as that in [10, Theorem 2.6.2]. Let u_i denote the eigenvector corresponding to λ_i ∈ Ψ(H, g) as described above. From (10), for j ≥ 0, we have

z_j ≜ H^j g = Σ_{i=1}^{ψ} λ_i^j ⟨u_i, g⟩ u_i,

where we have defined 0^0 = 1. Denoting y_i ≜ ⟨u_i, g⟩ u_i, we have

[z_0  z_1  · · ·  z_{ψ−1}] = Y_ψ [1  λ_1  · · ·  λ_1^{ψ−1}; 1  λ_2  · · ·  λ_2^{ψ−1}; . . . ; 1  λ_ψ  · · ·  λ_ψ^{ψ−1}], where Y_ψ ≜ [y_1  y_2  · · ·  y_ψ]. (12)

Since λ_i ≠ λ_j for i ≠ j, the Vandermonde matrix is non-singular. The matrix Y_ψ has linearly independent columns, and hence it follows that g = ψ = ψ_+ + ψ_− + ψ_0.
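Lemma 1 is easy to verify numerically. In this sketch (our own, with a hypothetical spectrum), H has five eigenvalues but only four distinct g-relevant ones, and the rank of the Krylov matrix, i.e., the grade of g, matches that count.

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
evals = np.array([2.0, 2.0, 1.0, 0.0, -1.0])
H = Q @ np.diag(evals) @ Q.T
g = Q @ np.array([1.0, 0.0, 1.0, 1.0, 1.0])  # no weight on 2nd copy of lambda=2

# g-relevant eigenvalues: distinct eigenvalues whose eigenspace is not
# orthogonal to g
coeffs = Q.T @ g
relevant = {lam for lam, w in zip(evals, coeffs) if abs(w) > 1e-12}
psi = len(relevant)                           # {2, 1, 0, -1} -> 4

# grade of g = rank of the Krylov matrix [g, Hg, H^2 g, ...]
K = np.column_stack([np.linalg.matrix_power(H, j) @ g for j in range(5)])
grade = np.linalg.matrix_rank(K)
print(psi, grade)
```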
Recall that, as part of MINRES, we have T_t = V_t^⊤ H V_t, where T_t ∈ R^{t×t} is a symmetric tridiagonal matrix and V_t ∈ R^{d×t} is the basis obtained from the Lanczos process for K_t(H, g); see Appendix A for more details. Lemma 2 characterizes the implications of ψ_0 ∈ {0, 1} on the structure of T_g, where g is the grade of g with respect to H.

Lemma 2. Let g be the grade of g with respect to H.

(i) If g ∈ Range(H), i.e., ψ_0 = 0, then T_g is non-singular and r_g = 0. Furthermore, if the NPC condition (7) is never detected for any 1 ≤ t ≤ g, then T_g ≻ 0.

(ii) If g ∉ Range(H), i.e., ψ_0 = 1, then T_g is singular. Furthermore, if the NPC condition (7) is first detected only at the very last iteration, then T_g ⪰ 0, and r_{g−1} = r_g is a zero curvature direction.
Proof.
(i) When g ∈ Range(H), we obviously have r_g = 0. Also, by [19, Appendix A], T_g must be non-singular. Further, if the NPC condition (7) is never detected for any 1 ≤ t ≤ g, then from Lemma 12, we also have T_g ≻ 0.
(ii) When g ∉ Range(H), by [19, Theorem 3.2], T_g is necessarily singular and γ_g^{(2)} = 0. This implies that γ_g^{(1)} = 0, and hence r_{g−1} must be a zero curvature direction, i.e., ⟨r_{g−1}, H r_{g−1}⟩ = 0. Since the NPC condition (7) is only detected for the first time at the last iteration, by Lemma 12 we have T_{g−1} ≻ 0. But this, in turn, implies that ⟨r_t, H r_t⟩ > 0 for all 0 ≤ t < g − 1 (recall that r_t ∈ K_{g−1}(H, g) for 0 ≤ t ≤ g − 2). Now, assume λ_min(T_g) < 0. Then, there must exist some non-zero vector p ∈ K_g(H, g) = Span{r_0, r_1, . . . , r_{g−1}} such that ⟨p, Hp⟩ < 0. Let ξ_0, . . . , ξ_{g−1} be scalars such that p = Σ_{i=0}^{g−1} ξ_i r_i. Note that at least one of ξ_0, . . . , ξ_{g−2} must be non-zero; otherwise p = ξ_{g−1} r_{g−1} and ⟨r_{g−1}, H r_{g−1}⟩ < 0, which contradicts the fact that r_{g−1} is a zero curvature direction. By (29a), we have

0 > ⟨p, Hp⟩ = Σ_{i=0}^{g−1} Σ_{j=0}^{g−1} ξ_i ξ_j ⟨r_i, H r_j⟩ = Σ_{i=0}^{g−1} ξ_i² ⟨r_i, H r_i⟩ = Σ_{i=0}^{g−2} ξ_i² ⟨r_i, H r_i⟩ > 0,

which is a contradiction. Hence, we must have T_g ⪰ 0. By [51, Remark 3.10-(ii)], we also have r_{g−1} = r_g.
Lemma 2 indicates that if ψ_0 = 1, the NPC condition (7) will be detected at the last iteration. However, if in addition ψ_− ≥ 1, we expect to detect the NPC condition (7) earlier in the iterations. It turns out that considering the interplay between g and H through the lens of g-relevant eigenvalues is paramount in establishing the complexity of MINRES for returning a direction satisfying the NPC condition (7). For example, if ψ_− = ψ_0 = 0 and ψ_+ ≥ 1, i.e., Ψ(H, g) only contains positive eigenvalues of H, then the non-positive curvature of H is simply not visible through the lens of g. However, when ψ_− + ψ_0 ≥ 1, i.e., Ψ(H, g) contains at least one non-positive eigenvalue of H, then we are able to detect a NPC direction. For example, when ψ_− = ψ_+ = 0, then g ∈ Null(H), which implies ⟨g, Hg⟩ = 0, i.e., g is a NPC direction, which is detected at the very first iteration. Theorem 1 characterizes this property of MINRES.
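The "lens of g" is easy to visualize numerically. In the sketch below (ours), H is indefinite, yet g has no component along the negative eigenvector, so ψ_− = ψ_0 = 0: the tridiagonal T_t stays positive definite for every t up to the grade, and the NPC condition (7) never fires.

```python
import numpy as np

H = np.diag([3.0, 1.0, -2.0])
g = np.array([1.0, 1.0, 0.0])    # orthogonal to the eigenvector of -2

def krylov_basis(H, g, t):
    # Orthonormal basis of K_t(H, g).
    V = np.zeros((len(g), t))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(1, t):
        w = H @ V[:, i - 1]
        for _ in range(2):
            w -= V[:, :i] @ (V[:, :i].T @ w)
        V[:, i] = w / np.linalg.norm(w)
    return V

min_eigs = []
for t in (1, 2):                 # grade of g w.r.t. H is 2 here
    V = krylov_basis(H, g, t)
    T = V.T @ H @ V
    min_eigs.append(np.linalg.eigvalsh(T).min())
print(min_eigs)                  # all strictly positive: NPC never detected
```

At t = 2 the Krylov subspace has captured the full g-relevant spectrum {3, 1}; the eigenvalue −2 is simply invisible through g.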
Theorem 1. The NPC condition (7) is detected for some 1 ≤ t ≤ g if and only if ψ_− + ψ_0 ≥ 1. Furthermore, the NPC condition (7) is detected with strict inequality for some 1 ≤ t ≤ g if and only if ψ_− ≥ 1.

Proof. By Lemma 1, we recall that g = ψ = ψ_+ + ψ_− + ψ_0. Suppose ψ_− + ψ_0 ≥ 1. Let 1 ≤ t ≤ g be the first iteration where K_t(H, g) contains an eigenvector, say u, corresponding to a non-positive eigenvalue in Ψ(H, g). From (12), it is clear that such an iteration exists. Let V_t ∈ R^{d×t} denote the basis obtained from the Lanczos process for K_t(H, g). Hence, we can write u = V_t w for some non-zero w ∈ R^t. It follows that

⟨w, T_t w⟩ = ⟨V_t w, H V_t w⟩ = ⟨u, Hu⟩ ≤ 0,

which implies that T_t is not positive definite. In other words, by Lemma 12, as soon as u ∈ K_t(H, g), the NPC condition (7) is satisfied.
Conversely, suppose ψ_− = ψ_0 = 0. Clearly, no eigenvector corresponding to any non-positive eigenvalue of H can belong to K_t(H, g) for any 1 ≤ t ≤ g. Indeed, suppose that for some λ ≤ 0, its corresponding eigenvector, v, and some 1 ≤ t ≤ g, we have v = Σ_{i=0}^{t−1} c_i H^i g. In this case, since λ ∉ Ψ(H, g), we must have

⟨v, v⟩ = Σ_{i=0}^{t−1} c_i ⟨v, H^i g⟩ = Σ_{i=0}^{t−1} c_i λ^i ⟨v, g⟩ = 0,

which contradicts the assumption that v is an eigenvector and hence non-zero¹. Now, from (12), it follows that Range(Y_ψ) = Span{u_1, . . . , u_ψ} = K_g(H, g), where the u_i's are as in (10). For any 1 ≤ t ≤ g, consider V_t ∈ R^{d×t} to be the basis obtained from the Lanczos process for K_t(H, g). For any non-zero y ∈ R^t, let z = V_t y ∈ K_t(H, g) ⊆ K_g(H, g) and suppose ξ_1, . . . , ξ_ψ are some scalars such that z = Σ_{i=1}^{ψ} ξ_i u_i. We have

⟨y, T_t y⟩ = ⟨V_t y, H V_t y⟩ = ⟨z, Hz⟩ = Σ_{i=1}^{ψ} Σ_{j=1}^{ψ} ξ_i ξ_j ⟨u_i, H u_j⟩ = Σ_{i=1}^{ψ} ξ_i² λ_i ⟨u_i, u_i⟩ > 0,

which implies T_t ≻ 0. Finally, if instead of ψ_− + ψ_0 ≥ 1 we have ψ_− ≥ 1, i.e., at least one negative g-relevant eigenvalue, then the proof can be modified accordingly.
Clearly, g ∉ Range(H) if and only if ψ_0 = 1. Now, if the NPC condition (7) is first detected only at the very last iteration, then from Lemma 2-(ii) and the proof of Theorem 1, it follows that we must have ψ_− = 0. Conversely, if ψ_− = 0, then just as in the proof of Theorem 1, we can show that T_t ⪰ 0 for all 1 ≤ t ≤ g. Now, by the Sturm Sequence Property [31, Theorem 8.4.1], since the smallest eigenvalue of T_t is monotonically non-increasing in t, it follows that we must have T_g ⪰ 0 but T_t ≻ 0 for 1 ≤ t ≤ g − 1. If, in addition, ψ_0 = 0, then necessarily we have T_g ≻ 0 also.
Building upon Theorem 1, it turns out that we can also obtain a more refined certificate for the positive (semi-)definiteness of H than [51, Theorem 3.5], which might be of independent interest beyond the rest of this paper.

Theorem 2. Suppose Ψ(H, g) = Θ(H). Then:

1. H ≻ 0 if and only if the NPC condition (7) is never detected for all 1 ≤ t ≤ g.

2. H ⪰ 0 if and only if the NPC condition (7) is first detected at the last iteration and evaluates to zero.
Proof.
1. Clearly, if H ≻ 0, then by (29c), the NPC condition (7) is never detected. Conversely, suppose H is not positive definite. If t ≤ g is the first iteration such that T_t is not positive definite, then by Lemma 12, the NPC condition (7) must hold. So, suppose T_t ≻ 0 for all 1 ≤ t ≤ g. Since Ψ(H, g) = Θ(H), the decomposition (10) must involve an eigenvector, u, corresponding to a non-positive eigenvalue of H. From (12), it follows that u ∈ K_g(H, g). Let V_g ∈ R^{d×g} denote the basis obtained from the Lanczos process for K_g(H, g). Hence, we can write u = V_g w for some non-zero w ∈ R^g. It follows that ⟨w, T_g w⟩ = ⟨V_g w, H V_g w⟩ = ⟨u, Hu⟩ ≤ 0, which implies that T_g is not positive definite, and we arrive at a contradiction.
2. If H ⪰ 0, then we must have T_t = V_t^⊤ H V_t ⪰ 0. Now, by the Sturm Sequence Property, it follows that we must have T_g ⪰ 0 but T_t ≻ 0 for 1 ≤ t ≤ g − 1. The result follows from applying Lemma 12. Now, suppose the NPC condition (7) is first detected at the last iteration and ⟨r_{g−1}, H r_{g−1}⟩ = 0, but H is not positive semi-definite. By Lemmas 2 and 12, we must have T_g ⪰ 0. Similar to the proof of the first part, consider u ∈ K_g(H, g) to be an eigenvector in the decomposition (10) which corresponds to a negative eigenvalue of H. Letting u = V_g w for some non-zero w ∈ R^g, it follows that ⟨w, T_g w⟩ = ⟨V_g w, H V_g w⟩ = ⟨u, Hu⟩ < 0, which implies that T_g is not positive semi-definite, and we arrive at a contradiction.
We end this section with an important implication of (9), which is used heavily in our analysis later in this paper.
Lemma 3. Let 1 ≤ t ≤ g. For any z ∈ K_t(H, g), we have Hz = HUU^⊤z, where U is as in (9).
Proof. We first note that, since Λ_⊥ U_⊥^⊤ g = 0, it follows that H U_⊥ U_⊥^⊤ g = U_⊥ U_⊥^⊤ H g = 0. Hence, for any z ∈ K_t(H, g), we have

Hz = H Σ_{i=0}^{t−1} c_i H^i g = Σ_{i=0}^{t−1} c_i H^{i+1} g = Σ_{i=0}^{t−1} c_i H^i (HUU^⊤ + HU_⊥U_⊥^⊤) g = Σ_{i=0}^{t−1} c_i H^i HUU^⊤ g = H Σ_{i=0}^{t−1} c_i H^i UU^⊤ g = HUU^⊤ Σ_{i=0}^{t−1} c_i H^i g = HUU^⊤ z.
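A quick numerical check of Lemma 3 (our own sketch, with a hypothetical spectrum): for any z in the Krylov subspace, multiplying by H cannot "leak" outside the g-relevant invariant subspace, so Hz = H U U^⊤ z.

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
evals = np.array([2.0, 1.0, -1.0, 4.0, -3.0])
H = Q @ np.diag(evals) @ Q.T
g = Q @ np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # g-relevant eigenvalues: 2, 1, -1

U = Q[:, :3]                                  # g-relevant eigenvectors
z = g + 0.5 * H @ g + 0.25 * H @ (H @ g)      # some z in K_3(H, g)
lhs = H @ z
rhs = H @ (U @ (U.T @ z))                     # H U U^T z
print(np.allclose(lhs, rhs))
```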
Complexity of Finding Approximate Solution
So long as the NPC condition (7) has not been detected, the iterates of Algorithm 1 are guaranteed to yield descent in both f(x) and ‖g‖². We now investigate the complexity of Algorithm 1 for obtaining an approximate solution to (4). Clearly, if g_k ∈ Null(H_k), then g_k is declared an NPC direction at the first iteration of Algorithm 1. So, in obtaining the convergence rate, we can safely assume that g_k ∉ Null(H_k). Lemma 4 gives the complexity of MINRES for obtaining a solution which satisfies the inexactness condition (6).
Lemma 4. Suppose g ∉ Null(H), i.e., ψ_+ + ψ_− ≥ 1. Consider any 1 ≤ i ≤ ψ_+ and ψ_+ + ψ_0 + 1 ≤ j ≤ ψ. For any 0 < θ < 1, after at most

t = ⌈ (√(max{κ_i^+, κ_j^−})/4) log(4/θ) ⌉

iterations of Algorithm 1 (without the NPC detection mechanism of Step 11), we have

‖r_t‖² ≤ ‖(I − [U_{i+}  U_{j−}] [U_{i+}  U_{j−}]^⊤) g‖² + θ ‖[U_{i+}  U_{j−}] [U_{i+}  U_{j−}]^⊤ g‖²,

and in particular,

‖UU^⊤ r_t‖² ≤ ‖(UU^⊤ − [U_{i+}  U_{j−}] [U_{i+}  U_{j−}]^⊤) g‖² + θ ‖[U_{i+}  U_{j−}] [U_{i+}  U_{j−}]^⊤ g‖²,

where

κ_i^+ ≜ λ_1/λ_i, and κ_j^− ≜ λ_ψ/λ_j,

and U, U_{i+}, and U_{j−} are as in (9) and (11). If ψ_− = 0, then the statement holds with κ_i^+ instead of max{κ_i^+, κ_j^−}. The statement is similarly modified if ψ_+ = 0.
Proof. Recalling the t-th iteration of MINRES as in (5), it follows that

min_{s∈K_t(H,g)} ‖Hs + g‖²
= min_{s∈K_t(H,g)} ‖Hs + (UU^⊤ + U_⊥U_⊥^⊤) g‖²
= min_{s∈K_t(H,g)} ‖Hs + UU^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{s∈K_t(H,g)} ‖HUU^⊤ s + UU^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{s∈UU^⊤K_t(H,g)} ‖HUU^⊤ s + UU^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{s∈UU^⊤K_t(H,g)} ‖UU^⊤ Hs + UU^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{s∈UU^⊤K_t(H,g)} ‖U_+U_+^⊤ Hs + U_+U_+^⊤ g‖² + ‖U_−U_−^⊤ Hs + U_−U_−^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{s∈UU^⊤K_t(H,g)} ‖HU_+U_+^⊤ s + U_+U_+^⊤ g‖² + ‖HU_−U_−^⊤ s + U_−U_−^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{p∈U_+U_+^⊤K_t(H,g)} ‖HU_+U_+^⊤ p + U_+U_+^⊤ g‖² + min_{q∈U_−U_−^⊤K_t(H,g)} ‖HU_−U_−^⊤ q + U_−U_−^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{p∈U_+U_+^⊤K_t(H,g)} ‖Λ_+U_+^⊤ p + U_+^⊤ g‖² + min_{q∈U_−U_−^⊤K_t(H,g)} ‖Λ_−U_−^⊤ q + U_−^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{p∈U_+K_t(Λ_+,U_+^⊤g)} ‖Λ_+U_+^⊤ p + U_+^⊤ g‖² + min_{q∈U_−K_t(Λ_−,U_−^⊤g)} ‖Λ_−U_−^⊤ q + U_−^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
= min_{p̃∈K_t(Λ_+,U_+^⊤g)} ‖Λ_+ p̃ + U_+^⊤ g‖² + min_{q̃∈K_t(Λ_−,U_−^⊤g)} ‖Λ_− q̃ + U_−^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖².

Now, for 1 ≤ i ≤ ψ_+ and ψ_+ + ψ_0 + 1 ≤ j ≤ ψ, we have

min_{s∈K_t(H,g)} ‖Hs + g‖²
= min_{p̃∈K_t(Λ_{i+},U_{i+}^⊤g)} ‖Λ_{i+} p̃ + U_{i+}^⊤ g‖² + min_{p̂∈K_t(Λ'_{i+},(U'_{i+})^⊤g)} ‖Λ'_{i+} p̂ + (U'_{i+})^⊤ g‖²
+ min_{q̃∈K_t(Λ_{j−},U_{j−}^⊤g)} ‖Λ_{j−} q̃ + U_{j−}^⊤ g‖² + min_{q̂∈K_t(Λ'_{j−},(U'_{j−})^⊤g)} ‖Λ'_{j−} q̂ + (U'_{j−})^⊤ g‖² + ‖U_⊥U_⊥^⊤ g‖²
≤ min_{p̃∈K_t(Λ_{i+},U_{i+}^⊤g)} ‖Λ_{i+} p̃ + U_{i+}^⊤ g‖² + min_{q̃∈K_t(Λ_{j−},U_{j−}^⊤g)} ‖Λ_{j−} q̃ + U_{j−}^⊤ g‖² + ‖(U'_{i+})^⊤ g‖² + ‖(U'_{j−})^⊤ g‖² + ‖U_⊥^⊤ g‖².

Since Λ_{i+} ≻ 0 and Λ_{j−} ≺ 0, from the standard convergence rate of MINRES for definite matrices [40, (3.12)], we get

‖Λ_{i+} p̃ + U_{i+}^⊤ g‖ ≤ 2 ‖U_{i+}U_{i+}^⊤ g‖ ((√κ_i^+ − 1)/(√κ_i^+ + 1))^t,
‖Λ_{j−} q̃ + U_{j−}^⊤ g‖ ≤ 2 ‖U_{j−}U_{j−}^⊤ g‖ ((√κ_j^− − 1)/(√κ_j^− + 1))^t,

which implies that

‖r_t‖² ≤ ‖(U'_{i+}(U'_{i+})^⊤ + U'_{j−}(U'_{j−})^⊤ + U_⊥U_⊥^⊤) g‖² + 4 ‖U_{i+}U_{i+}^⊤ g‖² ((√κ_i^+ − 1)/(√κ_i^+ + 1))^{2t} + 4 ‖U_{j−}U_{j−}^⊤ g‖² ((√κ_j^− − 1)/(√κ_j^− + 1))^{2t}
≤ ‖(I − [U_{i+}  U_{j−}][U_{i+}  U_{j−}]^⊤) g‖² + 4 ‖(U_{i+}U_{i+}^⊤ + U_{j−}U_{j−}^⊤) g‖² max{(√κ_i^+ − 1)/(√κ_i^+ + 1), (√κ_j^− − 1)/(√κ_j^− + 1)}^{2t},

where we have used the fact that [U_{i+}  U_{j−}][U_{i+}  U_{j−}]^⊤ = I − U'_{i+}(U'_{i+})^⊤ − U'_{j−}(U'_{j−})^⊤ − U_⊥U_⊥^⊤. Hence, using the fact that log max{a, b} = max{log a, log b} for a, b > 0, if

t ≥ (√(max{κ_i^+, κ_j^−})/4) log(4/θ),

then we have the desired result.
Remark 3. Lemma 4 gives the convergence of MINRES involving arbitrary symmetric matrices. To our knowledge, it provides an improvement over existing results on the convergence of MINRES applied to indefinite symmetric matrices. Indeed, available general convergence results for indefinite problems imply rates depending on κ^+ and κ^−, as opposed to √κ^+ and √κ^−, e.g., [29,40,68,75]. While the existing convergence results in the literature are sub-optimal, Lemma 4 provides a convergence rate which, similar to the positive-definite case, depends optimally on √κ^+ and √κ^−. The key to obtaining the improvements in Lemma 4 is to consider the interplay between g and H by separating positive and negative g-relevant eigenvalues, and the projection of the iterates on the corresponding eigen-spaces.
From the proof of Lemma 4, it is easy to see that, for any 1 ≤ i ≤ ψ_+ and ψ_+ + ψ_0 + 1 ≤ j ≤ ψ, we have

min_{s∈K_t(H,g)} ‖Hs + g‖² ≤ min{ min_{p̃∈K_t(Λ_{i+},U_{i+}^⊤g)} ‖Λ_{i+} p̃ + U_{i+}^⊤ g‖² + ‖(I − U_{i+}U_{i+}^⊤) g‖², min_{q̃∈K_t(Λ_{j−},U_{j−}^⊤g)} ‖Λ_{j−} q̃ + U_{j−}^⊤ g‖² + ‖(I − U_{j−}U_{j−}^⊤) g‖² },

from which, following similar steps as in the proof of Lemma 4, we get the following corollary.
Corollary 1. Suppose ψ_+ ≥ 1. Consider any 1 ≤ i ≤ ψ_+. For any 0 < θ < 1, after at most

t = ⌈ (√(κ_i^+)/4) log(4/θ) ⌉

iterations of Algorithm 1 (without the NPC detection mechanism of Step 11), we have

‖r_t‖² ≤ ‖(I − U_{i+}U_{i+}^⊤) g‖² + θ ‖U_{i+}U_{i+}^⊤ g‖²,

and in particular,

‖UU^⊤ r_t‖² ≤ ‖(UU^⊤ − U_{i+}U_{i+}^⊤) g‖² + θ ‖U_{i+}U_{i+}^⊤ g‖².

If ψ_− ≥ 1, then letting ψ_+ + ψ_0 + 1 ≤ j ≤ ψ, the statement also holds with U_{j−} and κ_j^−, instead of U_{i+} and κ_i^+, respectively.
Corollary 1 only involves the positive (or negative) g-relevant spectrum of H, and hence is more appealing than Lemma 4 when κ^− (respectively, κ^+) is very large. Of course, this comes at the cost of a larger residual error on the right-hand side.
The following lemma will be used in conjunction with Lemma 4 and Corollary 1 to establish the iteration complexity of Algorithm 1 for obtaining a solution which satisfies the inexactness condition (6).
Lemma 5. Let U be as in (9). If, for a given η > 0, we have

‖UU^⊤ r_{t−1}‖² ≤ (η²/(‖U^⊤HU‖² + η²)) ‖UU^⊤ g‖²,

then the inexactness condition (6) holds.
Proof. First, recall that r_{t−1} ∈ K_t(H, g). We have

‖U^⊤HU‖² ‖UU^⊤ r_{t−1}‖² ≤ η² (‖UU^⊤ g‖² − ‖UU^⊤ r_{t−1}‖²).

We also get ‖Hs_{t−1}‖² = ‖HUU^⊤ s_{t−1}‖² = ‖UU^⊤ g‖² − ‖UU^⊤ r_{t−1}‖². This, coupled with the fact that ‖HUU^⊤ r_{t−1}‖ ≤ ‖U^⊤HU‖ ‖UU^⊤ r_{t−1}‖, and noting that by Lemma 3 we have H r_{t−1} = HUU^⊤ r_{t−1}, implies the desired result.
For example, suppose η is large enough such that

η²/(‖U^⊤HU‖² + η²) > ‖(UU^⊤ − U_{i+}U_{i+}^⊤) g‖² / ‖UU^⊤ g‖².

From Lemma 4, after at most

t = ⌈ (√(max{κ_i^+, κ_j^−})/4) log( 4 / ( η²/(‖U^⊤HU‖² + η²) − ‖(UU^⊤ − U_{i+}U_{i+}^⊤) g‖²/‖UU^⊤ g‖² ) ) ⌉ + 1

iterations, we have a solution s_{t−1} which satisfies (6).
Complexity of Finding NPC Direction
Theorem 1 shows that as long as H contains any non-positive g-relevant eigenvalue, MINRES is guaranteed to detect a NPC direction. We now establish a worst-case complexity of MINRES for detecting such directions. To detect an NPC direction, we do not necessarily need to estimate the smallest g-relevant eigenvalue of H. As long as we can crudely approximate any negative g-relevant eigenvalue, we can safely extract the corresponding NPC direction. This observation is the basis of the derivation of Lemma 6. The proof idea below draws upon that in [67, Sections 4.4 and 6.6].
Lemma 6. Suppose ψ_+ ≥ 1 and ψ_− ≥ 1, and let ζ_t be the smallest eigenvalue of T_t for t ≤ g. For any ψ_+ + ψ_0 + 1 ≤ j ≤ ψ, we have

ζ_t − λ_j ≤ 4 ((1 − ν_j)/ν_j) (λ_1 − λ_j) ((√(κ_j + 1) − 1)/(√(κ_j + 1) + 1))^{2(t−1)}, where κ_j ≜ λ_1/(−λ_j), (13)

and

ν_j ≜ ‖U_{j−}U_{j−}^⊤ g‖² / ‖UU^⊤ g‖², (14)

and λ_i, U, and U_{j−} are as in (9) and (11).
Proof. We first note that

ζ_t = min_{p∈P_{t−1}} ⟨p(H)g, H p(H)g⟩ / ⟨p(H)g, p(H)g⟩ ≤ min_{p∈P_{t−1}} ⟨p(H)UU^⊤g, H p(H)UU^⊤g⟩ / ⟨p(H)UU^⊤g, p(H)UU^⊤g⟩,

where P_{t−1} is the space of all polynomials of degree not exceeding t − 1, and the inequality follows by Lemma 3 and the fact that ‖UU^⊤ p(H)g‖ ≤ ‖p(H)g‖. Denote

I ≜ {1, . . . , ψ_+, ψ_+ + ψ_0 + 1, . . . , ψ}.

Note that |I| = ψ if ψ_0 = 0; otherwise |I| = ψ − 1. With U as in (9) and defining

c_i ≜ ⟨u_i, g⟩ / ‖UU^⊤ g‖,

we have

UU^⊤ g = Σ_{i∈I} ⟨u_i, g⟩ u_i = ‖UU^⊤ g‖ Σ_{i∈I} c_i u_i.
Noting that λ_i < λ_j < 0 for any ψ_+ + ψ_0 + 1 ≤ j < i ≤ ψ, it follows that for any ψ_+ + ψ_0 + 1 ≤ j ≤ ψ, we have

ζ_t − λ_j ≤ min_{p∈P_{t−1}} ( Σ_{i∈I} λ_i c_i² p²(λ_i) / Σ_{i∈I} c_i² p²(λ_i) ) − λ_j
= min_{p∈P_{t−1}} ( ( Σ_{i≤ψ_+} λ_i c_i² p²(λ_i) + Σ_{i≥ψ_++ψ_0+1} λ_i c_i² p²(λ_i) ) / Σ_{i∈I} c_i² p²(λ_i) ) − λ_j
≤ min_{p∈P_{t−1}} ( ( Σ_{i≤ψ_+} λ_i c_i² p²(λ_i) + Σ_{j≤i≤ψ} λ_i c_i² p²(λ_i) ) / Σ_{i∈I} c_i² p²(λ_i) ) − λ_j
≤ min_{p∈P_{t−1}} ( ( Σ_{i≤ψ_+} λ_i c_i² p²(λ_i) + λ_j Σ_{j≤i≤ψ} c_i² p²(λ_i) ) / ( Σ_{i≤ψ_+} c_i² p²(λ_i) + Σ_{j≤i≤ψ} c_i² p²(λ_i) ) ) − λ_j
= min_{p∈P_{t−1}} Σ_{i≤ψ_+} (λ_i − λ_j) c_i² p²(λ_i) / ( Σ_{i≤ψ_+} c_i² p²(λ_i) + Σ_{j≤i≤ψ} c_i² p²(λ_i) )
≤ (λ_1 − λ_j) Σ_{i≤ψ_+} c_i² p²(λ_i) / ( Σ_{i≤ψ_+} c_i² p²(λ_i) + Σ_{j≤i≤ψ} c_i² p²(λ_i) ), ∀p ∈ P_{t−1}.

Now, take

p(λ) = C_{t−1}( 1 + 2 (λ_{ψ_+} − 2λ)/(2λ_1 − λ_{ψ_+}) ),
where C_t is the Chebyshev polynomial of the first kind of degree t, defined as

C_t(x) ≜ { cos(t cos^{−1}(x)), |x| ≤ 1;  (1/2) [ (x + √(x² − 1))^t + (x + √(x² − 1))^{−t} ], |x| > 1 }.

Since |p(λ_i)| ≤ 1 for 1 ≤ i ≤ ψ_+, we have

ζ_t − λ_j ≤ (λ_1 − λ_j) Σ_{i≤ψ_+} c_i² / ( Σ_{i≤ψ_+} c_i² p²(λ_i) + Σ_{j≤i≤ψ} c_i² p²(λ_i) ) ≤ (λ_1 − λ_j) Σ_{i∈I, i≤j−1} c_i² / Σ_{j≤i≤ψ} c_i² p²(λ_i).
It is easy to show that C_t(x) is increasing on x > 1. Indeed, we have

dC_t(x)/dx = (t/2) (1 + x/√(x² − 1)) [ (x + √(x² − 1))^{t−1} − (x + √(x² − 1))^{−t−1} ] = (t/2) (1 + x/√(x² − 1)) ( (x + √(x² − 1))^{2t} − 1 ) / (x + √(x² − 1))^{t+1} > 0.

Hence, p(λ_j) < p(λ_{j+1}) < . . . < p(λ_ψ). It follows that

ζ_t − λ_j ≤ ((λ_1 − λ_j)/p²(λ_j)) Σ_{i∈I, i≤j−1} c_i² / Σ_{j≤i≤ψ} c_i² = ((λ_1 − λ_j)/p²(λ_j)) ( Σ_{i∈I} c_i² − Σ_{j≤i≤ψ} c_i² ) / Σ_{j≤i≤ψ} c_i².

From (λ_{ψ_+} − 2λ_j)/(2λ_1 − λ_{ψ_+}) ≥ −λ_j/λ_1, we also have

p(λ_j) = C_{t−1}( 1 + 2 (λ_{ψ_+} − 2λ_j)/(2λ_1 − λ_{ψ_+}) ) ≥ C_{t−1}( 1 − 2λ_j/λ_1 ) = C_{t−1}( 1 + 2/κ_j )
= (1/2) [ ( 1 + 2/κ_j + √((1 + 2/κ_j)² − 1) )^{t−1} + ( 1 + 2/κ_j + √((1 + 2/κ_j)² − 1) )^{1−t} ]
= (1/2) [ ( (κ_j + 2 + 2√(κ_j + 1))/κ_j )^{t−1} + ( (κ_j + 2 + 2√(κ_j + 1))/κ_j )^{1−t} ]
= (1/2) [ ( (√(κ_j + 1) + 1)/(√(κ_j + 1) − 1) )^{t−1} + ( (√(κ_j + 1) − 1)/(√(κ_j + 1) + 1) )^{t−1} ]
≥ (1/2) ( (√(κ_j + 1) + 1)/(√(κ_j + 1) − 1) )^{t−1},

which gives our desired result.
As can be seen, the bound in Lemma 6 is controlled by the proportion (14) and by the "condition number" κ_j, which sheds light on the factors that affect the performance of MINRES in detecting a negative curvature direction. Indeed, suppose ψ_+ ≥ 1, ψ_− ≥ 1, and j ≥ ψ_+ + ψ_0 + 1. The ratio ν_j measures the portion of g that lies in the relevant sub-space Range(U_{j−}), compared with the "left-over" portion that lies in the orthogonal complement within Range(U). It is natural to expect that λ_j can be estimated faster when U_{j−}U_{j−}^⊤ g constitutes a relatively large portion of UU^⊤ g. Also, the "condition number" κ_j indicates that MINRES can detect a NPC direction faster in cases where λ_j is sufficiently large, in magnitude, compared with λ_1. In fact, the term "condition number" is somewhat of a misnomer, since κ_j can indeed be smaller than one if |λ_j| ≥ λ_1.
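These effects can be probed numerically. The sketch below is ours; it detects non-positive curvature via the smallest eigenvalue of T_t (equivalent to monitoring (7), per Lemma 12) and compares a gradient with a sizeable component on the negative eigenvector against one with a tiny component. The former triggers detection no later than the latter, as the dependence on ν_j in Lemma 6 suggests.

```python
import numpy as np

d = 11
H = np.diag(np.concatenate([np.arange(10.0, 0.0, -1.0), [-1.0]]))

def krylov_basis(H, g, t):
    # Orthonormal basis of K_t(H, g).
    V = np.zeros((len(g), t))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(1, t):
        w = H @ V[:, i - 1]
        for _ in range(2):
            w -= V[:, :i] @ (V[:, :i].T @ w)
        V[:, i] = w / np.linalg.norm(w)
    return V

def first_npc(H, g):
    # First t with min eig(T_t) <= 0, i.e., NPC detected (Lemma 12).
    for t in range(1, len(g) + 1):
        V = krylov_basis(H, g, t)
        if np.linalg.eigvalsh(V.T @ H @ V).min() <= 0:
            return t
    return None

g_big = np.ones(d)                       # nu_j is sizeable
g_tiny = np.ones(d)
g_tiny[-1] = 1e-6                        # nu_j is tiny
t_big, t_tiny = first_npc(H, g_big), first_npc(H, g_tiny)
print(t_big, t_tiny)                     # detection is slower for tiny nu_j
```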
Lemma 6 implies that, for any $\epsilon > 0$, after at most

$$t = \min\left\{\max\left\{\frac{\sqrt{\kappa_j+1}}{4}\log\left(\frac{4\left(1-\sum_{i=j}^{\psi}c_i^2\right)}{\epsilon\sum_{i=j}^{\psi}c_i^2}\right)+1,\;1\right\},\;\mathrm{g}\right\}$$
iterations, for any 2 ≤ j ≤ ψ, we have
$$\zeta_t - \lambda_j \le \epsilon\,(\lambda_1-\lambda_j). \tag{15}$$
Indeed, it suffices to find t large enough such that
$$\frac{4\left(1-\sum_{i=j}^{\psi}c_i^2\right)}{\sum_{i=j}^{\psi}c_i^2}\left(\frac{\sqrt{\kappa_j+1}-1}{\sqrt{\kappa_j+1}+1}\right)^{2(t-1)} \le \epsilon.$$

If $4\left(1-\sum_{i=j}^{\psi}c_i^2\right)/\sum_{i=j}^{\psi}c_i^2 \le \epsilon$, this holds trivially for any $t$. Otherwise, this is satisfied if we have

$$t \ge \frac{1}{2}\log\left(\frac{4\left(1-\sum_{i=j}^{\psi}c_i^2\right)}{\epsilon\sum_{i=j}^{\psi}c_i^2}\right)\bigg/\log\left(\frac{\sqrt{\kappa_j+1}+1}{\sqrt{\kappa_j+1}-1}\right)+1 = \frac{1}{2}\log\left(\frac{4\left(1-\sum_{i=j}^{\psi}c_i^2\right)}{\epsilon\sum_{i=j}^{\psi}c_i^2}\right)\bigg/\log\left(1+\frac{2}{\sqrt{\kappa_j+1}-1}\right)+1.$$
Now, the last bit is done by noting that for b > 0 we have
$$\log\left(1+\frac{1}{b}\right) \ge \frac{2}{2b+1}.$$
If $\psi_- = \psi_0 = 0$, then by Theorem 1, no NPC direction is detected within the MINRES iterations. Suppose $\psi_- = 0$ but $\psi_0 = 1$. In this case, as per Lemma 2, a zero curvature direction is detected for the first time only at the very last iteration. However, if $\psi_+ = 0$, i.e., H has no positive g-relevant eigenvalues, then an NPC direction is detected at the very first iteration. Indeed,
letting $r_0 = -g = \sum_{i=1}^{\psi}\xi_i u_i$, we have $\langle r_0, H r_0\rangle = \sum_{i=1}^{\psi}\xi_i^2\lambda_i \le 0$.
Corollary 2 gives a complexity for detecting the NPC Condition (7) for non-trivial cases where ψ − ≥ 1 and ψ + ≥ 1.
Corollary 2. Suppose H has at least one negative and one positive g-relevant eigenvalue, i.e., $\psi_- \ge 1$ and $\psi_+ \ge 1$. After at most $t = \min\{T, \mathrm{g}\}$ iterations of Algorithm 1 (without the inexactness mechanism of Step 7), the NPC Condition (7) is satisfied, where
$$T \triangleq \min\left\{T_j \;\middle|\; \psi_+ + \psi_0 + 1 \le j \le \psi\right\}, \qquad T_j \triangleq \max\left\{\frac{\sqrt{\kappa_j+1}}{4}\log\left(\frac{4(\kappa_j+1)(1-\nu_j)}{\nu_j}\right)+1,\;1\right\},$$
where U, U j− , κ j and ν j are as in (9), (11), (13) and (14), respectively.
Proof. Let λ j < 0 be a negative g-relevant eigenvalue of H for some ψ + + ψ 0 + 1 ≤ j ≤ ψ.
Letting $\epsilon \le -\lambda_j/(\lambda_1-\lambda_j)$ in (15), we get $\zeta_t \le \epsilon(\lambda_1-\lambda_j)+\lambda_j \le 0$.
Now, the result follows by taking the minimum over ψ + + ψ 0 + 1 ≤ j ≤ ψ.
Newton-MR: Algorithms and Convergence Analysis
Building upon the properties of Algorithm 1 in Section 2, we now present two variants of Newton-MR for optimization of (1). The first variant, discussed in Section 3.2 and depicted in Algorithm 4, offers first-order approximate optimality guarantee as in (3a). In Section 3.3, we then propose a slightly more refined variant, which comes equipped with second-order approximate optimality guarantees as in (3).
Blanket Assumptions
To carry out the analysis of this section, we make the following blanket assumptions regarding smoothness of the function f .
Assumption 1 (g-relevant Hessian Boundedness). There exists a $0 \le L_g < \infty$ such that, for any $x \in \mathbb{R}^d$, we have $\|U^{\top} H U\| \le L_g$, where $U$ is as in (9).

Assumption 1 implies that $\max\{\lambda_1, |\lambda_\psi|\} \le L_g$, i.e., only the g-relevant subset of the spectrum of H is required to be bounded. In other words, it implies a directional smoothness with respect to a restricted sub-space. Clearly, Assumption 1 is a relaxation of the usual Lipschitz continuity assumption of g. Indeed, the widely-used condition
$$\|\nabla f(x) - \nabla f(y)\| \le L_g\|x-y\|, \quad \forall x, y \in \mathbb{R}^d,$$

for twice-continuously differentiable f, implies $\|H\| \le L_g$, which is stronger than Assumption 1.
Assumption 2 (Smoothness). The function f is twice continuously differentiable with Lipschitz continuous Hessian. That is, there exists a constant $0 \le L_H < \infty$ such that for all $x, y \in \mathbb{R}^d$ we have
$$\left\|\nabla^2 f(x) - \nabla^2 f(y)\right\| \le L_H\|x-y\|. \tag{16}$$
Assumption 2 is the only uniform smoothness assumption we make in this paper. Furthermore, unlike the analysis in [64,65,79], we make no assumption on the boundedness of g.
Recall the familiar relationship $T_t = V_t^{\top} H V_t$, where $T_t \in \mathbb{R}^{t\times t}$ is the symmetric tridiagonal matrix obtained in the $t$-th iteration of Algorithm 1 and $V_t \in \mathbb{R}^{d\times t}$ contains the basis obtained from the Lanczos process for $\mathcal{K}_t(H, g)$; see Appendix A.1 for more details. Lemma 12 guarantees that as long as an NPC direction has not been encountered, we have $T_t \succ 0$. Assumption 3 entails assuming that there is a small enough $\sigma > 0$ such that, as long as the NPC condition (7) has not been detected, $T_t$ remains uniformly positive definite.
Assumption 3 (Krylov Sub-space Regularity Property). There is a constant $\sigma > 0$, small enough, such that for any $x \in \mathbb{R}^d$, as long as the NPC condition (7) has not been detected, we have $T_t \succeq \sigma I$.
Clearly, Assumption 3 is equivalent to having $\langle s, Hs\rangle \ge \sigma\|s\|^2$ for all $s \in \mathcal{K}_t(H, g)$ as long as the NPC condition has not been detected, that is, as long as $\mathcal{K}_t(H, g)$ does not contain any NPC direction for H. Since by Lemma 12, for all $x_k$, prior to the detection of an NPC direction we have $\langle s, H_k s\rangle > 0$ for all $s \in \mathcal{K}_t(H_k, g_k)$, it follows that Assumption 3 trivially holds if we further assume that the sub-level set $S(x_0; f) \triangleq \{x \in \mathbb{R}^d \mid f(x) \le f(x_0)\}$ is compact, which is often assumed in similar literature, e.g., [64,65,79]. We also note that Assumption 3 implies a local convexity property for f in a uniformly small ball within the Krylov sub-space $\mathcal{K}_t(H, g)$ prior to the detection of the NPC condition (7).
Corollary 3.
For any $x \in \mathbb{R}^d$, let $1 \le t \le \mathrm{g}$ be such that $\mathcal{K}_t(H(x), g(x))$ does not contain any NPC direction for $H(x)$. Under Assumptions 2 and 3,

$$f(x+d) \ge f(x) + \langle g(x), d\rangle, \quad \forall d \in \mathcal{D}(x),$$

where $\mathcal{D}(x)$ is a small ball within $\mathcal{K}_t(H(x), g(x))$, defined as

$$\mathcal{D}(x) \triangleq \left\{d \in \mathcal{K}_t(H(x), g(x)) \;\middle|\; \|d\| \le \frac{2\sigma}{L_H}\right\}.$$
Proof. Recall that Assumption 3 is equivalent to having $\langle d, H(x)d\rangle \ge \sigma\|d\|^2$ for any $d \in \mathcal{K}_t(H(x), g(x))$. It follows that for any $d \in \mathcal{D}(x)$,

$$\begin{aligned} f(x+d) - f(x) &= \langle\nabla f(x), d\rangle + \frac{1}{2}\int_0^1\left\langle d, \nabla^2 f(x+td)\,d\right\rangle dt \\ &= \langle\nabla f(x), d\rangle + \frac{1}{2}\int_0^1\left\langle d, \nabla^2 f(x)\,d\right\rangle dt + \frac{1}{2}\int_0^1\left\langle d, \left(\nabla^2 f(x+td) - \nabla^2 f(x)\right)d\right\rangle dt \\ &\ge \langle\nabla f(x), d\rangle + \frac{\sigma}{2}\|d\|^2 - \frac{L_H}{4}\|d\|^3 \ge \langle\nabla f(x), d\rangle. \end{aligned}$$
Newton-MR With First-order Complexity Guarantee
In this section, we present our first variant of Newton-MR, detailed in Algorithm 4, which offers a first-order approximate optimality guarantee as in (3a). Recall from (8) that, as long as $g_k \neq 0$, Algorithm 1 returns a search direction, $d_k$, which is always guaranteed to be a descent direction. As a result, the step-size, $\alpha_k$, can be chosen using an Armijo-type line-search by finding the largest $\alpha_k$ that satisfies
$$f(x_k + \alpha_k d_k) \le f(x_k) + \rho\alpha_k\langle g_k, d_k\rangle, \tag{17}$$
where ρ > 0 is some appropriately chosen line-search parameter. This is often approximately achieved using a back-tracking line-search strategy as in Algorithm 2. In Algorithm 4, when D type = "SOL", we set α = 1 as the initial trial step-size for the backtracking line-search procedure. However, when D type = "NPC", the proof of Lemma 9 below highlights the fact that the chosen step-size may in fact be much larger. In this light, when d k is a direction of non-positive curvature, instead of back-tracking line-search, we can employ a forward-tracking procedure, depicted in Algorithm 3, to search for larger step-sizes that satisfy the line-search criterion (17). We note that the notion of forward-tracking procedure has been considered in the literature before in various contexts, e.g., [34].
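The back- and forward-tracking procedures referenced above (Algorithms 2 and 3, which are not reproduced in this excerpt) can be sketched as follows. The parameter values, the fall-back to backtracking when the initial trial step fails, and the iteration caps are illustrative assumptions of this sketch, not the paper's exact specification.

```python
def armijo(f, x, g_dot_d, d, alpha, rho):
    # Sufficient-decrease test (17): f(x + a d) <= f(x) + rho * a * <g, d>.
    return f(x + alpha * d) <= f(x) + rho * alpha * g_dot_d

def backtrack(f, x, g_dot_d, d, rho=0.25, alpha0=1.0, shrink=0.5, max_iter=50):
    # D_type == 'SOL': start from the unit trial step and shrink until (17) holds.
    alpha = alpha0
    for _ in range(max_iter):
        if armijo(f, x, g_dot_d, d, alpha, rho):
            break
        alpha *= shrink
    return alpha

def forwardtrack(f, x, g_dot_d, d, rho=0.25, alpha0=1.0, grow=2.0, max_iter=50):
    # D_type == 'NPC': the admissible step may be much larger than 1, so keep
    # doubling the step while (17) continues to hold.
    alpha = alpha0
    if not armijo(f, x, g_dot_d, d, alpha, rho):
        return backtrack(f, x, g_dot_d, d, rho, alpha0, 0.5, max_iter)
    for _ in range(max_iter):
        if not armijo(f, x, g_dot_d, d, alpha * grow, rho):
            break
        alpha *= grow
    return alpha

# f(x) = x^2 at x = 1, d = -1: the unit step already satisfies (17).
assert backtrack(lambda x: x * x, 1.0, -2.0, -1.0) == 1.0
# f(x) = (x - 10)^2 at x = 0, d = 1: forward-tracking doubles up to alpha = 8.
assert forwardtrack(lambda x: (x - 10.0) ** 2, 0.0, -20.0, 1.0) == 8.0
```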
Consider the quadratic model

$$\phi(\alpha) \triangleq f(x_k) + \alpha\langle d_k, g_k\rangle + \frac{\alpha^2}{2}\langle d_k, H_k d_k\rangle,$$
which is often taken as a local approximation to f in many Newton-type algorithms. By the Lemma 13, which is a unique property of MINRES, we can upper bound the mismatch between φ(α) and f (x k ) as
$$\phi(\alpha) - f(x_k) \le \left(\frac{\alpha^2}{2} - \alpha\right)\langle d_k, H_k d_k\rangle.$$
Clearly, α = 1 is the minimizer of this upper bound, which implies that α = 1 may, in some sense, be an optimal choice for the initial step-size in Newton-MR. This choice is also often made for the Newton-CG and many quasi-Newton methods. However, the motivation for such a choice for those algorithms is entirely different than the discussion above and is based on achieving quadratic/super-linear local convergence rate. This also raises an interesting research direction to quantify the local convergence of Newton-MR with unit step-size.
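The claim that $\alpha = 1$ minimizes the bound above amounts to minimizing $(\alpha^2/2 - \alpha)$, since $\langle d_k, H_k d_k\rangle > 0$ for a 'SOL' direction; the following one-liner confirms this on a fine grid (a trivial illustration only).

```python
# (alpha^2/2 - alpha) * <d, H d> with <d, H d> > 0 is minimized at alpha = 1.
best = min((k / 1000.0 for k in range(3001)), key=lambda a: 0.5 * a * a - a)
assert abs(best - 1.0) < 1e-12
```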
Optimal Iteration Complexity
In this section, we provide the first-order iteration complexity analysis of Algorithm 4 and show that it achieves the optimal rate of $\mathcal{O}(\epsilon_g^{-3/2})$.

Algorithm 4 Newton-MR With First-order Complexity Guarantee
1: Input:
- Initial point: x_0
- Approximate first-order optimality tolerance: 0 < ε_g ≤ 1
- Inexactness tolerance: θ > 0
2: k = 0
3: while ‖g_k‖ > ε_g do
4: Call Algorithm 1 as [d_k, D_type] = MINRES(H_k, g_k, θ√ε_g)
5: if D_type = 'SOL' then
6: Find the largest 0 < α_k ≤ 1 satisfying (17) with 0 < ρ < 1/2 using Algorithm 2
7: else
8: Find the largest α_k > 0 satisfying (17) with 0 < ρ < 1 using Algorithm 3
9: end if
10: x_{k+1} = x_k + α_k d_k
11: k = k + 1
12: end while
13: Output: x_k satisfying first-order optimality (3a)
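To make the structure of Algorithm 4 concrete, here is a heavily simplified sketch. For illustration only, the MINRES inner solver of Algorithm 1 is replaced by an exact eigendecomposition: a pseudoinverse solve plays the role of a 'SOL' direction, and an eigenvector of most negative curvature plays the role of an 'NPC' direction. All names, tolerances, and the test function are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def direction(H, g, curv_tol=1e-12):
    # Stand-in for Algorithm 1: exact eigendecomposition instead of MINRES.
    lam, U = np.linalg.eigh(H)
    if lam[0] < -curv_tol:              # negative curvature present
        d = U[:, 0]
        if d @ g > 0:                   # orient so that <d, g> <= 0
            d = -d
        return "NPC", d
    return "SOL", -np.linalg.pinv(H) @ g

def newton_mr_sketch(f, grad, hess, x, eps_g=1e-8, rho=0.25, max_iter=100):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps_g:  # first-order optimality (3a)
            break
        _, d = direction(hess(x), g)
        fx, gd, alpha = f(x), g @ d, 1.0
        # Plain backtracking on (17) for both branches, for brevity.
        while f(x + alpha * d) > fx + rho * alpha * gd and alpha > 1e-16:
            alpha *= 0.5
        x = x + alpha * d
    return x

# A nonconvex test function whose Hessian is indefinite at x0 = (0, 1).
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2
grad = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
hess = lambda x: np.diag([12.0 * x[0] ** 2 - 4.0, 2.0])

x_star = newton_mr_sketch(f, grad, hess, np.array([0.0, 1.0]))
assert np.linalg.norm(grad(x_star)) < 1e-6
```

The negative curvature branch is exactly what lets the iterate escape the saddle-like point at $x_0 = 0$, where a pure (pseudoinverse) Newton step would stall.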
Note that, by Assumption 3,

$$\left\|s_k^{(t)}\right\| \le \frac{\left\langle s_k^{(t)}, H_k s_k^{(t)}\right\rangle}{\sigma\left\|s_k^{(t)}\right\|} \le \frac{\left\|H_k s_k^{(t)}\right\|}{\sigma} \le \frac{\|g_k\|}{\sigma},$$
as long as no NPC direction has been detected. An important ingredient of our analysis is to establish a converse, which will then allow us to obtain a bound on a worst-case decrease in the objective value. Suppose MINRES returns an iterate $d_k = s_k^{(t-1)}$ that satisfies the inexactness condition (6). Lemma 7 shows that the magnitude of this direction serves as an estimate of the gradient norm at the point $x_{k+1} = x_k + d_k$.

Lemma 7. Under Assumptions 1 to 3, if (6) is satisfied with $\eta = \theta\sqrt{\epsilon_g}$ for some $\theta > 0$, we have

$$\left\|s_k^{(t-1)}\right\| \ge c_0 \min\left\{\frac{\left\|\nabla f(x_k + s_k^{(t-1)})\right\|}{\sqrt{\epsilon_g}},\;\sqrt{\epsilon_g}\right\},$$

where $c_0 \triangleq 2\sigma/\left(\theta L_g + \sqrt{\theta^2 L_g^2 + 2\sigma^2 L_H}\right)$, and $L_g$, $L_H$, and $\sigma$ are as in Assumptions 1 to 3.
Proof. First, we note that since $r_k^{(t-1)} \in \mathcal{K}_t(H_k, g_k)$ and the NPC condition (7) has not yet been detected at iteration $t$, Assumption 3 implies that

$$\left\|r_k^{(t-1)}\right\|^2 \le \frac{\left\langle r_k^{(t-1)}, H_k r_k^{(t-1)}\right\rangle}{\sigma} \le \frac{\left\|r_k^{(t-1)}\right\|\left\|H_k r_k^{(t-1)}\right\|}{\sigma},$$

which gives $\|r_k^{(t-1)}\| \le \|H_k r_k^{(t-1)}\|/\sigma$. Denote $g_{k+1} \triangleq \nabla f(x_k + s_k^{(t-1)})$. From Assumptions 1 and 2 and (6), we have

$$\begin{aligned} \|g_{k+1}\| &= \left\|g_{k+1} - g_k - H_k s_k^{(t-1)} - r_k^{(t-1)}\right\| \le \left\|g_{k+1} - g_k - H_k s_k^{(t-1)}\right\| + \left\|r_k^{(t-1)}\right\| \\ &\le \frac{L_H}{2}\left\|s_k^{(t-1)}\right\|^2 + \frac{\left\|H_k r_k^{(t-1)}\right\|}{\sigma} \le \frac{L_H}{2}\left\|s_k^{(t-1)}\right\|^2 + \frac{\eta\left\|H_k s_k^{(t-1)}\right\|}{\sigma} \\ &\le \frac{L_H}{2}\left\|s_k^{(t-1)}\right\|^2 + \frac{\eta L_g\left\|s_k^{(t-1)}\right\|}{\sigma} = \frac{L_H}{2}\left\|s_k^{(t-1)}\right\|^2 + \frac{\theta\sqrt{\epsilon_g}\,L_g\left\|s_k^{(t-1)}\right\|}{\sigma}. \end{aligned}$$

Now, by solving $\sigma L_H\|s_k^{(t-1)}\|^2 + 2\theta\sqrt{\epsilon_g}L_g\|s_k^{(t-1)}\| - 2\sigma\|g_{k+1}\| \ge 0$, we obtain

$$\begin{aligned} \left\|s_k^{(t-1)}\right\| &\ge \frac{-2\theta\sqrt{\epsilon_g}L_g + \sqrt{4\theta^2\epsilon_g L_g^2 + 8\sigma^2 L_H\|g_{k+1}\|}}{2\sigma L_H} = \frac{\left(-\theta L_g + \sqrt{\theta^2 L_g^2 + 2\sigma^2 L_H\|g_{k+1}\|/\epsilon_g}\right)\sqrt{\epsilon_g}}{\sigma L_H} \\ &= \frac{2\sigma\left(\|g_{k+1}\|/\epsilon_g\right)\sqrt{\epsilon_g}}{\theta L_g + \sqrt{\theta^2 L_g^2 + 2\sigma^2 L_H\|g_{k+1}\|/\epsilon_g}} \ge \frac{2\sigma}{\theta L_g + \sqrt{\theta^2 L_g^2 + 2\sigma^2 L_H}}\,\sqrt{\epsilon_g}\,\min\left\{\|g_{k+1}\|/\epsilon_g,\;1\right\}. \end{aligned}$$
The next lemma gives the worst-case amount of descent obtained by using a solution direction satisfying (6).

Lemma 8. Under Assumptions 1 to 3, if D_type = 'SOL', we have

$$f(x_{k+1}) \le f(x_k) - \min\left\{c_1,\; c_2\epsilon_g,\; c_2\|g_{k+1}\|^2/\epsilon_g\right\}, \quad \text{where} \quad c_1 \triangleq \rho\sigma\left(\frac{3\sigma(1-2\rho)}{L_H}\right)^2, \quad c_2 \triangleq \rho\sigma c_0^2,$$

and $c_0$, $L_H$, and $\sigma$ are defined, respectively, in Lemma 7 and Assumptions 2 and 3, and $0 < \rho < 1/2$ is the line-search parameter.
Proof. Since D_type = 'SOL', we have $d_k = s_k^{(t-1)}$. From Assumption 2, for any $0 < \alpha \le 1$,

$$\begin{aligned} f(x_k+\alpha d_k) - f(x_k) - \alpha\rho\langle d_k, g_k\rangle &\le \alpha\langle d_k, g_k\rangle + \frac{\alpha^2}{2}\langle d_k, H_k d_k\rangle + \frac{L_H}{6}\alpha^3\|d_k\|^3 - \alpha\rho\langle d_k, g_k\rangle \\ &\le \alpha(1-\rho)\langle d_k, g_k\rangle + \frac{\alpha}{2}\langle d_k, H_k d_k\rangle + \frac{L_H}{6}\alpha^3\|d_k\|^3 \\ &= \frac{\alpha(1-2\rho)}{2}\langle d_k, g_k\rangle + \frac{\alpha}{2}\left(\langle d_k, g_k\rangle + \langle d_k, H_k d_k\rangle\right) + \frac{L_H}{6}\alpha^3\|d_k\|^3 \\ &\le \frac{\alpha(1-2\rho)}{2}\langle d_k, g_k\rangle + \frac{L_H}{6}\alpha^3\|d_k\|^3 \\ &\le -\frac{\alpha(1-2\rho)}{2}\langle d_k, H_k d_k\rangle + \frac{L_H}{6}\alpha^3\|d_k\|^3 \\ &\le -\frac{\alpha\sigma(1-2\rho)}{2}\|d_k\|^2 + \frac{L_H}{6}\alpha^3\|d_k\|^3, \end{aligned}$$

where the last two inequalities follow from Lemma 13 and Assumption 3, respectively.
To satisfy the line-search criterion (17), we require the right-hand side to be non-positive, which implies that for the largest $\alpha_k$ satisfying the line-search condition (17), we must have

$$\alpha_k \ge \min\left\{1,\;\sqrt{\frac{3\sigma(1-2\rho)}{L_H\|d_k\|}}\right\}.$$

If $\|d_k\| \ge 3\sigma(1-2\rho)/L_H$, it follows that

$$f(x_k+\alpha_k d_k) \le f(x_k) + \alpha_k\rho\langle d_k, g_k\rangle \le f(x_k) - \rho\sqrt{\frac{3\sigma(1-2\rho)}{L_H\|d_k\|}}\,\langle d_k, H_k d_k\rangle \le f(x_k) - \rho\sigma\sqrt{\frac{3\sigma(1-2\rho)}{L_H}}\,\|d_k\|^{3/2} \le f(x_k) - \rho\sigma\left(\frac{3\sigma(1-2\rho)}{L_H}\right)^2.$$
Otherwise, we have $\alpha_k = 1$, and

$$f(x_k+d_k) \le f(x_k) - \rho\sigma\|d_k\|^2 \le f(x_k) - \rho\sigma c_0^2\min\left\{\frac{\|\nabla f(x_k+d_k)\|^2}{\epsilon_g},\;\epsilon_g\right\},$$
where the last inequality follows from Lemma 7.
Suppose the NPC condition (7) is detected at iteration $t$, before the inexactness condition (6) with some $\eta = \theta\sqrt{\epsilon_g} > 0$ is satisfied. By $\|H_k r_k^{(t-1)}\| > \eta\|H_k s_k^{(t-1)}\|$ and by Assumption 1 and (29d), we must have $\|r_k^{(t-1)}\| > \eta\|g_k\|/\sqrt{L_g^2+\eta^2}$. Our choice of the inexactness tolerance $\eta = \theta\sqrt{\epsilon_g}$ could raise the suspicion that we must have $\|r_k^{(t-1)}\| \in \Omega(\sqrt{\epsilon_g})$. However, we now argue that a lower bound for the NPC direction must be independent of $\epsilon_g$. For the NPC direction $r_k^{(t-1)}$, we clearly have $\|r_k^{(t-1)}\| > 0$. In fact, $\|r_k^{(t-1)}\| > \|(I - H_k[H_k]^{\dagger})g_k\|$, and hence as long as $g_k \notin \text{Range}(H_k)$, we obtain a lower bound on $\|r_k^{(t-1)}\|/\|g_k\|$ which is independent of the inexactness tolerance as well as the magnitude of $g_k$. More generally, from the construction of Algorithm 1, we can clearly see that all of the underlying quantities are built from $H_k$ and $v_1 = g_k/\|g_k\|$. That is, for all $t \ge 1$, the relevant factors such as $\tilde\alpha_t$, $\tilde\beta_{t+1}$, and hence $\gamma_t^{(1)}$, $s_t$, and $c_t$, detailed in Appendix A, are all independent of the inexactness tolerance $\eta$ and the magnitude of $g_k$. As a result, the relative residual $\|r_k^{(t-1)}\|/\|g_k\| = \phi_{t-1}/\phi_0 = s_1 s_2 \cdots s_{t-1}$ and the NPC condition (7) (or equivalently (30)) all enjoy the same independence. In addition, the maximum number of iterations to detect an NPC direction, i.e., $T_N$ in (19) below, is also independent of these two factors. Let us further investigate these observations with some examples.
Example 1. Consider the following example
$$H = \begin{bmatrix} L_g & 0 \\ 0 & -\mu \end{bmatrix}, \quad g = -\begin{bmatrix} 1 \\ \tilde\epsilon \end{bmatrix}, \quad u_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
Clearly, as $\tilde\epsilon \to 0$, $g$ becomes increasingly orthogonal to $u_2$. On the other hand, when $\tilde\epsilon \to \pm\infty$, $g$ becomes more aligned with $u_2$. Following Algorithm 1, we have

$$r_0 = -g = \begin{bmatrix} 1 \\ \tilde\epsilon \end{bmatrix}, \quad v_1 = \frac{1}{\sqrt{1+\tilde\epsilon^2}}\begin{bmatrix} 1 \\ \tilde\epsilon \end{bmatrix}.$$
For $t = 1$, we have

$$\tilde\alpha_1 = v_1^{\top} H v_1 = \frac{L_g-\mu\tilde\epsilon^2}{1+\tilde\epsilon^2} = \gamma_1^{(1)}, \quad \tilde\beta_2 = \frac{\tilde\epsilon(L_g+\mu)}{1+\tilde\epsilon^2}, \quad v_2 = \frac{1}{\sqrt{1+\tilde\epsilon^2}}\begin{bmatrix} \tilde\epsilon \\ -1 \end{bmatrix},$$

and

$$\gamma_1^{(2)} = \sqrt{\frac{L_g^2+\tilde\epsilon^2\mu^2}{1+\tilde\epsilon^2}}, \quad c_1 = \frac{L_g-\mu\tilde\epsilon^2}{\sqrt{(1+\tilde\epsilon^2)(L_g^2+\tilde\epsilon^2\mu^2)}}, \quad s_1 = \frac{\tilde\epsilon(L_g+\mu)}{\sqrt{(1+\tilde\epsilon^2)(L_g^2+\tilde\epsilon^2\mu^2)}}.$$

Now, considering $t = 2$, we get
$$\delta_2^{(1)} = \tilde\beta_2, \quad \tilde\alpha_2 = v_2^{\top} H v_2 = \frac{L_g\tilde\epsilon^2-\mu}{1+\tilde\epsilon^2}, \quad \gamma_2^{(1)} = s_1\delta_2^{(1)} - c_1\tilde\alpha_2 = L_g\mu\sqrt{\frac{1+\tilde\epsilon^2}{L_g^2+\tilde\epsilon^2\mu^2}},$$

and $\tilde\beta_3 = 0$, which implies that the algorithm terminates after two iterations. Consider the NPC condition (7) (or equivalently (30)) for $t = 1$ and $t = 2$ as

$$-c_0\gamma_1^{(1)} = \frac{L_g-\mu\tilde\epsilon^2}{1+\tilde\epsilon^2}, \qquad -c_1\gamma_2^{(1)} = -\frac{\left(L_g-\mu\tilde\epsilon^2\right)L_g\mu}{L_g^2+\tilde\epsilon^2\mu^2}.$$
For $r_0$ to be an NPC direction, we need to have $\tilde\epsilon^2 \ge L_g/\mu$. In this case, $r_0 = -g$ and we have the relative residual $\|r_0\|/\|g\| = 1$. Otherwise, if $\tilde\epsilon^2 \le L_g/\mu$, then $r_1$ is an NPC direction. In this case, we have

$$c_2 = \frac{\langle g, u_2\rangle}{\|g\|} = \frac{\tilde\epsilon}{\sqrt{1+\tilde\epsilon^2}}.$$

Hence, we have $\tilde\epsilon^2 \ge c_2^2/(1-c_2^2)$, and

$$\frac{\|r_1\|}{\|g\|} = s_1 = \frac{L_g+\mu}{\sqrt{(1/\tilde\epsilon^2+1)(L_g^2/\tilde\epsilon^2+\mu^2)}} \ge \frac{c_2(L_g+\mu)}{\sqrt{L_g^2(1-c_2^2)/c_2^2+\mu^2}},$$
which is independent of either η or the magnitude of g.
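Example 1's threshold for detecting the NPC direction at the very first iteration is easy to verify numerically: $r_0$ has non-positive curvature exactly when $\tilde\epsilon^2 \ge L_g/\mu$. The constants $L_g = 2$ and $\mu = 1$ below are arbitrary choices for this check.

```python
import numpy as np

Lg, mu = 2.0, 1.0
H = np.diag([Lg, -mu])

def r0_curvature(eps):
    r0 = np.array([1.0, eps])    # r0 = -g for g = -(1, eps)^T
    return r0 @ H @ r0           # equals Lg - mu * eps**2

thr = np.sqrt(Lg / mu)
assert r0_curvature(thr + 0.1) < 0   # eps^2 >= Lg/mu: r0 is NPC at t = 1
assert r0_curvature(thr - 0.1) > 0   # eps^2 <  Lg/mu: NPC only detected at t = 2
assert abs(r0_curvature(thr)) < 1e-12
```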
Example 2. Consider a random matrix H with d = 20, containing 18 positive eigenvalues, 1 negative eigenvalue, and a one dimensional null-space. We consider the following vectors. Let $g_1 = \mathbf{1}$ be the vector of all ones, and let $g_2$ be a random vector drawn from the multivariate normal distribution $\mathcal{N}(0, I)$. We scale $g_2$ appropriately such that $\|g_2\| = \|g_1\|$. Also, let $g_3 = 10 g_1$ and $g_4 = 10 g_2$. According to our observations above, the smallest eigenvalue of $T_t$ using either $g_1$ or $g_3$ must be identical. Similarly, $g_2$ and $g_4$ generate identical Krylov sub-spaces, and hence must detect an NPC direction in an identical manner. Figure 1 depicts the results of Algorithm 1 using these four vectors. While the residual itself depends on the choice of $g_1$ or $g_3$, the relative residual and $\lambda_{\min}(T_t)$ behave identically, resulting in the detection of the NPC condition (7) at the same iterations for both. The same is also seen to be true for $g_2$ and $g_4$. These examples verify our observation that $\|r_{t-1}\|/\|g\|$ is independent of the magnitude of $g$. Note that a zero curvature direction is detected at the last iteration for all four examples, which is expected as per Lemma 2-(ii). These observations lead us to safely conclude that there must exist $\omega > 0$, not depending on $\eta$ or $g_k$, for which we have $\|r_k^{(t-1)}\| \ge \omega\|g_k\|$ whenever $r_k^{(t-1)}$ is returned as an NPC direction; this is formalized in Assumption 4.
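The scale-invariance claim of Example 2 is easy to reproduce with a bare-bones Lanczos recursion, shown below. This is only a sketch of the machinery underlying Algorithm 1 (not the full MINRES implementation): the tridiagonal $T_t$, and hence $\lambda_{\min}(T_t)$ and the NPC test, depend on $g$ only through $v_1 = g/\|g\|$.

```python
import numpy as np

def lanczos_T(H, g, t):
    """Plain Lanczos: return the t x t tridiagonal T_t built from (H, g/||g||)."""
    n = len(g)
    V = np.zeros((n, t + 1))
    alpha, beta = np.zeros(t), np.zeros(t + 1)
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(t):
        w = H @ V[:, i] - (beta[i] * V[:, i - 1] if i > 0 else 0.0)
        alpha[i] = w @ V[:, i]
        w -= alpha[i] * V[:, i]
        beta[i + 1] = np.linalg.norm(w)
        if beta[i + 1] > 0:
            V[:, i + 1] = w / beta[i + 1]
    return np.diag(alpha) + np.diag(beta[1:t], 1) + np.diag(beta[1:t], -1)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
H = (A + A.T) / 2
g = rng.standard_normal(20)

# Scaling g by 10 leaves T_t, and hence lambda_min(T_t), unchanged.
for t in range(1, 8):
    assert np.allclose(lanczos_T(H, g, t), lanczos_T(H, 10 * g, t))
```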
Lemma 9. Under Assumptions 2 and 4, if D_type = 'NPC', we have

$$f(x_{k+1}) - f(x_k) \le -c_3\|g_k\|^{3/2}, \quad \text{where} \quad c_3 \triangleq \rho\,\omega^{3/2}\sqrt{\frac{6(1-\rho)}{L_H}},$$

$L_H$ and $\omega$ are defined in Assumptions 2 and 4, respectively, and $0 < \rho < 1$ is the line-search parameter.
Proof. Since D_type = 'NPC', we have $d_k = r_k^{(t-1)}$ and we must also have $\|d_k\| \ge \omega\|g_k\|$. From (29b), we get

$$\langle d_k, g_k\rangle = \left\langle r_k^{(t-1)}, g_k\right\rangle = -\left\|r_k^{(t-1)}\right\|^2 = -\|d_k\|^2.$$
From Assumption 2, it follows that, for any α > 0,
$$f(x_k + \alpha d_k) - f(x_k) \le \alpha\langle d_k, g_k\rangle + \frac{\alpha^2}{2}\langle d_k, H_k d_k\rangle + \frac{L_H}{6}\alpha^3\|d_k\|^3 \le -\alpha\|d_k\|^2 + \frac{L_H}{6}\alpha^3\|d_k\|^3.$$
To satisfy the line-search criterion (17), we require that α > 0 satisfies
$$-\alpha\|d_k\|^2 + \frac{L_H}{6}\alpha^3\|d_k\|^3 \le \alpha\rho\langle d_k, g_k\rangle = -\alpha\rho\|d_k\|^2,$$

which implies that for the largest $\alpha_k$ satisfying the line-search condition (17), we must have

$$\alpha_k \ge \sqrt{\frac{6(1-\rho)}{L_H\|d_k\|}}.$$
It follows that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \alpha_k\rho\langle d_k, g_k\rangle \le f(x_k) - \rho\sqrt{\frac{6(1-\rho)}{L_H\|d_k\|}}\,\|d_k\|^2 = f(x_k) - \rho\sqrt{\frac{6(1-\rho)}{L_H}}\,\|d_k\|^{3/2} \le f(x_k) - \rho\,\omega^{3/2}\sqrt{\frac{6(1-\rho)}{L_H}}\,\|g_k\|^{3/2}.$$
Note that the result of Lemma 9 holds as long as the direction $d_k$ is a direction of non-positive curvature. In other words, not only can a negative curvature direction yield descent in f, but any direction of zero curvature can also be used to obtain a reduction in the objective value. Now, altogether, we can obtain the following optimal complexity result for Algorithm 4.
Theorem 3 (Optimal Iteration Complexity of Algorithm 4). Under Assumptions 1 to 4, after at most
$$K \triangleq \frac{f(x_0) - f^{\star}}{\min\{c_1, c_2, c_3\}}\,\epsilon_g^{-3/2}$$

iterations of Algorithm 4, the approximate first-order optimality (3a) is satisfied. Here, $c_1$, $c_2$ and $c_3$ are defined in Lemmas 8 and 9, and $f^{\star} \triangleq \min_{x} f(x)$.
Proof. Suppose $\|g_k\| \le \epsilon_g$, for the first time, at $k = K + 1$. Then, we must have $\|g_k\| > \epsilon_g$ for $k = 0, \ldots, K$. Suppose at some iteration $0 \le k \le K - 1$, we have D_type = 'SOL', in which case by Lemma 8, we get

$$f(x_k) - f(x_{k+1}) \ge \min\left\{c_1,\; c_2\epsilon_g,\; c_2\|g_{k+1}\|^2/\epsilon_g\right\} \ge \min\{c_1, c_2\}\,\epsilon_g.$$
Alternatively, if D_type = 'NPC', then by Lemma 9, we have $f(x_k) - f(x_{k+1}) \ge c_3\|g_k\|^{3/2} \ge c_3\,\epsilon_g^{3/2}$. It follows that

$$f(x_0) - f(x_K) = \sum_{k=0}^{K-1}\left(f(x_k) - f(x_{k+1})\right) \ge \sum_{k=0}^{K-1}\min\left\{\min\{c_1, c_2\}\,\epsilon_g,\; c_3\,\epsilon_g^{3/2}\right\} \ge K\min\{c_1, c_2, c_3\}\,\epsilon_g^{3/2} > f(x_0) - f^{\star},$$

where the last inequality holds once $K$ exceeds the stated bound. This implies $f^{\star} > f(x_K)$, which is a contradiction.
Optimal Operation Complexity
In order to establish a complexity bound for Algorithm 1 to return a direction that either satisfies the inexactness condition (6) or the NPC condition (7), we will now make a few assumptions to uniformly bound some of the quantities that appear in Section 2.2.

Assumption 5. Recall the decomposition in (9) and (11). There exist $\mu > 0$ and $L_g^2/(L_g^2 + \theta^2\epsilon_g) \le \nu \le 1$ such that, for any $x \in \mathbb{R}^d$, if $g \notin \text{Null}(H)$, then at least one of the following holds.
(i) If $\psi_+ \ge 1$ and $\psi_- \ge 1$, there exist some $1 \le i \le \psi_+$ and $\psi_+ + \psi_0 + 1 \le j \le \psi$ for which

$$\min\{\lambda_i, |\lambda_j|\} \ge \mu, \tag{18a}$$
$$\left\|\left(U_{i+}U_{i+}^{\top} + U_{j-}U_{j-}^{\top}\right)g\right\|^2 \ge \nu\left\|UU^{\top} g\right\|^2. \tag{18b}$$

(ii) If $\psi_+ \ge 1$, there exists some $1 \le i \le \psi_+$ for which

$$\lambda_i \ge \mu, \tag{18c}$$
$$\left\|U_{i+}U_{i+}^{\top} g\right\|^2 \ge \nu\left\|UU^{\top} g\right\|^2. \tag{18d}$$

(iii) If $\psi_- \ge 1$, there exists some $\psi_+ + \psi_0 + 1 \le j \le \psi$ for which

$$|\lambda_j| \ge \mu, \tag{18e}$$
$$\left\|U_{j-}U_{j-}^{\top} g\right\|^2 \ge \nu\left\|UU^{\top} g\right\|^2. \tag{18f}$$
Assumption 5 is relatively mild. In essence, Assumption 5 only necessitates that H has some g-relevant eigenvalue which is sufficiently large, and that g has a non-trivial projection on the corresponding eigen-space. In fact, (18a) and (18b) are significant relaxations of [62, Assumptions 3 and 4]. Indeed, the regularity conditions (18a) and (18b) are only required to hold for some $1 \le i \le \psi_+$ and $\psi_+ + \psi_0 + 1 \le j \le \psi$ among the g-relevant spectrum of H, whereas in [62], such conditions are assumed for the entire spectrum of H. More importantly, [62, Assumption 4] partitions $\mathbb{R}^d$ into two complementary sub-spaces, namely $\text{Range}(H)$ and its orthogonal complement, $\text{Null}(H)$, and then relates the magnitudes of the projections of g on these two sub-spaces. Assumption (18b) does a similar partitioning but restricted to the sub-space $\text{Range}(U)$, as opposed to the entire $\mathbb{R}^d$. More specifically, (18b) merely relates the magnitude of the projection of g on the sub-space $U_{i+}$ and/or $U_{j-}$ to that on the complement of this space with respect to U.
In fact, (18b) is always trivially satisfied with $\nu = 1$, $i = \psi_+$ and $j = \psi_+ + \psi_0 + 1$. As a result, replacing the entirety of Assumption 5 with the stronger requirement that $\|U^{\top} H^{\dagger} U\| \le 1/\mu$ (or equivalently $\min\{\lambda_{\psi_+}, |\lambda_{\psi_++\psi_0+1}|\} \ge \mu$) is entirely sufficient to establish all the results of this paper with $\nu = 1$. We also note that none of the properties (i) to (iii) in Assumption 5 is strictly weaker/stronger than the other two. In fact, we can see that either condition (18d) or (18f) implies (18b), whereas (18a) is stronger than either of (18c) or (18e). Under Assumptions 1 and 5-(iii), Corollary 2 implies that if $\psi_- \ge 1$, then after at most
$$T_N \triangleq \min\left\{\max\left\{\frac{\sqrt{2(L_g+\mu)/\mu}}{4}\log\left(\frac{2(L_g+\mu)(1-\nu)}{\mu\nu}\right)+1,\;1\right\},\;\mathrm{g}\right\}, \tag{19}$$
iterations of Algorithm 1, the NPC condition (7) must be detected. Of course, if $g \in \text{Null}(H)$, then g itself is immediately declared as the zero curvature direction at the outset. If $\nu = 1$, then Assumption 5-(iii) implies that all non-zero g-relevant eigenvalues of H are in fact negative, which in turn implies $T_N = 1$, i.e., the NPC direction is detected in a single iteration. Also, under Assumption 5, we have from Lemmas 4 and 5 and Corollary 1 that after at most
$$T_S \triangleq \min\left\{\frac{\sqrt{L_g/\mu}}{4}\log\left(\frac{4}{\eta^2/(L_g^2+\eta^2)-(1-\nu)}\right)+1,\;\mathrm{g}\right\}, \tag{20}$$
iterations, the inexactness condition (6) is satisfied. Putting this all together, we conclude that if Property (iii) of Assumption 5 holds, Algorithm 1 returns a direction in at most $\min\{T_N, T_S\}$ iterations. Indeed, in this case, it takes at most $T_N$ iterations to detect the NPC condition (7). It also takes at most $T_S$ iterations to satisfy the inexactness condition (6). Algorithm 1 terminates if either of these occurs first. If $\psi_- = 0$, however, no negative curvature is detected across the iterations, and Algorithm 1 terminates after at most $T_S$ iterations. The returned direction in this case always satisfies (6).
We can now combine Theorem 3 with the complexity analysis of MINRES in Section 2.2 to obtain optimal first-order operation complexity, i.e., the total number of gradient and Hessian-vector product evaluations for Algorithm 4 to find a point that satisfies (3a). Note that, unlike the iteration complexity of Theorem 3, to obtain operation complexity, we need to consider Assumption 5.
Theorem 4 (Optimal Operation Complexity of Algorithm 4). Suppose d is sufficiently large. Under Assumptions 1 to 5, after at most $\tilde{\mathcal{O}}(\epsilon_g^{-3/2})$ gradient and Hessian-vector product evaluations, Algorithm 4 satisfies the approximate first-order optimality (3a).
Proof. Every iteration of Algorithm 1 requires one Hessian-vector product. By the proof of Theorem 3, we see that at most

$$K_S \triangleq \frac{f(x_0) - f^{\star}}{\min\{c_1, c_2\}}\,\epsilon_g^{-1}$$

iterations of Algorithm 4 use directions with D_type = "SOL" from Algorithm 1. Similarly, at most

$$K_N \triangleq \frac{f(x_0) - f^{\star}}{c_3}\,\epsilon_g^{-3/2}$$

iterations employ an NPC direction. Assuming d is large enough, and considering (19) and (20), the total number of gradient or Hessian-vector products for Algorithm 4 to reach a solution satisfying (3a) is at most

$$T_S\left(K_S + K_N\right) \in \tilde{\mathcal{O}}(1)\,\mathcal{O}\left(\epsilon_g^{-1}\right) + \tilde{\mathcal{O}}(1)\,\mathcal{O}\left(\epsilon_g^{-3/2}\right) \in \tilde{\mathcal{O}}\left(\epsilon_g^{-3/2}\right).$$
We end this section by noting that without Assumption 5, or when d is small, the total number of operations with gradients and Hessian-vector products for Algorithm 4 to obtain (3a) is at most $d\,\mathcal{O}(\epsilon_g^{-3/2})$.
Newton-MR With Second-order Complexity Guarantee
Algorithm 4 provides a first-order complexity guarantee (3a), and hence the iterations are terminated once $\|g_k\| \le \epsilon_g$. In non-convex optimization, however, $\|g_k\| \le \epsilon_g$, in particular $g_k = 0$, does not necessarily imply that the algorithm has converged to (a vicinity of) a local minimum. Hence, it is often desired to provide stronger convergence guarantees in the form of both (3a) and (3b). Algorithm 5 depicts a variant of Newton-MR for which we can establish second-order complexity guarantees. As can be seen, for the most part, Algorithm 5 is identical to Algorithm 4. The main difference lies in the case $\|g_k\| \le \epsilon_g$, where Algorithm 4 terminates, while Algorithm 5 aims to either find a negative curvature direction or to certify (3b).
When $\|g_k\|$ is too small, calling Algorithm 1 with such $g_k$ may cease to provide beneficial utility in obtaining a descent direction. In fact, in the extreme case where $g_k = 0$, Algorithm 1 returns $d_k = 0$ and the optimization algorithm stagnates. As a remedy, and in the context of Newton-CG, [64] replaces $g_k$ with an appropriately chosen random vector $\tilde{g}$, namely one drawn from the uniform distribution over the unit sphere. In addition, in lieu of the true Hessian $H_k$, the perturbation $\tilde{H}_k = H_k + 0.5\epsilon_H I$ is used within the standard CG method. If $\lambda_{\min}(H_k) \le -\epsilon_H/2$, then (with high probability) a negative curvature direction for $\tilde{H}_k$ is encountered within a certain number of iterations. Otherwise, it is concluded that $H_k \succeq -(\epsilon_H/2)I$ (with high probability), which in turn implies (3b). In our Newton-MR setting, we adopt a similar approach.
Optimal Iteration Complexity
We can use the complexity results from Section 3.2 for all the iterations where $\|g_k\| \ge \epsilon_g$. The only missing piece is the implication of using the direction obtained from MINRES in the case where $\|g_k\| \le \epsilon_g$.

Algorithm 5 Newton-MR With Second-order Complexity Guarantee
1: Input:
- Initial point: x_0
- Approximate second-order optimality tolerances: 0 < ε_g ≤ 1 and 0 < ε_H ≤ 1
- Inexactness tolerance: θ > 0
2: k = 0
3: while not terminate do
4: if ‖g_k‖ > ε_g then
5: Call Algorithm 1 as [d_k, D_type] = MINRES(H_k, g_k, θ√ε_g)
6: if D_type = 'SOL' then
7: Find the largest 0 < α_k ≤ 1 satisfying (17) with 0 < ρ < 1/2 using Algorithm 2
8: else
9: Find the largest α_k > 0 satisfying (17) with 0 < ρ < 1 using Algorithm 3
10: end if
11: else
12: Randomly generate g̃ from the uniform distribution on the unit sphere
13: Call Algorithm 1 as [d_k, D_type] = MINRES(H̃_k, g̃, θ√ε_g), where H̃_k = H_k + 0.5ε_H I
14: if D_type = 'NPC' then
15: d_k = −Sign(⟨g_k, d_k⟩) d_k/‖d_k‖
16: Find the largest α_k > 0 satisfying (21) with 0 < ρ < 1 using Algorithm 3
17: else
18: Terminate
19: end if
20: end if
21: x_{k+1} = x_k + α_k d_k
22: k = k + 1
23: end while
24: Output: x_k satisfying second-order optimality (3)

For small $\|g_k\|$, the line-search condition (17) no longer provides a meaningful criterion for choosing the step-size. Hence, we replace the line-search (17) with
$$f(x_k + \alpha_k d_k) \le f(x_k) + \frac{1}{2}\rho\alpha_k^2\langle d_k, H_k d_k\rangle, \tag{21}$$
for some $0 < \rho < 1$. Note that we can evaluate the term $\langle d_k, H_k d_k\rangle$ in (21) without any additional Hessian-vector product. Indeed, from Step 15 of Algorithm 5, we have $d_k = -\text{Sign}(\langle g_k, d_k\rangle)\,d_k/\|d_k\|$, which using (29c) gives

$$\langle d_k, H_k d_k\rangle = \langle d_k, \tilde{H}_k d_k\rangle - \frac{\epsilon_H}{2}\|d_k\|^2 = \frac{\left\langle r_k^{(t-1)}, \tilde{H}_k r_k^{(t-1)}\right\rangle}{\left\|r_k^{(t-1)}\right\|^2} - \frac{\epsilon_H}{2} = -c_{t-1}\gamma_t^{(1)} - \frac{\epsilon_H}{2},$$
where the residual $r_k^{(t-1)}$ and the scalars $c_{t-1}$ and $\gamma_t^{(1)}$ are readily available from the iterations of Algorithm 1.

Lemma 10. Suppose Assumption 2 holds. If $\|g_k\| \le \epsilon_g$ and Step 13 of Algorithm 5 returns an NPC direction, we have

$$f(x_{k+1}) - f(x_k) \le -c_4\,\epsilon_H^3, \quad \text{where} \quad c_4 \triangleq \frac{9\rho(1-\rho)^2}{16 L_H^2},$$

$L_H$ is as in Assumption 2, and $0 < \rho < 1$ is the line-search parameter in (21).

Proof. By the construction of $d_k$ in Step 15, we have

$$\langle d_k, g_k\rangle \le 0, \quad \text{and} \quad \langle d_k, H_k d_k\rangle \le -\frac{\epsilon_H}{2}\|d_k\|^2 = -\frac{\epsilon_H}{2}.$$
Similar to the proof of Lemma 9, we have
$$f(x_k + \alpha d_k) - f(x_k) \le \alpha\langle d_k, g_k\rangle + \frac{\alpha^2}{2}\langle d_k, H_k d_k\rangle + \frac{L_H}{6}\alpha^3\|d_k\|^3 \le \frac{\alpha^2}{2}\langle d_k, H_k d_k\rangle + \frac{L_H}{6}\alpha^3,$$
and it follows that for the largest $\alpha_k$ satisfying the line-search condition (21), we must have

$$\alpha_k \ge \frac{3(1-\rho)\epsilon_H}{2L_H}.$$
So, it follows that

$$f(x_k + \alpha_k d_k) - f(x_k) \le \frac{1}{2}\rho\alpha_k^2\langle d_k, H_k d_k\rangle \le -\frac{1}{4}\rho\alpha_k^2\epsilon_H \le -\frac{9\rho(1-\rho)^2\epsilon_H^3}{16 L_H^2}.$$
Lemma 10 assumes that Step 13 of Algorithm 5 successfully returns an NPC direction, which can be guaranteed to hold with probability one over the draws of $\tilde{g}$. Indeed, by randomly drawing $\tilde{g}$ from the uniform distribution on the unit sphere, we ensure that, with probability one, the grade of $\tilde{g}$ with respect to $\tilde{H}_k$ is $|\Theta(\tilde{H}_k)|$, i.e., $\Psi(\tilde{H}_k, \tilde{g}) = \Theta(\tilde{H}_k)$. Hence, Theorem 2 implies that, with probability one, $\tilde{H}_k \not\succeq 0$ if and only if Algorithm 1 detects an NPC direction for $\tilde{H}_k$. This, in turn, amounts to either obtaining a direction of negative curvature for $H_k$ or else guaranteeing that $\tilde{H}_k \succeq 0$, which is in fact a certificate for (3b). As a result, if $\lambda_{\min}(H_k) \le -\epsilon_H$, Step 13 of Algorithm 5 successfully returns an NPC direction with probability one.
We are now in a position to give the iteration and operation complexities of Algorithm 5 to achieve approximate second-order optimality condition (3).
Theorem 5 (Optimal Iteration Complexity of Algorithm 5).
Define
$$K \triangleq \frac{2\left(f(x_0) - f^{\star}\right)}{\min\{c_1, c_2, c_3, c_4\}}\,\max\left\{\epsilon_g^{-3/2},\;\epsilon_H^{-3}\right\} + 1,$$

where $c_1$, $c_2$, $c_3$ and $c_4$ are as in Lemmas 8 to 10, and $f^{\star} \triangleq \min_{x} f(x)$. Under Assumptions 1 to 4, after at most K iterations of Algorithm 5, the approximate second-order optimality (3) is satisfied with probability one.
Proof. We provide the analysis conditioned on the event that Step 13 of Algorithm 5 is successful; incorporating failure probabilities is straightforward. Suppose $\|g_k\| \le \epsilon_g$ and $\lambda_{\min}(H_k) \ge -\epsilon_H$, for the first time, at $k = K + 1$. Then, we must have $\|g_k\| > \epsilon_g$ or $\lambda_{\min}(H_k) \le -\epsilon_H$ for any $k = 0, \ldots, K$. Any iteration $k = 0, \ldots, K$ must belong to one of these three sets:

$$\begin{aligned} \mathcal{K}_1 &= \left\{0 \le k \le K \;\middle|\; \min\{\|g_k\|, \|g_{k+1}\|\} > \epsilon_g\right\}, \\ \mathcal{K}_2 &= \left\{0 \le k \le K \;\middle|\; \|g_k\| \le \epsilon_g\right\} = \left\{0 \le k \le K \;\middle|\; \|g_k\| \le \epsilon_g \text{ and } \lambda_{\min}(H_k) \le -\epsilon_H\right\}, \\ \mathcal{K}_3 &= \left\{0 \le k \le K \;\middle|\; \|g_k\| > \epsilon_g \ge \|g_{k+1}\|\right\}. \end{aligned}$$
So, it follows that
$$f(x_0) - f^{\star} \ge f(x_0) - f(x_{K+1}) = \sum_{k=0}^{K}\left(f(x_k) - f(x_{k+1})\right) \ge \sum_{k\in\mathcal{K}_1}\left(f(x_k) - f(x_{k+1})\right) \ge |\mathcal{K}_1|\min\{c_1, c_2, c_3\}\,\epsilon_g^{3/2},$$
where the last inequality follows from Lemmas 8 and 9. Similarly, from Lemma 10, we have
$$f(x_0) - f^{\star} \ge \sum_{k=0}^{K}\left(f(x_k) - f(x_{k+1})\right) \ge \sum_{k\in\mathcal{K}_2}\left(f(x_k) - f(x_{k+1})\right) \ge |\mathcal{K}_2|\,c_4\,\epsilon_H^3.$$
Lastly, if k ∈ K 3 , then k + 1 ∈ K 2 , and hence |K 3 | ≤ |K 2 | + 1.
Putting this all together, we have
$$|\mathcal{K}_1| + |\mathcal{K}_2| + |\mathcal{K}_3| \le \frac{f(x_0) - f^{\star}}{\min\{c_1, c_2, c_3\}}\,\epsilon_g^{-3/2} + \frac{2\left(f(x_0) - f^{\star}\right)}{c_4}\,\epsilon_H^{-3} + 1.$$
Optimal Operation Complexity
We now discuss the complexity of Step 13 of Algorithm 5. Since Algorithm 1 builds on the exact same Krylov sub-space as the randomized Lanczos method, in the absence of any additional structural assumption on f, we can use a similar complexity analysis as that in [46,64]. Indeed, from [46, Theorem 4.2] or [64, Lemma 9] (adapted to our setting), with probability at least $1-\delta$, after at most

$$t = \min\left\{\frac{1}{4\sqrt{\bar\epsilon}}\log\left(\frac{2.75\,d}{\delta^2}\right)+1,\;d\right\}$$

iterations, we have

$$\lambda_{\min}(\tilde{T}_t) - \lambda_{\min}(\tilde{H}) \le \bar\epsilon\left(\lambda_{\max}(\tilde{H}) - \lambda_{\min}(\tilde{H})\right),$$

where $0 < \delta < 1$, $\bar\epsilon > 0$, and $\tilde{T}_t$ is the tridiagonal matrix as in (25) obtained from $\tilde{H} = H + 0.5\epsilon_H I$ and $\tilde{g}$. Letting $\bar\epsilon = \epsilon_H/(4L_g)$, we get

$$\lambda_{\min}(\tilde{T}_t) - \lambda_{\min}(\tilde{H}) \le \frac{\epsilon_H}{4L_g}\left(\lambda_{\max}(\tilde{H}) - \lambda_{\min}(\tilde{H})\right) = \frac{\epsilon_H}{4L_g}\left(\lambda_{\max}(H) - \lambda_{\min}(H)\right) \le \frac{\epsilon_H}{2},$$

which implies $\lambda_{\min}(H) = \lambda_{\min}(\tilde{H}) - \epsilon_H/2 \ge \lambda_{\min}(\tilde{T}_t) - \epsilon_H$.
As a result, if after
$$T_L \triangleq \min\left\{\frac{\sqrt{L_g/\epsilon_H}}{2}\log\left(\frac{2.75\,d}{\delta^2}\right)+1,\;d\right\}, \tag{22}$$
iterations, no NPC direction is detected in Step 13 of Algorithm 5, i.e., $\tilde{T}_t \succ 0$ for all $t \le T_L$ (cf. Lemma 12), then we must have $\lambda_{\min}(H) > -\epsilon_H$, with probability $1-\delta$. In other words, if $\lambda_{\min}(H_k) \le -\epsilon_H$, Step 13 of Algorithm 5 can find a direction of negative curvature for $\tilde{H}_k$, with probability $1-\delta$, in at most $T_L$ iterations.
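The certificate logic of Step 13 can be illustrated numerically with a bare-bones Lanczos recursion: on a shifted matrix $\tilde{H} = H + 0.5\epsilon_H I$ with $\lambda_{\min}(H) \le -\epsilon_H$, running the number of iterations suggested by the $T_L$ bound (22) drives $\lambda_{\min}(\tilde{T}_t)$ below zero, exposing the negative curvature. The values $\epsilon_H = 0.1$, $L_g = 1$ (the spectrum lies in $[-1, 1]$) and $\delta = 0.01$ are assumptions of this sketch.

```python
import numpy as np

def lanczos_lmin(H, v, t):
    """Smallest eigenvalue of the tridiagonal T_t from t Lanczos steps on (H, v)."""
    alpha, beta, v_prev, b = [], [], np.zeros_like(v), 0.0
    v = v / np.linalg.norm(v)
    for _ in range(t):
        w = H @ v - b * v_prev
        a = w @ v
        w -= a * v
        alpha.append(a)
        b = np.linalg.norm(w)
        beta.append(b)
        if b == 0:
            break
        v_prev, v = v, w / b
    off = beta[:len(alpha) - 1]
    T = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(1)
d, eps_H, delta = 200, 0.1, 0.01
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(np.linspace(-1.0, 1.0, d)) @ Q.T   # lambda_min(H) = -1 <= -eps_H
H_tilde = H + 0.5 * eps_H * np.eye(d)               # lambda_min(H_tilde) = -0.95
g_tilde = rng.standard_normal(d)                    # random probe, as in Step 13

# Iteration budget from the T_L bound (22), with L_g = 1.
t = int(np.ceil(0.5 * np.sqrt(1.0 / eps_H) * np.log(2.75 * d / delta ** 2))) + 1
lmin = lanczos_lmin(H_tilde, g_tilde, t)
assert lmin <= 0.0                  # negative curvature of H_tilde is exposed
assert lmin >= -0.95 - 1e-8         # Ritz values never undershoot lambda_min
```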
Step 13 of Algorithm 5 is a probabilistic procedure. Hence, in order to guarantee success over the life of Algorithm 5, we need to ensure a small failure probability for this step across all iterations. In particular, in order to get an overall, accumulative success probability of $1-\delta$ for the entire K iterations, the per-iteration failure probability is set as $(1-\sqrt[K]{1-\delta}\,) \in \mathcal{O}(\delta/K)$. This failure probability appears only in the "log factor" in (22), and so it is not a dominating cost. Hence, requiring that all K iterations are successful, for a large K, only necessitates a small (logarithmic) increase in the worst-case iteration count of Algorithm 1 as part of Step 13 of Algorithm 5. For example, for $K \in \mathcal{O}(\max\{\epsilon_g^{-3/2}, \epsilon_H^{-3}\})$, we can set the per-iteration failure probability to $\delta\min\{\epsilon_g^{3/2}, \epsilon_H^{3}\}$, and ensure that when Algorithm 5 terminates, Step 13 has been reliably successful, with an accumulative probability of $1-\delta$.

Theorem 6 (Optimal Operation Complexity of Algorithm 5). Suppose d is sufficiently large. Under Assumptions 1 to 5, after at most $\tilde{\mathcal{O}}(\max\{\epsilon_g^{-3/2}, \epsilon_H^{-7/2}\})$ gradient and Hessian-vector product evaluations, Algorithm 5 satisfies the approximate second-order optimality (3), with high probability.

Proof. Again, we provide a deterministic analysis, which can trivially be modified to incorporate failure probabilities. Every iteration of Algorithm 1 requires one Hessian-vector product.
Hence, similar to the proof of Theorem 4, under Assumption 5, using (20) and (22), the total number of Hessian-vector products is no more than
$$T_S\left(|\mathcal{K}_1| + |\mathcal{K}_3|\right) + T_L|\mathcal{K}_2| \in \tilde{\mathcal{O}}(1)\left(\mathcal{O}\left(\epsilon_g^{-3/2}\right) + \mathcal{O}\left(\epsilon_H^{-3}\right)\right) + \tilde{\mathcal{O}}\left(\epsilon_H^{-1/2}\right)\mathcal{O}\left(\epsilon_H^{-3}\right).$$
Clearly, if $\epsilon_H^2 = \epsilon_g = \epsilon$, the operation complexity of Algorithm 5 to achieve (3) is $\tilde{\mathcal{O}}(\epsilon^{-7/4})$, which matches those of alternative algorithms with similar optimal guarantees. Finally, without Assumption 5, or in small dimensional problems, the operation complexity of Algorithm 5 to obtain (3) is at most $d\max\{\mathcal{O}(\epsilon_g^{-3/2}), \mathcal{O}(\epsilon_H^{-3})\}$.
Benign Saddle Regions. The bound given in (22) is obtained directly from the complexity result of the randomized Lanczos method [46] without any additional structural assumption on f other than Assumption 1. It turns out that as long as the regions near the saddle points of f exhibit sufficiently large negative curvature, one can obtain an improved operation complexity as compared with Theorem 6.
Assumption 6 (Benign Saddle Property). The function f has the (ι, µ, ς)-benign saddle property, i.e., there exists ι > 0, µ > 0, and 0 ≤ ς < µ, such that for any point x ∈ R d , at least one of the following holds:
(i) $\|\nabla f(x)\| \geq \iota$, (ii) $\lambda_{\min}(\nabla^2 f(x)) \leq -\mu$, or (iii) $\lambda_{\min}(\nabla^2 f(x)) \geq -\varsigma$.
Clearly, if we allow ς = µ, then Assumption 6 would be trivially satisfied by all twice differentiable functions. Non-triviality of Assumption 6 lies in the discrepancy between ς and µ. Coupled with Hessian continuity from Assumption 2, the optimization landscape of functions satisfying Assumption 6 is in essence structured such that going from the vicinity of saddle points to near local minima entails navigating steep regions with large enough gradients.
It turns out that Assumption 6 is in fact a relaxation of the strict saddle property, which has become a standard assumption in analyzing non-convex optimization algorithms that can escape saddle points, e.g., [1,30,48,49,59,60,61,70]. Recall that a function is said to satisfy the $(\iota, \mu, \vartheta, \Delta)$-strict saddle property if, for any $x \in \mathbb{R}^d$, we have either $\|\nabla f(x)\| \geq \iota > 0$, $\lambda_{\min}(\nabla^2 f(x)) \leq -\mu < 0$, or there is a local minimum $x^\star$ such that $\|x - x^\star\| \leq \Delta$ and $\lambda_{\min}(\nabla^2 f(x')) \geq \vartheta > 0$ for all $x'$ in the neighborhood $\|x' - x^\star\| \leq 2\Delta$. It has been shown that many interesting machine learning problems satisfy the strict saddle property, e.g., online tensor decomposition [70], dictionary recovery problems [71], the (generalized) phase retrieval problem [72], and the phase synchronization and community detection problems [3,12]. It is easy to see that the $(\iota, \mu, \vartheta, \Delta)$-strict saddle property implies the $(\iota, \mu, 0)$-benign saddle property. In this light, there are many more functions that enjoy the $(\iota, \mu, 0)$-benign saddle property than those with an $(\iota, \mu, \vartheta, \Delta)$-strict saddle characteristic.
Leveraging the benign saddle assumption, we will apply Corollary 2 to obtain an alternative bound to $T_L$ in (22). For this, we need to find an estimate on the projection of $\tilde{g}$ onto a given eigenspace of $\tilde{H}$, i.e., $\nu_j$ as in (14). Fortunately, the particular choice for generating $\tilde{g}$ allows us to do just that. Indeed, suppose $d \geq 3$ and let $\tilde{g}$ be randomly generated from a uniform distribution on the unit sphere, i.e., $\tilde{g} = [g_1, g_2, \dots, g_d]^\top / \|g\|$, where $\|g\|^2 = \sum_{i=1}^{d} g_i^2$ and $g_i \sim \mathcal{N}(0,1)$. Consider $\tilde{\nu} = \langle \tilde{g}, u\rangle^2$, where $u$ is any unit eigenvector of $\tilde{H}$. By spherical symmetry, the distribution of this dot product is the same as that of $\langle \tilde{g}, e_1\rangle^2$, where $e_1$ is the first column of the identity matrix, so we consider $\tilde{\nu} = \langle e_1, \tilde{g}\rangle^2 = g_1^2/\|g\|^2$. Recall that $\tilde{\nu} \sim B(1/2, (d-1)/2)$, where $B$ denotes the beta distribution. Hence, for any $0 < \nu < 1$, we obtain
$$\delta \triangleq \Pr(\tilde{\nu} \leq \nu) = \int_0^{\nu} c(d)\, t^{-1/2} (1-t)^{(d-3)/2}\, \mathrm{d}t \leq 2 c(d)\sqrt{\nu}, \quad \text{where} \quad c(d) \triangleq \frac{\Gamma(d/2)}{\sqrt{\pi}\,\Gamma\left((d-1)/2\right)}, \tag{23}$$
and $\Gamma$ is the Gamma function. So, it follows that, with probability $1-\delta$, we have $\tilde{\nu} \geq \delta^2/\left(4c^2(d)\right)$.
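The tail bound above can be checked by simulation; the following sketch (with illustrative parameters) draws points uniformly from the unit sphere and confirms that ν̃ falls below δ²/(4c²(d)) with frequency at most about δ:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d, delta, trials = 50, 0.1, 20000

# c(d) = Gamma(d/2) / (sqrt(pi) * Gamma((d-1)/2)), computed in log-space.
c = math.exp(math.lgamma(d / 2) - math.lgamma((d - 1) / 2)) / math.sqrt(math.pi)

# nu_tilde = <g_tilde, e_1>^2 for g_tilde uniform on the unit sphere.
g = rng.standard_normal((trials, d))
nu = g[:, 0] ** 2 / (g ** 2).sum(axis=1)

# The bound (23) gives Pr(nu_tilde <= nu_bar) <= 2 c(d) sqrt(nu_bar);
# choosing nu_bar = delta^2 / (4 c^2) makes the right-hand side equal delta.
nu_bar = delta ** 2 / (4 * c ** 2)
frac = (nu <= nu_bar).mean()
print(c, nu_bar, frac)  # frac should be close to, and not far above, delta
```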
Putting this all together, consider any function with the $(\iota, \mu, \varsigma)$-benign saddle property. Letting $0 < \varepsilon_g \leq \iota$ and $\varsigma < \varepsilon_H/2 < \mu$, we can apply Corollary 2 with $\tilde{H}$ and $\tilde{g}$ to guarantee that, with probability $1-\delta$, in at most
$$T_P \triangleq \min\left\{\max\left\{\left\lceil \frac{1}{4}\sqrt{\frac{L_g+\mu}{\mu - 0.5\,\varepsilon_H}}\,\log\left(\frac{4(L_g+\mu)}{(\mu - 0.5\,\varepsilon_H)}\cdot\frac{4c^2(d)}{\delta^2}\right) - 1\right\rceil + 1,\; 3\right\},\; d\right\}, \tag{24}$$
iterations, a NPC direction for $\tilde{H}$ is detected, where $c(d)$ is as in (23). In other words, if a NPC direction is never detected in $T_P$ iterations, then with probability $1-\delta$ we have $\lambda_{\min}(H) \geq -\varsigma$.
Finally, we obtain the following improved operation complexity for functions that enjoy the benign saddle property, whose proof is almost identical to that of Theorem 6 and is hence omitted.
Corollary 4 (Operation Complexity of Algorithm 5 for Functions with Benign Saddle Regions). Suppose $d$ is sufficiently large and Assumptions 1 to 5 hold. Further, suppose the function $f$ also satisfies the $(\iota, \mu, \varsigma)$-benign saddle property as in Assumption 6. Let $0 < \varepsilon_g \leq \iota$ and $\varsigma < \varepsilon_H/2 < \mu$. After at most $\max\left\{\tilde{\mathcal{O}}\left(\varepsilon_g^{-3/2}\right), \tilde{\mathcal{O}}\left(\varepsilon_H^{-3}\right)\right\}$ gradient and Hessian-vector product evaluations, Algorithm 5 finds a point $x$ such that $\|g(x)\| \leq \varepsilon_g$ and $\lambda_{\min}(H(x)) \geq -\varsigma$, with probability $1-\delta$.
Numerical Experiments
In this section, we will evaluate the performance of Algorithm 4 on several examples, namely, non-linear least squares (Section 4.1), deep auto-encoders (Section 4.2), and a series of problems from the CUTEst test collection (Section 4.3). We compare Algorithm 4 with the following alternative Newton-type methods.
-Newton-CG-LS-FW. This is identical to Newton-CG-LS, except that, when negative curvature directions are encountered, we incorporate the forward/backward tracking line-search of Algorithm 3 within the framework of [64, Algorithm 3]. This is mainly to create as level a playing field as possible among the various methods, in particular the Newton-CG variants. We also note that this change is in fact consistent with the theoretical analysis of [64] and does not raise any theoretical concerns.
-Newton-CG-TR-Steihaug [57,Algorithm 4.1]. A trust-region method with CG-Steihaug sub-problem solver.
-Newton-CG-TR [22,Algorithm 4.1]. A trust-region method based on Capped-CG algorithm and small Hessian perturbations.
We set the maximum number of iterations and the respective inexactness tolerance for all sub-problem solvers, i.e., CG/CR/MINRES, to 1,000 and 0.1, respectively. For trust-region methods, the radius is enlarged by a factor of 3 when the reduction in the objective function is larger than 20% of what is predicted by the underlying quadratic model; otherwise, the radius is cut in half. The initial and the maximum trust-region radii are chosen to be 1 and $10^{10}$, respectively. For line-search methods, the line-search algorithm is initialized with the unit step-size, the Armijo parameter is set to $10^{-4}$, and the parameter for Wolfe's curvature condition [57] is 0.1. Also, the maximum number of line-search iterations is set to 1,000. We terminate the optimization algorithms if the norm of the gradient falls below $\varepsilon_g = 10^{-10}$, which signifies a successful termination. The algorithms fail to converge if they are terminated prematurely, i.e., if the total number of oracle calls exceeds $10^5$, or if the step-size/trust-region radius shrinks to less than $10^{-18}$.
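The trust-region radius schedule just described can be sketched as follows (`update_radius` is an illustrative helper reflecting our reading of the stated rule, not code from the paper):

```python
def update_radius(radius, actual_reduction, predicted_reduction,
                  max_radius=1e10):
    """Radius schedule used for the trust-region baselines: enlarge by 3
    when the achieved reduction exceeds 20% of the model-predicted one,
    otherwise halve; the radius is capped at max_radius."""
    if actual_reduction > 0.2 * predicted_reduction:
        return min(3.0 * radius, max_radius)
    return 0.5 * radius

r = 1.0                          # initial radius
r = update_radius(r, 0.5, 1.0)   # good step: 0.5 > 0.2 -> enlarge to 3.0
r = update_radius(r, 0.1, 1.0)   # poor step: 0.1 < 0.2 -> halve to 1.5
print(r)  # 1.5
```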
For experiments of Sections 4.1 and 4.2, we demonstrate the performance of the algorithms as measured by the objective value and gradient norm in light of the total number of calls to the function oracle (or equivalent operations) [62] as well as the "wall-clock" time. In Section 4.3, however, to empirically evaluate various algorithms on CUTEst problem sets, we plot the performance profiles [27,33] for objective value and gradient norm metrics.
Non-linear Least-square Problem
We first consider a regularized non-linear least squares problem,
$$f(x) = \frac{1}{n}\sum_{i=1}^{n}\left(b_i - \frac{1}{1+e^{-\langle a_i, x\rangle}}\right)^2 + \lambda\,\psi(x), \quad \text{where } \{(a_i, b_i)\}_{i=1}^{n} \subset \mathbb{R}^d \times \{0,1\}, \text{ and } \psi(x) = \sum_{i=1}^{d}\frac{x_i^2}{1+x_i^2}$$
is a non-convex regularization. We run the experiments on two datasets, namely, Gisette and STL10, and set $\lambda$ to $10^{-7}$ and $10^{-6}$, respectively. All algorithms are initialized from the same instance drawn randomly from the standard normal distribution.
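The objective of this section and its gradient can be sketched and sanity-checked against finite differences; the data below are random stand-ins for the actual datasets, and all names are illustrative:

```python
import numpy as np

def f_and_grad(x, A, b, lam):
    """Regularized sigmoid least squares of Section 4.1:
    f(x) = (1/n) sum_i (b_i - sigmoid(<a_i, x>))^2 + lam * sum_j x_j^2/(1+x_j^2)."""
    n = A.shape[0]
    z = 1.0 / (1.0 + np.exp(-A @ x))            # sigmoid(<a_i, x>)
    res = b - z
    f = (res ** 2).mean() + lam * np.sum(x ** 2 / (1 + x ** 2))
    # sigmoid' = z(1 - z); d/dx_j of x_j^2/(1+x_j^2) = 2 x_j / (1+x_j^2)^2
    grad = (-2.0 / n) * (A.T @ (res * z * (1 - z))) \
           + lam * 2 * x / (1 + x ** 2) ** 2
    return f, grad

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.integers(0, 2, 20).astype(float)
x = rng.standard_normal(5)
f0, g0 = f_and_grad(x, A, b, 1e-3)

# Central finite-difference check of the gradient.
eps, g_fd = 1e-6, np.zeros(5)
for i in range(5):
    e = np.zeros(5); e[i] = eps
    g_fd[i] = (f_and_grad(x + e, A, b, 1e-3)[0]
               - f_and_grad(x - e, A, b, 1e-3)[0]) / (2 * eps)
print(np.max(np.abs(g0 - g_fd)))  # small (finite-difference error)
```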
In Figure 2, we see that Newton-MR terminates successfully much faster than all other algorithms, and it does so by achieving the lowest objective value. This might be related to the observation that, among all methods, Newton-MR is the only one that leverages the NPC directions when they arise (emphasized by special marks on the plot). When NPC directions are not encountered, Newton-CG-LS-FW and Newton-CG-LS perform almost identically. Newton-CR, L-BFGS, and Newton-CG-TR-Steihaug stagnate around a saddle point for a long time. Eventually, Newton-CR and Newton-CG-TR-Steihaug escape the saddle region, while L-BFGS fails to make sufficient progress. The superior performance of Newton-MR in Figure 3 is even more pronounced. In fact, no other algorithm could obtain a solution of similar quality in the same amount of time and computational effort. Note that, using a NPC direction, the step-size returned by the forward/backward tracking line-search strategy (Algorithm 3) within Newton-MR can reach $10^7$ to provide sufficient decrease in $f$. Similar performance can also be seen in the following experiment.

Figure 2: The performance of various algorithms on the model problem of Section 4.1 using the Gisette dataset [41] (n = 6,000 and d = 5,000). The special marks on the plots signify the iterations where a negative/non-positive curvature direction is detected and subsequently used within the respective algorithms. An "oracle call" refers to an operation that is equivalent, in terms of complexity, to a single function evaluation. Time is measured in seconds. Newton-MR achieves a solution faster than all other methods. Also, note that, for this problem, Newton-MR is the only method that leverages the NPC directions when they arise.
Auto-encoder
We now consider a non-convex deep auto-encoder problem as
$$f(x_E, x_D) = \frac{1}{n}\sum_{i=1}^{n}\left\|a_i - \mathcal{D}\left(\mathcal{E}(a_i; x_E); x_D\right)\right\|^2 + \lambda\,\psi(x_E, x_D),$$
where the structures of the encoder $\mathcal{E}: \mathbb{R}^p \to \mathbb{R}^q$ and the decoder $\mathcal{D}: \mathbb{R}^q \to \mathbb{R}^p$ are as described in [42,52,77]. Here, $x \triangleq [x_E; x_D] \in \mathbb{R}^d$. We add the same non-convex regularization function $\psi$ as in Section 4.1 and set the regularization parameter to $\lambda = 10^{-3}$. Unlike [42], we use "Tanh" for the non-linear activation function, and our encoder-decoder structure is entirely symmetric, i.e., we do not apply an extra non-linear activation at the end of the decoder network. We run the experiments on two popular machine learning datasets, namely CIFAR10 and MNIST; see Table 1.
The algorithms are initialized from the same point, which is chosen randomly near the origin, i.e., the $d$ components of the starting point are drawn independently from a normal distribution with standard deviation $10^{-8}$. The results are depicted in Figures 4 and 5.
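A toy numpy sketch of the reconstruction loss, following the same pattern as the models of Table 1 (tanh activations, a symmetric encoder-decoder, and no activation on the final decoder layer); the layer sizes here are illustrative, not those of Table 1:

```python
import numpy as np

rng = np.random.default_rng(2)
layers = [8, 4, 2]  # toy analogue of the symmetric architectures in Table 1

# Encoder weights; the decoder mirrors the encoder's layer sizes.
W_enc = [rng.standard_normal((layers[i + 1], layers[i])) * 1e-1
         for i in range(len(layers) - 1)]
W_dec = [rng.standard_normal((layers[i], layers[i + 1])) * 1e-1
         for i in reversed(range(len(layers) - 1))]

def reconstruct(a):
    h = a
    for W in W_enc:
        h = np.tanh(W @ h)        # tanh activation throughout the encoder
    for W in W_dec[:-1]:
        h = np.tanh(W @ h)
    return W_dec[-1] @ h          # no activation on the final decoder layer

A = rng.standard_normal((16, 8))
loss = np.mean([np.sum((a - reconstruct(a)) ** 2) for a in A])
print(loss)
```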
In both experiments, Newton-CG-LS performs very poorly, which is mainly due to encountering NPC directions too often and yet not employing a forward tracking strategy. At the same time, the Newton-CG-LS-FW variant, which employs the forward/backward tracking line-search strategy of Algorithm 3, shows significantly improved performance. In Figure 4, among all methods, Newton-CG-TR-Steihaug and Newton-MR converge to the best solution, although Newton-CG-TR-Steihaug converges relatively slowly and the norm of its gradient never reaches the preset threshold. With the MNIST dataset (Figure 5), which arguably gives rise to a much easier problem than the CIFAR10 dataset (Figure 4), all methods, with the exception of Newton-CG-LS and L-BFGS, converge to a similar solution within a comparable amount of time and computational effort. However, Newton-MR arrives at a point with a much smaller gradient.

Figure 3: The performance of various algorithms on the model problem of Section 4.1 using the STL10 dataset [20] (n = 5,000 and d = 27,648). The special marks on the plots signify the iterations where a negative/non-positive curvature direction is detected and subsequently used within the respective algorithms. An "oracle call" refers to an operation that is equivalent, in terms of complexity, to a single function evaluation. Time is measured in seconds. One can clearly see the superior performance of Newton-MR in achieving a better quality solution in less time and computational effort.

Figure 4: Deep auto-encoder with CIFAR10 dataset [45]. The special marks on the plots signify the iterations where a negative/non-positive curvature direction is detected and subsequently used within the respective algorithms. An "oracle call" refers to an operation that is equivalent, in terms of complexity, to a single function evaluation. Time is measured in seconds. Newton-MR clearly outperforms all other methods.
CUTEst test problems
We now focus our efforts on evaluating the performance of the algorithms on a series of test problems from the CUTEst test collection [35]. In particular, we focus on the unconstrained problems, excluding those whose objective function is constant, linear, undefined, or unbounded below. This amounts to a test set of 237 problems. All algorithms are initialized with the same instance drawn randomly from the uniform distribution on the unit sphere. Figure 6 depicts the performance profiles [27,33] of various methods as measured by objective value and gradient norm. We found that the default maximum oracle call budget of $10^5$ was adequate to allow all methods to achieve their best possible outcome. To highlight the performance of the methods more clearly, in Figure 7, the horizontal axis is capped at $\tau = 100$. From Figures 6 and 7, Newton-MR clearly outperforms all other algorithms, both in terms of obtaining the lowest objective value and the smallest gradient norm. What is somewhat striking, however, is the relatively consistent poor performance of Newton-CG-LS. Of course, incorporating forward/backward tracking line-search in Newton-CG-LS-FW has helped improve the performance. Nonetheless, in these experiments, the advantage of employing MINRES as the sub-problem solver, as opposed to alternatives such as CG or CR, is evidently clear.

Figure 5: Deep auto-encoder with MNIST dataset [47]. The special marks on the plots signify the iterations where a negative/non-positive curvature direction is detected and subsequently used within the respective algorithms. An "oracle call" refers to an operation that is equivalent, in terms of complexity, to a single function evaluation. Time is measured in seconds. The MNIST dataset gives rise to an arguably much simpler problem and hence we see that, with the exception of Newton-CG-LS, all methods perform similarly.
Figure 6: The performance profile of various Newton-type methods on 237 CUTEst problems, in terms of (a) the objective value $f(x_k)$ and (b) the gradient norm $\|\nabla f(x_k)\|$. For a given $\tau$ on the x-axis, the corresponding value on the y-axis is the proportion of times that a given solver's performance lies within a factor $\tau$ of the best possible performance over all runs.

Figure 7: The performance profile of various Newton-type methods on 237 CUTEst problems. For a given $\tau$ on the x-axis, the corresponding value on the y-axis is the proportion of times that a given solver's performance lies within a factor $\tau$ of the best possible performance over all runs. To distinguish the performance of the methods more clearly, $\tau$ is capped at 100.
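The profile computation described in the captions can be sketched as follows, assuming positive, lower-is-better metrics (a shift would be needed for, e.g., negative objective values); all names are illustrative:

```python
import numpy as np

def performance_profile(metrics, taus):
    """Dolan-More style profile: metrics[i, j] is the (positive) final
    metric of solver j on problem i; lower is better.  Returns, for each
    tau and each solver, the fraction of problems on which that solver
    is within a factor tau of the best solver."""
    metrics = np.asarray(metrics, dtype=float)
    best = metrics.min(axis=1, keepdims=True)      # per-problem best
    ratios = metrics / best                        # performance ratios >= 1
    return np.array([[np.mean(ratios[:, j] <= tau)
                      for j in range(metrics.shape[1])] for tau in taus])

# Two solvers on three problems.
M = [[1.0, 2.0],
     [4.0, 1.0],
     [3.0, 9.0]]
prof = performance_profile(M, taus=[1.0, 3.0])
print(prof)  # row per tau; at tau=1, the fraction of problems where each solver is best
```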
Conclusions
Building on recent results regarding various properties of MINRES [51], we extended the Newton-MR algorithm, initially proposed in [62] and limited to invex optimization problems, to more general non-convex settings. This is done by leveraging non-positive curvature directions, when they arise, as part of MINRES iterations. We established complexity guarantees for convergence to first and second-order approximate optimality, which are known to be optimal. To achieve this, we provided a novel convergence analysis for MINRES, which improves upon the existing bounds in terms of dependence on the spectrum for indefinite matrices. Furthermore, under the benign saddle property, a novel assumption which is weaker than the widely used strict saddle property, we were able to greatly improve the second-order complexity guarantee and obtain a rate that, to our knowledge, is the state-of-the-art. In contrast to similar alternative methods where CG iterations are greatly modified to extract NPC directions, our algorithms are simple in that the NPC directions, if they exist, arise naturally within MINRES iterations without additional algorithmic modifications. We also demonstrated the superior performance of our algorithm, as compared with several alternative Newton-type methods, on several non-convex problems.
A MINRES: Review and Further Details
In this section, for the sake of completeness, we review MINRES in some detail and highlight the theoretical results that are relevant to the analysis of this paper. In doing so, we follow the presentation of [51] almost verbatim; however, we adjust the notation to match our setting here. For more details on MINRES and its various theoretical properties, see [51] and the references therein. In this section, for notational simplicity, we drop the dependence of the MINRES iterates $s_k^{(t)}$ on the outer iterate $x_k$, i.e., we use $s_t$ instead.
A.1 Algorithmic Details
Recall that MINRES, depicted in Algorithm 1, is a Krylov subspace method for solving the symmetric linear least-squares problem (5). It involves three major ingredients: the
Lanczos process, QR decomposition, and the update of its iterates.
Lanczos process
With $v_1 = g/\|g\|$, recall that after $t$ iterations of the Lanczos process, and in the absence of round-off errors, the Lanczos vectors form an orthogonal matrix $V_{t+1} = [v_1 \mid v_2 \mid \cdots \mid v_{t+1}] \in \mathbb{R}^{d\times(t+1)}$, whose columns span $\mathcal{K}_{t+1}(H, g)$ and satisfy the familiar relation $H V_t = V_{t+1}\tilde{T}_t$, where $\tilde{T}_t \in \mathbb{R}^{(t+1)\times t}$ is an upper-Hessenberg matrix of the form
$$\tilde{T}_t = \begin{bmatrix} \alpha_1 & \beta_2 & & & \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \beta_3 & \alpha_3 & \ddots & \\ & & \ddots & \ddots & \beta_t \\ & & & \beta_t & \alpha_t \\ & & & & \beta_{t+1}\end{bmatrix} \triangleq \begin{bmatrix} T_t \\ \beta_{t+1} e_t^\top \end{bmatrix}. \tag{25}$$
Subsequently, we get the three-term recursion
$$H v_t = \beta_t v_{t-1} + \alpha_t v_t + \beta_{t+1} v_{t+1}, \quad t \geq 2,$$
and the Lanczos process is terminated when $\beta_{t+1} = 0$. Noting that $\mathcal{K}_t(H, g) = \mathrm{Range}(V_t) = \mathrm{Span}\{v_1, \dots, v_t\}$, we let $s_t = V_t y_t$ for some $y_t \in \mathbb{R}^t$, and it follows that the residual $r_t$ can be written as
$$r_t = -g - H s_t = -g - H V_t y_t = -g - V_{t+1}\tilde{T}_t y_t = -V_{t+1}\left(\|g\| e_1 + \tilde{T}_t y_t\right).$$
This gives rise to the well-known sub-problem of MINRES as
$$\min_{y_t \in \mathbb{R}^t} \left\|\beta_1 e_1 + \tilde{T}_t y_t\right\|, \qquad \beta_1 = \|g\|. \tag{26}$$
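A minimal numpy sketch (illustrative, not the implementation of [51]) of the Lanczos process and of solving the sub-problem (26):

```python
import numpy as np

rng = np.random.default_rng(3)
d, t = 12, 6
H = rng.standard_normal((d, d)); H = (H + H.T) / 2   # symmetric, indefinite
g = rng.standard_normal(d)

# Lanczos process: V_{t+1} has orthonormal columns, T is the (t+1) x t
# tridiagonal matrix T_tilde_t of (25).
V = np.zeros((d, t + 1)); T = np.zeros((t + 1, t))
V[:, 0] = g / np.linalg.norm(g)
beta_prev = 0.0
for j in range(t):
    w = H @ V[:, j] - (beta_prev * V[:, j - 1] if j > 0 else 0.0)
    alpha = V[:, j] @ w
    w -= alpha * V[:, j]
    beta = np.linalg.norm(w)
    T[j, j] = alpha
    T[j + 1, j] = beta
    if j + 1 < t:
        T[j, j + 1] = beta
    V[:, j + 1] = w / beta
    beta_prev = beta

# The familiar relation H V_t = V_{t+1} T_tilde_t.
assert np.allclose(H @ V[:, :t], V @ T, atol=1e-8)

# Sub-problem (26): min_y || beta_1 e_1 + T_tilde_t y ||, beta_1 = ||g||.
rhs = np.zeros(t + 1); rhs[0] = np.linalg.norm(g)
y = np.linalg.lstsq(T, -rhs, rcond=None)[0]
s = V[:, :t] @ y
print(np.linalg.norm(g + H @ s))  # the minimal residual over the Krylov subspace
```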
QR decomposition
Recall that (26) is solved using the QR factorization of $\tilde{T}_t$. Let $Q_t \tilde{T}_t = \tilde{R}_t$ be the full QR decomposition of $\tilde{T}_t$, where $Q_t \in \mathbb{R}^{(t+1)\times(t+1)}$ and $\tilde{R}_t \in \mathbb{R}^{(t+1)\times t}$. Typically, $Q_t$ is formed, implicitly, by the application of a series of Householder reflections that transform $\tilde{T}_t$ to the upper-triangular matrix $\tilde{R}_t$. Recall that each Householder reflection only affects two rows of the matrix being triangularized. More specifically, two successive applications of Householder reflections can be compactly written, considering only the elements of the matrix that are affected, as
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & c_{i-1} & s_{i-1} \\ 0 & s_{i-1} & -c_{i-1}\end{bmatrix}\begin{bmatrix} c_{i-2} & s_{i-2} & 0 \\ s_{i-2} & -c_{i-2} & 0 \\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix} \gamma^{(1)}_{i-2} & \delta^{(1)}_{i-1} & 0 & 0 \\ \beta_{i-1} & \alpha_{i-1} & \beta_i & 0 \\ 0 & \beta_i & \alpha_i & \beta_{i+1}\end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_{i-1} & s_{i-1} \\ 0 & s_{i-1} & -c_{i-1}\end{bmatrix}\begin{bmatrix} \gamma^{(2)}_{i-2} & \delta^{(2)}_{i-1} & \epsilon_i & 0 \\ 0 & \gamma^{(1)}_{i-1} & \delta^{(1)}_{i} & 0 \\ 0 & \beta_i & \alpha_i & \beta_{i+1}\end{bmatrix} = \begin{bmatrix} \gamma^{(2)}_{i-2} & \delta^{(2)}_{i-1} & \epsilon_i & 0 \\ 0 & \gamma^{(2)}_{i-1} & \delta^{(2)}_{i} & \epsilon_{i+1} \\ 0 & 0 & \gamma^{(1)}_{i} & \delta^{(1)}_{i+1}\end{bmatrix},$$
where 3 ≤ i ≤ t − 1 and
$$c_j = \frac{\gamma^{(1)}_j}{\gamma^{(2)}_j}, \quad s_j = \frac{\beta_{j+1}}{\gamma^{(2)}_j}, \quad \gamma^{(2)}_j = \sqrt{\left(\gamma^{(1)}_j\right)^2 + \beta_{j+1}^2} = c_j \gamma^{(1)}_j + s_j \beta_{j+1}, \quad 1 \leq j \leq t. \tag{27}$$
Here, the $2\times 2$ sub-matrix made of $c_j$ and $s_j$ is a special case of a Householder reflector in dimension two [73, p. 76]. Consequently, we can rewrite $Q_t$ and $\tilde{R}_t$ in block form as
$$Q_t \tilde{T}_t = \tilde{R}_t \triangleq \begin{bmatrix} R_t \\ 0 \end{bmatrix}, \quad R_t = \begin{bmatrix} \gamma^{(2)}_1 & \delta^{(2)}_2 & \epsilon_3 & & \\ & \gamma^{(2)}_2 & \delta^{(2)}_3 & \ddots & \\ & & \ddots & \ddots & \epsilon_t \\ & & & \gamma^{(2)}_{t-1} & \delta^{(2)}_t \\ & & & & \gamma^{(2)}_t \end{bmatrix}, \tag{28a}$$
$$Q_t \triangleq \prod_{i=1}^{t} Q_{i,i+1}, \quad Q_{i,i+1} \triangleq \begin{bmatrix} I_{i-1} & & \\ & \begin{matrix} c_i & s_i \\ s_i & -c_i \end{matrix} & \\ & & I_{t-i} \end{bmatrix}. \tag{28b}$$
In fact, the same series of transformations is also simultaneously applied to $\beta_1 e_1$ as
$$Q_t \beta_1 e_1 = \beta_1 \begin{bmatrix} c_1 \\ s_1 c_2 \\ \vdots \\ s_1 s_2 \cdots s_{t-1} c_t \\ s_1 s_2 \cdots s_{t-1} s_t \end{bmatrix} \triangleq \begin{bmatrix} \tau_1 \\ \tau_2 \\ \vdots \\ \tau_t \\ \phi_t \end{bmatrix} \triangleq \begin{bmatrix} \mathbf{t}_t \\ \phi_t \end{bmatrix}.$$
With these quantities available, we can solve (26) by noting that
$$\min_{y_t \in \mathbb{R}^t} \|r_t\| = \min_{y_t \in \mathbb{R}^t} \left\|\beta_1 e_1 + \tilde{T}_t y_t\right\| = \min_{y_t \in \mathbb{R}^t} \left\|Q_t^\top\left(Q_t\beta_1 e_1 + Q_t\tilde{T}_t y_t\right)\right\| = \min_{y_t \in \mathbb{R}^t} \left\|Q_t\beta_1 e_1 + Q_t\tilde{T}_t y_t\right\| = \min_{y_t \in \mathbb{R}^t} \left\|\begin{bmatrix}\mathbf{t}_t \\ \phi_t\end{bmatrix} + \begin{bmatrix}R_t \\ 0\end{bmatrix} y_t\right\|.$$
Note that this in turn implies that $\phi_t = \|r_t\|$. We also trivially have $\phi_0 = \beta_1 = \|g\|$.
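As a numerical sanity check (a generic sketch, not the recurrence implementation of [51]), the 2 × 2 reflections can be applied directly to triangularize a tridiagonal $\tilde{T}_t$:

```python
import numpy as np

def qr_tridiag(Ttil):
    """Triangularize the (t+1) x t tridiagonal T_tilde by successive 2 x 2
    reflections [[c, s], [s, -c]], with c, s as in (27)."""
    R = Ttil.astype(float).copy()
    m, t = R.shape
    Q = np.eye(m)
    for j in range(t):
        a, b = R[j, j], R[j + 1, j]      # gamma^(1)_j and beta_{j+1}
        r = np.hypot(a, b)               # gamma^(2)_j
        c, s = a / r, b / r
        G = np.array([[c, s], [s, -c]])  # symmetric orthogonal reflector
        R[j:j + 2, :] = G @ R[j:j + 2, :]
        Q[j:j + 2, :] = G @ Q[j:j + 2, :]
    return Q, R

rng = np.random.default_rng(4)
t = 5
alpha, beta = rng.standard_normal(t), rng.standard_normal(t)
Ttil = np.zeros((t + 1, t))
for j in range(t):
    Ttil[j, j] = alpha[j]
    Ttil[j + 1, j] = beta[j]
    if j + 1 < t:
        Ttil[j, j + 1] = beta[j]
Q, R = qr_tridiag(Ttil)
print(np.allclose(Q @ Ttil, R), np.allclose(np.triu(R), R, atol=1e-12))
```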
Updates
Let $t < g$ and define $W_t$ from solving the triangular system $W_t R_t = V_t$ (equivalently, the lower-triangular system $R_t^\top W_t^\top = V_t^\top$), where $R_t$ is as in (28a). Now, letting $V_t = [V_{t-1} \mid v_t]$, and using the fact that $R_t$ is upper-triangular, we get the recursion $W_t = [W_{t-1} \mid w_t]$ for some vector $w_t$. As a result, using $R_t y_t = \mathbf{t}_t$, one can update the iterate as
$$s_t = V_t y_t = W_t R_t y_t = W_t \mathbf{t}_t = \begin{bmatrix} W_{t-1} & w_t \end{bmatrix}\begin{bmatrix} \mathbf{t}_{t-1} \\ \tau_t \end{bmatrix} = s_{t-1} + \tau_t w_t.$$
We also always set $s_0 = 0$. Furthermore, from $V_t = W_t R_t$, i.e.,
$$\begin{bmatrix} v_1 & v_2 & \cdots & v_t \end{bmatrix} = \begin{bmatrix} w_1 & w_2 & \cdots & w_t \end{bmatrix}\begin{bmatrix} \gamma^{(2)}_1 & \delta^{(2)}_2 & \epsilon_3 & & \\ & \gamma^{(2)}_2 & \delta^{(2)}_3 & \ddots & \\ & & \ddots & \ddots & \epsilon_t \\ & & & \gamma^{(2)}_{t-1} & \delta^{(2)}_t \\ & & & & \gamma^{(2)}_t \end{bmatrix},$$
we get the following relationship for computing $w_t$:
$$v_t = \epsilon_t w_{t-2} + \delta^{(2)}_t w_{t-1} + \gamma^{(2)}_t w_t.$$
All of the above steps constitute MINRES method, which is given in Algorithm 1.
A.2 Other Relevant Properties
Beyond the descent implications discussed in Section 2.1, MINRES offers a plethora of relevant properties that we leverage in developing the algorithms of this paper and obtaining their convergence guarantees. We briefly mention these relevant properties here and invite the reader to consult [51] for further details and proofs.
The following lemma contains several useful facts about the iterations of MINRES.
Lemma 11. Let g be the grade of g with respect to H. In MINRES, for any 1 ≤ t ≤ g, we have
$$\langle r_t, H r_i\rangle = 0, \quad i \neq t, \; 1 \leq i \leq g, \tag{29a}$$
$$\langle r_t, g\rangle = -\|r_t\|^2, \tag{29b}$$
$$\langle r_{t-1}, H r_{t-1}\rangle = -c_{t-1}\gamma^{(1)}_t \|r_{t-1}\|^2, \tag{29c}$$
$$\|H s_{t-1}\| = \sqrt{\phi_0^2 - \phi_{t-1}^2}, \tag{29d}$$
$$\|H r_{t-1}\| = \phi_{t-1}\sqrt{\left(\gamma^{(1)}_t\right)^2 + \left(\delta^{(1)}_{t+1}\right)^2}. \tag{29e}$$
Proof. We get (29a) to (29c) directly from [51, Lemma 3.1]. Applying (29b), we obtain (29d) as
$$\|H s_{t-1}\|^2 = \|r_{t-1} + g\|^2 = \|g\|^2 - \|r_{t-1}\|^2 = \phi_0^2 - \phi_{t-1}^2.$$
By [51,Eqn (3.2)], we can formulate Hr t−1 as
$$H r_{t-1} = \phi_{t-1}\left(\gamma^{(1)}_t v_t + \delta^{(1)}_{t+1} v_{t+1}\right),$$
which implies (29e).
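Identities (29b) and (29d) can also be verified numerically by characterizing the MINRES iterate as the least-squares minimizer over the Krylov subspace (a sketch with illustrative names):

```python
import numpy as np

rng = np.random.default_rng(5)
d, t = 10, 4
H = rng.standard_normal((d, d)); H = (H + H.T) / 2   # symmetric, indefinite
g = rng.standard_normal(d)

# Orthonormal basis of the Krylov subspace K_t(H, g).
K = np.column_stack([np.linalg.matrix_power(H, j) @ g for j in range(t)])
V, _ = np.linalg.qr(K)

# MINRES iterate: minimizer of ||g + H s|| over the Krylov subspace.
y = np.linalg.lstsq(H @ V, -g, rcond=None)[0]
s = V @ y
r = -g - H @ s

# Identity (29b): <r_t, g> = -||r_t||^2.
print(r @ g, -(r @ r))

# Identity (29d): ||H s_t||^2 = ||g||^2 - ||r_t||^2.
print((H @ s) @ (H @ s), g @ g - r @ r)
```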
From (29c), it is immediate that if for some t ≤ g, we have
$$-c_{t-1}\gamma^{(1)}_t \leq 0, \tag{30}$$
then $r_{t-1}$ must be a NPC direction for $H$, i.e., (7) holds. The crucial question, answered in [51], is whether or not the condition (30) is both necessary and sufficient for a NPC direction to be available. This question was investigated by observing a tight connection between the tridiagonal symmetric matrix $T_t$ and the condition (30). Indeed, since $r_{t-1} \in \mathcal{K}_t(H, g)$, we can write $r_{t-1} = V_t p$ for some non-zero $p \in \mathbb{R}^t$. Hence, $\langle r_{t-1}, H r_{t-1}\rangle = \langle p, V_t^\top H V_t\, p\rangle = \langle p, T_t p\rangle$. Clearly, as long as $T_t \succ 0$, the condition (30) cannot hold. The following result from [51] shows that the converse also holds, i.e., as soon as $T_t \not\succ 0$ for some $t \leq g$, MINRES declares $r^{(t-1)}_k$ as a NPC direction.
Lemma 12 ([51, Theorem 3.3]). Let $k \leq g$, where $g$ is the grade of $g$ with respect to $H$. If $T_k \not\succ 0$, then the NPC condition (30) holds for some $t \leq k$. In particular, if $t \leq g$ is the first iteration where $T_t \not\succ 0$, then the NPC condition (30) holds.
As long as the NPC condition (30) has not been detected, MINRES enjoys additional properties regarding the signs of certain quadratic functions, as well as the monotonicity of certain quantities, which are used in the analysis of this paper. In particular, before (30) is detected, Lemma 13 shows that not only is $s_t$ a descent direction for $\|g\|^2$, but it can also yield descent for the original objective $f$.

Lemma 13 ([51, Theorems 3.8 and 3.11]). Let $g$ be the grade of $g$ with respect to $H$. As long as the NPC condition (30) has not been detected for $1 \leq t \leq g$, we must have
$$\langle s_t, g\rangle + \langle s_t, H s_t\rangle \leq 0.$$
Definition 4 (g-Relevant Eigenvalues of H). Consider the set of eigenvalues of $H$, i.e., $\Theta(H) \triangleq \{\lambda \in \mathbb{R} \mid \det(H - \lambda I) = 0\}$, and recall the eigenspace corresponding to an eigenvalue $\lambda \in \Theta(H)$, i.e., $\mathcal{E}_\lambda(H)$. Let $u_i^{\{1\}} \triangleq W_i W_i^\top g / \|W_i W_i^\top g\|$ and take its non-zero orthogonal complements $u_i^{\{2\}}, \dots, u_i^{\{m_i\}}$ in $\mathcal{E}_{\lambda_i}(H)$. Using $u_i$ instead of $u_i^{\{1\}}$ for notational simplicity, we define the set of $g$-relevant eigenvalues $\Psi(H, g)$.

Theorem 2. Assume $\Psi(H, g) = \Theta(H)$, i.e., all eigenvalues of $H$ are $g$-relevant.
Algorithm 2 Backward Tracking Line-Search
1: Input: $\alpha = 1$, $0 < \zeta < 1$
2: while the line-search criterion is not satisfied with $\alpha$ do ...

The initial trial step-size for Algorithm 2 is set as $\alpha = 1$. The motivation for this choice is based on the following observation. Suppose MINRES returns a solution direction $d_k$ prior to detection of the NPC condition, and consider the second-order Taylor expansion of $f$ at $x_k$. Typically, one might expect the size of the update direction to be directly correlated with the magnitude of the gradient. Indeed, from [50, Lemma 3.11], we have $\|H_k s_k^{(t)}\| \leq \|g_k\|$.
Lemma 8. Suppose Algorithm 1 returns $D_{\mathrm{type}}$ = 'SOL'. Under Assumptions 1 to 3, in Algorithm 4, we have ...

Figure 1: The behavior of Algorithm 1 in Example 2. Iterations where a NPC direction is detected are highlighted by enlarged marks.
Assumption 4. There exists some $\omega > 0$ such that for any $x_k$ from Algorithm 4, the NPC direction satisfies $\|r_k^{(t-1)}\| \geq \omega\|g_k\|$.

Assumption 4 entails the existence of a uniform $\omega$ for all iterates of Algorithm 4. Similar to Lemma 8, Lemma 9 gives a worst-case estimate on the reduction in $f$ obtained with a NPC direction.
13: Call Algorithm 1 as $[d_k, D_{\mathrm{type}}] = \mathrm{MINRES}\left(H_k + \frac{\varepsilon_H}{2} I, \tilde{g}, 0\right)$
14: if $D_{\mathrm{type}}$ = 'NPC' then
15: $d_k = -\mathrm{Sign}(\langle g_k, d_k\rangle)\, d_k / \|d_k\|$ (NB: "Sign(0)" can be arbitrarily chosen as "±1")

... $\mathrm{MINRES}(H_k + 0.5\,\varepsilon_H I, \tilde{g}, 0)$. Lemma 10 gives a worst-case decrease in $f$ using the direction from Step 15 of Algorithm 5.

Lemma 10. Suppose $\lambda_{\min}(H_k) \leq -\varepsilon_H$ and Step 13 of Algorithm 5 successfully returns a NPC direction. Under Assumption 2, in Algorithm 5, we have ...

Theorem 6 (Operation Complexity of Algorithm 5). Suppose $d$ is sufficiently large. ... evaluations, Algorithm 5 satisfies the approximate second-order optimality (3) with probability $1-\delta$.
-L-BFGS [57, Algorithm 7.5]. A limited-memory quasi-Newton method with strong Wolfe line-search [57, Algorithm 3.5].
-Newton-CR [26, Algorithm 3]. A line-search Newton-CR method with strong Wolfe line-search [57, Algorithm 3.5].
-Newton-CG-LS [64, Algorithm 3]. A line-search Newton-CG method with small Hessian perturbations.
Table 1: Auto-encoder model for CIFAR10 and MNIST datasets.

Dataset   n       d          Encoder network architecture
CIFAR10   50,000  1,664,232  3,072-256-128-64-32-16-8
MNIST     60,000  1,154,784  784-512-256-128-64-32-16
More generally, it can be shown that the eigenvectors corresponding to any $\lambda \in \Theta(H) \setminus \Psi(H, g)$ will not belong to $\mathcal{K}_t(H, g)$ for any $1 \leq t \leq g$.
... $16 L_H^2$, where $L_H$ and $\rho$ are as in Assumption 2 and (21).

Proof. By assumption, Algorithm 1 returns a NPC direction $r_k^{(t-1)} = -g - H_k s_k^{(t-1)}$ for some $1 \leq t \leq d$. Since $d_k = -\mathrm{Sign}\left(\langle r_k^{(t-1)}, g_k\rangle\right) r_k^{(t-1)} / \|r_k^{(t-1)}\|$, ...
In our implementations, we only aim to reach an approximate first-order optimal point, and leave a thorough numerical evaluation of Algorithm 5 to a follow-up empirical work.
The STL10 dataset contains colored images in ten classes. We relabel the even classes as "0" and the odd ones as "1".
[1] El Mehdi Achour, François Malgouyres, and Sébastien Gerchinovitz. Global minimizers, strict and non-strict saddle points, and implicit regularization for deep linear neural networks. arXiv preprint arXiv:2107.13289, 2021.

[2] Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima faster than gradient descent. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1195-1199, 2017.

[3] Afonso S Bandeira, Nicolas Boumal, and Vladislav Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. In Conference on Learning Theory, pages 361-382. PMLR, 2016.

[4] Stefania Bellavia, Gianmarco Gurioli, and Benedetta Morini. Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization. IMA Journal of Numerical Analysis, 41(1):764-799, 2021.

[5] Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, and Ph L Toint. Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives. Journal of Complexity, 68:101591, 2022.

[6] Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, and Philippe L Toint. Adaptive regularization algorithms with inexact evaluations for nonconvex optimization. SIAM Journal on Optimization, 29(4):2881-2915, 2019.

[7] Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, and Philippe L Toint. Quadratic and cubic regularisation methods with inexact function and random derivatives for finite-sum minimisation. arXiv preprint arXiv:2104.00592, 2021.

[8] Ernesto G Birgin, JL Gardenghi, José Mario Martínez, Sandra Augusta Santos, and Ph L Toint. Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Mathematical Programming, 163(1):359-368, 2017.

[9] Ernesto G Birgin and José Mario Martínez. The use of quadratic regularization with a cubic descent condition for unconstrained optimization. SIAM Journal on Optimization, 27(2):1049-1074, 2017.

[10] Åke Björck. Numerical Methods in Matrix Computations. Springer, 2015.

[11] Jose Blanchet, Coralia Cartis, Matt Menickelly, and Katya Scheinberg. Convergence rate analysis of a stochastic trust region method for nonconvex optimization. arXiv preprint arXiv:1609.07428, 2016.

[12] Nicolas Boumal. Nonconvex phase synchronization. SIAM Journal on Optimization, 26(4):2355-2377, 2016.

[13] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[14] C Cartis, N. I. M. Gould, and Philip L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Mathematical Programming, 127(2):245-295, 2011.

[15] C Cartis, N. I. M. Gould, and Philip L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity. Mathematical Programming, 130(2):295-319, 2011.

[16] Coralia Cartis, N. I. M. Gould, and Philip L Toint. On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems. SIAM Journal on Optimization, 20(6):2833-2852, 2010.

[17] Coralia Cartis, N. I. M. Gould, and Philip L. Toint. Optimal Newton-type methods for nonconvex smooth optimization problems. Technical report, ERGO technical report 11-009, School of Mathematics, University of Edinburgh, 2011.

[18] Coralia Cartis, N. I. M. Gould, and Philip L. Toint. Complexity bounds for second-order optimality in unconstrained optimization. Journal of Complexity, 28(1):93-108, 2012.

[19] Sou-Cheng T Choi, Christopher C Paige, and Michael A Saunders. MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems. SIAM Journal on Scientific Computing, 33(4):1810-1836, 2011.
An analysis of single-layer networks in unsupervised feature learning. Adam Coates, Andrew Ng, Honglak Lee, Proceedings of the fourteenth international conference on artificial intelligence and statistics. the fourteenth international conference on artificial intelligence and statisticsJMLR Workshop and Conference ProceedingsAdam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215-223. JMLR Workshop and Conference Proceedings, 2011.
Trust region methods. N I M Andrew R Conn, Philip L Gould, Toint, SIAM1Andrew R Conn, N. I. M. Gould, and Philip L. Toint. Trust region methods, volume 1. SIAM, 2000.
Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization. E Frank, Curtis, P Daniel, Robinson, W Clément, Stephen J Royer, Wright, SIAM Journal on Optimization. 311Frank E Curtis, Daniel P Robinson, Clément W Royer, and Stephen J Wright. Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimiza- tion. SIAM Journal on Optimization, 31(1):518-544, 2021.
An Inexact Regularized Newton Framework with a Worst-Case Iteration Complexity of O( −3/2 ) for Nonconvex Optimization. E Frank, Curtis, P Daniel, Mohammadreza Robinson, Samadi, arXiv:1708.00475arXiv preprintFrank E Curtis, Daniel P Robinson, and Mohammadreza Samadi. An Inexact Regularized Newton Framework with a Worst-Case Iteration Complexity of O( −3/2 ) for Nonconvex Optimization. arXiv preprint arXiv:1708.00475, 2017.
A trust region algorithm with a worst-case iteration complexity of O( −3/2 ) for nonconvex optimization. E Frank, Curtis, P Daniel, Mohammadreza Robinson, Samadi, Mathematical Programming. 1621-2Frank E Curtis, Daniel P Robinson, and Mohammadreza Samadi. A trust region algorithm with a worst-case iteration complexity of O( −3/2 ) for nonconvex optimization. Mathemat- ical Programming, 162(1-2):1-32, 2017.
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization. E Frank, Qi Curtis, Wang, arXiv:2204.11322arXiv preprintFrank E Curtis and Qi WAng. Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization. arXiv preprint arXiv:2204.11322, 2022.
The Conjugate Residual Method in Linesearch and Trust-Region Methods. Marie- , Ange Dahito, Dominique Orban, SIAM Journal on Optimization. 293Marie-Ange Dahito and Dominique Orban. The Conjugate Residual Method in Linesearch and Trust-Region Methods. SIAM Journal on Optimization, 29(3):1988-2025, 2019.
Benchmarking optimization software with performance profiles. D Elizabeth, Jorge J Dolan, Moré, Mathematical programming. 912Elizabeth D Dolan and Jorge J Moré. Benchmarking optimization software with perfor- mance profiles. Mathematical programming, 91(2):201-213, 2002.
A nonmonotone truncated Newton-Krylov method exploiting negative curvature directions, for large scale unconstrained optimization. Giovanni Fasano, Stefano Lucidi, Optimization Letters. 34Giovanni Fasano and Stefano Lucidi. A nonmonotone truncated Newton-Krylov method exploiting negative curvature directions, for large scale unconstrained optimization. Opti- mization Letters, 3(4):521-535, 2009.
Polynomial based iteration methods for symmetric linear systems. Bernd Fischer, SIAM. Bernd Fischer. Polynomial based iteration methods for symmetric linear systems. SIAM, 2011.
Escaping from saddle points-online stochastic gradient for tensor decomposition. Rong Ge, Furong Huang, Chi Jin, Yang Yuan, Conference on learning theory. PMLRRong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points-online stochastic gradient for tensor decomposition. In Conference on learning theory, pages 797- 842. PMLR, 2015.
Gene H Golub, Charles F Van Loan, Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press4 editionGene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 4 edition, 2012.
G H Golub, C F Van Loan, Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University PressG.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 2013.
A note on performance profiles for benchmarking software. Nicholas Gould, Jennifer Scott, ACM Transactions on Mathematical Software (TOMS). 43215Nicholas Gould and Jennifer Scott. A note on performance profiles for benchmarking software. ACM Transactions on Mathematical Software (TOMS), 43(2):15, 2016.
Exploiting negative curvature directions in linesearch methods for unconstrained optimization. Optimization methods and software. I M Nicholas, Stefano Gould, Massimo Lucidi, Ph L Roma, Toint, 14Nicholas IM Gould, Stefano Lucidi, Massimo Roma, and Ph L Toint. Exploiting negative curvature directions in linesearch methods for unconstrained optimization. Optimization methods and software, 14(1-2):75-98, 2000.
Cutest: a constrained and unconstrained testing environment with safe threads for mathematical optimization. I M Nicholas, Dominique Gould, Philippe L Orban, Toint, Computational optimization and applications. 60Nicholas IM Gould, Dominique Orban, and Philippe L Toint. Cutest: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Com- putational optimization and applications, 60(3):545-557, 2015.
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization. J Geovani N Grapiglia, Y Yuan, Yuan, Optimization Methods and Software. 313Geovani N Grapiglia, J Yuan, and Y Yuan. On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization. Optimization Methods and Software, 31(3):591-604, 2016.
A recursive-trust-region method for bound-constrained nonlinear optimization. Serge Gratton, Mélodie Mouffe, Philippe L Toint, Melissa Weber-Mendonça, IMA Journal of Numerical Analysis. 284Serge Gratton, Mélodie Mouffe, Philippe L Toint, and Melissa Weber-Mendonça. A recursive-trust-region method for bound-constrained nonlinear optimization. IMA Jour- nal of Numerical Analysis, 28(4):827-861, 2008.
Recursive trust-region methods for multiscale nonlinear optimization. Serge Gratton, Annick Sartenaer, Philippe L Toint, SIAM Journal on Optimization. 191Serge Gratton, Annick Sartenaer, and Philippe L Toint. Recursive trust-region methods for multiscale nonlinear optimization. SIAM Journal on Optimization, 19(1):414-444, 2008.
Complexity and global rates of trust-region methods based on probabilistic models. Serge Gratton, Clément W Royer, Luís N Vicente, Zaikun Zhang, IMA Journal of Numerical Analysis. 383Gratton, Serge and Royer, Clément W and Vicente, Luís N and Zhang, Zaikun. Complexity and global rates of trust-region methods based on probabilistic models. IMA Journal of Numerical Analysis, 38(3):1579-1597, 2018.
Iterative methods for solving linear systems. Anne Greenbaum, 17SiamAnne Greenbaum. Iterative methods for solving linear systems, volume 17. Siam, 1997.
Result analysis of the nips 2003 feature selection challenge. Isabelle Guyon, Steve Gunn, Asa Ben-Hur, Gideon Dror, Advances in neural information processing systems. 17Isabelle Guyon, Steve Gunn, Asa Ben-Hur, and Gideon Dror. Result analysis of the nips 2003 feature selection challenge. Advances in neural information processing systems, 17, 2004.
Reducing the dimensionality of data with neural networks. science. E Geoffrey, Hinton, R Ruslan, Salakhutdinov, 313Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504-507, 2006.
Cubic regularization methods with secondorder complexity guarantee based on a new subproblem reformulation. Ru-Jun Jiang, Zhi-Shuo Zhou, Zi-Rui Zhou, Journal of the Operations Research Society of China. Ru-Jun Jiang, Zhi-Shuo Zhou, and Zi-Rui Zhou. Cubic regularization methods with second- order complexity guarantee based on a new subproblem reformulation. Journal of the Operations Research Society of China, pages 1-36, 2022.
Sub-sampled Cubic Regularization for Nonconvex Optimization. Jonas Moritz Kohler, Aurelien Lucchi, arXiv:1705.05933arXiv preprintJonas Moritz Kohler and Aurelien Lucchi. Sub-sampled Cubic Regularization for Non- convex Optimization. arXiv preprint arXiv:1705.05933, 2017.
Learning multiple layers of features from tiny images. Alex Krizhevsky, Geoffrey Hinton, Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start. Jacek Kuczyński, Henryk Woźniakowski, SIAM journal on matrix analysis and applications. 134Jacek Kuczyński and Henryk Woźniakowski. Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start. SIAM journal on matrix analysis and applications, 13(4):1094-1122, 1992.
Gradient-based learning applied to document recognition. Yann Lecun, Léon Bottou, Yoshua Bengio, Patrick Haffner, Proceedings of the IEEE. 8611Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
First-order methods almost always avoid strict saddle points. Mathematical programming. Ioannis Jason D Lee, Georgios Panageas, Max Piliouras, Simchowitz, Benjamin Michael I Jordan, Recht, 176Jason D Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I Jordan, and Benjamin Recht. First-order methods almost always avoid strict saddle points. Math- ematical programming, 176(1):311-337, 2019.
Gradient descent only converges to minimizers. Max Jason D Lee, Simchowitz, Benjamin Michael I Jordan, Recht, Conference on learning theory. PMLRJason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent only converges to minimizers. In Conference on learning theory, pages 1246-1257. PMLR, 2016.
Convergence of Newton-MR under inexact Hessian information. Yang Liu, Fred Roosta, SIAM Journal on Optimization. 311Yang Liu and Fred Roosta. Convergence of Newton-MR under inexact Hessian information. SIAM Journal on Optimization, 31(1):59-90, 2021.
MINRES: From Negative Curvature Detection to Monotonicity Properties. Yang Liu, Fred Roosta, SIAM Journal on Optimization. 2022To appearYang Liu and Fred Roosta. MINRES: From Negative Curvature Detection to Monotonicity Properties. SIAM Journal on Optimization, 2022. To appear.
Deep learning via Hessian-free optimization. James Martens, Proceedings of the 27th International Conference on Machine Learning (ICML-10). the 27th International Conference on Machine Learning (ICML-10)James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735-742, 2010.
Cubic-regularization counterpart of a variablenorm trust-region method for unconstrained minimization. Mario José, Marcos Martínez, Raydan, Journal of Global Optimization. 682José Mario Martínez and Marcos Raydan. Cubic-regularization counterpart of a variable- norm trust-region method for unconstrained minimization. Journal of Global Optimization, 68(2):367-385, 2017.
Invexity and Optimization. K Shashi, Giorgio Mishra, Giorgi, Springer Science & Business Media88Shashi K Mishra and Giorgio Giorgi. Invexity and Optimization, volume 88. Springer Science & Business Media, 2008.
Introductory lectures on convex optimization. Yurii Nesterov, Springer Science & Business Media87Yurii Nesterov. Introductory lectures on convex optimization, volume 87. Springer Science & Business Media, 2004.
Cubic regularization of Newton method and its global performance. Yurii Nesterov, T Boris, Polyak, Mathematical Programming. 1081Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177-205, 2006.
Numerical optimization. Jorge Nocedal, Stephen Wright, Springer Science & Business MediaJorge Nocedal and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.
Solution of sparse indefinite systems of linear equations. C Christopher, Michael A Paige, Saunders, SIAM journal on numerical analysis. 124Christopher C Paige and Michael A Saunders. Solution of sparse indefinite systems of linear equations. SIAM journal on numerical analysis, 12(4):617-629, 1975.
Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions. Ioannis Panageas, Georgios Piliouras, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. Ioannis Panageas and Georgios Piliouras. Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions. In 8th Innovations in Theoretical Com- puter Science Conference (ITCS 2017). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017.
First-order methods almost always avoid saddle points: The case of vanishing step-sizes. Ioannis Panageas, Georgios Piliouras, Xiao Wang, Advances in Neural Information Processing Systems. 32Ioannis Panageas, Georgios Piliouras, and Xiao Wang. First-order methods almost always avoid saddle points: The case of vanishing step-sizes. Advances in Neural Information Processing Systems, 32, 2019.
Nonlinear Analysis and Optimization II. Contemporary mathematics. S Reich, A D Ioffe, A Leizarowitz, B S Mordukhovich, I Shafrir, American Mathematical SocietyS. Reich, A.D. Ioffe, A. Leizarowitz, B.S. Mordukhovich, and I. Shafrir. Nonlinear Anal- ysis and Optimization II. Contemporary mathematics -American Mathematical Society. American Mathematical Society, 2010.
Newton-MR: Inexact Newton's Method With Minimum Residual Sub-problem Solver. Fred Roosta, Yang Liu, Peng Xu, Michael W Mahoney, EURO Journal on Computational Optimization. 10100035Fred Roosta, Yang Liu, Peng Xu, and Michael W Mahoney. Newton-MR: Inexact Newton's Method With Minimum Residual Sub-problem Solver. EURO Journal on Computational Optimization, 10:100035, 2022.
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression. W Clément, Royer, arXiv:2201.08568arXiv preprintClément W Royer et al. A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression. arXiv preprint arXiv:2201.08568, 2022.
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization. W Clément, Royer, O' Michael, Stephen J Neill, Wright, Mathematical Programming. 1801Clément W Royer, Michael O'Neill, and Stephen J Wright. A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization. Mathematical Programming, 180(1):451-488, 2020.
Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization. W Clément, Stephen J Royer, Wright, SIAM Journal on Optimization. 282Clément W Royer and Stephen J Wright. Complexity analysis of second-order line-search al- gorithms for smooth nonconvex optimization. SIAM Journal on Optimization, 28(2):1448- 1477, 2018.
Iterative methods for sparse linear systems. Yousef Saad, SIAM82Yousef Saad. Iterative methods for sparse linear systems, volume 82. SIAM, 2003.
Numerical methods for large eigenvalue problems: revised edition. SIAM. Yousef Saad, Yousef Saad. Numerical methods for large eigenvalue problems: revised edition. SIAM, 2011.
On the superlinear convergence of MINRES. Valeria Simoncini, B Daniel, Szyld, Numerical Mathematics and Advanced Applications. SpringerValeria Simoncini and Daniel B Szyld. On the superlinear convergence of MINRES. In Numerical Mathematics and Advanced Applications 2011, pages 733-740. Springer, 2013.
The conjugate gradient method and trust regions in large scale optimization. Trond Steihaug, SIAM Journal on Numerical Analysis. 203Trond Steihaug. The conjugate gradient method and trust regions in large scale optimiza- tion. SIAM Journal on Numerical Analysis, 20(3):626-637, 1983.
When are nonconvex problems not scary?. Ju Sun, Qing Qu, John Wright, arXiv:1510.06096arXiv preprintJu Sun, Qing Qu, and John Wright. When are nonconvex problems not scary? arXiv preprint arXiv:1510.06096, 2015.
Complete dictionary recovery over the sphere I: Overview and the geometric picture. Ju Sun, Qing Qu, John Wright, IEEE Transactions on Information Theory. 632Ju Sun, Qing Qu, and John Wright. Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Transactions on Information Theory, 63(2):853- 884, 2016.
A geometric analysis of phase retrieval. Ju Sun, Qing Qu, John Wright, Foundations of Computational Mathematics. 185Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. Foundations of Computational Mathematics, 18(5):1131-1198, 2018.
. N Lloyd, David Trefethen, Iii Bau, 50SiamNumerical linear algebraLloyd N Trefethen and David Bau III. Numerical linear algebra, volume 50. Siam, 1997.
Stochastic cubic regularization for fast nonconvex optimization. Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, Michael I Jordan , Advances in neural information processing systems. Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, and Michael I Jordan. Stochastic cubic regularization for fast nonconvex optimization. In Advances in neural information processing systems, pages 2899-2908, 2018.
A convergence analysis of the MINRES method for some Hermitian indefinite systems. Ze-Jia Xie, Xiao-Qing Jin, Zhi Zhao, East Asian Journal on Applied Mathematics. 74Ze-Jia Xie, Xiao-Qing Jin, and Zhi Zhao. A convergence analysis of the MINRES method for some Hermitian indefinite systems. East Asian Journal on Applied Mathematics, 7(4):827- 836, 2017.
Newton-type methods for non-convex optimization under inexact Hessian information. Peng Xu, Fred Roosta, Michael W Mahoney, Mathematical Programming. 1841Peng Xu, Fred Roosta, and Michael W Mahoney. Newton-type methods for non-convex optimization under inexact Hessian information. Mathematical Programming, 184(1):35-70, 2020.
Second-order optimization for non-convex machine learning: An empirical study. Peng Xu, Fred Roosta, Michael W Mahoney, Proceedings of the 2020 SIAM International Conference on Data Mining. the 2020 SIAM International Conference on Data MiningSIAMPeng Xu, Fred Roosta, and Michael W Mahoney. Second-order optimization for non-convex machine learning: An empirical study. In Proceedings of the 2020 SIAM International Conference on Data Mining, pages 199-207. SIAM, 2020.
Inexact non-convex Newtontype methods. Zhewei Yao, Peng Xu, Fred Roosta, Michael W Mahoney, Informs Journal on Optimization. 32Zhewei Yao, Peng Xu, Fred Roosta, and Michael W Mahoney. Inexact non-convex Newton- type methods. Informs Journal on Optimization, 3(2):154-182, 2021.
Inexact Newton-CG Algorithms With Complexity Guarantees. Zhewei Yao, Peng Xu, Fred Roosta, J Stephen, Michael W Wright, Mahoney, IMA Journal of Numerical Analysis. 2022To appearZhewei Yao, Peng Xu, Fred Roosta, Stephen J Wright, and Michael W Mahoney. Inexact Newton-CG Algorithms With Complexity Guarantees. IMA Journal of Numerical Analysis, 2022. To appear.
| [] |
Query Combinators

Clark C Evans
Prometheus Research, LLC

Kyrylo Simonov
Prometheus Research, LLC

Draft of February 28, 2017
arXiv:1702.08409
We introduce Rabbit, a combinator-based query language. Rabbit is designed to let data analysts and other accidental programmers query complex structured data. We combine the functional data model and the categorical semantics of computations to develop denotational semantics of database queries. In Rabbit, a query is modeled as a Kleisli arrow for a monadic container determined by the query cardinality. In this model, monadic composition can be used to navigate the database, while other query combinators can aggregate, filter, sort and paginate data; construct compound data; connect self-referential data; and reorganize data with grouping and data cube operations. A context-aware query model, with the input context represented as a comonadic container, can express query parameters and window functions. Rabbit semantics enables pipeline notation, encouraging its users to construct database queries as a series of distinct steps, each individually crafted and tested. We believe that Rabbit can serve as a practical tool for data analytics.
Introduction
Combinators are a popular approach to the design of compositional domain-specific languages (DSLs). This approach views a DSL as an algebra of self-contained processing blocks, which either come from a set of predefined atomic primitives or are constructed from other blocks using block combinators.
The combinator approach gives us a roadmap to design a database query language:
• define the model of database queries;
• describe the set of primitive queries;
• describe the combinators for making composite queries.
To elaborate on this idea, we need some sample structured data. Throughout this paper, we use a simple database that contains just two classes of entities: departments and employees. Each department entity has one attribute: name. Each employee entity has three attributes: name, position and salary. Each employee belongs to a department. An employee may have a manager, who is also an employee.
In Figure 1, the structure of the sample database is visualized as a directed graph, with attributes and relationships (arcs) connecting entity classes and attribute types (graph nodes). This diagram may suggest that we view attributes and relationships as functions with the given types of input and output, for example

department : Emp → Dept,    name : Dept → Text.
This is known as the functional database model [16,22]. This model provides us with a starting point on our combinator roadmap. Indeed, a database query could be seen as a function; then, a set of primitive queries is formed by all the attributes and relationships, while function composition becomes a binary query combinator. With these considerations, we can write our first composite query:

department.name : Emp → Text.

In this example, department.name is a query written in Rabbit notation, and Emp → Text is its signature. The period (".") denotes the composition combinator, which is a polymorphic binary operator with a signature − . − : (A → B, B → C) → (A → C).
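This functional reading of the schema can be sketched in a few lines of Python. The sketch is our own illustration, not part of the paper: the entity records (sales, alice) and helper names are assumptions; only the signatures department : Emp → Dept, name : Dept → Text and the period combinator come from the text.

```python
# Entities as dicts; attributes and relationships as plain functions.
sales = {"name": "Sales"}
alice = {"name": "Alice", "position": "Manager", "salary": 100,
         "department": sales}

def department(emp):   # department : Emp -> Dept
    return emp["department"]

def name(entity):      # name : Dept -> Text
    return entity["name"]

def compose(p, q):     # the period combinator: (A -> B, B -> C) -> (A -> C)
    return lambda x: q(p(x))

department_name = compose(department, name)  # department.name : Emp -> Text
print(department_name(alice))                # -> Sales
```

Note that in this naive encoding every query must be a total single-valued function, which is exactly the limitation discussed next.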
Even though this query model can express one database query, it does not seem to be powerful enough to become the foundation of a query language. What is this model missing?
First, it is awkward that a query always demands an input. It means that we cannot express an input-free query like show a list of all employees.¹ Further, although the relationships are bidirectional, the model only covers one of their directions. Indeed, we chose to represent the relationship between departments and employees as a primitive with input Emp and output Dept. However, we may just as well be interested in finding, for any given department, the corresponding list of employees. It would be natural to add a primitive for the opposite direction, but it cannot be encoded as a function because its signature Dept → Emp would incorrectly imply that there is exactly one employee per department. Thus, the query model is unable to express multivalued or plural relationships.
The model also fails to capture the semantics of optional attributes and relationships. Such is the relationship between employees and their managers, which, according to Figure 1, should be encoded by a primitive with signature Emp → Emp. But this signature implies that every employee must have a manager, which is untrue. Apparently, a pure functional model is too restrictive to express the variety of relationships between database entities. This paper shows how to complete this query model and build a query language on top of it. It is organized as follows.
In Section 2, we show how to represent optional and plural relationships using the notion of query cardinality, which, following the approach of categorical semantics of computations [18], determines the monadic container for the query output. This lets us establish a compositional model of database queries.
In Section 3, we show how common data operations can be expressed as query combinators. Specifically, we describe combinators that extract, aggregate, filter, sort and paginate data; construct compound data; and connect self-referential data.
In Section 4, we show how grouping and data cube operations can be implemented as combinators that reorganize the intrinsic hierarchical structure of the database.
In Section 5, using the approach to the semantics of dataflow programming [25], we extend the query model to include a comonadic query context, which allows us to express query parameters and window functions.

¹ We italicize business questions that specify database queries.
In Section 6, we summarize the query model and briefly discuss some related work.
Query Cardinality
In Section 1, we suggested that a database query could be modeled as a function. However, this naïve model failed to represent optional and plural relationships as well as queries lacking apparent input. In this section, we resolve these issues by introducing the notion of query cardinality.
We found it difficult to model these two relationships:
(i) An employee may have a manager.
(ii) A department is staffed by a number of employees.
We were also puzzled on how to express input-free queries such as:
(iii) Show a list of all employees.
We could attempt to represent optional and plural output values as instances of the container types Opt{A} and Seq{A}, where the option container Opt{A} holds zero or one value of type A, and the sequence container Seq{A} holds an ordered list of values of type A. Using these containers, relationships (i) and (ii) could be expressed as primitive queries with signatures manager : Emp → Opt{Emp}, employee : Dept → Seq{Emp}.
Moreover, we could guess the output of query (iii). Indeed, a list of all employees can only mean Seq{Emp}.
To describe the input of query (iii), we introduce a singleton type
Void.
The type Void has a unique inhabitant, and because there is no freedom in choosing a value of this type, it can designate input that can never affect the result of a query. Using the singleton type, we can express (iii) as a class primitive employee : Void → Seq{Emp}.
Although both (ii) and (iii) are denoted by the same name, we can still distinguish them by their input type. Unfortunately, although containers let us represent optional and plural output, they do not compose well. For example, it is tempting to express for a given employee, find their manager's salary as a composition

manager.salary,

but the output container Opt{Emp} of manager does not match the input type Emp of salary; for the same reason, show the names of all employees cannot be written directly as

employee.name.

To utilize monadic composition, we distinguish the output type of a query from the output container, which we call the query cardinality. For example, we say that query (i) is an optional query from Emp to Emp, (ii) is a plural query from Dept to Emp, and (iii) is a plural query from Void to Emp. Then, any two queries should compose, regardless of their cardinalities, so long as they have compatible intermediate types; furthermore, the least upper bound of their cardinalities is the cardinality of their composition.
Specifically, given two queries p : A → M{B} and q : B → N{C}, their composition is a query p.q : A → (M ∨ N){C}, where the cardinality M ∨ N is the least upper bound of M and N. Using this rule, we can justify the two compositions above and give them signatures manager.salary : Emp → Opt{Int}, employee.name : Void → Seq{Text}.
Let us work out the details. Query cardinalities are ordered with respect to the inclusions A ⊂ Opt{A} ⊂ Seq{A}: a plain value can be viewed as a one-element optional container, and an optional container as a sequence of length at most one. At last, we are ready to present the design of a combinator-based query language.
Query model. A database query is characterized by its input type A, its output type B and its cardinality M, and can be represented as a function of the form

p : A → M{B},

where M{B} is one of B, Opt{B} or Seq{B}; the respective queries are called singular, optional or plural.
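One way to make this model runnable is to encode every output container as a Python list: a singular result is a one-element list, an optional result a list of length at most one, and a plural result an arbitrary list. Kleisli composition is then a flat-map, and the least-upper-bound rule for cardinalities holds automatically. A minimal sketch, with sample data and helper names of our own (not from the paper):

```python
def kleisli(p, q):
    # p : A -> M{B}, q : B -> N{C}  gives  p.q : A -> (M v N){C}
    return lambda x: [c for b in p(x) for c in q(b)]

# Sample employee entities.
ann = {"name": "Ann", "salary": 200, "manager": None}
bob = {"name": "Bob", "salary": 100, "manager": ann}

def manager(e):   # optional query Emp -> Opt{Emp}: zero- or one-element list
    return [] if e["manager"] is None else [e["manager"]]

def salary(e):    # singular query Emp -> Int: always a one-element list
    return [e["salary"]]

manager_salary = kleisli(manager, salary)  # Emp -> Opt{Int}
print(manager_salary(bob))  # -> [200]
print(manager_salary(ann))  # -> []
```

Composing the optional manager with the singular salary yields an optional query, matching the signature manager.salary : Emp → Opt{Int} derived in the text.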
Primitives. Recall that the original, incomplete set of primitives was obtained from the schema graph in Figure 1. To reflect the full set of primitives, we should add the Void node and the remaining arcs (see Figure 2). Furthermore, we can transform the schema graph into an (infinite) tree by unfolding it starting from the Void node (see Figure 3). The unfolded tree represents the functional database in a universal hierarchical form.
Combinators. The composition combinator sends two queries p : A → M1{B} and q : B → M2{C} to their composition p . q : A → M{C}, where M = M1 ∨ M2. Other common combinators are listed in Table 1.
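As an illustrative sketch only (not the authors' implementation), the cardinality lattice and monadic composition can be modeled in Python, with Opt{B} as "value or None" and Seq{B} as a list; the employee data and names below are hypothetical:

```python
# Illustrative sketch: queries are plain Python functions; Opt{B} is
# "value or None", Seq{B} is a list.

ORDER = {"one": 0, "opt": 1, "seq": 2}

def to_seq(x, card):
    # Embed a result into the top cardinality Seq, following A -> Opt{A} -> Seq{A}.
    if card == "one":
        return [x]
    if card == "opt":
        return [] if x is None else [x]
    return x

def compose(p, p_card, q, q_card):
    # Monadic composition p . q; the result cardinality is the least upper
    # bound of the argument cardinalities.
    card = p_card if ORDER[p_card] >= ORDER[q_card] else q_card
    def pq(a):
        cs = [c for b in to_seq(p(a), p_card) for c in to_seq(q(b), q_card)]
        if card == "one":
            return cs[0]
        if card == "opt":
            return cs[0] if cs else None
        return cs
    return pq, card

# Hypothetical data: manager is optional, salary is singular.
manager = {"alice": None, "bob": "alice"}.get    # Emp -> Opt{Emp}
salary = {"alice": 200000, "bob": 120000}.get    # Emp -> Int

manager_salary, card = compose(manager, "opt", salary, "one")
```

Composing an optional query with a singular one yields an optional query, mirroring the promotion rule above.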
Query Combinators
In this section, we show how the query model defined in Section 2 can support a wide range of operations on data.
Extracting Data
By traversing the tree of Figure 3, we can extract data from the database.
department.name
This example is constructed by descending through nodes department and name, which represent primitives
department : Void → Seq{Dept}, name
: Dept → Text.

This query produces a list of employee names. Since each employee belongs to exactly one department, the list should contain the name of every employee. The order in which the names appear in the output depends on the intrinsic order of the department and employee primitives, but, in any case, employees within the same department will be grouped together.
The same collection of names, although not necessarily in the same order, is produced by the following example.
employee.name
On the other hand, the next example is very different from the apparently similar Example 3.1.
employee.department.name
Here, we should see a list of department names, but each name will appear as many times as there are employees in the corresponding department.
employee.position
Similarly, employee.position will output duplicate position titles. We will see how to produce a list of unique positions in Section 4.
Example 3.6 Show all employees.
employee

This example emits a sequence of employee entities, which, in practice, could be represented as records with employee attributes.
Summarizing Data
Let us show how the extracted data can be summarized.
count(department)
This query produces a single number, so that its signature is
count(department) : Void → Int.
It is constructed by applying the count combinator to a query that generates a list of all departments department : Void → Seq{Dept}.
Comparing the signatures of these two queries, we can derive the signature of the count combinator, in this specific case
(Void → Seq{Dept}) → (Void → Int),
and, in general
count : (A → Seq{B}) → (A → Int).
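A minimal Python sketch of this lifting, with a hypothetical dictionary-based stand-in for the department primitive and None standing in for the Void input:

```python
# Sketch: lifting sequence length to the count combinator,
# count(q) = a -> |q(a)|.

def count(q):
    # Transform a sequence-valued query into an integer-valued query.
    return lambda a: len(q(a))

department = {None: ["ACCT", "POLICE", "FIRE"]}.get   # Void -> Seq{Dept}
n_departments = count(department)                      # Void -> Int
```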
Identity and constants
here : A → A = a → a
150000 : A → Int = a → 150000
home : A → Void
null : A → Opt{B} = a → ⊥

Some scalar combinators
=, ≠ : (A → B, A → B) → (A → Bool)
<, ≤, >, ≥ : (A → B, A → B) → (A → Bool)
&, | : (A → Bool, A → Bool) → (A → Bool)
+, − : (A → Int, A → Int) → (A → Int)
length : (A → Text) → (A → Int)

Aggregate combinators
count : (A → Seq{B}) → (A → Int)
exists : (A → Seq{B}) → (A → Bool)
any, all : (A → Seq{Bool}) → (A → Bool)
sum : (A → Seq{Int}) → (A → Int)
max, min : (A → Seq{Int}) → (A → Opt{Int})

Sequence transformers
filter : (A → Seq{B}, B → Bool) → (A → Seq{B})
sort : (A → Seq{B}, B → C1, . . . , B → Cn) → (A → Seq{B})
take : (A → Seq{B}, A → Int) → (A → Seq{B})
unique : (A → Seq{B}) → (A → Seq{B})

Selector and modifiers
select : (A → M{B}, B → M1{C1}, . . . , B → Mn{Cn}) → (A → M{⟨M1{C1}, . . . , Mn{Cn}⟩})
define : (A → M{B}, B → T) → (A → M{B})
asc, desc : (A → B) → (A → B≶)

Hierarchical connector
connect : (A → Opt{A}) → (A → Seq{A})

Grouping
group : (A → Seq{B}, B → C1, . . . , B → Cn) → (A → Seq{⟨C1, . . . , Cn, Seq{B}⟩})
rollup : (A → Seq{B}, B → C1, . . . , B → Cn) → (A → Seq{⟨Opt{C1}, . . . , Opt{Cn}, Seq{B}⟩})

Context primitives and combinators
frame : (Rel{A} → M{B}) → (A → M{B})
before, around : Rel{A} → Seq{A}
given : (EnvT{A} → M{B}, A → T) → (A → M{B})
PARAM : EnvT{A} → T

In other words, the count combinator transforms any sequence-valued query to an integer-valued query. It is implemented by lifting the function that computes the length of a sequence | − | : Seq{A} → Int to a query combinator count(q) = a → |q(a)|.

Unary combinators that transform a plural query to a singular (or optional) query are called aggregate combinators. This query is optional since it produces no output when the database contains no employees. Applying the combinator max to the query above, we answer the following question.
Pipeline Notation
Queries are often constructed incrementally, by extracting relevant data and then shaping it into the desired form with a chain of combinators. This construction is made apparent with the pipeline notation.
In pipeline notation, the first argument of a combinator is placed in front of it, separated by colon (":"):
p : F ≡ F (p), p : F (q 1 , . . . , q n ) ≡ F (p, q 1 , . . . , q n ).
For example, count(department) could also be written department : count.
A more sophisticated query written in pipeline notation is shown in the following example.
Without pipeline notation, this query is much less intelligible: take(select(sort(filter( employee, department.name = "POLICE"), desc(salary)), name, position, salary), 10).
The combinators filter, sort, desc, select, and take are described below.
Filtering Data
We can now demonstrate how to produce entities that satisfy a certain condition.

This query introduces several concepts. First, the integer literal 150000 represents a primitive query that, for any given employee, produces the number 150000: 150000 : Emp → Int = e → 150000.
Second, the relational symbol > denotes a binary combinator that builds a query for a given employee, show whether their salary is higher than $150k salary > 150000 : Emp → Bool. Third, a binary combinator filter emits those employee entities that satisfy the condition salary > 150000. In general, given
p : A → Seq{B}, q : B → Bool, a query filter(p, q) : A → Seq{B}
produces the values of p that satisfy condition q
filter(p, q) = a → [ b | b ← p(a), q(b) = true ].
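A sketch of this definition in Python, under the same list-based stand-in for Seq (the employee data is hypothetical):

```python
# Sketch of filter(p, q) = a -> [ b | b <- p(a), q(b) = true ].

def filter_q(p, q):
    # Keep the values of p that satisfy the predicate q.
    return lambda a: [b for b in p(a) if q(b)]

employee = lambda _: [("alice", 200000), ("bob", 120000), ("carol", 151000)]
high_paid = filter_q(employee, lambda e: e[1] > 150000)   # Void -> Seq{Emp}
```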
The following example shows how filter could be used in tandem with aggregate combinators.
Sorting and Paginating Data
The combinator sort, applied to a plural query, sorts the query output in ascending order. Here, the sort key is wrapped with the combinator desc to indicate the descending sort order.

It is not immediately obvious how to implement desc without violating the query model. Naïvely, desc acts like a negation operator; however, not every type supports negation. Instead, we make the sort order a part of the type definition, so that Int≤ and Int≥ could indicate the integer type with ascending and descending sort order respectively. Then, desc could be considered a type conversion combinator with the signature desc : (A → B) → (A → B≥).
In this example, only the first 1% of employees are retained by the combinator take, which has two arguments: a query that produces a sequence of employees employee : sort(salary : desc) : Void → Seq{Emp} and a query that returns how many employees to keep count(employee) ÷ 100 : Void → Int.
Notice that both arguments of take have the same input (Void in this case), which is reflected in the signature
take : (A → Seq{B}, A → Int) → (A → Seq{B}).
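The sort/desc/take trio can be sketched the same way; as a simplification, descending order is handled here with a flag rather than the paper's type-level tagging, and the data is hypothetical:

```python
# Sketch of sort and take. Both arguments of take share the same input,
# as in take : (A -> Seq{B}, A -> Int) -> (A -> Seq{B}).

def sort_q(p, key, descending=False):
    # Sort the output of a plural query by a key query.
    return lambda a: sorted(p(a), key=key, reverse=descending)

def take(p, n):
    # Keep the first n(a) elements of p(a).
    return lambda a: p(a)[: n(a)]

employee = lambda _: [("alice", 200000), ("bob", 120000), ("carol", 151000)]
by_salary = sort_q(employee, key=lambda e: e[1], descending=True)
top_third = take(by_salary, lambda a: len(employee(a)) // 3)
```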
Query Output
The combinator select customizes the query output.
Previously, we constructed a query to show the number of employees for each department (see Example 3.9):
department.count(employee).
However, this query only produces a list of bare numbers; it does not connect them to their respective departments. This is corrected in the following example.

The select combinator generates a sequence of records by applying each field query to every entity produced by the base query, giving this example a signature Void → Seq{⟨name : Text, size : Int⟩}. The declaration ⟨name : Text, size : Int⟩ defines a record type with two fields: a text field name and an integer field size. The names of the record fields are derived from the tags of the field queries, which could be set using the tagging notation. For example, size ⇒ count(employee) binds a tag size to the query count(employee). Since the tag does not materially affect the query it annotates, we do not expose the tag in the query model.
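A sketch of select under the list-based conventions, with Python dicts standing in for records and field tags mapped to dict keys; the data is hypothetical:

```python
# Sketch of select: apply each tagged field query to every entity produced
# by the base query, yielding one record per entity.

def select(p, **fields):
    return lambda a: [{tag: q(b) for tag, q in fields.items()} for b in p(a)]

dept_employees = {"ACCT": ["alice"], "POLICE": ["bob", "carol"]}
department = lambda _: ["ACCT", "POLICE"]                 # Void -> Seq{Dept}
result = select(department,
                name=lambda d: d,                          # Dept -> Text
                size=lambda d: len(dept_employees[d]))     # Dept -> Int
```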
A more complex output structure could be defined with nested select combinators.
In this example, the query output has the type Seq{⟨name : Text, top salary : Opt{Int}, manager : Seq{⟨name : Text, salary : Int⟩}⟩}.
Recall that we represented the data source in a universal hierarchical form (see Figure 3). Furthermore, the query output could also be represented as a hierarchical database, whose structure is determined by the query signature (see Figure 4). Thus, queries could be seen as transformations of hierarchical databases.
Query Aliases
A complex query could often be simplified by replacing duplicate expressions with aliases.

department : define(size ⇒ count(employee))
    : sort(size : desc)
    : select(name, size)
    : take(3)

In this example, the alias size is created in two steps: first, the tag size is bound to the query count(employee) : Dept → Int, and then size is added to the scope of Dept by the combinator define.
Although this query could have been written as

department : sort(count(employee) : desc)
    : select(name, count(employee))
    : take(3),

the use of an alias makes this example more legible, not only by reducing redundancy, but also by assigning a name to a key concept of the query.
Hierarchical Relationships
Hierarchical relationships are encoded by self-referential primitives.
For example, the relationship between an employee and their manager is expressed with manager : Emp → Opt{Emp}.
Example 3.21 Find all employees whose salary is higher than the salary of their manager.

employee : filter(salary > manager.salary)

The difficulty is that > expects its arguments to be singular, but the output of manager.salary is optional.
To legitimize this query, we adopt the following rule. When one argument of a scalar combinator has a nontrivial cardinality, this cardinality can be promoted to the output of the combinator. This rule gives > a signature
− > − : (A → Int, A → M {Int}) → (A → M {Bool})
or, in this specific case, salary > manager.salary : Emp → Opt{Bool}.
Finally, we need to let filter accept predicate queries with optional output, by treating ⊥ as false.
Using expressions manager, manager.manager, manager.manager.manager, . . .
we can build queries that involve the manager, the manager's manager, etc. We can also obtain the complete management chain for the given employee with connect(manager) : Emp → Seq{Emp}. Here, the query connect(manager).position : Emp → Seq{Text} produces the positions of all managers above the given employee.
In general, the combinator connect maps an optional self-referential query to a plural self-referential query by taking its transitive closure: connect(p) = a → [ p(a), p(p(a)), . . . , p(n)(a) ], where p(n)(a) ≠ ⊥ and p(n+1)(a) = ⊥.
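The closure can be sketched in Python, with None standing in for ⊥ and a hypothetical org chart:

```python
# Sketch of connect(p) = a -> [ p(a), p(p(a)), ... ]: iterate the optional
# self-referential query until it yields the bottom value.

def connect(p):
    def chain(a):
        out, x = [], p(a)
        while x is not None:
            out.append(x)
            x = p(x)
        return out
    return chain

manager = {"carol": "bob", "bob": "alice", "alice": None}.get  # Emp -> Opt{Emp}
management_chain = connect(manager)                             # Emp -> Seq{Emp}
```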
Quotient Classes
Previously, we demonstrated how to group and aggregate data-so long as the structure of the data reflects the hierarchical form of the database. In this section, we show how to overcome this limitation.
In Figure 3, the schema graph is unfolded into an infinite tree, shaping the data into a hierarchical form. A section of this hierarchy could be extracted using the select combinator. To make a "virtual" entity class from all distinct values of an attribute and inject this class into the database structure, we use the group combinator (see Figure 5).

Once the database hierarchy is rearranged to include the class Pos, we can answer any questions about position entities.
Nested group combinators can construct a hierarchical output of an arbitrary form. In this example, we rebuild the database hierarchy to place positions on top, then departments, and then employees. Notably, the nested group expression has a signature employee : group(department) : Emp∕position → Seq{Emp∕department}.
In order to apply group to a calculated attribute, such as the level in the organization chart count(connect(manager)) : Emp → Int, we need to bind an explicit tag to this attribute. To summarize data along several dimensions, we can apply group to more than one attribute. When the summary data has to include subtotals and totals, we replace group with rollup.
In this example, the query employee : rollup(department, position) emits, in addition to the records that would be generated by group, one "subtotal" record per department and one "grand total" record. The former has the position field set to ⊥ and an employee list containing all employees in the given department. The latter has both department and position set to ⊥ and employee containing the full list of employees.
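A sketch of group and rollup for a single key query, with dicts standing in for quotient-class records, None for ⊥, and hypothetical data (the real combinators accept several key queries):

```python
# Sketch: group partitions the output of a plural query by a key query,
# emitting one record per distinct key; rollup additionally emits a
# "grand total" record with the key set to None.

def group(p, key):
    def grouped(a):
        buckets = {}
        for b in p(a):
            buckets.setdefault(key(b), []).append(b)
        return [{"key": k, "employee": v} for k, v in buckets.items()]
    return grouped

def rollup(p, key):
    def rolled(a):
        records = group(p, key)(a)
        records.append({"key": None, "employee": list(p(a))})  # grand total
        return records
    return rolled

employee = lambda _: [("alice", "SGT"), ("bob", "SGT"), ("carol", "CAPT")]
by_position = group(employee, key=lambda e: e[1])
```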
Query Context
In this section, we extend the query model to support context-aware queries: parameterized queries and queries with window functions. The query environment is populated using the combinator given. In Example 5.1, the first argument of given is a parameterized query with the signature Env{D:Text,S:Int}{Void} → Seq{Emp}.

The query environment is one example of a query context, a comonadic container wrapping the query input. It could be shown that the environment is compatible with query composition (cf. Section 2), which permits us to incorporate it into the query model.
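A sketch of the environment context in Python, modeling Env{A} as a pair of the input and a parameter dict; all names and data are hypothetical:

```python
# Sketch of the environment context: 'given' evaluates the parameter queries
# and discharges the context, as in given(p, q1, ..., qn) = a -> p(<a, q1(a), ...>).

def given(p, **params):
    return lambda a: p((a, {name: q(a) for name, q in params.items()}))

def param(name):
    # Extract a parameter from the environment pair (input, params).
    return lambda env: env[1][name]

employees = [("alice", 200000), ("bob", 120000)]

# Parameterized query over Env{Void}: employees earning more than S.
well_paid = lambda env: [e for e in employees if e[1] > param("S")(env)]

query = given(well_paid, S=lambda a: 150000)
```

The combined query no longer depends on the environment, matching the signature Void → Seq{Emp} in the text.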
Another example of a query context is the input flow, a container of all input values seen by the query. We denote this context type by Rel{A} and its values by [a 1 , . . . , ((a j )), . . . , a n ] : Rel{A}, where a j is the current input value, a 1 , . . . , a j−1 are the values seen in the past, and a j+1 , . . . , a n are the values to be seen in the future. The input flow can be used for an alternative implementation of Example 5.2.
Example 5.2 Which employees have higher than average salary? employee : filter(salary > mean(around.salary))
To relate each value in a dataset to the dataset as a whole, we use the plural primitive around, which materializes its input flow as a sequence:
around : Rel{A} → Seq{A} around = [a 1 , . . . , ((a j )), . . . , a n ] → [a 1 , . . . , a j , . . . , a n ].
In this example, around produces, for a selected employee, a list of all employees. By composing it with salary, we get, for a selected employee, a list of all salaries around.salary : Rel{Emp} → Seq{Int}, which lets us establish the average salary as a contextaware attribute mean(around.salary) : Rel{Emp} → Opt{Num}.
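The input flow can be sketched as a pair (values, current_index); the around primitive and its keyed variant then look as follows (names and data are hypothetical):

```python
# Sketch: the input flow Rel{A} as (values, current_index). 'around'
# materializes the whole flow; the keyed variant keeps only values that
# share the current value's key.

def around(flow):
    values, j = flow
    return list(values)

def around_by(q):                      # stand-in for the around(q) variant
    def run(flow):
        values, j = flow
        return [a for a in values if q(a) == q(values[j])]
    return run

employees = [("alice", "SGT", 200), ("bob", "SGT", 120), ("carol", "CAPT", 151)]
flow = (employees, 0)                  # current value: alice

all_salaries = [e[2] for e in around(flow)]
same_position = around_by(lambda e: e[1])(flow)      # alice and bob (both SGT)
avg = sum(e[2] for e in same_position) / len(same_position)
```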
Example 5.3 In the Police department, show employees whose salary is higher than the average for their position.

employee : filter(department.name = "POLICE")
    : filter(salary > mean(around(position).salary))

Here, each employee is matched with other employees having the same position using a variant of around: around(q) = [a1, . . . , ((aj)), . . . , an] → [ ai | q(ai) = q(aj) ].
Note the use of two separate filter combinators. If we switch them, around(position) would list employees with the same position across all departments.
We can exploit the input flow to calculate running aggregates. The primitive before exposes its input flow up to and including the current input value:
before : Rel{A} → Seq{A} before = [a 1 , . . . , ((a j )), . . . , a n ] → [a 1 , . . . , a j ].
Using before, we can enumerate the rows in the output count(before) : Rel{Emp} → Int as well as calculate the running sum of salaries sum(before.salary) : Rel{Emp} → Int.
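A sketch of before and the running aggregates built from it, under the same (values, current_index) stand-in for Rel{A}:

```python
# Sketch of 'before': expose the input flow up to and including the current
# value, then derive count(before) and sum(before.salary) from it.

def before(flow):
    values, j = flow
    return list(values[: j + 1])

salaries = [100, 200, 300]
rows = [(len(before((salaries, j))),      # count(before): row number
         sum(before((salaries, j))))      # sum(before.salary): running total
        for j in range(len(salaries))]
```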
Conclusion and Related Work
In this paper, we introduce a combinator-based query language, Rabbit, and, using the framework of (co)monads and (bi-)Kleisli arrows [18,25], describe the denotation of database queries. The functional database model presents the database as a collection of extensionally defined arrows in some underlying category of serializable data. We bootstrap the query model by assuming that a query with the input of type A and the output of type B can be expressed in this category as an arrow
A → B.
To model optional and plural queries, we wrap their output in a monadic container and represent them as Kleisli arrows
A → M {B}.
The containers should form a family M of monads equipped with a join-semilattice structure: for any M1, M2 ∈ M, there exists M1 ∨ M2 ∈ M with natural injections

M1{A} → (M1 ∨ M2){A} ← M2{A}.
To represent query parameters and the input flow, we wrap the query input in a comonadic container, expressing context-aware queries as bi-Kleisli arrows
W {A} → M {B}.
Dually, the comonadic containers form a meet-semilattice W of comonads: for any W1, W2 ∈ W, there exists W1 ∧ W2 ∈ W with natural projections

W1{A} ← (W1 ∧ W2){A} → W2{A}.
Moreover, for any monad M ∈ M and comonad W ∈ W, there should exist a distributive law
W {M {A}} → M {W {A}}.
Then, the composition of queries p : W1{A} → M1{B} and q : W2{B} → M2{C} could be defined as a query of the form p . q : W{A} → M{C} (W = W1 ∧ W2, M = M1 ∨ M2), constructed using the lattice structures of M and W, compositional properties of monads and comonads, and the distributive law for M and W.

Rabbit has its roots in the authors' work on a URL-based query language [11], which provided a navigational interface to SQL databases. While looking for a way to formally specify this language, we arrived at the combinator-based query model.
Early on, we adopted the navigational approach of XPath [7], which led us to represent the schema as a rooted graph (e.g., Figure 2) and queries as paths in this graph. We recognized that each graph arc has some cardinality, and, consequently, so does each path. Next came the realization that, for any dataset, the dataset values are all related to each other, and this relationship can be denoted as a plural self-referential arc around. We discovered that the rule for composing around with other plural arcs is exactly the distributive law for the Rel comonad over the Seq monad, which pushed us to model database queries as Kleisli arrows.
Monads and their Kleisli arrows came to be a standard tool in denotational semantics after Moggi [18] used them to define a generic compositional model of computations. By varying the choice of monad, he expressed partiality, exceptions, input-output, and other computational effects. Uustalu and Vene [25] used a dual model of comonads and co-Kleisli arrows to describe semantics of dataflow programming. They also discussed distributive laws of a comonad over a monad. In the context of databases, Spivak [23] suggested using monads to encode data with complex structure. Monad comprehensions [24,4] form the core of query interfaces such as Kleisli [27] and LINQ [17]. In contrast with Rabbit, which is based on Kleisli arrows and monadic composition, these interfaces are designed around monadic containers and the monadic bind operator.
The graph representation of the database schema is a variation of the functional database model [16,22], which gave rise to a number of query languages: FQL [3], DAPLEX [21], GENESIS [1], Kleisli [27] and others; see [13] for a comprehensive survey. Among them, FQL and its derivatives are remarkably close to Rabbit-Example 1.1 is a valid query in both. The key difference is that we interpret the period (".") as a composition of Kleisli arrows, which implies, for instance, that we cannot define count as Seq{A} → Int and write employee.count for the number of employees. Instead, we have to accept count as a query combinator.
Combinators are higher-order functions that serve to construct expressions without bound variables. They were introduced as the building blocks of mathematical logic [20,8], from where they migrated to programming practice, becoming a popular tool for constructing DSLs; examples are found in diverse domains such as parsers [26,14], reactive animation [9], financial contracts [15], and the view-update problem [12].
Although a few combinator-based query models have been proposed [3,2,1,10,6], it is generally accepted that "combinator-style languages are difficult for users to master and thus ill-suited as query languages" [6]. Examples presented in this paper prove otherwise. Moreover, the syntax of a combinator-based DSL directly mirrors its semantics, making it an executable specification. This is an attractive property for a language oriented towards domain experts-if the semantics does not contradict the experts' intuition.
In Rabbit, the intuition relies upon the hierarchical data model, which is simple, familiar and prolific. For querying purposes, we view the database as a composite hierarchical data structure obtained by unfolding the database schema into a potentially infinite schema tree (e.g., Figure 3). We were inspired by concurrency theory, where static "system" models are unfolded into runtime "behavior" models [19], but this technique has also been used in database theory to relate the network and hierarchical data models [5].
Rabbit's query model lets us rigorously define the basic notions of data analysis. Indeed, it can naturally express optional and plural relationships; database navigation; transitive closure of hierarchical relationships; aggregate, grouping and data cube operations; query parameters and window functions. In fact, any data operation could be lifted to a query combinator.
For specific application domains, Rabbit can provide an extensible query framework. Applications can implement native domain operations by extending the sets of primitives, combinators, and (co)monadic containers. For example, we adapted Rabbit to the field of medical informatics by adding graph operations over hierarchical ontologies and temporal operations on medical observations. For its users, Rabbit can provide a collaborative data processing platform. Database queries should be seen as artifacts of informatics collaboration-transparent, executable specifications that are written, shared, and discussed by software developers, data analysts, statisticians, and subject-matter experts. We believe that a compositional query model focused on data relationships can enable this dialog.
Figure 1: Sample database
Example 1.1 Given an employee entity, show the name of their department.

department.name : Emp → Text
p : A → M1{B}, q : B → M2{C}, we first promote their output to a common cardinality M = M1 ∨ M2, and then use the monadic composition combinator − . − : (A → M{B}, B → M{C}) → (A → M{C}) to construct p . q : A → M{C}.
Figure 2: Database schema in folded form

Container instances are written ⊥, ⟨a⟩ : Opt{A} and [a1, . . . , an] : Seq{A}; the inclusions are defined by ⊥ : Opt{A} −→ [ ] : Seq{A} and a : A −→ ⟨a⟩ : Opt{A} −→ [a] : Seq{A}. This order lets us, whenever necessary, promote any query A → M{B} to a query A → M′{B} with a greater cardinality M ⊑ M′. Monadic composition for the option and sequence containers is well known. For optional queries p : A → Opt{B}, q : B → Opt{C}, it is defined by

p . q : A → Opt{C},
p . q : a → ⟨c⟩ (p(a) = ⟨b⟩, q(b) = ⟨c⟩), ⊥ (otherwise).

For plural queries p : A → Seq{B}, q : B → Seq{C}, the sequence (p . q)(a) is calculated by applying p to a, a −p→ [b1, b2, . . .], then applying q to every element of p(a) [b1, b2, . . .]
p
: A → M 1 {B}, q : B → M 2 {C} to their composition p . q : A → M {C} (M = M 1 M 2 ).
Example 3.1 Show the name of each department.
Figure 3: Database schema in unfolded form

The composition of the primitives inherits the input of the first component and the output of the second component. Since one of the components is plural, the composition is also plural, which gives it a signature department.name : Void → Seq{Text}.
Example 3.2 For each department, show the name of each employee.

department.employee.name

This example takes a path through department : Void → Seq{Dept}, employee : Dept → Seq{Emp}, name : Emp → Text to construct a query department.employee.name : Void → Seq{Text}.
Example 3.4 For each employee, show the name of their department.
Example 3.5 Show the position of each employee.
Example 3.7 Show the number of departments.
Example 3.8 What is the highest employee salary?

max(employee.salary)

In this example, we extract the relevant data with employee.salary : Void → Seq{Int} and summarize it using the max aggregate max(employee.salary) : Void → Opt{Int}.
Example 3.9 For each department, show the number of employees.

department.count(employee)

In this example, we transform a plural relationship, all employees in the given department, employee : Dept → Seq{Emp}, to a calculated attribute, the number of employees in the given department, count(employee) : Dept → Int. Then we attach it to department : Void → Seq{Dept} to get the number of employees in each department: department.count(employee) : Void → Seq{Int}.
Example 3.10 How many employees are in the largest department?

max(department.count(employee))
Example 3.11 Show the top 10 highest paid employees in the Police department.

employee : filter(department.name = "POLICE")
    : sort(salary : desc)
    : select(name, position, salary)
    : take(10)
Example 3.12 Which employees have a salary higher than $150k?

employee : filter(salary > 150000)
− > − : (A → Int, A → Int) → (A → Bool) is implemented by lifting the relational operator − > − : (Int, Int) → Bool to an operation on queries (p > q) = a → (p(a) > q(a)).
Example 3.13 How many departments have more than 1000 employees?

department : filter(count(employee) > 1000)
    : count
Example 3.14 Show the names of all departments in alphabetical order.

sort(department.name)

The combinator sort is implemented by lifting a sequence function sort : Seq{A} → Seq{A} to a query combinator sort : (A → Seq{B}) → (A → Seq{B}), sort(p) = a → sort(p(a)).
Example 3.15 Show all employees ordered by salary.

employee : sort(salary)

In this example, a list of employees is sorted by the value of the attribute salary, which is supplied as the second argument to the sort combinator. In this form, sort has a signature sort : (A → Seq{B}, B → C) → (A → Seq{B}).

Example 3.16 Show all employees ordered by salary, highest paid first.

employee : sort(salary : desc)
Example 3.17 Who are the top 1% of the highest paid employees?

employee : sort(salary : desc)
    : take(count(employee) ÷ 100)
Example 3.18 For every department, show the name and the number of employees.

department : select(name, size ⇒ count(employee))

In this example, the combinator select takes three arguments: the base query department : Void → Seq{Dept} and two field queries name : Dept → Text, count(employee) : Dept → Int.
Example 3.19 For every department, show the top salary and a list of managers with their salaries.

department : select(name,
    top salary ⇒ max(employee.salary),
    manager ⇒ employee : filter(exists(subordinate))
        : select(name, salary))
Figure 4: Output database for Example 3.19
Example 3.20 Show the top 3 largest departments and their sizes.

department : define(size ⇒ count(employee))
employee : filter(salary > manager.salary)

This example uses familiar combinators filter and > (see Example 3.12), but an alert reader will notice the disagreement between the signature of the combinator − > − : (A → Int, A → Int) → (A → Bool) and the signatures of its arguments salary : Emp → Int, manager.salary : Emp → Opt{Int}.
Example 3.22 Find all direct and indirect subordinates of the City Treasurer.

employee : filter(any(connect(manager).position = "CITY TREASURER"))
connect : (A → Opt{A}) → (A → Seq{A}),
connect(p) = a → [ p(a), p(p(a)), . . . , p(n)(a) ]   (p(n)(a) ≠ ⊥, p(n+1)(a) = ⊥).
Example 4.1 Show all departments, and, for each department, list the associated employees.

department : select(name, employee)

But what if we ask for positions instead of departments?

Example 4.2 Show all positions, and, for each position, list the associated employees.

Unlike the previous example, this query does not match the structure of the database and, therefore, cannot be constructed as easily. Indeed, Example 4.1 is built from the primitives department : Void → Seq{Dept}, name : Dept → Text, employee : Dept → Seq{Emp}. To construct Example 4.2 in a similar fashion, we need a hypothetical class Pos of position entities and a set of queries with the corresponding signatures Void → Seq{Pos}, Pos → Text, Pos → Seq{Emp}. ( ) However, there is no built-in class of position entities and we only have the following primitives available: employee : Void → Seq{Emp}, position : Emp → Text.
A list of all distinct employee positions can be produced with the query employee : group(position) : Void → Seq{Pos}. The virtual Pos class comes with the primitives position : Pos → Text, employee : Pos → Seq{Emp}, which, given a position entity, produce respectively the position name and a list of associated employees. This gives us all the query components (see ( ) above) needed to complete the example.
Figure 5: Action of the group combinator
Example 4.2 Show all positions, and, for each position, list the associated employees.

employee : group(position) : select(position, employee)

The query employee : group(position) correlates all distinct values emitted by position with the respective employee entities and packs them together into the records of type Pos ≡ ⟨position : Text, employee : Seq{Emp}⟩. We call Pos a quotient class and denote it by Emp∕position.
Example 4.3 In the Police department, show all positions with the number of employees and the top salary.

employee : filter(department.name = "POLICE")
    : group(position)
    : select(position, count(employee), max(employee.salary))

Here, for each position in the Police department, we determine two calculated attributes, the number of employees and the top salary: count(employee) : Emp∕position → Int, max(employee.salary) : Emp∕position → Opt{Int}.
Example 4.5 Show all positions available in more than one department, and, for each position, list the respective departments.

employee : group(position)
    : define(department ⇒ unique(employee.department))
    : filter(count(department) > 1)
    : select(position, department.name)

This example uses the unique combinator to find all distinct entities in a list of departments. The unique combinator can be expressed via group by forgetting the plural component of the quotient class. In this example, unique(employee.department) is equivalent to employee : group(department).department.
Example 4.6 How many employees at each level of the organization chart?

employee : group(level ⇒ count(connect(manager)))
    : select(level, count(employee))
employee : rollup(department, position) produces a sequence of records of type Emp∕⟨department⊥, position⊥⟩ ≡ ⟨department : Opt{Dept}, position : Opt{Text}, employee : Seq{Emp}⟩.
Example 5.1 Show all employees in the given department D with the salary higher than S, where D = "POLICE", S = 150000.

employee : filter(department.name = D & salary > S)
    : given(D ⇒ "POLICE", S ⇒ 150000)

Practical database queries often depend upon query parameters, which collectively form the query environment. The environment is represented by a container, such as Env{D:Text,S:Int}{A} ≡ ⟨A, D : Text, S : Int⟩, that encapsulates both the regular input value and the values of the parameters. The parameters can be extracted from the environment with the primitives D : Env{D:Text}{A} → Text, S : Env{S:Int}{A} → Int.
employee : filter(department.name = D & salary > S) : Env{D:Text,S:Int}{Void} → Seq{Emp}.

The other two arguments are the constant queries "POLICE" : Void → Text, 150000 : Void → Int that specify the values of the parameters. The combined query does not depend upon the parameters, and, hence, has a signature Void → Seq{Emp}. In general, given takes a parameterized query p : Env{x1:T1,...,xn:Tn}{A} → M{B} and n queries that evaluate the parameters q1 : A → T1, . . . , qn : A → Tn, and combines them into a context-free query given(p, q1, . . . , qn) : A → M{B}, given(p, q1, . . . , qn) = a → p(⟨a, q1(a), . . . , qn(a)⟩).

Example 5.2 Which employees have higher than average salary?

employee : filter(salary > MS)
    : given(MS ⇒ mean(employee.salary))

This example uses the query environment to pass information between different scopes. The parameter MS is calculated in the scope of Void by the query mean(employee.salary) : Void → Opt{Num} and is extracted in the scope of Emp by the primitive MS : Env{MS:Opt{Num}}{Emp} → Opt{Num}.
employee : filter(department.name = "POLICE")
    : filter(salary > mean(around(position).salary))

Here, each employee is matched with other employees having the same position using a variant of around:

around : (A → B) → (Rel{A} → Seq{A})
around(q) = [a1, . . . , ((aj)), . . . , an] → [ ai | q(ai) = q(aj) ]
Example 5.5 For each department, show employee salaries along with the running total; the total should be reset at the department boundary.
: select(name, salary, sum(before.salary)) : frame)

The input flow propagates through composition, so that a query executed within the context of department.employee : Void → Seq{Emp} will see the input flow containing all the employees in all departments. To reset the input flow at a certain boundary, we use the combinator frame : (Rel{A} → M{B}) → (A → M{B}).
However, if we look at the signatures of the components manager : Emp → Opt{Emp}, salary : Emp → Int, employee : Void → Seq{Emp}, name : Emp → Text, we see that their intermediate types do not agree, which means their compositions are ill-formed.

A technique for composing queries can be found in the categorical semantics of computational effects [18]. In this semantics, a program that maps the input of type A to the output of type B is seen as a Kleisli arrow A → M{B}, where M is a monad that encapsulates the program's effects. Further, a sequential execution of programs A → M{B} and B → M{C} is represented by their monadic composition, which is again a Kleisli arrow A → M{C}.

or show the names of all employees as
employee.name.
( )
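Kleisli composition for the Seq monad (Python lists) can be sketched as follows; a minimal illustration of the idea, not the paper's machinery, with invented data:

```python
def kleisli_seq(p, q):
    """Compose p : A -> list[B] with q : B -> list[C] into A -> list[C]."""
    return lambda a: [c for b in p(a) for c in q(b)]

# Invented rows standing in for the employee class.
EMPLOYEES = [{"name": "ANTONIO"}, {"name": "BRENDA"}]

employee = lambda _: EMPLOYEES     # Void -> Seq{Emp}
name = lambda e: [e["name"]]       # Emp -> Text, lifted into Seq by unit

employee_name = kleisli_seq(employee, name)  # employee.name
print(employee_name(None))  # -> ['ANTONIO', 'BRENDA']
```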
The set of primitives includes classes

    department : Void → Seq{Dept},
    employee : Void → Seq{Emp};

attributes

    name : Dept → Text,
    name : Emp → Text,
    position : Emp → Text,
    salary : Emp → Int;

and relationships

    department : Emp → Dept,
    employee : Dept → Seq{Emp},
    manager : Emp → Opt{Emp},
    subordinate : Emp → Seq{Emp}.

Table 1: Some primitives and combinators
Example 5.4 Show a numbered list of employees and their salaries along with the running total.

employee
    : select(no ⇒ count(before),
             name,
             salary,
             total ⇒ sum(before.salary))
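A direct Python rendering of this example, assuming before denotes the rows up to and including the current one (so count(before) numbers the rows and sum(before.salary) is a running total); the rows are invented:

```python
EMPLOYEES = [
    {"name": "ANTONIO", "salary": 100},
    {"name": "BRENDA", "salary": 120},
    {"name": "CHARLES", "salary": 90},
]

rows, total = [], 0
for no, e in enumerate(EMPLOYEES, start=1):  # no => count(before)
    total += e["salary"]                     # total => sum(before.salary)
    rows.append({"no": no, "name": e["name"],
                 "salary": e["salary"], "total": total})

print(rows[-1])  # -> {'no': 3, 'name': 'CHARLES', 'salary': 90, 'total': 310}
```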
p : W1{A} → M1{B}, q : W2{B} → M2{C} could be defined as a query of the form p . q : W{A} → M{C}
Acknowledgements

We are indebted to Catherine Devlin for her early support of the project, and our colleagues at Prometheus Research for their continuous feedback.

Bibliography
[1] D. S. Batory, T. Y. Leung, and T. E. Wise. Implementation concepts for an extensible data model and data language. ACM Transactions on Database Systems, 13(3):231-262, 1988.
[2] A. Bossi and C. Ghezzi. Using FP as a query language for relational data-bases. Computer Languages, 9(1):25-37, 1984.
[3] P. Buneman and R. E. Frankel. FQL - a functional query language. In SIGMOD '79, pages 52-58, 1979.
[4] P. Buneman, L. Libkin, D. Suciu, V. Tannen, and L. Wong. Comprehension syntax. SIGMOD Record, 23(1):87-96, 1994.
[5] J. Cartmell. Formalizing the network and hierarchical data models - an application of categorical logic. In CTCS '85, pages 466-492, 1985.
[6] M. Cherniack and S. B. Zdonik. Rule languages and internal algebras for rule-based optimizers. In SIGMOD '96, pages 401-412, 1996.
[7] J. Clark and S. DeRose. XML path language (XPath) version 1.0. Technical Report REC-xpath-19991116, W3C, 1999.
[8] H. B. Curry. Grundlagen der Kombinatorischen Logik. American Journal of Mathematics, 52(3):509-536, 1930.
[9] C. Elliott and P. Hudak. Functional reactive animation. In ICFP '97, pages 263-273, 1997.
[10] M. Erwig and U. W. Lipeck. A functional DBPL revealing high level optimizations. In DBPL '91, pages 306-321, 1991.
[11] C. C. Evans. HTSQL - a native web query language. In ICOMP '07, pages 439-445, 2007.
[12] J. N. Foster, M. B. Greenwald, J. T. Moore, B. C. Pierce, and A. Schmitt. Combinators for bi-directional tree transformations: a linguistic approach to the view update problem. In POPL '05, pages 233-246, 2005.
[13] P. M. D. Gray, P. J. H. King, and A. Poulovassilis. Introduction to the use of functions in the management of data. In P. M. D. Gray, L. Kerschberg, P. J. H. King, and A. Poulovassilis, editors, The Functional Approach to Data Management, pages 1-54. Springer, Berlin, Heidelberg, 2004.
[14] G. Hutton and E. Meijer. Monadic parser combinators. Technical Report NOTTCS-TR-96-4, School of Computer Science and IT, University of Nottingham, 1996.
[15] S. L. P. Jones, J. Eber, and J. Seward. Composing contracts: an adventure in financial engineering. In ICFP '00, pages 280-292, 2000.
[16] L. Kerschberg and J. E. S. Pacheco. A functional data base model. Technical Report 2/1976, Departamento de Informatica, Pontificia Universidade Catolica, Rio de Janeiro, Brazil, 1976.
[17] E. Meijer, B. Beckman, and G. M. Bierman. LINQ: reconciling object, relations and XML in the .NET framework. In SIGMOD '06, page 706, 2006.
[18] E. Moggi. Notions of computation and monads. Information and Computation, 93(1):55-92, 1991.
[19] M. Nielsen, V. Sassone, and G. Winskel. Relationships between models of concurrency. In REX '93, pages 425-476, 1994.
[20] M. Schönfinkel. Über die Bausteine der mathematischen Logik. Mathematische Annalen, 92(3):305-316, 1924.
[21] D. W. Shipman. The functional data model and the data language DAPLEX. ACM Transactions on Database Systems, 6(1):140-173, 1981.
[22] E. H. Sibley and L. Kerschberg. Data architecture and data model considerations. In AFIPS '77, pages 85-96, 1977.
[23] D. I. Spivak. Kleisli database instances. CoRR, abs/1209.1011, 2012.
[24] P. Trinder and P. Wadler. Improving list comprehension database queries. In TENCON '89, pages 186-192, 1989.
[25] T. Uustalu and V. Vene. The essence of dataflow programming. In CEFP '05, pages 135-167, 2005.
[26] P. Wadler. How to replace failure by a list of successes. In FPCA '85, pages 113-128, 1985.
[27] L. Wong. Kleisli, a functional query system. Journal of Functional Programming, 10(1):19-56, 2000.
AGN TYPE-CASTING: MRK 590 NO LONGER FITS THE ROLE

K. D. Denney, G. De Rosa, K. Croxall, A. Gupta, M. C. Bentz, M. M. Fausnaugh, C. J. Grier, P. Martini, S. Mathur, B. M. Peterson, R. W. Pogge, and B. J. Shappee

ABSTRACT

We present multi-wavelength observations that trace more than 40 years in the life of the active galactic nucleus (AGN) in Mrk 590, traditionally known as a classic Seyfert 1 galaxy. From spectra recently obtained from HST, Chandra, and the Large Binocular Telescope, we find that the activity in the nucleus of Mrk 590 has diminished so significantly that the continuum luminosity is a factor of 100 lower than the peak luminosity probed by our long baseline observations. Furthermore, the broad emission lines, once prominent in the UV/optical spectrum, have all but disappeared. Since AGN type is defined by the presence of broad emission lines in the optical spectrum, our observations demonstrate that Mrk 590 has now become a "changing look" AGN. If classified by recent optical spectra, Mrk 590 would be a Seyfert ∼1.9−2, where the only broad emission line still visible in the optical spectrum is a weak component of Hα. As an additional consequence of this change, we have definitively detected UV narrow-line components in a Type 1 AGN, allowing an analysis of these emission-line components with high-resolution COS spectra. These observations challenge the historical paradigm that AGN type is only a consequence of the line of sight viewing angle toward the nucleus in the presence of a geometrically-flattened, obscuring medium (i.e., the torus). Our data instead suggest that the current state of Mrk 590 is a consequence of the change in luminosity, which implies the black hole accretion rate has significantly decreased.

DOI: 10.1088/0004-637x/796/2/134. arXiv: 1404.4879 (https://arxiv.org/pdf/1404.4879v2.pdf)
30 Sep 2014. Draft version October 2, 2014. Accepted 2014 September 10.
Preprint typeset using LaTeX style emulateapj v. 04/17/13.
Keywords: galaxies: active - galaxies: nuclei - quasars: emission lines
1. INTRODUCTION
The dichotomy separating active galactic nuclei (AGN) into Type 1 -those observed to have broad emission lines -and Type 2 -those without broad lines -originated from the first observations of Seyfert (1943) that demonstrated that the nebular lines in the nuclei of these nearby "Seyfert" galaxies sometimes had broad emission wings superposed with a narrow core and sometimes did not. A working definition of Type 1 and Type 2, based on the relative line widths of the forbidden and Balmer lines, was then elucidated by the work of Khachikian & Weedman (1974). Osterbrock & Koski (1976) and Osterbrock (1977, 1981) later note that the observed emission spectra of Seyfert galaxies are not so simple and argue for a continuum of intermediate types between these two extremes, e.g., Seyfert 1.2, 1.5, 1.8, and 1.9, based on the relative strength of the narrow Hβ component with respect to the broad component. Antonucci & Miller (1985) presented a plausible scenario to physically explain this dichotomy by invoking different line of sight orientations: the broad line-emitting region (BLR) in Type 2 AGN (Sy2s) is obscured by an optically thick, geometrically flattened medium (often referred to as the torus) leading to a "hidden" BLR (HBLR), whereas the BLR is visible in Type 1 AGN (Sy1s) because our line of sight is not obscured by this optically thick medium. This unified model of AGN (see also Antonucci 1993) followed from the discovery by Antonucci & Miller (1985) that broad emission lines were present in the polarization spectrum of NGC 1068.
The key assumption in this model is that all AGN systems are physically similar, with generally disk-like geometries. Thus, it is only the line of sight orientation of the BLR with respect to the obscuring medium that results in the observed type differences.
The unified model for Sy1 and Sy2 systems has been challenged by observations that suggest that not all AGN conform to this paradigm. Sy2s that do not have broad lines observed even through spectropolarimetry have been coined non-HBLR or "true" Sy2s, and some may, in fact, be "bare" Sy2s that are postulated to be devoid of the typical obscuring medium surrounding the BLR (see, e.g., Barth et al. 1999; Tran 2001, 2003; Panessa & Bassani 2002; Laor 2003; Zhang & Wang 2006). On the other hand, Antonucci (2012) advises caution before classifying objects as non-HBLR objects, as finding reflected broad lines is largely serendipitous and dependent on the particular source having a well-placed scattering medium with the right properties. Thus, the existence of true or bare Sy2s may not be well quantified. Nonetheless, there are also objects known as "changing look" AGN, which are more easily identifiable because an obvious change has occurred in the observed spectrum to warrant the application of this classification. This characterization was originally coined based on X-ray observations in which sources appear alternately Compton-thin or "reflection-dominated" (likely Compton thick) over the course of years (e.g., Bianchi et al. 2005). This qualifier has recently been extended to include objects which sometimes appear to have Sy2 and sometimes Sy1 characteristics in their optical spectrum (e.g., Shappee et al. 2013b). Several examples of these optical changing look AGNs appear in the literature over the years. One of the most well-cited early examples is that of NGC 4151, one of Seyfert's original galaxies assigned Type 1.5 by Osterbrock (1977), but in which the optical broad lines all but disappeared (except for weak and possibly asymmetric wings) in the 1980's (Antonucci & Cohen 1983; Lyutyi et al. 1984; Penston & Perez 1984) and have since returned (see, e.g., Shapovalova et al. 2010).
There are two typically accepted postulates to explain AGN changing their type: (1) variable obscuration or (2) variable accretion rate. Variation in the obscuring medium is more suited to the unification paradigm of type resulting from different viewing angles and is typically invoked to explain changing look AGN in the X-ray regime (e.g., Bianchi et al. 2005; Risaliti et al. 2009; Marchese et al. 2012; Marin et al. 2013). In contrast, it has been suggested that variations in luminosity and accretion rate are not only responsible for type changes in individual objects but are also responsible for the whole AGN typing sequence. In other words, the structure of the BLR changes with accretion rate, and objects evolve from high accretion rate when the AGN turns on -Type 1 -to low accretion rate once the AGN has depleted its nuclear material -Type 2 (Tran 2003; Wang & Zhang 2007; Elitzur et al. 2014), possibly oscillating between Type 1 and Type 2 and/or intermediate types between high and low accretion states while the nucleus remains active (Penston & Perez 1984; Korista & Goad 2004). At sufficiently low accretion rates, many postulates exist demonstrating that a radiatively efficient BLR simply cannot be supported, due to, e.g., (1) a dearth of ionizing photon flux (Korista & Goad 2004), (2) the critical radius at which the accretion disk changes from gas pressure-dominated to radiation pressure-dominated becoming smaller than the innermost stable orbit (Nicastro 2000; Nicastro et al. 2003), (3) mass conservation considerations in the paradigm that the BLR arises in a disk wind with a fixed radial column, where the mass outflow rate cannot be sustained (Elitzur & Ho 2009), or (4) the accretion disk structure changing, replacing the disk-wind BLR with a radiatively inefficient accretion flow (RIAF) consisting of a fully ionized, low-density plasma incapable of producing broad lines (Trump et al. 2011).
Laor (2003) similarly suggests a bolometric luminosity below which the BLR cannot exist, though with a slightly different argument following from a maximally observed broad line width.
Observational results suggest that both of these physical processes are likely at play in different objects. For example, Alexander et al. (2013) present a serendipitously detected NuSTAR source at redshift z = 0.510 for which observations suggest that significant obscuration may have moved into our line of sight to the nucleus of this source, changing it from a Type 1 to Type 2: SED template fitting (following Assef et al. 2010) shows its SED is currently consistent with other optically identified Type 2 AGN in the same sample and an E(B − V) = 0.6 ± 0.5 mag, yet based on archival observations, its SED was previously consistent with that of a Type 1 AGN with estimated E(B − V) = 0.00 ± 0.01 mag. Alexander et al. (2013) also present a recent optical spectrum of this source, J183443+3237.8, that shows narrow emission lines and significant reddening of the continuum, though there is evidence for a weak broad component of Mg ii. On the other hand, Shappee et al. (2013a) report on the interesting behavior of NGC 2617, which has changed from a Sy1.8 (Moran et al. 1996) to a Sy1, likely (though not conclusively) due to a recent outburst that triggered a transient source alert within the All-Sky Automated Survey for SuperNovae (ASAS-SN). Subsequent observations of this AGN demonstrated a continued increase of the optical through X-ray emission by about an order of magnitude compared to past observations (Shappee et al. 2013b). Due to the outburst nature of this activity, which occurred over relatively short time scales, it is highly unlikely that obscuration moving out of the line of sight could be responsible for this change in type.
Additionally, the optical spectral changes between Type 2 and Type 1.9 of NGC 2992 also seem to be at least loosely correlated with large variability amplitudes (factors of a few tens) observed in the X-rays over both short (year) and long (decades) time scales, such that broad Hα only seems to be observed coincident with high X-ray states (see Gilli et al. 2000; Murphy et al. 2007; Trippe et al. 2008).
In this work, we examine the interesting and extreme phenomenon of Mrk 590 (alt. NGC 863) -a classic Sy1 galaxy (Osterbrock 1977; Weedman 1977) at z = 0.026385 -which appears to have changed from Type 1.5 to Type 1 and then to Type ∼1.9−2, based on chronologically catalogued multi-wavelength observations. Observations from MDM Observatory in late 2012 revealed that the broad Balmer emission lines seem to have disappeared from Mrk 590. Observations with higher S/N obtained in 2013 February with the MODS1 spectrograph on the Large Binocular Telescope confirmed that there was no longer any broad component to Hβ and possibly only a very weak broad Hα component. Because this system is so well-studied, we have been able to gain additional understanding of this change through multi-wavelength observations taken in its past and present states. In Section 2 we describe the new and archival data we have gathered. We present our optical continuum fitting methods in Section 3 and discuss the overall observed trends in Section 4. Section 5 follows with a discussion of the properties of the nuclear region of Mrk 590 as well as potential implications of the observed behavior of Mrk 590. We summarize our findings and plans for future work in Section 6. A cosmology with Ω_m = 0.3, Ω_Λ = 0.70, and H_0 = 70 km s^-1 Mpc^-1 is assumed where necessary.
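For reference, the adopted cosmology fixes the luminosity distance to Mrk 590. A quick numerical sketch (simple trapezoidal integration of the flat-ΛCDM distance integral; not the authors' code):

```python
import math

C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7  # flat Lambda-CDM

def lum_distance_mpc(z, n=10000):
    """D_L = (1+z) * (c/H0) * integral_0^z dz'/E(z') for a flat universe."""
    E = lambda zz: math.sqrt(OM * (1.0 + zz) ** 3 + OL)
    dz = z / n
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(n))
    return (1.0 + z) * (C_KMS / H0) * integral

print(round(lum_distance_mpc(0.026385), 1))  # roughly 115 Mpc
```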
2. NEW AND ARCHIVAL OBSERVATIONS
We have collected both archival and new data covering the X-ray, UV, and optical wavelength regimes in an attempt to better understand the recent behavior of Mrk 590.
2.1. Optical Data
We present a selection of 10 optical spectra that span more than four decades of observations of Mrk 590.
These have been gleaned from the personal archives of the authors, the literature, the Sloan Digital Sky Survey (SDSS), and new observations. Data are presented in chronological order of the known or presumed date of observation:
1. Historical Lick Observatory spectra of Mrk 590 from 1973 were measured from copies of the original reduced spectra provided by the late Prof. Donald Osterbrock to one of the authors (RWP). Details of the observation and reduction of these spectra are given in Osterbrock (1977); however, we note two important items about these spectra. First, no explicit date is available, either in the spectral image headers (which were converted from raw IDS output to FITS format in 1996), or in the Osterbrock (1977) paper. Second, Osterbrock (1977) states that all of the spectra were taken "in the years 1974−1976", so it is possible that these spectra are from 1974, though it is equally likely that 1973 is correct, and they were taken early during the commissioning of the IDS in 1972/73. The entrance aperture of this spectrograph was 2.7″ × 4″.
2. Spectra of Mrk 590 were obtained at the 1.8-m Perkins telescope at Lowell Observatory with the Ohio State University Image Dissector Scanner (Byard et al. 1981) through a 7 ′′ round aperture in 1984−1986 and have been previously discussed in the literature by Ferland et al. (1990). Here, we present spectra from JD2445672 and JD2445674 -1983 December -which cover the blue (Hβ) and red (Hα) sides of the spectrum, respectively. We chose these spectra from the full set because of the near simultaneity of the red and blue wavelength coverage, fewer relative flux calibration issues, and/or the longer combined wavelength coverage than afforded by other spectra in this set. We merged these nearly contemporaneous spectra, multiplicatively scaling the red spectrum by a factor of 1.08 to match the blue spectrum.
3. Mrk 590 was targeted in an early reverberation mapping campaign, during which 102 spectra were obtained of the Hβ emission region between JD2447837 and JD2450388 (1989 Nov - 1996 Oct) and from which the black hole mass was determined to be (4.75 ± 0.74) × 10^7 M_⊙ (Peterson et al. 2004). Details of the campaign and observations are described by Peterson et al. (1998). For completeness we note that the spectra were taken through a 5″ slit and extracted through a 7.6″ aperture. We took spectra from three epochs during this campaign: JD2447837 -1989 November -the beginning of the campaign, JD2448999 -1993 January -chosen to be roughly 10 years after the previous 1983 spectrum, and JD2450365 -1996 Oct -which was near the end of the campaign.
4. Mrk 590 was observed by the Sloan Digital Sky Survey (SDSS) on MJD2452649 -2003 January -serendipitously continuing our roughly 10 year interval between archival optical spectra. The SDSS spectra were obtained through a 3″ diameter fiber, and the spectrophotometric calibration is based on simultaneously observed field stars.
5. A spectrum of Mrk 590 was obtained on the 1.3m McGraw-Hill telescope at MDM observatory on JD2454000 -2006 Sep -using the CCD Spectrograph with the 350 l/mm grating. The observation was obtained with a 1″ slit and extracted through a 12″ aperture. Unfortunately, all of the data obtained during this observing run suffer from the presence of an unexplained low-level and time-variable signal in the underlying bias levels that somewhat distorted the continuum shape of the Mrk 590 spectrum. Due to the time-variable nature of the signal, we were unable to correct the spectra, but we truncated the most blueward and redward wavelength ranges that were the most affected. As a result, the flux calibration of the overall spectrum, and even that as a function of wavelength, is not highly reliable. Nonetheless, we did not deem this issue problematic enough to justify omitting this spectrum, as it represents an important epoch in the changing nature of this object, as discussed below.
6. We used the MODS1 spectrograph on the Large Binocular Telescope to obtain an optical spectrum of Mrk 590 on 2013 February 14. We used both the red and blue channels of MODS1 with the dichroic to obtain wavelength coverage from 3200−10000 Å. The blue-channel spectra used the G400L grating (400 lines mm^-1 in first order), and the red-channel spectra used the G670L grating (250 lines mm^-1 in first order). For all spectra we used the 1.2″ segmented long-slit mask (LS5x60x1.2), oriented along the parallactic angle to minimize the effects of differential atmospheric refraction. The spectra were extracted with a 4″ aperture.
7. We next observed Mrk 590 on 2013 December 18 during commissioning of the new Kitt Peak Ohio State Multi-Object Spectrograph (KOSMOS) on the Mayall 4-m telescope. Observations were taken through a 0.9″ longslit, using both the red and blue VPH grisms to cover the wavelength range ∼4000−9000 Å. The spectra were extracted with an 11.6″ aperture.
8. Most recently, on 2014 January 7, another spectrum was taken on the 1.3m McGraw-Hill telescope at MDM observatory with the CCD Spectrograph and the 350 l/mm grating. This time, the spectrum was taken through a 5″ slit and extracted with a 15″ aperture because it was observed at the start of a reverberation mapping program that requires these large apertures.

Figure 1 shows all 10 of the optical spectra discussed above. These spectra were de-redshifted to the rest frame of Mrk 590 and resampled onto separate linear wavelength scales similar to the natural dispersion of each respective spectrum near 5100 Å. The absolute spectrophotometry of several of these observations is not strictly reliable due to the uncertain or known poor conditions under which they were taken. The spectra with the most reliable absolute flux calibration are (1) the 1989, 1993, and 1996 RM campaign spectra, which were calibrated to the assumed constant [O iii] λ5007 line flux measured from data taken under photometric conditions (see Peterson et al. 1998, for details), (2) the 2003 SDSS spectrum, for which the absolute spectrophotometry is good to ∼4%, and (3) the most recent MDM spectrum from 2014 January, which was taken along with a spectrophotometric standard star under clear conditions.
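The de-redshift-and-resample step described here amounts to dividing observed wavelengths by (1 + z) and interpolating onto a uniform grid. A minimal numpy sketch with a synthetic flat spectrum (not the authors' pipeline):

```python
import numpy as np

Z = 0.026385  # redshift of Mrk 590

def to_rest_frame(wave_obs, flux_obs, dlam, z=Z):
    """Shift to the rest frame and resample onto a linear grid of step dlam."""
    wave_rest = np.asarray(wave_obs) / (1.0 + z)
    grid = np.arange(wave_rest[0], wave_rest[-1], dlam)
    return grid, np.interp(grid, wave_rest, flux_obs)

wave = np.linspace(4000.0, 9000.0, 2049)  # observed-frame wavelengths (A)
flux = np.ones_like(wave)                 # flat placeholder spectrum
grid, f = to_rest_frame(wave, flux, dlam=2.0)
```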
2.2. UV Data
After discovering the disappearance of the optical broad emission lines from the 2013 MODS1 spectrum, we obtained Director's Discretionary time (PID 13185) on the Hubble Space Telescope (HST) to investigate how this change affected the UV continuum and emission lines. We obtained this new HST observation of Mrk 590 on 2013 July 17 with the G140L mode of the FUV detector on the Cosmic Origins Spectrograph (COS). The data were processed with the standard pipelines, and the final 1D spectrum was de-redshifted and binned to a linear dispersion of 0.4 Å pixel^-1. Mrk 590 was previously observed in the UV by the International Ultraviolet Explorer (IUE) on 1982 July 20 and 1991 January 14. We downloaded these spectra from the IUE archives and converted them to the multispec FITS format in IRAF. We only analyzed the SWP channel spectra, which cover a similar wavelength range as the new COS observations, although LWP observations from both dates also exist. We de-redshifted the IUE spectra to rest frame wavelengths and resampled them onto a linear wavelength scale of 1.5 Å pixel^-1. These spectra are shown in the left panels of Figure 2.
2.3. X-ray Data
We also obtained Chandra Director's Discretionary time contemporaneous with the new HST observations, and thereby observed Mrk 590 with the HETGS on 2013 June 17 for ∼30 ks to investigate the consequences of the observed change in Mrk 590 on the X-ray spectrum. The HETGS consists of two grating assemblies, a high-energy grating (HEG) and a medium-energy grating (MEG). The HEG (MEG) bandpass covers 0.8−10 keV (0.5−10 keV), but the effective area of both instruments falls off rapidly at either end of the bandpass. We reduced the data for both gratings using the standard Chandra Interactive Analysis of Observations (CIAO) software (v4.3) and Chandra Calibration Database (CALDB, v4.4.2) and followed the standard Chandra data reduction threads^7. To increase S/N we co-added the negative and positive first-order spectra and built the effective area files (ARFs) using the fullgarf CIAO script. We then binned the MEG spectrum (at least 20 counts per bin). Additionally, we analysed the zeroth order spectrum, which has somewhat higher S/N; there are 620 total counts (0.2−8 keV) in the zeroth order, compared to 450 counts in the combined 1st order of MEG. Nonetheless, we plot the MEG spectrum in Figure 3 to show how these new data compare with previous Chandra archival HETGS observations from 2004 that were reprocessed here in a similar manner.

^7 http://cxc.harvard.edu/ciao/threads/index.html
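The "at least 20 counts per bin" grouping can be illustrated with a simple greedy pass over invented channel counts (a sketch of the grouping logic only; the actual work was done with the CIAO tools):

```python
def group_min_counts(counts, min_counts=20):
    """Merge adjacent channels until each output bin has >= min_counts."""
    bins, acc = [], 0
    for c in counts:
        acc += c
        if acc >= min_counts:
            bins.append(acc)
            acc = 0
    if acc:              # fold any short remainder into the last bin
        if bins:
            bins[-1] += acc
        else:
            bins.append(acc)
    return bins

print(group_min_counts([5, 9, 8, 30, 2, 3, 1]))  # -> [22, 36]
```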
3. HOST GALAXY TEMPLATE AND OPTICAL AGN POWER-LAW CONTINUUM FITTING
The active nucleus of Mrk 590 sits within a nearby, typical SA(s)a galaxy. As a result, a significant portion of the light entering our spectroscopic apertures is host galaxy starlight rather than emission from the AGN itself. This contamination must be subtracted to accurately assess the changing luminosity of the AGN. Bentz et al. (2009, 2013) used HST images to measure the host galaxy starlight contribution to the observed flux at rest frame 5100 Å, but the amount of host flux will depend on the spectroscopic aperture. Bentz et al. report the flux within the spectroscopic aperture used for the Peterson et al. (1998) RM campaign spectra of Mrk 590 to be (3.965 ± 0.198) × 10^-15 erg s^-1 cm^-2 Å^-1.
The optical spectra discussed here were obtained through a variety of entrance apertures, so we estimate the starlight flux by fitting a starlight model to the 2013 MODS1 spectrum, chosen for its combination of long wavelength coverage, high spectral resolution (≲3 Å), and paucity of thermal AGN continuum emission. We modeled the underlying stellar continuum using the STARLIGHT spectral synthesis code (Cid Fernandes et al. 2005). We have fit Bruzual & Charlot (2003) models to the continuum between 3320−9500 Å, using a Calzetti et al. (2000) reddening law and masking nebular emission lines and the location of the Meinel bands where sky subtraction is challenging (∼7500 Å; Meinel 1950). The intrinsic dispersion of the STARLIGHT model fit is 1 Å pixel^-1, so we resampled the model to the dispersion of each of our sample optical spectra and then matched the resampled model to each by applying a multiplicative scale factor to account for the different entrance apertures. The resulting scale factors roughly follow expectations based on the aperture differences between spectra, though a strict scaling vs. aperture relation is not expected because of seeing effects and because the host galaxy starlight is not symmetric about the nucleus (see Bentz et al. 2009). This method leads to an estimate of the starlight contribution near 5100 Å in the Peterson et al. (1998) RM campaign spectra considered here from 1989, 1993, and 1996, which were all taken through the same aperture, of (3.7 ± 0.2) × 10^-15 erg s^-1 cm^-2 Å^-1, which is consistent with the Bentz et al. (2013) measurement.
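Matching the fixed template to another spectrum by a single multiplicative factor is a one-parameter least-squares problem with the closed form s = Σ(model · flux) / Σ(model²). A toy sketch (synthetic arrays, not the STARLIGHT machinery):

```python
import numpy as np

def scale_factor(model, flux):
    """Least-squares s minimizing sum((flux - s * model)**2)."""
    model, flux = np.asarray(model), np.asarray(flux)
    return float(np.dot(model, flux) / np.dot(model, model))

template = np.array([1.0, 2.0, 3.0, 4.0])   # starlight model samples
spectrum = 2.5 * template                   # e.g., a larger aperture
print(scale_factor(template, spectrum))     # -> 2.5
```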
There is no evidence for any underlying AGN thermal continuum emission in the 2013 MODS1 spectrum. However, for epochs where an additional power-law component from the AGN continuum was required to account for all of the observed emission, we first scaled the starlight model to match the strength of host galaxy stellar absorption features (e.g., Mg Ib and/or NaD), and then included an additional power-law component of the form F_pl,λ = F_pl,0 (λ/λ_0)^α, where the flux and wavelength normalization, F_pl,0 and λ_0, respectively, and the power-law slope, α, were left as free parameters. These host starlight and power-law continuum components, where applicable, are shown by the red and blue curves, respectively, in Figure 1.

Figure 1: Rest frame optical spectra of Mrk 590 spanning more than 40 years. The black curves show the observed spectra, the red curves show a host galaxy starlight template that was fit to the 2013 MODS1 spectrum and has been scaled to all other spectra (see Section 3), and the blue curves show power-law continuum fits for the epochs in which the stellar component could not account for all the observed continuum flux. All spectra are on the same flux scale, given in the ordinate axis label, but a constant has been added (given in parentheses to the right of each spectrum) to all but the most recent epoch to clearly visualize observed changes between epochs.
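The power-law component F_pl,λ = F_pl,0 (λ/λ_0)^α can be recovered from starlight-subtracted residuals by a linear fit in log-log space; a sketch with synthetic data (the paper does not specify this particular fitting method):

```python
import numpy as np

LAM0 = 5100.0  # rest-frame normalization wavelength (Angstroms)

def fit_power_law(lam, flux, lam0=LAM0):
    """Fit flux = F0 * (lam/lam0)**alpha via a linear fit in log-log space."""
    alpha, log_f0 = np.polyfit(np.log(lam / lam0), np.log(flux), 1)
    return np.exp(log_f0), alpha

lam = np.linspace(3500.0, 7500.0, 200)
flux = 4.0 * (lam / LAM0) ** -1.5          # synthetic AGN continuum
f0, alpha = fit_power_law(lam, flux)
print(round(f0, 3), round(alpha, 3))       # -> 4.0 -1.5
```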
40 YEARS OF CONTINUUM AND EMISSION LINE PROPERTIES
4.1. X-ray, UV, and Optical Continuum Properties

Dramatic changes are clearly visible in Figure 1 in the optical nuclear emission from Mrk 590 over the last four decades. As mentioned above, the relative strength of the composite starlight component needed to fit each spectrum is a strong function of the spectroscopic aperture, but otherwise, as expected, the appearance and strength of the stellar features have not changed over the time spanned by our observations, and the model that was fit to the 2013 February spectrum is a good match to the features seen at all other epochs.
The same cannot be said of the AGN continuum contribution, which can be estimated from the power-law component. However, because of (1) the varying (and often short) wavelength range of the available data, (2) the degeneracy between the power-law and starlight components, and (3) the sometimes questionable accuracy of the flux calibration and response curve correction within this heterogeneous data set, the details of the power-law fit parameters are not individually reliable (and are therefore not provided). This is particularly true of the weak power-law component of the most recent, 2014 January spectrum, where the necessity of this component is not statistically significant, even at the 1σ level, given the S/N and the short wavelength coverage of the spectrum. Nonetheless, the relative increase (and then decrease) in strength and steepness of the power-law slope as the broad lines strengthen (and then weaken) from the 1970s to the 1990s (the 1990s to the 2000s) is qualitatively significant.
The rise and fall of the optical AGN continuum over the past four decades can be quantitatively evaluated from the 5100 Å AGN continuum flux estimates for each of the spectra in our sample, provided in Table 1 and shown in Figure 4, based on the starlight-subtracted optical spectra. For the 2013 and 2014 spectra, the estimated continuum fluxes are strict upper limits because the weak residuals left after subtraction of the starlight fit are as likely to be noise and the consequence of an imperfect stellar model as they are minimal contributions from the AGN. Nonetheless, this still gives an indication of the change in flux over this time. Based on this sample, the peak in both the broad emission line strength and the continuum flux is observed in the 1989 spectrum. However, work by and Ferland et al. (1990) indicate that the peak continuum state may have been as early as 1988, as they report the optical, non-stellar continuum flux at 5000 Å to be 6.81×10 −15 erg s −1 cm −2 Å −1 from a spectrum taken on 1988 Sept 8, which is ∼20% higher than that in our 1989 spectrum. These estimates suggest a weakening in the strength of the AGN continuum component of 2−3 orders of magnitude since the late 1980's, but stronger constraints on the changes in the AGN SED can be set with shorter wavelength observations, where the host galaxy no longer contributes significantly to the measured flux.

Figure 4 caption (fragment): As noted in Table 1 and the discussion in Section 4, the earliest X-ray flux is taken to be a lower limit representing the sum of the 1984 EXOSAT and 1991 RASS measurements (the error bar represents the time spanning these observations), and the more recent optical continuum and Hβ emission line fluxes are upper limits. Uncertainties on the other flux values are not given per Table Note 1b.
The UV continuum flux measurements from the IUE and COS spectra, also presented in Table 1 and Figure 4, are affected neither by aperture- and seeing-dependent contributions from the host galaxy starlight nor by weather-related flux calibration issues (though instrument-related flux calibration uncertainties are still possible). Sampling the ionizing flux responsible for producing the emission line flux is not possible due to Galactic absorption, but the NUV continuum is the closest proxy we have and can therefore provide insight into the changes in the ionizing continuum. We find that the flux at 1450 Å followed the same trends as in the optical, demonstrating a significant, factor of 4−5, increase in flux between 1982 and 1991, followed by a decline in flux of nearly a factor of 100 between 1991 and 2013. Interestingly, the slope of the UV continuum, at least over this relatively short wavelength range, has remained relatively constant, unlike the optical continuum slope, insofar as it can be judged from the power-law fits.
Similar trends are present in the new and archival X-ray data and are visible in Figure 4. We performed spectral fitting of the Chandra data shown in Figure 3 using the CIAO fitting package Sherpa. The intrinsic continuum of the source is well-fit with an absorbed (N H = 2.7 × 10 20 cm −2 ; Dickey & Lockman 1990) power law. The 0.5−8 keV power-law slope fit to our new observations is Γ = 1.52±0.12 for the MEG spectrum and Γ = 1.53±0.14 for the zeroth order spectrum. These are consistent with each other and with the value of Γ = 1.58±0.15 measured from the high-state spectrum from 2004 (both shown in Figure 3). The intrinsic absorption in both cases is consistent with zero; we find intrinsic N H = 0.0 +0.06 × 10 20 cm −2 for the MEG spectrum and N H = 0.0 +0.08 × 10 20 cm −2 for the zeroth order spectrum, with the lower limit pegged at zero in both cases. We measured the flux to be (1.3 ± 0.3) × 10 −12 erg s −1 cm −2 over 0.5−10 keV. This measured X-ray flux is lower by an order of magnitude compared to the 2004 Chandra (also shown in Figure 3) and XMM observations (Bianchi et al. 2009), the latter of which show a 2−10 keV flux similar to that measured with Suzaku observations as recently as 2011, though the soft excess observed earlier had disappeared during the Suzaku observations. These 2004 observations, in turn, are nearly an order of magnitude lower than the combined X-ray flux from (1) 0.1−2.4 keV observations taken in 1991 as part of the ROSAT All-Sky Survey (RASS; Voges et al. 1999; Mahony et al. 2010) and (2) 2−10 keV EXOSAT observations from 1984 (Turner & Pounds 1989). We take the combined RASS+EXOSAT flux shown in Figure 4 as a lower limit to the expected soft+hard flux for the epoch ∼1990 (the point appears at the average calendar year and the x-axis error bar spans the range between the two observations).
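For reference, converting a fitted photon power law into a band-integrated energy flux like the 0.5−10 keV value quoted above is a one-line integral. A simplified sketch that neglects absorption (the normalization value used below is a hypothetical round number, not the fitted one):

```python
import math

KEV_TO_ERG = 1.602e-9  # erg per keV

def band_energy_flux(norm, gamma, e_lo=0.5, e_hi=10.0):
    """Energy flux (erg s^-1 cm^-2) of a photon power law
    N(E) = norm * E**-gamma  [photons s^-1 cm^-2 keV^-1]
    integrated over [e_lo, e_hi] keV; absorption is neglected."""
    if abs(gamma - 2.0) < 1e-9:
        # Special case: integral of E**-1 is logarithmic.
        integral = norm * math.log(e_hi / e_lo)
    else:
        p = 2.0 - gamma
        integral = norm * (e_hi**p - e_lo**p) / p
    return integral * KEV_TO_ERG

# Hypothetical normalization with the fitted slope from the text.
flux = band_energy_flux(1.7e-4, 1.52)
```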
This is because the optical and UV data suggest the continuum luminosity was rising throughout the 1980s, so that the 1984 EXOSAT flux is likely underestimated compared to what might have been expected if the observations had been taken ∼6 years later. All X-ray fluxes are summarized in Table 1. We have also been awarded an additional 70 ks of Chandra time to observe Mrk 590 again later this Cycle, the results of which will appear in a future contribution.

While we have not yet been able to locate spectra taken between the years 1996 and 2003, the MAGNUM telescope photometrically monitored the continuum variability of Mrk 590 from 2001 Sept 11 to 2007 Aug 8. We can therefore use this and previous monitoring data (particularly that of Peterson et al. 1998) to comment on the continuum variability trends over the last couple of decades. The MAGNUM light curves are presented by Sakata et al. (2010) and show that the amplitude of variability of Mrk 590 was not as large as that of other typical Seyferts (e.g., NGC 4151 and NGC 5548) monitored by the same program. Nonetheless, Mrk 590 continued to exhibit variability typical of this AGN luminosity regime, i.e., relatively larger amplitude, secular variability over longer, inter-year time scales, as well as smaller amplitude, more rapid, intra-year variability patterns. The maximum B-band flux over this period occurred around JD2453300 (2004 Oct) and was similar to the flux at the start of the MAGNUM observations. By the end of the MAGNUM observations, the broad Hβ emission had disappeared, as seen in Figure 1, but the B-band flux was only ∼15% lower than the maximum 2004 level, and the MAGNUM light curves show neither a steady decline nor even a leveling off of the continuum to a constant flux by the end of the monitoring period.
So despite the lack of broad lines and the relatively lower mean luminosity compared to when the broad lines were strong, the continuum variability behavior was relatively normal for a Sy1 of this luminosity and black hole mass. Similarly, the continuum light curves from the reverberation mapping campaign of Peterson et al. (1998) are not particularly enlightening with regard to the transformation of Mrk 590, except that the AGN simply appears to be in a higher luminosity/accretion state over this time period. The observed variability is larger in amplitude than during the MAGNUM campaign, but still well within the typical range for moderate luminosity AGN. Furthermore, there are no apparent 'flaring' events, such as that observed in NGC 2617 (Shappee et al. 2013b), at least within the data we have collected. It is therefore unlikely that this intriguing transformation of Mrk 590 could easily have been gleaned simply from optical/NIR broadband monitoring, likely due to contamination from the host galaxy in the optical.

Generally speaking, broad emission-line profile changes are not particularly informative, given our current understanding of the BLR. Significant changes are well known to occur over long timescales and in objects that have not "turned off" as Mrk 590 appears to have done (see, e.g., Wanders & Peterson 1996; Shapovalova et al. 2004, 2009). The flux of the broad lines also naturally varies over time, so, for example, the changes observed within the RM campaign spectra from 1989, 1993, and 1996 are typical for Sy1s. However, the dramatic changes observed here, i.e., between 1973 and 1989 and again between 1996 and 2013, are likely connected to the dramatic changes occurring on similar timescales in the level of the AGN continuum flux.
These observed changes have been predicted from photoionization modeling of the BLR, which demonstrates that the equivalent widths and responsivity of the optical recombination lines vary with the incident continuum flux (Korista & Goad 2004). Furthermore, Korista & Goad (2004) find that optical depth effects within the BLR predict larger responsivities for higher order Balmer lines compared to the lower order lines, with the effect being a steepening (flattening) of the broad-line Balmer decrement in low (high) continuum states. These results predict the variation of the intermediate Seyfert types (1.5−1.9) over large changes in the continuum flux, as observed in these changing-type Seyferts, such as NGC 4151, NGC 2617, possibly NGC 2992, and here in Mrk 590.
Emission Line Properties
While the broad emission line flux changes are the most dramatic, the relative changes in the NLR emission lines are actually the most intriguing, as well as physically enlightening, and allow a possible history of the central region of Mrk 590 to be at least partially assembled. Narrow line fluxes are given in Table 1 and the [O iii] λ5007 emission line light curve is shown in Figure 4, both of which demonstrate that the [O iii] λ5007 flux varied significantly across these four decades. Reverberation mapping studies have shown that while the BLR emission is variable on relatively short timescales -hours to weeks -the NLR emission remains constant over much longer timescales because of the radial distance from the central source to the NLR, geometric damping or smoothing of any variability due to the large spatial extent of the NLR, and long recombination times due to the low densities in the NLR. The NLR will not "reverberate" in response to small changes in the ionizing continuum emission as the BLR does. Narrow-line variability has only been previously noted in the literature a couple of times. Antonucci (1984) report changes in narrow lines flux and flux ratios, but only for narrow line radio galaxies, and Ferland et al. (1979), Zheng et al. (1995), and Zheng (1996) measure changes in the narrow-line fluxes in radio-loud AGN 3C 390.3. In the case of the latter object, the narrow line fluxes are observed to decline within only ∼ 4 months of the continuum decline, and Zheng et al. (1995) and Zheng (1996) suggest the narrow-line emission in this source may not only be exceptionally compact and dense, but also be affected by the jet, which is known to produce a beamed component of the continuum emission.
It was demonstrated only recently by Peterson et al. (2013) that the NLR emission in a radio quiet AGN, NGC 5548, does vary over long time scales, i.e., a few to tens of years, and responds to large changes in the ionizing continuum with a damping, or smoothing, time scale of ∼15 years for this object. We observe similar changes in the narrow-line flux in Mrk 590. Our absolute flux calibration is admittedly somewhat uncertain for many of our optical spectra, but the general trend seen and the magnitude of the changes are too large to be accounted for by the expected flux uncertainties, which are likely a couple tens of percent, at most. The fluxes in Table 1 show that the NLR emission increased by a factor of ∼2 over ∼20 years (1973−1989), coincident with the increase in continuum and broad emission-line flux. It then entered a stable phase for ∼1−2 decades, serendipitously contemporaneous with the RM campaign monitoring of Peterson et al. (1998). The relative stability during this period is based on the 2003 SDSS spectrum and 17 spectra taken during JD2448514−JD2450122 (1991 Sept.−1996) under photometric conditions. Emission-line flux measurements made from these 18 spectra demonstrate that the rms deviation from the mean [O iii] λ5007 flux was only 4.3%. The flux has since decreased again by a factor of ∼2 in the past 10 years.
We see the same general trends in the UV emission lines, with the exception that weak, broad C iv and Lyα components are still clearly visible in the recent COS spectrum. Because the BLR is ionization stratified, these emission lines originate nearer the central source than the Hβ emission that has completely disappeared. The availability of UV observations is limited by the difficulty of obtaining such data, and as such, the spectra we present are the only available. It is therefore more difficult to put constraints on (1) the timescales with which the broad UV emission lines have declined, (2) how well they track the decline in optical emission lines, and (3) whether there truly is an ionization dependence on the remaining presence of broad line emission. One possible explanation for the remaining broad emission is simply one of equivalent width. We note that a possible, weak Hα broad component also remains in our most recent optical spectra. Nonetheless, we see no evidence of broad He ii in the optical (λ4686) or the UV (λ1640). If the ionizing continuum has simply become so weak that the line luminosities have become equivalently weak, then the lower equivalent width lines will 'disappear' before the higher equivalent width lines, but a complete BLR (both low and high ionization lines) may still remain intact below our detection limit.
On the other hand, we can put the (near) disappearance of the optical broad lines in the context of the theoretical models discussed above, if, for instance, the accretion rate of Mrk 590 has dropped below the limit necessary for the production of a BLR. We estimate the bolometric AGN luminosity from the 1450 Å continuum flux to be L bol = 3.4 × 10 42 erg s −1 , from the 2013 COS spectrum. This is still orders of magnitude above the advection dominated limit suggested by Elitzur & Ho (2009), which for Mrk 590 is L bol ∼ 1.4 × 10 40 erg s −1 , and is also not yet below the limit assumed by Laor (2003) of L bol ∼ 1.4 × 10 41 erg s −1 for the black hole mass of Mrk 590. However, it is somewhat below the limit predicted by Nicastro (2000) for the ionizing luminosity expected at the minimum accretion rate necessary to support a disk-wind BLR — L ion (ṁ min ) ∼ 2.5 × 10 42 erg s −1 for the black hole mass of Mrk 590 — where, based on our bolometric luminosity and that approximately 24% of the bolometric luminosity is emitted with wavelengths below 912 Å, we find L ion ∼ 8.5 × 10 41 erg s −1 . Such a result would suggest a physical explanation for the disappearing broad lines if, indeed, the accretion rate has dropped below that necessary for generating a radiation pressure-supported and -driven wind. However, the reader should keep in mind that these approximations of L bol and L ion have significant uncertainties, so care should be taken in interpreting these results. Additionally, such interpretations make it difficult to explain the remaining broad components in C iv, Lyα, and Hα, if the Hβ broad line has disappeared as a result of the central engine entering a radiatively inefficient accretion state after this significant decrease in luminosity.
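The comparison against these published luminosity limits is simple arithmetic. A sketch using the values quoted above (the 24% ionizing fraction is the assumed SED fraction from the text, and the quoted ∼8.5 × 10 41 erg s −1 reflects rounding of that fraction):

```python
L_bol = 3.4e42  # erg/s, estimated from the 2013 COS 1450 A flux
f_ion = 0.24    # assumed fraction of L_bol emitted below 912 A
L_ion = f_ion * L_bol  # ~8.2e41 erg/s with these rounded inputs

# Limits quoted in the text for Mrk 590's black hole mass:
L_adaf = 1.4e40      # Elitzur & Ho (2009) advection-dominated limit
L_laor = 1.4e41      # Laor (2003) limit
L_nicastro = 2.5e42  # Nicastro (2000) minimum disk-wind ionizing luminosity

# Mrk 590 sits above the ADAF and Laor limits but below Nicastro's.
above_lower_limits = L_bol > L_adaf and L_bol > L_laor
below_disk_wind = L_ion < L_nicastro
```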
Furthermore, the maximum line width we measure from our data is ∼9000 km s −1 for the weak, broad emission component of C iv from the 2013 COS spectrum, suggesting that the BLR velocities are nowhere near the ∆v max ∼20,000−25,000 km s −1 limits discussed by Nicastro (2000) and Laor (2003). Nonetheless, if the luminosity of Mrk 590 continues its downward trend, it may soon enter these interesting regimes if even the Hα and UV broad lines continue to disappear.
An additional serendipitous investigation we can perform with the recent UV and optical data (2013 MODS1 and COS spectra) that show only very weak or nonexistent broad emission components is to probe NLR line ratios of both the high and low ionization lines. Many narrow lines are typically 'contaminated' by the broad-line emission, such that the similarities and differences between Sy1 and Sy2 line ratios are not well studied, particularly for rest frame UV lines such as Lyα and C iv, where not only are observations harder to obtain but absorption is also a problem (see, e.g., Crenshaw & Kraemer 2005; Stern & Laor 2013, and references and discussions therein). We fit the Lyα and C iv emission lines in the COS spectrum first with a broad 6th order Gauss-Hermite (GH) polynomial, masking over the narrow line core and any other non-BLR line-specific emission in the profiles (e.g., Lyα and O i λ1301 geocoronal emission, N v). We subtracted this component and then fit the residual narrow emission component of each line with a new GH polynomial profile, from which we measure the presumed NLR contribution to the flux. The broad and total profile models are shown in red in Figure 2 for each of these lines. We find the following observed line strengths, relative to narrow Hβ, for these and other common narrow emission lines in the optical/UV: Lyα 40.26; C iv 13.50; He ii λ1640 0.23; [O iii] λ5007 12.52; Hα 2.75. Rest-frame UV narrow-line components were detected and ratios reported for one other Type 1 AGN, 3C 390.3 (Ferland et al. 1979), but given the lower resolution and S/N of that data and the fact that narrow-line fluxes in this object may be affected by jet effects (Zheng et al. 1995; Zheng 1996), a comparison with the present results is of questionable merit. On the other hand, our measured values are very similar to the relative intensities given by Ferland & Osterbrock (1986) for Sy2 spectra.
This could indicate that the NLR line ratios of Sy1s and Sy2s are intrinsically similar. However, it could instead be possible that Sy1 and Sy2 ratios are intrinsically different, but by virtue of the transition Mrk 590 is experiencing, we have simply recovered the ratios expected for the state that Mrk 590 appears to be assuming, i.e., a Sy2.
A final note of interest is simply that we detect a significant narrow emission component of C iv. There has been extensive discussion in the literature about whether a "true" narrow, or more accurately, an NLR component of C iv exists (i.e., that emitted co-spatially with forbidden lines from a low-density, very spatially extended gas distribution). This is of particular interest for estimating black hole masses based on C iv, as this affects the necessity (or not) for subtraction of such a component from the observed C iv profile (e.g., see discussions by Baskin & Laor 2005; Vestergaard & Peterson 2006; Sulentic et al. 2007; Shen & Liu 2012; Denney 2012, and references therein). The COS spectrum in Figure 2 clearly shows such a component in Mrk 590, and the line strengths given in Table 1 show it to be as strong as [O iii] λ5007. The FWHM of this and other narrow emission line components from the optical/UV spectra, also given in Table 1, demonstrate that while this C iv component is "narrow", neither the C iv nor Lyα integrated narrow emission-line flux is being emitted co-spatially with the integrated emission from the other narrow forbidden or Balmer lines. Instead, taking the velocity widths as virial velocities suggests the narrow C iv-emitting gas resides roughly a factor of 3 closer to the nucleus than that emitting [O iii] λ5007. This supports the argument that such emission is coming from an "inner" narrow line region (see e.g., Nelson et al. 2000, Kraemer et al. 2000, and discussion by Denney 2012) and should therefore be removed prior to BLR line width measurements. However, defining and separating such a contribution to a C iv profile such as that visible in the Mrk 590 IUE spectra, for example, has historically been the difficulty. Therefore, reliably subtracting it in the presence of a strong broad emission component without an unblended template for the velocity width, which does not exist, remains problematic.
The closest proxy would be using the approach of Crenshaw et al. (e.g., Crenshaw & Kraemer 2007), who have previously used the narrow component of He ii λ1640 as a template for a narrow C iv component. Our narrow line-width measurements here suggest this approach is better than using a forbidden narrow emission line, but we still find the FWHM of He ii to be narrower than that of C iv. Furthermore, Crenshaw et al. have typically used this method with very high quality (typically low redshift, space-based) data; however, the intensity of narrow He ii is much weaker than that of C iv (by a factor of ∼150), so application of this method to high-redshift AGN spectra, where assessing the uncontaminated BLR velocity from C iv is most desirable for black hole mass estimates, would be, unfortunately, tenuous at best.
DISCUSSION
The seeming ability of objects like Mrk 590 to transition between Type 1 and Type 2 without an otherwise obvious flaring event like what Shappee et al. (2013b) observed in NGC 2617 raises the question of what assigning a type is really saying about an AGN. It is possible that such classifications should be taken as more of a statement of current 'state' rather than a true, fixed 'type', as Mrk 590 further indicates that type can change. This distinction may be of particular importance since it is becoming apparent that whether a particular AGN appears at any given time to be Type 1 or Type 2 (or somewhere in between) may be saying something more significant about the AGN physics and the immediate environment of the black hole at that particular time than strictly about the viewing angle. For Mrk 590, the decrease in the BLR, the UV, and the X-ray emission all seem to point to a decrease in total luminosity, thus accretion rate, and not simply a change in obscuration. Furthermore, the dynamical timescale of the Hβ emitting region is roughly 8 yrs in the rest frame of Mrk 590, based on the black hole mass and Hβ FWHM given by Peterson et al. (2004). An optically thick medium capable of occulting the BLR could only reside outside the dust sublimation radius and would therefore have a much longer dynamical timescale. Yet, the broad Hβ emission transformed from strong to non-existent in 10 years (1996−2006). Furthermore, we see a dramatic change in the strength of not only the BLR emission but also the continuum and the NLR emission. If obscuration were responsible for this, the obscuring medium would need to cover both our line of sight to the continuum source, in which case we may expect to see evidence of an obscurer in our X-ray continuum model, which we do not, as well as that between the continuum and the NLR, which is highly improbable.
Therefore, though it cannot be definitively ruled out, line of sight obscuration is an unlikely explanation for the observations we present here. In general, the acquisition of sufficient data to determine the bolometric luminosity and spectral energy distribution of an AGN before and after a change in type should be able to distinguish between these two physical origins for the change. Changes in accretion rate should affect the flux at all wavelengths. However, flux changes due to obscuration will be wavelength dependent, where the decrease in UV/optical emission will be countered by an increase in the mid-to-far IR flux, since the obscuring medium absorbs the higher energy photons but then reradiates them as thermal emission at these long wavelengths.
A more likely explanation for the suite of observations presented here is that an accretion event occurred 40+ years ago. This event was capable of stimulating the production of ionizing photons that have excited the BLR emission over the subsequent 40 years. However, the energy produced in such an event has now been depleted. The symmetry of the narrow emission lines and the time scales over which we see significant changes in both the broad and narrow emission lines are too short for it to be plausible that this accretion event actually created these regions, i.e., to initially form and light up the AGN from a quiescent state. Instead, it is likely that at least the bulk of the nuclear gas was already present, just not emitting significant broad-line emission. This suggests that the appearance of a BLR may be episodic over the accretion history of supermassive black holes, depending on the reservoir of material available in the nucleus for accretion. It further demonstrates the importance of repeat observations of larger samples of Seyfert galaxies over longer baselines as a means to better quantify how often this behavior may occur. Transition objects like Mrk 590 may also be a link to other 'abnormal' AGN found from large survey samples. For example, Roig et al. (2014) found unusual broad-line objects among the BOSS sample of luminous galaxies that show a broad Mg ii emission line component but little to no broad Balmer emission. An interesting possibility is that these objects are in a similar transition state as Mrk 590. Unfortunately, a recent spectrum of the Mg ii region in Mrk 590 has not yet been obtained, so more data are needed to connect the likely time variable nature of the broad Balmer lines in Mrk 590 with this larger class of objects. We have been awarded time to obtain a spectrum of the Mg ii region in Mrk 590 with HST in Cycle 21 for this purpose and will therefore be able to address this in future work. 
Under the assumption that the observed behavior of Mrk 590 was the consequence of a past accretion event, we can infer additional physical properties of this system. The BLR would have known and responded to a new accretion event almost instantaneously, given the Hβ radius of ∼10−20 light days (Peterson et al. 2004) and the short recombination time in this region. However, this is not the case for gas at the distance and densities of the NLR. The behavior and subsequent properties of the [O iii] λλ4959, 5007 lines, as well as the heterogeneous data in our sample, allow us to put various independent limits on the NLR radius. One of the weakest, but nonetheless most reliable, constraints on the NLR size comes from the fact that we measure nearly identical [O iii] fluxes from the 2003 SDSS spectrum as we do from the large aperture RM campaign spectra taken in the 1990's. This indicates that most of the [O iii] emission must arise from the spatially unresolved region with a ≲1.7 kpc diameter — the physical size subtended by the SDSS aperture at the distance of Mrk 590. Next, insofar as we can rely on the observed increase of the [O iii] λ5007 flux by 40% between 1973 and 1983, light travel time arguments suggest that the NLR emission must be ≲3 pc from the central source. In addition, the fluxes measured from the 2003 SDSS spectrum show that the continuum and BLR emission had significantly declined already, yet the [O iii] flux had not yet changed. However, the [O iii] flux had diminished by the time the 2006 MDM spectrum was obtained (again assuming at least some amount of trust in this flux measurement). Thus, even if this decline began right after the final RM campaign [O iii] flux calibration spectrum was obtained, we deduce a light travel timescale of ≲10 years, consistent with the distance inferred from the earlier increase in flux.
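The light-travel-time limits above follow directly from converting years to parsecs (1 pc ≈ 3.26 light-years); a quick check:

```python
LY_PER_PC = 3.2616  # light-years per parsec

def light_travel_pc(years):
    """Distance in parsecs that light travels in `years` years."""
    return years / LY_PER_PC

r_rise = light_travel_pc(1983 - 1973)  # 1973-1983 flux rise: ~3.1 pc
r_fall = light_travel_pc(10)           # <=10 yr decline lag: ~3.1 pc
```

Both limits correspond to the same ∼3 pc scale, which is why the text describes them as mutually consistent.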
Next, we briefly consider the velocities of the [O iii]-emitting gas, though the evidence for such a discussion is less robust than that derived from the fluxes. This is because large (∼100 km s −1 ) systematic errors in the velocity widths derived from individual spectra taken through large spectroscopic apertures are possible. This is on account of the difficulty involved in accurately measuring and correcting for the spectral resolution given that the PSF does not fill the slit. Vrtilek & Carleton (1985) measure the [O iii] λ5007 velocity width to be 397±22 km s −1 from a high (23 km s −1 ) resolution spectrum taken sometime between 1980 June and 1981 July. This is consistent with our resolution corrected velocities, for the most part. If we at least assume that these velocities approximately represent the virial velocity at the radius of the NLR, this also indicates distances consistent with the other limits we considered above, with velocities ∼250−550 km s −1 corresponding to distances of ∼0.7−3.3 pc. While we regrettably cannot trust the reliability of the 1973 line width measurement, it is notably larger than the other measurements. This could lead to the interesting, though highly speculative, consideration that with this early observation, we are actually observing the NLR 'lighting up' from the inside out as the AGN continuum luminosity begins to increase significantly. Assuming a 10 4 K gas, the recombination time for gas near the critical density of [O iii] λλ4959, 5007 will be on the order of days; however, as the density drops with increasing radius to the expected average density of the NLR of ∼2000 cm −3 (Koski 1978), the recombination time approaches a decade.
Such effects could lead to a scenario in which we see narrow-line emission only from the higher-velocity, more dense gas in the innermost regions of the NLR as the AGN continuum first brightens, and only after years will the velocity widths of the integrated [O iii]-emitting gas decrease as the spatial extent of the emission increases. Certainly, the range of velocities measured from the narrow component of the various emission lines we consider, which are significantly larger than the line width uncertainties, indicate that "narrow" emission line gas is emitted from an extended spatial region with a density gradient. The innermost emission from the high ionization species, such as C iv, Lyα, and He ii is coming from well within sub-pc scales.
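Treating the measured narrow-line widths as virial velocities for the black hole mass quoted for Mrk 590 (∼5 × 10 7 M ⊙ ; Peterson et al. 2004) reproduces the ∼0.7−3.3 pc range cited above. A sketch (the function name is ours):

```python
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
PC = 3.086e18       # parsec, cm

def virial_radius_pc(v_kms, mbh_msun=5.0e7):
    """R = G*M/v^2 with the line width taken as the virial speed."""
    v = v_kms * 1.0e5  # km/s -> cm/s
    return G * mbh_msun * M_SUN / v**2 / PC

r_outer = virial_radius_pc(250.0)  # ~3.4 pc
r_inner = virial_radius_pc(550.0)  # ~0.7 pc
```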
Finally, to complete the picture of events, our observations indicate that as the AGN continuum luminosity increased (decreased), the BLR emission strengthened (diminished) faster than the NLR emission. This behavior suggests the spatial propagation of information from the central source outward because of density effects and light travel time. This again implies that an "event" of some sort may have triggered the creation of enough ionizing photons to excite the observed changes in BLR emission and subsequent strengthening of the NLR emission (as discussed above). There is nothing in the observations that can yet point to "what" may have been accreted to cause the observed sequence of events. However, we can roughly estimate the total mass needed to power the bolometric luminosity integrated over the span of our observations with a simple, back of the envelope calculation. We take the 5100Å fluxes from all optical spectra shown in Figure 1 (values in Table 1) and convert these to bolometric luminosities assuming a bolometric correction of 8.10 (Runnoe et al. 2012a). We then estimate the integrated luminosity both under the rough continuum emission light curve shown in Figure 4 and by extending the observed rise and decline, modeled as Gaussian wings, to both the past and future to conservatively cover a temporal range from 1950 to 2020. Finally, assuming that the mass-to-energy conversion efficiency is 7%, we find that the observed continuum output during this event can be accounted for with only ∼1−2 M ⊙ of total mass being accreted over these 70 years.
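The closing mass estimate can be reproduced with M = L·t/(ε c²). A sketch; the mean bolometric luminosity below is our own round number chosen only to illustrate the scale of the calculation, not a value given in the text:

```python
C = 2.998e10      # speed of light, cm/s
M_SUN = 1.989e33  # solar mass, g
YEAR = 3.156e7    # seconds per year

def accreted_mass_msun(mean_lbol, years, eff=0.07):
    """Mass (M_sun) that must be accreted to radiate `mean_lbol`
    (erg/s) for `years` years at radiative efficiency `eff`."""
    energy = mean_lbol * years * YEAR  # total radiated energy, erg
    return energy / (eff * C * C) / M_SUN

# Assumed ~8e43 erg/s mean luminosity over the 1950-2020 window
# gives a total accreted mass within the ~1-2 M_sun quoted range.
m_acc = accreted_mass_msun(8.0e43, 70.0)
```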
SUMMARY
We have presented optical, UV, and X-ray observations of the 'Classical' Seyfert 1 Mrk 590 that span the past 40+ years. This interesting object brightened by a factor of tens between the 1970s and 1990s and then faded by a factor of 100 or more at all continuum wavelengths between the mid-1990s and the present day. Notably, there is no evidence in the current data set that this recent, significant decline in flux is due to obscuration; in particular, the most recent X-ray observations are consistent with zero intrinsic absorption. There were similarly dramatic changes in the emission-line fluxes. The most striking change is the complete disappearance of the broad component of the Hβ emission line, which had previously been strong (equivalent widths ∼20−60 Å) and the focus of a successful reverberation mapping campaign that resulted in a secure estimate of ∼5 × 10^7 M⊙ for the mass of the supermassive black hole contained within the nucleus of Mrk 590. As a result of these significant changes, the optical spectrum of Mrk 590 currently looks more like that of a Sy2 AGN, with predominantly only narrow emission lines and a strong host-galaxy stellar continuum.
The changes in emission-line properties over this time period allowed us to determine that Mrk 590, at least in its current state, has NLR emission-line ratios that are similar to those measured in Sy2 spectra. We were also able to place limits of ∼0.7−3.3 pc on the radius of the [O iii]-emitting region of the NLR, based on the observed changes in the integrated line flux and the velocity width of [O iii] λ5007. These results are consistent with the NLR size determined by Peterson et al. (2013) for NGC 5548, another nearby Sy1 galaxy containing a BH of similar mass to Mrk 590.
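As a rough consistency check on these limits, a simple virial estimate R ∼ G M_BH / v² with the reverberation-based black hole mass gives a radius of the same order. The 400 km/s gas velocity used below is an assumed illustrative value, not a measured [O iii] line width from the paper.

```python
# Virial size estimate for line-emitting gas: R ~ G * M_BH / v^2,
# with M_BH ~ 5e7 M_sun (the reverberation-based mass quoted in the
# text) and an assumed characteristic gas velocity v.

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
PC = 3.086e16        # parsec [m]

def virial_radius_pc(m_bh_msun, v_km_s):
    """Virial radius R = G * M_BH / v^2, returned in parsecs."""
    v = v_km_s * 1e3
    return G * (m_bh_msun * M_SUN) / v**2 / PC

# M_BH ~ 5e7 M_sun and v ~ 400 km/s give R ~ 1.3 pc, inside the
# ~0.7-3.3 pc range inferred from the line variability.
r = virial_radius_pc(5e7, 400.0)
```

Since R scales as v⁻², velocities between roughly 250 and 550 km/s all land inside the quoted 0.7−3.3 pc interval.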
The implications arising from this long time series of Mrk 590 are that (1) Mrk 590 is a direct challenge to the historical paradigm that AGN type is exclusively a geometrical effect, and (2) there may not be a strict, one-way evolution from Type 1 to Type 1.5−1.9 to Type 2 as recently suggested by Elitzur et al. (2014). Instead, for at least some objects, the presence of BLR emission may coincide only with episodic accretion events throughout a single active phase of an AGN. If true, such behavior may be more prominent in Seyfert galaxies, where accretion has been theorized to be a consequence of secular processes and therefore likely more episodic than quasar activity, which may be triggered more predominantly by major mergers (see, e.g., Sanders et al. 1988; Treister et al. 2012). The final possibility, of course, is
Figure 1. Rest frame optical spectra of Mrk 590 spanning more than 40 years. The black curves show the observed spectra, the red curves show a host galaxy starlight template that was fit to the 2013 MODS1 spectrum and has been scaled to all other spectra (see Section 3), and the blue curves show power law continuum fits for the epochs in which the stellar component could not account for all the observed continuum flux. All spectra are on the same flux scale, given in the ordinate axis label, but a constant has been added (given in parentheses to the right of each spectrum) to all but the most recent epoch to clearly visualize observed changes between epochs.
Figure 2. Rest frame UV spectra of Mrk 590. The left panels show the full spectra from the available IUE spectra and new COS observations, with the year of the observation included with the source in the upper corner of each panel. The flux units and ordinate range are the same for all spectra to demonstrate the factor of ∼100 change in luminosity of Mrk 590 across these epochs. The middle and right panels in each row are zoomed in on the Lyα and C iv emission lines, respectively. The red curves in the HST Lyα and C iv panels are functional fits to both the broad component and total line flux (see Section 4.2 for details), and the gray shaded regions show the location of geocoronal Lyα and O i emission from the Earth's atmosphere.
Figure 3. Chandra observations of Mrk 590. The top spectrum is based on archival data from 2004, while the bottom spectrum is from the new observations described in this work. The black points with error bars show the observations and the colored curves are absorbed power-law fits to the observed spectrum.
Figure 4. Light curves showing the multi-wavelength changes observed in Mrk 590 over the past four decades in the hard+soft X-ray flux (green triangles), UV continuum flux (filled blue circles), optical starlight-subtracted continuum flux (black squares), and Hβ and [O iii] λ5007 emission line fluxes (open red and black points, respectively). The top panel shows observed fluxes, where the UV and optical continuum fluxes are given in units of erg s^−1 cm^−2 Å^−1, and the X-ray and emission-line fluxes are given in units of erg s^−1 cm^−2. The bottom panel shows fluxes in the same units but relative to the geometric mean flux state of each emission component over the course of our observations, F̄_λ. Following from
Figure 5 shows a zoomed-in view of the Hβ and [O iii] λλ4959, 5007 emission lines from the starlight- and continuum- (where applicable) subtracted optical spectra presented in Figure 1. Significant changes in the emission-line properties are obvious in both the broad and narrow emission lines. The most dramatic change is the initial strengthening and then complete disappearance of the broad Hβ line. This trend is also visible in the narrow emission-line-subtracted Hβ light curve shown with the open red circles in Figure 4, where the final 3 epochs shown as upper limits are from the recent MODS1, KOSMOS, and 2014 MDM spectra, where the broad Hβ line is no longer visible in the spectrum.
Figure 5. The starlight and power-law continuum-subtracted, residual spectra of Mrk 590, zoomed to exhibit only the Hβ and [O iii] λλ4959, 5007 emission-line region of the 10 optical spectra of Mrk 590 from Figure 1. The year and/or month of observation is given to the left of each spectrum. The dashed lines represent the artificial zero-level continuum under each spectrum after adding the flux offsets given in parentheses to the right of each spectrum. The spectra are otherwise on the same flux scale.
Table 1. Mrk 590 Multi-wavelength Spectral Properties
http://www.astronomy.ohio-state.edu/~assassin
www.starlight.ufsc.br
This was estimated assuming a bolometric correction at 1450 Å of 4.20 (Runnoe et al. 2012b,a).
This was estimated using the mean "Low Luminosity" SED template presented by Krawczyk et al. (2013).
that we are witnessing the final stages in the life of this AGN, and Mrk 590 is completely turning off. This would be an incredible find, but it is likely the most improbable explanation, given the low duty cycles for low-redshift AGN (see, e.g., Schulze & Wisotzki 2010; Shankar et al. 2013). We plan to continue monitoring Mrk 590 to look for further changes in its current state, but only time and more observations will tell whether the BLR returns.

Support for program number GO-13185 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. KDD is supported by an NSF AAPF fellowship awarded under NSF grant AST-1302093. BMP and GDR are grateful for NSF support through grant AST-1008882 to The Ohio State University. MCB acknowledges the NSF for support through the CAREER Grant AST-1253702 to Georgia State University. This paper uses data taken with the MODS spectrographs built with funding from NSF grant AST-9987045 and the NSF Telescope System Instrumentation Program (TSIP), with additional funds from the Ohio Board of Regents and the Ohio State University Office of Research. This work was based in part on observations made with the Large Binocular Telescope. The LBT is an international collaboration among institutions in the United States, Italy and Germany.
The LBT Corporation partners are: the University of Arizona on behalf of the Arizona university system; the Istituto Nazionale di Astrofisica, Italy; the LBT Beteiligungsgesellschaft, Germany, representing the Max Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; the Ohio State University; and the Research Corporation, on behalf of the University of Notre Dame, the University of Minnesota, and the University of Virginia.

Table 1 (extract): narrow-line He ii λ1640 flux 0.0075, width 612 km s^−1; X-ray Spectra (c):

Observation  Year  Continuum Region  Continuum Flux
EXOSAT       1984  2−10 keV          27. ± 2.7
RASS         1991  0.1−2.4 keV       46.3 ± 28.7
XMM          2004  0.2−2 keV         3.31 ± 0.11
XMM          2004  2−10 keV          6.95 ± 0.11
Chandra      2004  0.5−10 keV        11.9 ± 0.3
Suzaku       2011  2−10 keV          6.8
Chandra      2013  0.5−10 keV        1.3 ± 0.1
REFERENCES

Alexander, D. M., et al. 2013, ApJ, 773, 125
Antonucci, R. 1993, ARA&A, 31, 473
Antonucci, R. 2012, Astronomical and Astrophysical Transactions, 27, 557
Antonucci, R. R. J. 1984, ApJ, 281, 112
Antonucci, R. R. J., & Cohen, R. D. 1983, ApJ, 271, 564
Antonucci, R. R. J., & Miller, J. S. 1985, ApJ, 297, 621
Assef, R. J., et al. 2010, ApJ, 713, 970
Barth, A. J., Filippenko, A. V., & Moran, E. C. 1999, ApJ, 525, 673
Baskin, A., & Laor, A. 2005, MNRAS, 356, 1029
Bentz, M. C., Peterson, B. M., Netzer, H., Pogge, R. W., & Vestergaard, M. 2009, ApJ, 697, 160
Bentz, M. C., et al. 2013, ApJ, 767, 149
Bianchi, S., Guainazzi, M., Matt, G., et al. 2005, A&A, 442, 185
Bianchi, S., Guainazzi, M., Matt, G., Fonseca Bonilla, N., & Ponti, G. 2009, A&A, 495, 421
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
Byard, P. L., Foltz, C. B., Jenkner, H., & Peterson, B. M. 1981, PASP, 93, 147
Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682
Cid Fernandes, R., Mateus, A., Sodré, L., Stasińska, G., & Gomes, J. M. 2005, MNRAS, 358, 363
Crenshaw, D. M., & Kraemer, S. B. 2005, ApJ, 625, 680
Crenshaw, D. M., & Kraemer, S. B. 2007, ApJ, 659, 250
Denney, K. D. 2012, ApJ, 759, 44
Elitzur, M., & Ho, L. C. 2009, ApJ, 701, L91
Elitzur, M., Ho, L. C., & Trump, J. R. 2014, MNRAS, 438, 3340
Ferland, G. J., Korista, K. T., & Peterson, B. M. 1990, ApJ, 363, L21
Ferland, G. J., & Osterbrock, D. E. 1986, ApJ, 300, 658
Notes to Table 1. (a) UV and optical continuum regions refer to rest-frame wavelengths, but the energy range of the X-ray observations is that observed. (b) UV and optical continuum fluxes are in units of 10^−16 erg s^−1 cm^−2 Å^−1, X-ray fluxes are in units of 10^−12 erg s^−1 cm^−2, and emission-line fluxes are in units of 10^−13 erg s^−1 cm^−2. We have not included uncertainties in the optical/UV flux or line width measurements because the data from which these quantities are measured are typically very high S/N, so that uncertainties in these quantities will be dominated by unmeasurable systematics present in this extremely heterogeneous data set, rather than by observational or statistical sources. (c) References for past X-ray data include EXOSAT: Turner & Pounds (1989); RASS: Voges et al. (1999); Mahony et al. (2010); XMM: Longinotti et al. (2007); Bianchi et al. (2009); Chandra 2004: Longinotti et al. (2007); Suzaku: Rivers et al. (2012).
Ferland, G. J., Rees, M. J., Longair, M. S., & Perryman, M. A. C. 1979, MNRAS, 187, 65P
Gilli, R., Maiolino, R., Marconi, A., et al. 2000, A&A, 355, 485
Khachikian, E. Y., & Weedman, D. W. 1974, ApJ, 192, 581
Korista, K. T. 1990, PhD thesis, Ohio State University, Columbus
Korista, K. T., & Goad, M. R. 2004, ApJ, 606, 749
Koski, A. T. 1978, ApJ, 223, 56
Kraemer, S. B., Crenshaw, D. M., Hutchings, J. B., et al. 2000, ApJ, 531, 278
Krawczyk, C. M., Richards, G. T., Mehta, S. S., et al. 2013, ApJS, 206, 4
Laor, A. 2003, ApJ, 590, 86
Longinotti, A. L., Bianchi, S., Santos-Lleo, M., et al. 2007, A&A, 470, 73
Lyutyǐ, V. M., Oknyanskiȋ, V. L., & Chuvaev, K. K. 1984, Soviet Astronomy Letters, 10, 335
Mahony, E. K., Croom, S. M., Boyle, B. J., et al. 2010, MNRAS, 401, 1151
Marchese, E., Braito, V., Della Ceca, R., Caccianiga, A., & Severgnini, P. 2012, MNRAS, 421, 1803
Marin, F., Porquet, D., Goosmann, R. W., et al. 2013, MNRAS, 436, 1615
Meinel, I. A. B. 1950, ApJ, 111, 555
Moran, E. C., Halpern, J. P., & Helfand, D. J. 1996, ApJS, 106, 341
Murphy, K. D., Yaqoob, T., & Terashima, Y. 2007, ApJ, 666, 96
Nelson, C. H., Weistrop, D., Hutchings, J. B., et al. 2000, ApJ, 531, 257
Nicastro, F. 2000, ApJ, 530, L65
Nicastro, F., Martocchia, A., & Matt, G. 2003, ApJ, 589, L13
Osterbrock, D. E. 1977, ApJ, 215, 733
Osterbrock, D. E. 1981, ApJ, 249, 462
Osterbrock, D. E., & Koski, A. T. 1976, MNRAS, 176, 61P
Panessa, F., & Bassani, L. 2002, A&A, 394, 435
Penston, M. V., & Perez, E. 1984, MNRAS, 211, 33P
Peterson, B. M., Wanders, I., Bertram, R., et al. 1998, ApJ, 501, 82
Peterson, B. M., et al. 2004, ApJ, 613, 682
Peterson, B. M., et al. 2013, ApJ, 779, 109
Risaliti, G., et al. 2009, MNRAS, 393, L1
Rivers, E., Markowitz, A., Duro, R., & Rothschild, R. 2012, ApJ, 759, 63
Roig, B., Blanton, M. R., & Ross, N. P. 2014, ApJ, 781, 72
Runnoe, J. C., Brotherton, M. S., & Shang, Z. 2012a, MNRAS, 427, 1800
Runnoe, J. C., Brotherton, M. S., & Shang, Z. 2012b, MNRAS, 426, 2677
Sakata, Y., et al. 2010, ApJ, 711, 461
Sanders, D. B., Soifer, B. T., Elias, J. H., et al. 1988, ApJ, 325, 74
Schulze, A., & Wisotzki, L. 2010, A&A, 516, A87
Seyfert, C. K. 1943, ApJ, 97, 28
Shankar, F., Weinberg, D. H., & Miralda-Escudé, J. 2013, MNRAS, 428, 421
Shapovalova, A. I., et al. 2004, A&A, 422, 925
Shapovalova, A. I., et al. 2009, New A Rev., 53, 191
Shapovalova, A. I., et al. 2010, A&A, 509, A106
Shappee, B. J., et al. 2013a, The Astronomer's Telegram, 5059, 1
Shappee, B. J., et al. 2013b, ApJ, in press, arXiv:1310.2241
Shen, Y., & Liu, X. 2012, ApJ, 753, 125
Stern, J., & Laor, A. 2013, MNRAS, 431, 836
Sulentic, J. W., Bachev, R., Marziani, P., Negrete, C. A., & Dultzin, D. 2007, ApJ, 666, 757
Tran, H. D. 2001, ApJ, 554, L19
Tran, H. D. 2003, ApJ, 583, 632
Treister, E., Schawinski, K., Urry, C. M., & Simmons, B. D. 2012, ApJ, 758, L39
Trippe, M. L., Crenshaw, D. M., Deo, R., & Dietrich, M. 2008, AJ, 135, 2048
Trump, J. R., et al. 2011, ApJ, 733, 60
Turner, T. J., & Pounds, K. A. 1989, MNRAS, 240, 833
Vestergaard, M., & Peterson, B. M. 2006, ApJ, 641, 689
Voges, W., et al. 1999, A&A, 349, 389
Vrtilek, J. M., & Carleton, N. P. 1985, ApJ, 294, 106
Wanders, I., & Peterson, B. M. 1996, ApJ, 466, 174
Wang, J.-M., & Zhang, E.-P. 2007, ApJ, 660, 1072
Weedman, D. W. 1977, ARA&A, 15, 69
Zhang, E.-P., & Wang, J.-M. 2006, ApJ, 653, 137
Zheng, W. 1996, AJ, 111, 1498
Zheng, W., Perez, E., Grandi, S. A., & Penston, M. V. 1995, AJ, 109, 2355
| [] |
arXiv:2101.02459 (https://arxiv.org/pdf/2101.02459v1.pdf)
Incorporating Vision Bias into Click Models for Image-oriented Search Engine

Ningxin Xu, Cheng Yang, Yixin Zhu, Xiaowei Hu, Changhu Wang
ByteDance AI Lab, China; Carnegie Mellon University

KDD '27: International Conference on Knowledge Discovery and Data Mining, August 14-18, 2021, Singapore, SG. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
ABSTRACT

Most typical click models assume that the probability of a document to be examined by users only depends on position, such as PBM and UBM. It works well in various kinds of search engines. However, in a search engine where massive candidate documents display images as responses to the query, the examination probability should not only depend on position. The visual appearance of an image-oriented document also plays an important role in its opportunity to be examined.

In this paper, we assume that vision bias exists in an image-oriented search engine as another crucial factor affecting the examination probability aside from position. Specifically, we apply this assumption to classical click models and propose an extended model, to better capture the examination probabilities of documents. We use regression-based EM algorithm to predict the vision bias given the visual features extracted from candidate documents. Empirically, we evaluate our model on a dataset developed from a real-world online image-oriented search engine, and demonstrate that our proposed model can achieve significant improvements over its baseline model in data fitness and sparsity handling.

CCS CONCEPTS: Information systems → Users and interactive retrieval.

KEYWORDS: click model, search engine, user behavior, ranking, vision bias
INTRODUCTION
A commercial search engine provides retrieval and ranking services for massive numbers of users. Improved ranking performance places desired results higher, increases user activity, and generates profit for enterprises. Models have been designed to construct result pages that satisfy users' information needs more effectively. In a commercial search engine, numerous click-through logs record user activities on search pages every day at low cost, providing implicit, abundant and renewable user feedback. Many models have been proposed to take advantage of click-through logs to improve document rankings. These models are called click models.

In [3], click models are described as models that take noise and bias in user behavior data into account by introducing random variables and dependencies between them, and that treat clicks as the main observed user interaction with a search system. According to [3], the user browsing model (UBM) [13], dynamic Bayesian network model (DBN) [5], click chain model (CCM) [19], general click model (GCM) [32], position-based model (PBM) [11] and cascade model (CM) [11] are typical click models that utilize the abundant click-through logs.
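To make the difference between the examination hypotheses of two of these models concrete, here is a minimal sketch of the PBM and UBM click probabilities. The parameter names and dictionary layout are our own illustrative choices, not taken from any specific implementation.

```python
def pbm_click_prob(alpha, gamma, rank):
    """PBM: P(C=1) = alpha * gamma[rank]; examination depends
    only on the document's rank."""
    return alpha * gamma[rank]

def ubm_click_prob(alpha, gamma, prev_click_rank, rank):
    """UBM: examination depends on both the current rank and the rank
    of the most recently clicked document above it (0 if none)."""
    return alpha * gamma[(prev_click_rank, rank)]

# Position bias typically decays with rank in PBM; UBM can additionally
# model, e.g., a boost for a document sitting just below an earlier click.
gamma_pbm = {1: 0.9, 2: 0.6, 3: 0.4}
gamma_ubm = {(0, 1): 0.9, (0, 2): 0.5, (1, 2): 0.8}
p1 = pbm_click_prob(0.5, gamma_pbm, 2)        # 0.3
p2 = ubm_click_prob(0.5, gamma_ubm, 1, 2)     # 0.4
```

In both cases `alpha` is the query-document attractiveness parameter shared by the examination-based model family described above.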
However, click models face well-known challenges from the inherent biases contained in user behavior. Examination-based click models such as PBM, CM and UBM have been proposed to estimate this bias and extract the real relevance of the documents; more accurate relevance leads to better rankings. These click models perform well in search engines, but do not distinguish between different types of candidate documents such as images, text, and links. In recent years, search engines with images as candidate documents have emerged in large numbers, and models now have to consider the nature of the candidate documents. Compared with texts and links, images have strong visual impact. In this case, a candidate document's opportunity to be observed and examined depends more on the visual appearance of the image itself, not only on its position. Specifically, GIF images among static images, images with abundant content next to monotonous ones, or images with bright colors compared with those in dim colors, are usually more likely to be observed and examined. Relevant evidence can be found in [14]. The importance of this bias should therefore be reflected in the design of user behavior models. In this paper, we define this kind of bias as vision bias.
An intuitive approach to estimating the vision bias caused by visual appearance is to utilize the information contained in the candidate image-oriented documents themselves. This inspires us to propose an approach in which a model captures the bias caused by visual features directly from the documents. In this paper, every image-oriented document is encoded with a feature vector, and its vision bias is inferred by regression-based EM. Other parameters, such as relevance and position bias, are estimated via the standard EM algorithm. This paper incorporates the concept of vision bias to explain how user clicks are affected by visual features, and to extract more accurate document relevance for better rankings. Click models that adopt vision bias can estimate less biased relevance in image-oriented search engines. Empirically, we apply the vision-bias assumption to PBM and UBM, and evaluate the proposed method on click logs produced by real users of an online image-oriented search engine. The proposed model extracts the relevance between the query and the candidate documents, and the results are compared with PBM and UBM on the test dataset.
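As a concrete, hypothetical illustration of how a PBM examination term can be augmented with such a per-document factor: one simple option is to multiply the usual relevance-times-position product by a logistic function of the visual features. The sigmoid combination below is an assumed sketch, not the exact equation defined by the paper.

```python
import math

def click_prob_pbm_vision(alpha_qd, gamma_k, w, x):
    """Click probability of a PBM extended with a vision-bias factor:

        P(C = 1) = alpha_qd * gamma_k * sigma(w . x)

    alpha_qd : attractiveness/relevance of document d for query q
    gamma_k  : position-based examination probability at rank k
    w, x     : regression weights and the document's visual feature
               vector; sigma maps w . x into (0, 1).
    """
    z = sum(wi * xi for wi, xi in zip(w, x))
    vision = 1.0 / (1.0 + math.exp(-z))
    return alpha_qd * gamma_k * vision

# A dim, static thumbnail (negative w . x) is examined less often than
# position alone would predict; a vivid one (positive w . x) more often.
p = click_prob_pbm_vision(0.8, 0.5, [0.4, -0.2], [1.0, 2.0])  # 0.2
```

Because the vision factor is a function of features rather than a free parameter per document, documents with few impressions borrow statistical strength from visually similar ones, which is how the regression formulation helps with sparsity.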
In summary, the main contributions of this paper are as follows:

• We propose vision bias, learned from click data, to capture the influence of visual features on user examination in image-oriented search engines.
• We propose a reasonable equation to incorporate the vision bias into click models by introducing a parameter representing it. In the experiments, we demonstrate that this equation is effective for better rankings.
• We use a regression-based EM algorithm to estimate vision bias, and show that this can effectively handle the data sparsity problem in click models.
• Our method achieves high extensibility: vision bias can be added to the examination hypothesis of most widely-used click models in image-oriented search engines.
In the following sections, this paper discusses appropriate ways to combine position bias and vision bias, and demonstrates the advantages of this method. We review related works in Section 2, introduce the modeling approach in Section 3 and discuss the parameter inference method in Section 4. In Section 5 we present the results and analysis of our experiments. Finally, we conclude the paper in Section 6.
RELATED WORKS
In the past, various biases have been addressed in typical click models, such as trust bias [2] and intent bias [21][27]. Other works, as presented in [7], also incorporate the inherent noise in user behavior into click models. These works have studied factors that make click data noisy, and most of them focus on the influence of noisy click data on users' judgment of document relevance. This paper differs from them by introducing a bias into the examination term instead of the relevance term.
Biases caused by what documents look like, known as appearance/presentation bias, have been studied in previous works such as [9], [25] and [28]. The study [9] suggests the existence of this bias and notices that when the titles of documents have more terms matching the query, they get more clicks even if they are not very relevant. [25] incorporates appearance/presentation bias into its model by introducing a parameter, assuming that this bias affects the estimation of relevance. In [28], it is pointed out that "more attractive" results make the perceived relevance of some documents differ significantly from others. Our work differs from these papers by modeling a bias specific to image-oriented search engines. We argue that vision bias affects user behavior by influencing the examination probability, and propose effective methods that incorporate visual features to estimate it.
Work on using visual features for image search is long-standing, as in [17], [8], [31] and [22]. [15] learns visual features in learning-to-rank tasks via deep learning techniques. [29] focuses on the examination behavior of image search users, and discovers that the content of image results (e.g., visual saliency) affects examination behavior. These works show that many techniques for image search utilize visual features. In this paper we limit the research scope to click models, and extend classical click models by applying a regression-based EM algorithm to them. In particular, [30] proposes a novel interaction behavior model, the grid-based user browsing model, which shares similarities with click model techniques and works in image search. However, it does not explicitly model appearance/presentation bias, which is the difference between [30] and this paper.
Our work is not the first to take the document type into account. Many click models are proposed for aggregated search, as shown in federated click model [6] and vertical click model [24]. But the model introduced in this paper does not aim to study aggregated search. In fact, in aggregated search, candidate results are from multiple sources and have different types. [6] and [24] model such heterogeneity. However, we pay attention to a specific and rarely mentioned type, which is image-oriented document.
Existing click models are mainly based on the probabilistic graphical model (PGM) framework. Two kinds of user behavior, namely that a document is examined by a user and that a document attracts a user, are represented as hidden events. While the probability of the former is modeled differently across click models, the latter is usually modeled by a parameter indexed by a query and a document, and the parameters are usually estimated by generative maximum likelihood under specific assumptions. Many different click models have been developed in the past, among which PBM and CM are two classical ones. UBM extends the hypothesis of PBM by introducing a new examination parameter. DBN and CCM extend CM so that the probability for a document to be examined depends on the previous documents. This paper chooses PBM and UBM to extend by incorporating vision bias, but any model with an examination bias can be extended in a similar way. Feature vectors are used to represent the vision bias parameter, and we estimate its value by regression following [2], [23] and [26]. This approach handles the data sparsity problem better, and also contributes to the extensibility of our work.
In recent years, other approaches taking advantage of dense vectors of context attributes to estimate biases in click models are proposed, such as [1] and [4]. These works use complex deep neural networks to build up click models and encode the context attributes by vectors. Compared with our model, the models in [1] and [4] demand much time for finding a solution and consume a large amount of computing resources.
As mentioned above, the parameter inference method in this paper follows [2], [23] and [26]. Since a neural network processes dense vectors more efficiently than a Gradient Boosted Decision Tree (GBDT) [16], the regression-based EM algorithm in this paper uses a simple neural network (NN) as its predictor. Because we assume that image features affect the estimation of examination probabilities, the regression-based EM algorithm is used to infer the examination probability instead of the relevance, which differs from [2] and [26]. In particular, [23] assumes that context attributes influence the estimation of the examination probability, but our paper differs from [23] in two aspects. First, the image-oriented search engine is an independent and specific case of estimating biases by incorporating context attributes. Second, [23] keeps the probabilistic graph structure of previous classical click models such as PBM and UBM, and incorporates a bias by making the examination probability depend on different context attributes, whereas our method explicitly introduces a new structure for the probabilistic graph. This explicit modeling approach makes the model more expressive.
MODELING
This section states the vision bias hypothesis and derives expressions for the probabilities of the relevant hidden-variable events. We then introduce methods to infer the parameters proposed in our model.
Before introducing any specific click model, we first declare the definitions and notation used in this paper. In any search session where a query q is given by the user and documents are displayed, the document at rank r is denoted by d_r. We define four binary variables, C_r, E′_r, E_r and R_r, as the user click, the position-based user examination, the real user examination and the document relevance events at rank r. Specifically, C_r = 1 when the user clicks the document at rank r, E′_r = 1 when the position at rank r gets examined, E_r = 1 when the particular document at rank r is examined, and R_r = 1 when the document at rank r is relevant to the query q. In this paper, we use PBM and UBM as baseline models and apply vision bias to their assumptions to extend them.
Vision Bias
When vision bias is applied to PBM and UBM, the proposed hypothesis partially preserves the original hypothesis, but adds vision bias to it as a new part. The proposed hypothesis assumes that the real examination probability of a given document in the search session not only depends on its position or the position of the previously clicked document, but is also under the influence of its visual features. Besides, a document is clicked if and only if it is examined by the user and recognized as relevant. As in the original PBM and UBM, the relevance of the given document is uniquely affected by the query and the content of the document.
To further describe how vision bias works together with the original position-based examination bias, we summarize the extended model with four hypotheses:

• A document (image-oriented) is clicked if and only if it is examined and relevant.
• The position of the document influences its examination probability.
• Even if a document is not placed in a position where documents tend to be examined, eye-catching visual features can increase its examination probability.
• The relevance only depends on the query and the document.

In Figure 1, each circle represents a certain event, and an edge connecting two events describes the dependency between them. Each document is placed in a certain position in the layout of the image-oriented search engine. When documents are displayed, a thumbnail is shown, so their visual features are directly exposed to the users. Each user comes to the search engine with a formulated query, and the search engine provides a list of documents. The probability for a document to be examined differs across positions: if a document is placed in a position that tends to be examined by the user, it is more likely to get examined. However, different documents placed at the same position still have different chances of being examined. If an image-oriented document is not placed in a position that tends to be examined but has eye-catching visual features, such as a large size or bright colors, it is still more likely to be examined. In Figure 1(b), the event that a document is examined is divided into two stages: first, we consider only how the position influences the examination probability of the document; second, we also take the image's visual features into consideration to obtain its real examination probability. When users examine a document, they decide whether it is relevant to the query. If an examined document is relevant, the user clicks the document.
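The two-stage examination story above can be sketched as a generative process. This is a minimal illustration, not the paper's implementation; the parameter names gamma, omega and alpha (position bias, vision bias, relevance) are ours:

```python
import random

def simulate_click(gamma, omega, alpha, rng=random):
    """Generative sketch of Figure 1(b): position decides the first-stage
    examination E', vision bias gives an unexamined position (E' = 0) a
    second chance, and a click requires real examination plus relevance."""
    e_prime = rng.random() < gamma          # stage 1: position-based examination
    e = e_prime or (rng.random() < omega)   # stage 2: vision bias can add examination
    r = rng.random() < alpha                # relevance depends only on query and document
    return int(e and r)
```

With omega = 0 the second stage never fires and the process reduces to the standard examination hypothesis.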
The examination bias in PBM and UBM measures the probability of examination based on position, while vision bias measures the examination probability under the influence of factors determined by the visual features. In fact, PBM and UBM can be regarded as special cases of the extended model: if the effects of visual features on the examination probability of image-oriented documents are ignored, in other words, if vision bias is ignored and examination is treated as an event affected only by position, then the examination in our extended model is equivalent to the original examination hypothesis.
Incorporation
The switch from standard PBM and UBM to the extended models is quite simple. The details of PBM and UBM can be found in [11] and [13]; we do not repeat them here. This section takes UBM as an example to show how to extend a baseline model; the method to incorporate vision bias into PBM is similar. For UBM, we only need to replace the original model assumptions with the following ones, without changing any other specifications. We use C_{<r} to represent all click events before rank r, and C to represent all click events in the search session.
Here γ′ is the position bias and ω is the vision bias. The values of γ′ and ω in the equations correspond to the document at rank r, but ω should not depend only on the rank of the current document: because the conditional probability in Equation 2 depends on the document itself, ω should depend on the specific document as well.
P(E′_r = 1 | C_{<r}) = γ′_{r,r′}    (1)
P(E_r = 1 | E′_r = 0) = ω    (2)
P(E_r = 1 | E′_r = 1) = 1    (3)
If ω is always set to 0, the event E is equivalent to the event E′. In other words, the effects of visual features on the examination probability are ignored, and an image-oriented document's examination probability is affected only by its position. In this case, the extended model degrades into a standard one.
The extended UBM model has
P(E_r = 1 | C_{<r}) = P(E′_r = 1 | C_{<r}) · P(E_r = 1 | E′_r = 1) + P(E′_r = 0 | C_{<r}) · P(E_r = 1 | E′_r = 0)
  = P(E′_r = 1 | C_{<r}) + (1 − P(E′_r = 1 | C_{<r})) · ω
  = γ′_{r,r′} + (1 − γ′_{r,r′}) ω    (4)
Thus, in the extended UBM model, the conditional probability of a click event for the document at rank r, given the clicks of previous documents, is
P(C_r = 1 | C_{<r}) = P(E_r = 1 | C_{<r}) · P(R_r = 1) = (γ′_{r,r′} + (1 − γ′_{r,r′}) ω) · α_{qd}    (5)
Here, the relevance between query q and document d is denoted as α_{qd}. The probability of a click event at rank r is given as
P(C_r = 1) = Σ_{r′=0}^{r−1} P(C_{r′} = 1) · ∏_{j=r′+1}^{r−1} (1 − P(C_j = 1)) · (γ′_{r,r′} + (1 − γ′_{r,r′}) ω) · α_{qd}    (6)
P(C_0 = 1) = 1    (7)
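The click probability of Equations 4 and 5 can be sketched as a small helper. The function and parameter names (gamma for the position bias, omega for the vision bias, alpha for the relevance) are ours:

```python
def click_prob(gamma, omega, alpha):
    """Click probability under the extended examination hypothesis:
    real examination is gamma + (1 - gamma) * omega (Eq. 4), and a click
    needs both real examination and relevance (Eq. 5)."""
    exam = gamma + (1.0 - gamma) * omega
    return exam * alpha

# Setting omega = 0 recovers the standard model: P(click) = gamma * alpha.
```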
The parameters of the extended UBM model can be estimated via the EM algorithm using click-through logs. The log-likelihood of generating the log data D is given as
log P(D) = Σ_{(q,d,r,c) ∈ D} [ c · log((γ′_{r,r′} + (1 − γ′_{r,r′}) ω) α_{qd}) + (1 − c) · log(1 − (γ′_{r,r′} + (1 − γ′_{r,r′}) ω) α_{qd}) ]    (8)
r′ = max{ j ∈ {0, ..., r−1} : c_j = 1 }
where c denotes whether a document is clicked.
PARAMETER INFERENCE
In the EM algorithm, parameters are estimated by alternately carrying out Expectation and Maximization steps (E-steps and M-steps) until the estimates converge. Parameters at iteration t are estimated based on the results at iteration t − 1.
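The alternation can be written as a generic loop. This is a sketch with hypothetical e_step/m_step callables, not the paper's distributed implementation:

```python
def em_fit(params, e_step, m_step, tol=1e-6, max_iter=100):
    """Generic EM skeleton: compute posteriors of the hidden variables
    (E-step), re-estimate the parameters from them (M-step), and stop
    once the parameter estimates change by less than tol."""
    for _ in range(max_iter):
        posteriors = e_step(params)
        new_params = m_step(posteriors)
        if max(abs(new_params[k] - params[k]) for k in params) < tol:
            return new_params
        params = new_params
    return params
```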
Standard EM Algorithm
In the Expectation step of the standard EM algorithm, the goal is to maximize the log-likelihood of the model given the click-through logs. To simplify the computation, as shown in [12], we derive the posterior distribution of the hidden variables representing the events in the model, and in the M-step we find parameter values that optimize this probability. Due to space limitations, this paper directly gives the iterative formulas for the parameters in the extended UBM model as follows.
For brevity, let e^t = γ′^t + (1 − γ′^t) ω^t denote the real examination probability from Equation 4. Then

α^{t+1} = [ Σ_{s ∈ S_{qd}} ( c + (1 − c) · α^t (1 − e^t) / (1 − e^t α^t) ) ] / Σ_{s ∈ S_{qd}} 1    (9)

γ′^{t+1} = [ Σ_{s ∈ S} ( c · γ′^t / e^t + (1 − c) · γ′^t (1 − α^t) / (1 − e^t α^t) ) ] / Σ_{s ∈ S} 1    (10)

ω^{t+1} = [ Σ_{s ∈ S} ( c · (1 − γ′^t) ω^t / e^t + (1 − c) · (1 − γ′^t) ω^t (1 − α^t) / (1 − e^t α^t) ) ] / [ Σ_{s ∈ S} ( c · (1 − γ′^t) ω^t / e^t + (1 − c) · (1 − γ′^t)(1 − ω^t α^t) / (1 − e^t α^t) ) ]    (11)
For the extended PBM model, the formulas are given as
With e^t = γ^t + (1 − γ^t) ω^t,

α^{t+1} = [ Σ_{s ∈ S_{qd}} ( c + (1 − c) · α^t (1 − e^t) / (1 − e^t α^t) ) ] / Σ_{s ∈ S_{qd}} 1    (12)

γ^{t+1} = [ Σ_{s ∈ S} ( c · γ^t / e^t + (1 − c) · γ^t (1 − α^t) / (1 − e^t α^t) ) ] / Σ_{s ∈ S} 1    (13)

ω^{t+1} = [ Σ_{s ∈ S} ( c · (1 − γ^t) ω^t / e^t + (1 − c) · (1 − γ^t) ω^t (1 − α^t) / (1 − e^t α^t) ) ] / [ Σ_{s ∈ S} ( c · (1 − γ^t) ω^t / e^t + (1 − c) · (1 − γ^t)(1 − ω^t α^t) / (1 − e^t α^t) ) ]    (14)
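One iteration of these updates can be sketched for the extended PBM, restricted for readability to a single position and a single query-document pair. The posterior expressions in the comments are our own reconstruction of the M-step bookkeeping, and the names are ours:

```python
def em_update(clicks, alpha, gamma, omega):
    """One standard-EM iteration for the extended PBM at a single position
    and query-document pair. clicks is a list of 0/1 observations; e is
    the real examination probability gamma + (1 - gamma) * omega."""
    e = gamma + (1 - gamma) * omega
    n = len(clicks)
    a_num = g_num = w_num = w_den = 0.0
    for c in clicks:
        if c:  # a click implies real examination and relevance
            a_num += 1.0
            g_num += gamma / e                    # P(E' = 1 | C = 1)
            w_num += (1 - gamma) * omega / e      # P(E = 1, E' = 0 | C = 1)
            w_den += (1 - gamma) * omega / e      # P(E' = 0 | C = 1)
        else:
            no_click = 1 - e * alpha
            a_num += alpha * (1 - e) / no_click                    # P(R = 1 | C = 0)
            g_num += gamma * (1 - alpha) / no_click                # P(E' = 1 | C = 0)
            w_num += (1 - gamma) * omega * (1 - alpha) / no_click  # P(E = 1, E' = 0 | C = 0)
            w_den += (1 - gamma) * (1 - omega * alpha) / no_click  # P(E' = 0 | C = 0)
    return a_num / n, g_num / n, w_num / w_den
```

This is only a sketch of the posterior bookkeeping, not the distributed implementation used in the paper.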
Regression-based EM Algorithm
In order to capture the more sophisticated information contained in image-oriented documents, specifically the visual features of images, a regression-based EM algorithm is applied to infer the parameters. Regression-based EM was proposed and developed in [2], [23] and [26]. In our extended model, it is used to estimate the parameter ω. It does not change the E-step of the standard EM algorithm; the goal of its M-step is to find a proper function f that reflects the dependency between x_d and ω^{t+1}, and thus to maximize the likelihood inferred from the E-step.
The regression-based EM algorithm is preferred in the extended model for several reasons: (1) user behavior is sparse; in the training data generated from click logs, some documents appear many times while others appear only a few times. (2) To incorporate the visual features of documents, the document identifiers used in the standard EM algorithm must be replaced by document feature vectors. The regression-based EM algorithm overcomes these difficulties: it extracts a feature vector x_d from every image-oriented document to implement the M-step, and incorporates the visual features of image-oriented documents into the estimation of the parameter ω.
The regression-based EM algorithm implements the function f via regression, and fits a neural network (NN) to regress the visual features x_d to the derived target value ω = P(E_r = 1 | E′_r = 0). As in [2], [23] and [26], we convert the regression problem into a classification problem; the specific reasons for this conversion are beyond the scope of this paper. The structure of the NN classifier is fairly concise: it is composed of a multilayer perceptron (MLP) with sigmoid activation functions and a softmax output layer [18]. When implementing the regression-based EM algorithm for the estimation of vision bias, we sample labels based on the value of ω given by the M-step of the standard EM algorithm, using an average threshold to determine positive and negative samples. After the NN classifier is trained, the predicted probability for a sample to be in the positive class becomes the estimate of the parameter ω corresponding to it. Algorithm 1 summarizes the complete process of the regression-based EM algorithm for the extended model.

The 64-dimensional visual features are extracted from the image-oriented documents. Sixty-four dimensions have enough capacity to represent the visual information of a document, while the direct output of ResNet [20] is high-dimensional, redundant and hard to optimize. We therefore treat each image-oriented document as an input image, employ the ResNet-50 network, which is widely used for image representation, and pretrain it with user click behavior to classify whether the image was clicked or not. After passing through all layers of ResNet, an input image becomes a 2048-dimensional feature. A fully connected layer is then employed to reduce the redundant information, and finally the 64-dimensional features are extracted as the visual features that we adopt for the regression-based EM algorithm.
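A compact single-machine sketch of this M-step, using scikit-learn's MLPClassifier as a stand-in for the paper's PySpark MultilayerPerceptronClassifier; the sampling-to-labels step and all names are our own simplification:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def regression_m_step(features, omega_targets, rng):
    """One M-step of regression-based EM: turn the posterior targets for
    omega into binary labels by sampling, fit an MLP on the visual
    features, and return the positive-class probabilities as the new
    omega estimates."""
    labels = (rng.random(len(omega_targets)) < omega_targets).astype(int)
    clf = MLPClassifier(hidden_layer_sizes=(16, 8), activation="logistic",
                        learning_rate_init=0.05, max_iter=300, random_state=0)
    clf.fit(features, labels)
    return clf.predict_proba(features)[:, 1]
```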
EXPERIMENTS
In this section, we evaluate the performance of different models on a dataset collected from an online image-oriented search engine in Douyin (www.douyin.com, a video community app from ByteDance). To the best of our knowledge, no public datasets with both click-through logs and original images from an image-oriented search engine are available, so the experiments are performed only on this dataset. The dataset is generated from click-through logs of searches for memes, as shown in Figure 2. Images (memes) appear horizontally at the bottom of the screen. While chatting, users can type in queries to search for images to send. The number of examined images in each search session is not fixed, since users can swipe the screen to the left or right to browse different images. The dataset does not involve any private user information.
Experiment Settings
The dataset is created from the raw click-through logs. The raw logs have 3,460,178 records, each for a search session, containing the clicked image, all the displayed images in that session and the corresponding query. To obtain the dataset, we first filter the queries by standardizing synonyms and removing uncommon characters. The dataset is then split into a training dataset and a test dataset: the first 75% of the search sessions form the training dataset and the last 25% the test dataset. Generating the test dataset requires dropping from it any queries and documents that do not appear in the training dataset, following the experiments in [3]. In the end, the dataset contains 3,251,801 search sessions randomly sampled from the click-through logs.
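The split-and-filter step can be sketched as follows; this is a simplified illustration with our own session tuples, not the actual preprocessing pipeline:

```python
def split_sessions(sessions):
    """Chronological 75/25 split; test sessions whose query or any
    displayed document was never seen in training are dropped, following
    the setup above. Each session is (query, doc_ids, clicks)."""
    cut = int(len(sessions) * 0.75)
    train, test = sessions[:cut], sessions[cut:]
    seen_queries = {q for q, _, _ in train}
    seen_docs = {d for _, docs, _ in train for d in docs}
    test = [(q, docs, clicks) for q, docs, clicks in test
            if q in seen_queries and all(d in seen_docs for d in docs)]
    return train, test
```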
Following [4], [7] and [2], we compare the extended click models with their baseline models, without considering models beyond the scope of click models or other biases. The initial parameter values in UBM are set to 0.2 and 0.5, as recommended in [13]; in PBM they are set to 0.5 and 0.5. In the neural network used in the regression-based EM algorithm, the learning rate is set to 0.05. The multilayer perceptron (MLP) [18] consists of two hidden layers with 16 and 8 perceptrons, respectively. The whole framework is implemented in PySpark, and all other parameters use the default settings of MultilayerPerceptronClassifier in PySpark. In this section, we use vUBM-2 to denote the extended UBM model and vPBM-2 to denote the extended PBM model.
To discuss another possible Bayesian relationship between E and E′ in Figure 1(b), two additional models, vUBM-1 and vPBM-1, are built for comparison. These two models are inspired by [21], where the author proposes a new bias and multiplies it by the relevance. In a similar way, vUBM-1 and vPBM-1 model vision bias as ω = P(E_r = 1 | E′_r = 1), and multiply it by the original position bias. To obtain the probabilistic graph structure for these two models, we only need to replace Equation 4 with Equation 15. When Equation 15 is used, the model is re-trained and inferred with the new formula to keep the comparison fair.
P(E_r = 1 | C_{<r}) = P(E′_r = 1 | C_{<r}) · P(E_r = 1 | E′_r = 1) = γ′_{r,r′} ω    (15)
Click Prediction
This section evaluates how the models perform on click prediction tasks. Several different metrics can be used in the evaluation. Modeling vision bias, the inherent bias that this paper studies, helps improve the performance of the extended models.
Log-likelihood.
Log-likelihood is a widely used metric to measure the performance of click models in click prediction. The standard log-likelihood is computed as follows, where each search session is represented as s and there are M documents in a session:

LL(D) = Σ_{s ∈ S} Σ_{r=1}^{M} log P(C_r = c_r^s | C_{<r} = c_{<r}^s)

This metric is always non-positive. The higher the log-likelihood, the better the click model performs; a perfect click model has a log-likelihood of 0. The results of the experiment are shown in Table 2. The numbers in this table for vUBM-2 and vPBM-2 have passed a paired t-test (p < 0.01). Both vUBM-2 and vPBM-2 significantly outperform the baselines, which means better performance in click prediction. The results for the comparison models vPBM-1 and vUBM-1 are also shown in Table 2.
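This metric can be computed directly from predicted probabilities. A sketch with our own data layout, assuming the conditioning on previous clicks is already folded into each prediction:

```python
import math

def log_likelihood(sessions):
    """Log-likelihood over test sessions. Each session is a list of
    (predicted_click_prob, observed_click) pairs, one per rank."""
    total = 0.0
    for session in sessions:
        for p, c in session:
            total += math.log(p if c else 1.0 - p)
    return total
```

A perfect model scores 0; everything else is negative.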
Perplexity.
Perplexity is another widely used metric to measure the data fitness of click models. Perplexity is computed for click events at each rank of a search session, and the perplexity of the whole test dataset is the average of the perplexities at different ranks. The formula to compute perplexity at rank r is

p_r = 2^{ −(1/|S|) Σ_{s ∈ S} ( c_r^s log₂ q_r^s + (1 − c_r^s) log₂(1 − q_r^s) ) }

where each search session is represented as s and q_r^s is the probability for a user to click the document at rank r in session s. A smaller perplexity indicates better data fitness, and a perfect click model has a perplexity of 1. The improvement of perplexity value p₁ over p₂ is calculated as (p₂ − p₁)/(p₂ − 1) × 100%. The total perplexities of the different models are reported in Table 3. Since a query corresponds to an unfixed number of images in the dataset and a few queries can have a very large number of candidate images, only the perplexities over the first 10 ranks are averaged when computing the total perplexity. It is observed that both vUBM-2 and vPBM-2 significantly outperform the baselines, which indicates better data fitness. The perplexity results for the comparison models vPBM-1 and vUBM-1 are also shown in Table 3.
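The per-rank computation can be sketched as follows (the names are ours):

```python
import math

def perplexity_at_rank(preds, clicks):
    """Perplexity at a single rank: preds are the predicted click
    probabilities q_r^s and clicks the observed 0/1 labels over all
    sessions; a perfect model reaches 1."""
    total = sum(c * math.log2(q) + (1 - c) * math.log2(1 - q)
                for q, c in zip(preds, clicks))
    return 2 ** (-total / len(preds))
```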
Comparison.
This section discusses the necessity of Equation 4 compared with modeling vision bias by Equation 15. As is mentioned in the last section, this is to demonstrate the rationality of the probabilistic graph structure in the extended models.
The experiment results can be found in Tables 2 and 3. From these tables it can be seen that the extended models outperform the two compared models, vUBM-1 and vPBM-1, in both log-likelihood and perplexity. This shows that in the click prediction task, our choice of Equation 4 is superior to Equation 15 for modeling vision bias. Thus, the following sections use only Equation 4 as the modeling foundation.

Table 4: Performance on log-likelihood for different query frequencies and corresponding improvements. Extended models achieve the greatest improvements on the most sparse training data.
Sparsity Handling
This section discusses how the extended models contribute to sparsity handling. In user click logs, queries and documents are sparse, and the parameters of click models are tricky to estimate accurately because of this sparsity. We experiment independently on situations where the extended models deal with sparse training data, specifically low-frequency queries and higher rank positions, and the results show that our proposed models have superior performance on sparse user data.
Data fitness.
We first use log-likelihood to evaluate the extended models by computing it separately for different query frequencies. Table 4 shows the log-likelihood of different models for queries appearing in the training dataset fewer than 100 times. Both extended models achieve better performance than their baselines on these low-frequency queries. In this table, the lower the query frequency, the greater the improvement the extended models achieve, and the greatest improvements are achieved on the least frequent queries. Thus, on low-frequency queries, the extended models perform well in log-likelihood. Following [13] and [19], we also use perplexity to evaluate the extended models, computing it separately for different ranks and different query frequencies. When computing perplexity for different ranks, because each query corresponds to 5 documents on average, the figures and tables in this section only show the results from rank 1 to rank 5. In Table 5, the extended models achieve the best perplexity at the highest rank. Figure 3 shows the improvement trend over different ranks: as the rank increases, the improvements also increase. Because not all search sessions in the dataset have the same number of documents, a higher rank means less training data. Thus, with higher ranks and more sparse training data, the extended models perform better than their baselines in perplexity, indicating a better capability to handle sparsity.

Figure 3: Improvements of perplexity for different ranks. As the rank becomes higher, the training data for that rank becomes more sparse. In this figure, the extended models achieve significant improvements on sparse training data.

To further compare the extended models, we divide the search sessions in the test dataset into several parts based on how many times the query appears in the training dataset. Similar to the analysis of log-likelihood, we report the perplexity results for queries appearing fewer than 100 times in the training dataset.
In Table 6, the extended models always achieve better perplexity for low-frequency queries than the baselines. This demonstrates that the extended models are capable of tackling sparse data effectively.
Ranking Effectiveness.
In addition to data fitness, this paper also discusses whether the extended models handle sparsity better when evaluated by ranking effectiveness. Following [10], to compare different click models on their ranking effectiveness, we use the relevance parameters in the click models and compare their accuracy in relevance prediction. Mean Reciprocal Rank (MRR) is such a metric to evaluate the performance of click models on ranking effectiveness. Formally, given a test dataset with n queries, the MRR is defined as:
MRR = (1/n) Σ_{i=1}^{n} 1/rank_i    (16)
The experiment results are given in Table 7. From this table we conclude that when PBM and UBM incorporate vision bias as part of the model hypothesis, they predict rankings on low-frequency queries better than their baseline models. This demonstrates that the extended models can handle sparse data well.
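Equation 16 can be computed as follows (a sketch; ranks holds the position of the first relevant result for each test query):

```python
def mean_reciprocal_rank(ranks):
    """Mean Reciprocal Rank over n test queries (Eq. 16); ranks[i] is the
    rank of the first relevant document for query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)
```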
Visualization Analysis
This section studies how vision bias is distributed. In order to find out whether vision bias has a physical meaning in real-world image-oriented search engines, we first compare pairs of images corresponding to the same query with a large and a small vision bias. We then analyze which characteristics of images influence the value of vision bias by discussing images with higher and lower vision biases that do not correspond to the same query.
The images in the first pair of Figure 4 differ in two ways. First, the one with the larger bias has brighter colors. Second, the one with the larger bias has text written in colors contrasting with its background. Text and brightness can be factors that influence image vision bias. In Figure 4(b), the main difference between the images lies in their colors. The image with the smaller bias, compared with the other one, has a background of similar colors, which reduces its attractiveness to users. The last pair in Figure 4 shows two totally different images. The emoji one has concise shapes, but it is easy to understand and its colors are bright. The image with the smaller bias looks dim, and its picture quality is not very clear. These factors can be the reason for the difference between their biases.
The images in Figure 5 and Figure 6 differ a lot. We find that the former, with contrasting colors, have larger vision biases. The backgrounds in Figure 5(c) and Figure 5(d) are clean, making the cartoon characters stand out. Although Figure 5(b), Figure 6(b) and Figure 6(d) have dark backgrounds, the colors of the character in Figure 5(b) are brighter and more eye-catching, which can be another way for an image to get a higher vision bias. As for the text in the images, comparisons can be made between Figure 5(a), Figure 5(d) and Figure 6(a), Figure 6(c): images with larger vision biases have text very different from the background. Compared with text in colors similar to the background, this can be another possible reason for a larger vision bias. Besides, Figure 5(c) and Figure 5(d) are different from the others because they are cartoon characters with quite simple lines and shapes; among the other images, such concise characters make them stand out.
In fact, vision bias should not be attributed to a few enumerable factors. Our model can understand and capture it more deeply and comprehensively from high-dimensional visual features.
CONCLUSIONS
In this paper, we propose vision bias for click models working on image-oriented search engines, which captures the influence of visual features on the user examination probability. Few previous works have independently studied this topic and provided a specific solution in detail. This paper makes up for this deficiency and provides a solution for document retrieval in image-oriented search engines by proposing a novel probabilistic graph structure that incorporates vision bias into click models. We assume that, aside from position, the visual features of image-oriented documents are another important factor that influences the examination probability of each document, and we introduce a parameter representing vision bias into the probabilistic graph. The assumptions about vision bias are extensible, since they can be applied to the probabilistic graphs of many classical click models such as PBM and UBM. The parameters of existing click models can be estimated by the standard EM algorithm, but when a click model is used in an image-oriented search engine, visual features should be fully utilized. Thus, we use a regression-based EM algorithm to estimate vision bias and obtain better performance, which also has an advantage in sparsity handling.
In the experiments, we evaluate our approach on a dataset obtained from an online image-oriented search engine, which includes both click-through logs and candidate images. We extend PBM and UBM by modeling vision bias as part of them. The experiment results demonstrate that the extended models are more effective for ranking and click prediction tasks. We also evaluate how well the extended models handle the sparsity problem, and the results show that in both data fitness and ranking effectiveness, the extended models significantly outperform their baseline models on sparse user data.
Since various deep learning techniques have been proposed to handle image features, future work can include a more sophisticated neural network structure for vision bias parameter inference, in order to capture more of the needed information from the visual features.

(c) Images with a large or small vision bias corresponding to the same query "Thanks for your love". The left one has a concise but expressive shape.
SUPPLEMENTAL MATERIAL

A EM ALGORITHM
By giving the specific formulas, we explain the E-step and the M-step of the standard EM algorithm used to estimate the parameter ω.
A.1 E-step
A.2 M-step
In the Maximization step, the standard EM algorithm updates the parameters at iteration t + 1 by maximizing the likelihood of the posterior probabilities obtained in the Expectation step given the click data. For example, to update the parameter ω, we use the following equation:

ω^{t+1} = [ Σ_{s ∈ S_q} P(E_r = 1, E′_r = 0, C_{<r} | C) ] / [ Σ_{s ∈ S_q} P(E′_r = 0, C_{<r} | C) ]

where S_q represents the search sessions where q is given as the query. According to the formulas above, the parameters at iteration t + 1 can be updated based on the parameters obtained at iteration t.
To summarize, all probabilities of the hidden variables in the E-step of the extended UBM model can be computed with the formulas above, yielding the M-step update for the vision bias (as in Equation 11):

ω^{t+1} = [ Σ_{s ∈ S} ( c · (1 − γ′^t) ω^t / (γ′^t + (1 − γ′^t) ω^t) + (1 − c) · (1 − γ′^t) ω^t (1 − α^t) / (1 − (γ′^t + (1 − γ′^t) ω^t) α^t) ) ] / [ Σ_{s ∈ S} ( c · (1 − γ′^t) ω^t / (γ′^t + (1 − γ′^t) ω^t) + (1 − c) · (1 − γ′^t)(1 − ω^t α^t) / (1 − (γ′^t + (1 − γ′^t) ω^t) α^t) ) ]
Figure 1: The graphical models of the examination hypothesis and the vision bias hypothesis.
Figure 1(a) shows the structure of PBM and UBM. In comparison, Figure 1(b) illustrates the hypothesis of vision bias. The figure shows that a latent event E is inserted between the click event C and the event E′. This event is an intermediate state in which the document is examined: it represents the result of a document being examined, and depends on the document's position and its visual features. Though it carries a rank subscript in the figure, it does not depend only on the rank of the document; the subscript only means that E_r corresponds to the document at rank r.
Figure 2: The image-oriented search engine with a Chinese query (hey hey hey).
Figure 4: Images with a large or small vision bias corresponding to the same query.
Figure 5: Images with large vision biases. They have bright or contrasting background colors, concise shapes and colorful text.
Figure 6: Images with small vision biases. They look dim and not eye-catching.
P(E′_r = 0, C_{<r} | C) = P(E′_r = 0 | C_{<r}, C) · P(C_{<r} | C) = P(E′_r = 0 | C)

because the click events at different ranks are independent or conditionally independent. Similarly, the distribution of the joint event E_r = 1 and E′_r = 0 can be derived as follows:

P(E_r = 1, E′_r = 0, C_{<r} | C) = P(E_r = 1, E′_r = 0 | C_{<r}, C) · P(C_{<r} | C)

Above are the formulas for the parameter ω used to optimize the model in the next step.
Accordingly, the M-step uses the following formulas to update all the parameters in the extended UBM model. We need to set up initial values for the parameters at iteration 0.
Algorithm 1: Regression-based EM algorithm for the extended model.
Input: click sessions S.
Output: v(x), γ', and α.
1: Extract visual features x_u.
2: Initialize γ' and α.
3: repeat
4:    for each s ∈ S do
5:        Estimate the vision bias and update α and γ'.
6:    end for
7: until convergence
Table 1: Query frequency statistics.

Query frequency    Training dataset    Test dataset
1-10               88,875              39,289
10-30              25,080              11,224
30-100             9,034               4,259
100-500            3,871               1,801
500-2000           1,104               493
2000-10000         404                 159
≥ 10000            157                 62

In the training dataset, there are 2,438,434 search sessions, 128,525 unique queries and 184,335 unique images. In the test dataset, there are 813,367 search sessions, 57,287 unique queries and 141,546 unique images. We pay attention to the frequency of queries and summarize it in Table 1.
Table 2: Performance on log-likelihood. Extended models vPBM-2 and vUBM-2 always have the greatest improvements.

Model     Log-likelihood    Improvement
UBM       -0.433            -
vUBM-1    -0.431            +0.46%
vUBM-2    -0.409            +5.54%
PBM       -0.429            -
vPBM-1    -0.426            +0.70%
vPBM-2    -0.409            +4.66%
Table 3: Performance on total perplexity. Extended models vPBM-2 and vUBM-2 have the lowest perplexity and the greatest improvements.

Model     Perplexity    Improvement
UBM       1.417         -
vUBM-1    1.409         +1.92%
vUBM-2    1.388         +6.95%
PBM       1.412         -
vPBM-1    1.402         +2.43%
vPBM-2    1.381         +7.52%
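The improvement columns in Tables 2 and 3 are consistent with the usual conventions from the click-model literature (assumed here): the relative gain of the average log-likelihood, and the perplexity gain measured against the ideal perplexity of 1. A quick check:

```python
def ll_improvement(ll_base, ll_new):
    # Relative improvement of the (negative) average log-likelihood.
    return (ll_new - ll_base) / abs(ll_base) * 100.0

def perplexity_gain(p_base, p_new):
    # Perplexity gain, conventionally normalized by (p_base - 1),
    # since 1 is the ideal (perfectly predicted) perplexity.
    return (p_base - p_new) / (p_base - 1.0) * 100.0

# Reproduces the vUBM-2 rows: +5.54% (Table 2) and +6.95% (Table 3).
print(round(ll_improvement(-0.433, -0.409), 2))
print(round(perplexity_gain(1.417, 1.388), 2))
```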
Table 5: Performance on perplexity for different ranks and corresponding improvements. The higher the rank, the more sparse the training data. Both extended models achieve the greatest improvements on the highest rank, which corresponds to the most sparse training data.

Rank    UBM      vUBM-2              PBM      vPBM-2
1       1.701    1.666 (+4.99%)      1.708    1.708 (+0.00%)
2       1.609    1.577 (+5.25%)      1.608    1.592 (+2.63%)
3       1.531    1.480 (+9.60%)      1.508    1.475 (+6.50%)
4       1.413    1.372 (+9.93%)      1.399    1.345 (+13.53%)
5       1.331    1.293 (+11.48%)     1.318    1.254 (+20.13%)
Table 6: Performance on perplexity for different query frequencies. Extended models achieve the greatest improvements when the query occurs fewer than 10 times in the training dataset. The more sparse the training data is, the greater the improvements of the extended models.

Query Freq.    UBM      vUBM-2               PBM      vPBM-2
1-10           1.389    1.238 (+38.82%)      1.381    1.228 (+40.16%)
10-30          1.381    1.298 (+21.78%)      1.375    1.284 (+24.27%)
30-100         1.374    1.330 (+11.76%)      1.370    1.319 (+13.78%)
Table 7: Performance on MRR for different query frequencies. Extended models achieve further improvements when the query occurs fewer than 100 times in the training dataset.

Query Freq.    UBM      vUBM-2              PBM      vPBM-2
1-10           0.817    0.835 (+2.20%)      0.817    0.833 (+1.96%)
10-30          0.733    0.758 (+3.41%)      0.733    0.759 (+3.55%)
30-100         0.665    0.677 (+1.80%)      0.665    0.683 (+2.71%)
Figure: Image pairs with a large or small vision bias for the same query.
Panel pairs: (i) OCR: "Hello, beauty." Vision bias: 0.603 / (ii) W/O OCR. Vision bias: 0.345; (i) OCR: "Send you my love." Vision bias: 0.602 / (ii) OCR: "Am I your beloved boy?" Vision bias: 0.272; (i) OCR: "Harry Potter and the chamber of secrets." Vision bias: 0.531 / (ii) W/O OCR. Vision bias: 0.418.
(a) Images with a large or small vision bias corresponding to the same query "Yes, beauty". The left one has colorful text and a brighter color.
(b) Images with a large or small vision bias corresponding to the same query "Try hard". The left one has a background of contrasting colors.
Time-dependent density functional theory for many-electron systems interacting with cavity photons

I. V. Tokatly

Nano-bio Spectroscopy group and ETSF Scientific Development Centre, Departamento de Física de Materiales, Universidad del País Vasco UPV/EHU, E-20018 San Sebastián, Spain
IKERBASQUE, Basque Foundation for Science, 48011 Bilbao, Spain

8 Mar 2013. arXiv:1303.1947. DOI: 10.1103/PhysRevLett.110.233001.

Abstract: Time-dependent (current) density functional theory for many-electron systems strongly coupled to quantized electromagnetic modes of a microcavity is proposed. It is shown that the electron-photon wave function is a unique functional of the electronic (current) density and the expectation values of the photonic coordinates. The Kohn-Sham system is constructed, which allows one to calculate the above basic variables by solving selfconsistent equations for noninteracting particles. We suggest possible approximations for the exchange-correlation potentials and discuss implications of this approach for the theory of open quantum systems.
Time-dependent density functional theory (TDDFT) is a theoretical framework which, similarly to ground-state DFT [1], relies on the one-to-one mapping between the density of particles and the external potential [2]. The unique density-potential correspondence implies the possibility of calculating the exact time-dependent density by solving Hartree-like equations for fictitious noninteracting Kohn-Sham (KS) particles. This tremendous simplification of the problem makes TDDFT one of the most popular ab initio approaches for describing the quantum dynamics of realistic many-body systems [3,4].
Standard TDDFT is formulated for systems of quantum particles driven by classical electromagnetic fields [2-4], which covers most traditional problems in physics and chemistry. However, nowadays the experimental situation is rapidly changing. Progress in the fields of cavity and circuit quantum electrodynamics (QED) opens the possibility to study many-electron systems strongly interacting with quantum light. Notable examples are atoms in optical cavities in cavity QED [5-7], or mesoscopic systems such as superconducting qubits and quantum dots in circuit QED [8-10]. Recently a strong coupling of molecular states to microcavity photons, and the modification of chemical landscapes by cavity vacuum fields, have been reported [11-13]. Obviously, the classical treatment of external fields prevents the application of TDDFT to this new and interesting class of problems.

This paper presents TDDFT for systems of electrons strongly coupled to (or driven by) a quantized electromagnetic field [14]. We prove the generalized mapping theorems for TDDFT and for time-dependent current density functional theory (TDCDFT). In both cases we analyze the structure and properties of the exchange-correlation (xc) potentials and discuss possible approximation strategies. Finally, we make a connection between the present theory and TDDFT for open quantum systems.
Consider a system of N electrons, e.g., an atom or a molecule, placed inside a cavity hosting M photon modes. In the Schrödinger picture the configuration of the system is specified by the positions \{x_j\}_{j=1}^{N} of the electrons and the set \{q_\alpha\}_{\alpha=1}^{M} of photonic coordinates. The full system is described by the wave function \Psi(\{x_j\}, \{q_\alpha\}, t). Assuming as usual [15] that the wavelength of the relevant photon modes is much larger than the size of the electronic system, we adopt the dipole approximation for the electron-photon coupling. The Hamiltonian of the system takes the following form:
\hat{H} = \sum_{j=1}^{N} \frac{1}{2m}\Big[ i\nabla_j + A_{ext}(x_j,t) + \sum_\alpha \lambda_\alpha q_\alpha \Big]^2 + \sum_{i>j} W_{x_i - x_j} + \sum_{\alpha=1}^{M} \Big[ -\frac{1}{2}\partial^2_{q_\alpha} + \frac{1}{2}\omega^2_\alpha q^2_\alpha - J^\alpha_{ext}(t)\, q_\alpha \Big],    (1)
where W_{x_i - x_j} is the electron-electron interaction, \omega_\alpha are the frequencies of the photon modes, and \lambda_\alpha describes the coupling to the \alpha-mode. The electron and photon subsystems can be driven externally by the classical vector potential A_{ext}(x, t) and the external "currents" J^\alpha_{ext}(t). The vector potential describes (in a temporal gauge) the forces from the ions in atoms and molecules and all other possible classical fields. The currents J^\alpha_{ext}(t) allow for an external excitation of the cavity modes. The time evolution from a given initial state \Psi_0(\{x_j\}, \{q_\alpha\}) is governed by the Schrödinger equation
i\partial_t \Psi(\{x_j\}, \{q_\alpha\}, t) = \hat{H}\, \Psi(\{x_j\}, \{q_\alpha\}, t).    (2)
Solution of this equation gives a complete description of the system for a fixed configuration of the external fields. All DFT-like approaches assume that the state of the system is also uniquely determined by a small set of basic observables, such as the density in TDDFT, the current in TDCDFT, and possibly something else in other generalizations of the theory. Below we construct two generalizations of TDDFT for electron-photon systems described by the Hamiltonian of Eq. (1).
Let us start with the technically simpler current-based theory. Consider the electronic current j(x, t) and the expectation values Q_\alpha(t) of the photon coordinates as the basic variables. These variables are defined as follows:

Q_\alpha = \langle \Psi | q_\alpha | \Psi \rangle,    (3)

j = \langle \Psi | \hat{j}_p(x) | \Psi \rangle - \frac{n}{m} A_{ext} - \sum_\alpha \frac{\lambda_\alpha}{m} \langle \Psi | q_\alpha \hat{n}(x) | \Psi \rangle,    (4)

where n(x, t) = \langle \Psi | \hat{n}(x) | \Psi \rangle is the electronic density, and \hat{n}(x) = \sum_j \delta(x - x_j) and \hat{j}_p(x) = -\frac{i}{2m} \sum_j \{\nabla_j, \delta(x - x_j)\} are the density and paramagnetic current operators.
Equations of motion for Q_\alpha follow from Eqs. (3) and (2):

\ddot{Q}_\alpha + \omega^2_\alpha Q_\alpha = \lambda_\alpha J(t) + J^\alpha_{ext}(t),    (5)

where J(t) = \int j(x, t)\, dx is the space-averaged electronic current. Equation (5) is simply the Maxwell equation for the cavity vector potential projected on the \alpha-mode.
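Equation (5) is just a driven harmonic oscillator for each mode, so it can be integrated directly once J(t) and J^\alpha_{ext}(t) are specified. A minimal RK4 sketch (the constant drive is chosen purely for illustration), checked against the analytic solution Q(t) = (F/\omega^2)(1 - \cos\omega t) for a mode starting from rest:

```python
import math

def integrate_mode(omega, drive, t_end, n_steps=1000):
    """RK4 integration of Q'' + omega^2 Q = drive(t), starting from rest,
    i.e. Eq. (5) with the source lambda*J(t) + J_ext(t) lumped into drive."""
    dt = t_end / n_steps
    q, v = 0.0, 0.0
    acc = lambda t, q: drive(t) - omega ** 2 * q
    for i in range(n_steps):
        t = i * dt
        k1q, k1v = v, acc(t, q)
        k2q, k2v = v + 0.5 * dt * k1v, acc(t + 0.5 * dt, q + 0.5 * dt * k1q)
        k3q, k3v = v + 0.5 * dt * k2v, acc(t + 0.5 * dt, q + 0.5 * dt * k2q)
        k4q, k4v = v + dt * k3v, acc(t + dt, q + dt * k3q)
        q += dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return q

# For a constant drive F, Q(t) = (F/omega^2) * (1 - cos(omega t)).
omega, F = 2.0, 1.0
q_num = integrate_mode(omega, lambda t: F, 1.0)
q_exact = F / omega ** 2 * (1.0 - math.cos(omega * 1.0))
```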
Equations (1)-(4) determine the wave function \Psi(t) and the basic variables j and Q_\alpha as functionals of the initial state \Psi_0 and the external fields A_{ext} and J^\alpha_{ext}. This defines a unique map \{\Psi_0, A_{ext}, J^\alpha_{ext}\} \to \{\Psi, j, Q_\alpha\}. TDCDFT assumes the existence of a unique "inverse map" \{\Psi_0, j, Q_\alpha\} \to \{\Psi, A_{ext}, J^\alpha_{ext}\}. That is, given the initial state and the basic observables, one can uniquely recover the full wave function and the external fields that generate the prescribed dynamics of the basic variables.
To prove the uniqueness of the inverse map we follow the nonlinear Schrödinger equation (NLSE) approach [16,17]. Assume that j(x, t) and Q_\alpha(t) are given, and express the external fields from Eqs. (4) and (5) as follows:

A_{ext} = \frac{m}{n} \langle \Psi | \hat{j}_p(x) - j | \Psi \rangle - \sum_\alpha \frac{\lambda_\alpha}{n} \langle \Psi | q_\alpha \hat{n}(x) | \Psi \rangle,    (6)

J^\alpha_{ext} = \ddot{Q}_\alpha + \omega^2_\alpha Q_\alpha - \lambda_\alpha J.    (7)
This defines the external fields as explicit functionals of the observables j(x, t) and Q_\alpha(t) and the instantaneous state \Psi(t). Substitution of Eqs. (6) and (7) into the Hamiltonian (1) turns Eq. (2) into the many-body NLSE

i\partial_t \Psi(t) = \hat{H}[j, Q_\alpha, \Psi]\, \Psi(t),    (8)

where \hat{H}[j, Q_\alpha, \Psi] is an instantaneous functional of \Psi(t), which depends parametrically on j(x, t) and Q_\alpha(t). The uniqueness of a solution to Eq. (8) can be proven easily under the t-analyticity assumption that is usual in TD(C)DFT [2,18,19]. Assuming that j(x, t) and Q_\alpha(t) are analytic functions of time, we represent them, and the unknown \Psi(t), by the Taylor series

j(t) = \sum_{k=0}^{\infty} j^{(k)} t^k, \quad Q_\alpha(t) = \sum_{k=0}^{\infty} Q^{(k)}_\alpha t^k, \quad \Psi(t) = \sum_{k=0}^{\infty} \Psi^{(k)} t^k.
After inserting these series into Eq. (8) one observes that all coefficients \Psi^{(k)} with k > 0 can be expressed recursively in terms of j^{(k)}, Q^{(k)}_\alpha, and \Psi^{(0)} \equiv \Psi_0. The simple reason for this is that the right-hand side of Eq. (8) is an instantaneous functional of \Psi(t), while the left-hand side is \sim \partial_t \Psi(t). As the recursion produces a unique Taylor series for \Psi(t), the many-body wave function is a unique functional of the initial state and the basic variables, \Psi[\Psi_0, j, Q_\alpha]. By substituting this wave function into Eq. (6) we find the functional A_{ext}[\Psi_0, j, Q_\alpha], which completes the proof of the TDCDFT mapping theorem.
The KS system for this theory is constructed as follows. Consider a system of N noninteracting particles coupled to the photon modes at the mean-field level. This system is described by a set of N one-particle KS orbitals \varphi_j(x, t) which satisfy the following equations:

i\partial_t \varphi_j = \frac{1}{2m}\Big[ i\nabla + A_S + \sum_\alpha \lambda_\alpha Q_\alpha \Big]^2 \varphi_j,    (9)

where Q_\alpha(t) is the solution to Eq. (5) with J(t) replaced by the space average of the KS current density

j_S = \frac{1}{m} \sum_j \mathrm{Im}(\varphi^*_j \nabla \varphi_j) - \frac{n}{m}\Big[ A_S + \sum_\alpha \lambda_\alpha Q_\alpha \Big].    (10)
Using the above NLSE argumentation (or the standard TDCDFT mapping [19]) we find that \varphi_j(x, t) are unique functionals of the KS current j_S(x, t) and the KS initial state \Psi^S_0. A comparison of Eqs. (4) and (10) shows that the KS current reproduces the physical current, j_S = j, if A_S in Eq. (9) is defined as A_S = A_{ext} + A_{Hxc}, where

A_{Hxc} = \frac{1}{n}\Big[ \sum_j \mathrm{Im}(\varphi^*_j \nabla \varphi_j) - m \langle \Psi | \hat{j}_p(x) | \Psi \rangle + \sum_\alpha \lambda_\alpha \langle \Psi | \Delta q_\alpha\, \Delta\hat{n}(x) | \Psi \rangle \Big].    (11)

Here \Delta q_\alpha = q_\alpha - Q_\alpha(t) and \Delta\hat{n}(x) = \hat{n}(x) - n(x, t) are the fluctuation operators for the photonic coordinates and the electronic density, respectively. By construction, for given initial states \Psi_0 and \Psi^S_0, the potential A_{Hxc} is a gauge-invariant universal functional of j and Q_\alpha.
Therefore the current and the photonic coordinates can be calculated from the system of Eqs. (9) and (5), describing noninteracting fermions driven by a selfconsistent field and coupled to a set of classical harmonic oscillators. There is a deep reason for explicitly separating the "mean-field" part A_{mf} = \sum_\alpha \lambda_\alpha Q_\alpha of the selfconsistent potential in Eq. (9). This part accounts for the net force exerted on the electrons by the photons. The remaining Hxc part A_{Hxc} does not produce a global force,

\int \big[ j \times (\nabla \times A_{Hxc}) - n\, \partial_t A_{Hxc} \big]\, dx = 0,    (12)
which can be checked directly using Eqs. (11), (2), and (9). Apparently both the Hartree and the xc contributions to A_{Hxc} = A_H + A_{xc} satisfy the identity of Eq. (12) independently. Equation (12) is a generalization of the zero-force theorem [20] for the considered electron-photon system.
In this regard A_{xc} is similar to the xc potential in the usual TDCDFT for closed, purely electronic systems. If the electrons are driven by a scalar external potential V_{ext}(x, t), one can choose the density n(x, t) as the basic variable for the electronic degrees of freedom. Let us identify the photonic basic observables for the electron-photon TDDFT. First, we transform the photon field in Eq. (1) from the velocity to the length gauge [15]. Then we perform the canonical transformation of the photon variables, i\partial_{q_\alpha} \to \omega_\alpha p_\alpha, q_\alpha \to -i\omega^{-1}_\alpha \partial_{p_\alpha}, which "exchanges" the photon coordinates and momenta while preserving their commutation relations. The system is now described by the wave function \Phi(\{x_j\}, \{p_\alpha\}, t), which is the Fourier transform of \Psi(\{x_j\}, \{q_\alpha\}, t). The final transformed Hamiltonian takes the form
\hat{H} = \sum_j \Big[ -\frac{\nabla^2_j}{2m} + V_{ext}(x_j, t) \Big] + \sum_{i>j} W_{x_i - x_j} + \sum_\alpha \Big[ -\frac{1}{2}\partial^2_{p_\alpha} + \frac{\omega^2_\alpha}{2}\Big( p_\alpha - \frac{\lambda_\alpha}{\omega_\alpha}\hat{X} \Big)^2 + \frac{\dot{J}^\alpha_{ext}(t)}{\omega_\alpha}\, p_\alpha \Big],    (13)

where \hat{X} = \sum_{j=1}^{N} x_j. The structure of Eq. (13) suggests that the proper basic variables are the density n(x, t) and the expectation values P_\alpha(t) of the photon momenta:

n(x, t) = \langle \Phi | \hat{n}(x) | \Phi \rangle, \quad P_\alpha(t) = \langle \Phi | p_\alpha | \Phi \rangle.    (14)
Equations of motion for the basic variables read

\ddot{P}_\alpha + \omega^2_\alpha P_\alpha - \omega_\alpha \lambda_\alpha R = -\dot{J}^\alpha_{ext}/\omega_\alpha,    (15)

m\ddot{n} + \nabla F_{str} + \sum_\alpha \nabla f_\alpha = \nabla(n \nabla V_{ext}),    (16)

where R(t) = \langle \Phi | \hat{X} | \Phi \rangle = \int x\, n(x, t)\, dx is the expectation value of the center-of-mass coordinate, and F_{str} = im \langle \Phi | [\hat{T} + \hat{W}, \hat{j}_p] | \Phi \rangle = -\nabla\Pi is the usual electronic stress force, which is equal to the divergence of the electronic stress tensor. The force f_\alpha(x, t) exerted on the electrons by the \alpha-photon mode is given by the expression

f_\alpha(x, t) = \lambda_\alpha \langle \Phi | (\omega_\alpha p_\alpha - \lambda_\alpha \hat{X})\, \hat{n}(x) | \Phi \rangle.    (17)
Now we are ready to prove the uniqueness of the map \{\Phi_0, n, P_\alpha\} \to \{\Phi, V_{ext}, J^\alpha_{ext}\} from the initial state and the observables to the time-dependent wave function and the external fields. The corresponding many-body NLSE is constructed using Eqs. (15) and (16). By solving Eqs. (15) and (16) for \dot{J}^\alpha_{ext} and V_{ext} we obtain the external fields as functionals of the basic variables and the instantaneous wave function \Phi(t): V_{ext}[\Phi, n] and \dot{J}^\alpha_{ext}[n, P_\alpha]. Inserting these functionals into Eq. (13) we obtain a \Phi-dependent Hamiltonian \hat{H}[n, P_\alpha, \Phi] of the many-body NLSE that determines the wave function for a given initial state \Phi_0, density n(t), and photon momenta P_\alpha(t). The uniqueness of a solution to this NLSE is demonstrated in exactly the same way as in the above TDCDFT case, provided the standard t-analyticity conditions are fulfilled. This proves the generalized TDDFT mapping theorem: the many-body wave function and the external fields are unique functionals of the basic variables, n and P_\alpha, and the initial state [21].
The KS system can again be constructed explicitly. Consider a system of noninteracting particles described by N KS orbitals which satisfy the equations

i\partial_t \phi_j = -\frac{\nabla^2}{2m}\phi_j + \Big[ V_S + \sum_\alpha (\omega_\alpha P_\alpha - \lambda_\alpha R)\, \lambda_\alpha x \Big]\phi_j,    (18)

where the second term in the square brackets is the mean-field analog of the electron-photon interaction term in Eq. (13). The force balance equation for this system takes the form

m\ddot{n} + \nabla F^S_{str} + \nabla \sum_\alpha \lambda_\alpha (\omega_\alpha P_\alpha - \lambda_\alpha R)\, n = \nabla(n \nabla V_S),    (19)

where F^S_{str} = im \langle \Phi_S | [\hat{T}, \hat{j}_p] | \Phi_S \rangle = -\nabla\Pi_S is the kinetic stress force of noninteracting fermions [\Phi_S(t) is the KS Slater determinant]. By applying the NLSE arguments to Eqs. (18)-(19) we conclude that \phi_j and V_S are unique functionals of n(x, t), P_\alpha(t), and the KS initial state \Phi^S_0. Then, from Eqs. (16) and (19), one finds that the KS density reproduces the exact density if V_S is of the form
V_S = V_{ext} + V^{el}_{Hxc} + \sum_\alpha V^\alpha_{xc},    (20)

where the universal functionals V^{el}_{Hxc}[n, P] and V^\alpha_{xc}[n, P] are defined via the following Sturm-Liouville problems:

\nabla(n \nabla V^{el}_{Hxc}) = \nabla(F^S_{str} - F_{str}) = \nabla(\nabla \Pi_{Hxc}),    (21)

\nabla(n \nabla V^\alpha_{xc}) = \nabla \lambda_\alpha \langle \Phi | (\lambda_\alpha \Delta\hat{X} - \omega_\alpha \Delta p_\alpha)\, \Delta\hat{n} | \Phi \rangle,    (22)
with \Delta p_\alpha = p_\alpha - P_\alpha(t) and \Delta\hat{X} = \hat{X} - R(t) being the fluctuation operators for the photon momenta and the center-of-mass coordinate of the electrons. Interestingly, in TDDFT the total xc potential is naturally separated into the usual electronic stress contribution and contributions assigned to each photon mode. It is obvious from Eqs. (21) and (22) that each contribution to the total xc potential satisfies the zero-force theorem, \int n \nabla V^{el}_{Hxc}\, dx = \int n \nabla V^\alpha_{xc}\, dx = 0. The net photon force exerted on the electrons is fully captured by the mean-field electron-photon potential in Eq. (18). The zero-force theorem is a consequence of the harmonic potential theorem (HPT) [20,22], which also holds true here, as the photons form a set of harmonic oscillators coupled bilinearly to the electronic center of mass.
In practice any DFT-type approach requires approximations for the xc potentials. In the present generalization of the theory we have defined the xc potential in such a way that it has the same general properties, and obeys the same set of constraints, as the xc potential in the usual purely electronic TDDFT. This suggests natural strategies for constructing approximations.
(i) The first possibility is a velocity-gradient expansion. At the zero level we set V^\alpha_{xc} = 0 and take V^{el}_{xc} = V^{ALDA}_{xc}, the xc potential in the standard adiabatic local density approximation (ALDA). This seemingly naive approximation exactly reproduces the correct HPT-type dynamics, since for rigid motion with a uniform velocity v = j/n the effect of the photons on the density dynamics is exhausted by the mean-field contribution. The zero-level approximation can be viewed as a generalization of ALDA. The dynamical corrections should be proportional to velocity gradients. It should be possible to derive them perturbatively in the TDCDFT scheme along the lines of the Vignale-Kohn approximation [23,24].
(ii) Probably a more promising strategy is to make a connection between the effective KS potential and many-body theory [25]. Beyond the mean-field level the electron-photon coupling generates a retarded photon-mediated interaction between the electrons. The corresponding photon propagators enter the diagrams for the electronic self-energy as additional interaction lines. The new contribution to the self-energy can then be connected to the xc potential via the Sham-Schlüter equation [26,27]. In principle the corresponding xc potential can be constructed perturbatively to any desired order in the coupling constant [28]. However, already the simplest approximation, generated by the exchange-like diagram, is expected to capture the important physics. Formally this approximation for V^\alpha_{xc} is an analog of the x-only optimized effective potential [27,29]. Physically it should be responsible for Lamb-shift effects and for spontaneous photon emission in nonequilibrium situations.
Exploring the practical performance of these approximations is an interesting direction for future research.
If the functionals V^{el}_{Hxc}[n, P_\alpha] and V^\alpha_{xc}[n, P_\alpha] are known, the basic variables n(x, t) and P_\alpha(t) can be calculated by solving Eqs. (18) and (15). In general the KS Eq. (18) has to be solved numerically, while Eq. (15) always admits an analytic solution. For example, for the equilibrium initial state and J^\alpha_{ext} = 0 this solution reads

P_\alpha(t) = \int_0^t \sin[\omega_\alpha(t - t')]\, \lambda_\alpha R(t')\, dt'.    (23)

By substituting Eq. (23) into Eq. (18) we eliminate the photon variables and obtain a KS equation involving only the electronic density. Thus we obtain TDDFT for an open quantum system: the KS equation now describes in a closed form only the electronic part of the full electron-photon system. Formally, Eq. (13) is a version of the Caldeira-Leggett (CL) model [30,31]. Therefore, as a byproduct, we have obtained TDDFT for open systems coupled to a CL bath of harmonic oscillators. Let us assume an Ohmic spectral density of the bath, \pi \sum_\alpha \lambda^\mu_\alpha \lambda^\nu_\alpha \delta(\omega - \omega_\alpha) = 2\eta\,\delta_{\mu\nu}, where \eta is the friction constant. In this case the selfconsistent potential in Eq. (18) reduces to the form V_{eff} = V_{Hxc} + \eta N \dot{R} x. The last, mean-field term is exactly the potential in the phenomenological dissipative NLSE proposed by Albrecht [32,33]. Hence already at the zero level we recover one of the heuristic theories of quantum dissipation. Deficiencies of the Albrecht equation should be corrected by going beyond the zero-level approximation. Currently there are several formulations of TDDFT for open systems, based on the master equation for the density matrix [34-36] or on the stochastic Schrödinger equation [37,38]. At the level of the final KS equations our theory is similar to the formulation of Refs. [35,36], which also allows for a unitary propagation of the KS orbitals. The conceptual difference is that in the present case both the TDDFT mapping and the approximation strategies are universally valid for the cavity situation with a few discrete photon modes and for a bath with a continuous spectral density. The bath is traced out at the very last step, after the TDDFT framework and the approximations have been set up.

In conclusion, TD(C)DFT for systems strongly coupled to cavity photon fields has been proposed. We proved the corresponding generalizations of the mapping theorems, established the existence of the KS system, and suggested a few technically feasible approximation strategies. In the limit of a dense spectrum of photon modes this approach naturally leads to TD(C)DFT for open quantum systems. This work is a step towards an ab initio theory of various cavity/circuit QED experiments and a practical TDDFT for dissipative systems.

This work was supported by the Spanish MEC (FIS2007-65702-C02-01).
[1] P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
[2] E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984).
[3] Fundamentals of Time-Dependent Density Functional Theory, edited by M. A. Marques, N. T. Maitra, F. M. Nogueira, E. Gross, and A. Rubio (Springer, Berlin, 2012).
[4] C. A. Ullrich, Time-Dependent Density-Functional Theory: Concepts and Applications (Oxford University Press, New York, 2012).
[5] H. Mabuchi and A. C. Doherty, Science 298, 1372 (2002).
[6] J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001).
[7] H. Walther, B. T. Varcoe, B.-G. Englert, and T. Becker, Rep. Prog. Phys. 69, 1325 (2006).
[8] A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A 69, 062320 (2004).
[9] A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf, Nature 431, 162 (2004).
[10] J. Q. You and F. Nori, Nature 474, 589 (2011).
[11] T. Schwartz, J. A. Hutchison, C. Genet, and T. W. Ebbesen, Phys. Rev. Lett. 106, 196405 (2011).
[12] J. A. Hutchison, T. Schwartz, C. Genet, E. Devaux, and T. W. Ebbesen, Angew. Chem. Int. Ed. 51, 1592 (2012).
[13] A. F. i Morral and F. Stellacci, Nat. Mat. 11, 272 (2012).
[14] A recent fully relativistic QED-based formulation of TDCDFT [39] hints at the existence of such theories.
[15] F. H. M. Faisal, Theory of Multiphoton Processes (Plenum Press, New York, 1987).
[16] I. V. Tokatly, Chem. Phys. 391, 78 (2011).
[17] N. T. Maitra, T. N. Todorov, C. Woodward, and K. Burke, Phys. Rev. A 81, 042525 (2010).
[18] R. van Leeuwen, Phys. Rev. Lett. 82, 3863 (1999).
[19] G. Vignale, Phys. Rev. A 77, 062511 (2008).
[20] G. Vignale, Phys. Rev. Lett. 74, 3233 (1995).
[21] The level of mathematical rigor and the conditions of the present mapping theorems are exactly the same as for the standard purely electronic TD(C)DFT. Strictly speaking, no extra assumption is required if the number of photon modes is arbitrarily large, but finite.
[22] J. F. Dobson, Phys. Rev. Lett. 73, 2244 (1994).
[23] G. Vignale and W. Kohn, Phys. Rev. Lett. 77, 2037 (1996).
[24] G. Vignale, C. A. Ullrich, and S. Conti, Phys. Rev. Lett. 79, 4878 (1997).
[25] M. Gatti, V. Olevano, L. Reining, and I. V. Tokatly, Phys. Rev. Lett. 99, 057401 (2007).
[26] L. J. Sham and M. Schlüter, Phys. Rev. Lett. 51, 1888 (1983).
[27] R. van Leeuwen, Phys. Rev. Lett. 76, 3610 (1996).
[28] I. V. Tokatly and O. Pankratov, Phys. Rev. Lett. 86, 2078 (2001).
[29] C. A. Ullrich, U. J. Gossmann, and E. K. U. Gross, Phys. Rev. Lett. 74, 872 (1995).
[30] A. Caldeira and A. Leggett, Physica A 121, 587 (1983).
[31] A. Caldeira and A. Leggett, Ann. Phys. 149, 374 (1983).
[32] K. Albrecht, Phys. Lett. B 56, 127 (1975).
[33] R. W. Hasse, J. Math. Phys. 16, 2005 (1975).
[34] K. Burke, R. Car, and R. Gebauer, Phys. Rev. Lett. 94, 146803 (2005).
[35] J. Yuen-Zhou, C. Rodriguez-Rosario, and A. Aspuru-Guzik, Phys. Chem. Chem. Phys. 11, 4509 (2009).
[36] J. Yuen-Zhou, D. G. Tempel, C. A. Rodríguez-Rosario, and A. Aspuru-Guzik, Phys. Rev. Lett. 104, 043001 (2010).
[37] M. Di Ventra and R. D'Agosta, Phys. Rev. Lett. 98, 226403 (2007).
[38] R. D'Agosta and M. Di Ventra, Phys. Rev. B 78, 165105 (2008).
[39] M. Ruggenthaler, F. Mackenroth, and D. Bauer, Phys. Rev. A 84, 042107 (2011).
[
"KÜNNETH FORMULAS FOR MOTIVES AND ADDITIVITY OF TRACES"
] | [
"Fangzhou Jin",
"Enlin Yang"
] | [] | [] | We prove several Künneth formulas in motivic homotopy categories and deduce a Verdier pairing in these categories following SGA5, which leads to the characteristic class of a constructible motive, an invariant closely related to the Euler-Poincaré characteristic. We prove an additivity property of the Verdier pairing using the language of derivators, following the approach of May and Groth-Ponto-Shulman; using such a result we show that in the presence of a Chow weight structure, the characteristic class for all constructible motives is uniquely characterized by proper covariance, additivity along distinguished triangles, refined Gysin morphisms and Euler classes. In the relative setting, we prove the relative Künneth formulas under some transversality conditions, and define the relative characteristic class. | 10.1016/j.aim.2020.107446 | [
"https://arxiv.org/pdf/1812.06441v3.pdf"
] | 85543310 | 1812.06441 | 5f9f32fe49ea6350ad531b009086fbcde5e993fb

KÜNNETH FORMULAS FOR MOTIVES AND ADDITIVITY OF TRACES

FANGZHOU JIN AND ENLIN YANG

29 Mar 2019 · arXiv:1812.06441v3 [math.AG]
1.1.1. The Euler-Poincaré characteristic (or Euler characteristic) is an important invariant of topological spaces in algebraic topology which gives rise to various generalizations in geometry, homological algebra and category theory. In topology, the Euler characteristic of a finite CW-complex is the alternating sum of the dimensions of its singular homology groups. In algebraic geometry, this notion is generalized for étale sheaves as follows: let X be a separated scheme of finite type over a perfect field k of characteristic p; if ℓ is a prime different from p and F is a constructible complex of ℓ-adic étale sheaves over X, then the Euler characteristic (with compact support)
$$\chi_c(X_{\bar{k}}, \mathcal{F}) = \sum_{i \geqslant 0} (-1)^i \cdot \dim H^i_c(X_{\bar{k}}, \mathcal{F}) \qquad (1.1.1.1)$$
is a well-defined integer, by the finiteness theorems in [SGA4.5].
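Two standard examples of (1.1.1.1) (classical computations, added here for illustration only):

```latex
% H^i_c of the affine line over \bar{k}: only H^2_c is nonzero,
% H^2_c(\mathbb{A}^1_{\bar k}, \mathbb{Q}_\ell) \simeq \mathbb{Q}_\ell(-1),
% hence
\chi_c(\mathbb{A}^1_{\bar k}, \mathbb{Q}_\ell) = (-1)^2 \cdot 1 = 1;
% for the multiplicative group, H^1_c and H^2_c are both one-dimensional,
% so
\chi_c(\mathbb{G}_{m,\bar k}, \mathbb{Q}_\ell)
   = (-1)^1 \cdot 1 + (-1)^2 \cdot 1 = 0.
```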
1.1.2. Morel and Voevodsky introduced motivic homotopy theory ([MV98]), where one studies cohomology theories over algebraic varieties by means of the homotopy theory relative to the affine line A^1, leading to several triangulated categories of motives: the stable motivic homotopy category SH classifies cohomology theories which satisfy A^1-homotopy invariance, and Voevodsky's category of motivic complexes DM ([VSF00]) computes motivic cohomology. These categories are built in a style very close to the derived category of ℓ-adic étale sheaves: the work of Ayoub ([Ayo07]) and Cisinski-Déglise ([CD19]) establishes a six functors formalism similar to the powerful machinery in [SGA4], and the étale realization functor ([Ayo14], [CD16]) gives a map from motives to the derived category of étale sheaves which preserves the six functors, generalizing the cycle class map in étale cohomology [SGA4.5, Cycle].
1.1.3. A natural question arises to define the Euler characteristic of a motive. However, for constructible objects in the categories of motives mentioned above, the very definition with (1.1.1.1) apparently does not work, since motivic cohomology groups, or equivalently Bloch's higher Chow groups ([Blo86]), are in general infinite-dimensional as vector spaces. Instead, there is a more categorical approach using the trace of a morphism: recall that if C is a symmetric monoidal category with unit 1, M is a (strongly) dualizable object in C (which corresponds to locally constant or smooth sheaves in the étale setting) with dual $M^\vee$ and $u: M \to M$ is an endomorphism of M, then the trace of u is the map
$$Tr(u): \mathbb{1} \xrightarrow{\;\eta\;} M^\vee \otimes M \xrightarrow{\;\mathrm{id} \otimes u\;} M^\vee \otimes M \simeq M \otimes M^\vee \xrightarrow{\;\epsilon\;} \mathbb{1} \qquad (1.1.3.1)$$
considered as an endomorphism of the unit 1, where $\eta$ and $\epsilon$ are the unit and counit maps of the duality. The Euler characteristic of M is defined as the trace of the identity map of M. If k is a field, in the stable motivic homotopy category SH(k) the endomorphism ring of the unit $\mathbb{1}_k$ is identified as
$$\mathrm{End}_{SH(k)}(\mathbb{1}_k) \simeq GW(k) \qquad (1.1.3.2)$$
where GW (k) is the Grothendieck-Witt ring of k, that is, the Grothendieck group of non-degenerate quadratic forms over k. Therefore the Euler characteristic of motives in this case is an invariant in terms of quadratic forms, refining the usual integer-valued Euler characteristic.
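For orientation, a standard example of this quadratic refinement (our illustration; the splitting of the motive of the projective line is classical):

```latex
% Illustration: in SH(k) the motive of the projective line splits,
% \Sigma^\infty_+ \mathbb{P}^1 \simeq \mathbb{1} \oplus \mathbb{1}(1)[2],
% and the trace is additive on direct sums, so
\chi(\mathbb{P}^1_k) = \chi(\mathbb{1}) + \chi(\mathbb{1}(1)[2])
                     = \langle 1 \rangle + \langle -1 \rangle \in GW(k).
% This hyperbolic form has rank 2, recovering the topological Euler
% characteristic of \mathbb{P}^1(\mathbb{C}), and signature 0,
% recovering that of \mathbb{P}^1(\mathbb{R}) when k = \mathbb{R}.
```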
1.1.4. For example, if $f: X \to Y$ is a smooth and proper morphism, then the motive $M_Y(X) = f_\# \mathbb{1}_X$ is dualizable ([Hoy15], [Lev18a]); the motivic Gauss-Bonnet formula computes its Euler characteristic in terms of the Euler class of the tangent bundle of f.

1.1.5. In [SGA5, III], this formula for constant coefficients is generalized to a more general form, called the Lefschetz-Verdier formula. In order to express this last formula, a very general cohomological pairing, called the Verdier pairing, is constructed from several Künneth type formulas for étale sheaves and a delicate analysis of the six functors. The idea behind this construction is the formalism of Grothendieck-Verdier local duality ([CD19, 4.4.23]) which, in the setting of the six functors, gives rise to a generalized trace map (see (1.3.2.1) below), in the way that the usual formalism of (strong) duality produces the trace map (1.1.3.1); the construction works not only for dualizable objects, but also for all constructible ones, which can be considered as weakly dualizable. If X is a scheme, F is a constructible complex of ℓ-adic étale sheaves over X and u is an endomorphism of F, this generalized trace for u is an element in the group of global sections of the dualizing complex over X, called the characteristic class of u, or the characteristic class of F when u is the identity map ([AS07, Definition 2.1.1]).
1.1.6. The characteristic class is closely related to the Euler characteristic: when X is the spectrum of the base field k, the characteristic class agrees with the Euler characteristic; more generally, if f ∶ X → Spec(k) is a proper morphism, the Lefschetz-Verdier formula implies that the degree of the characteristic class of F agrees with the Euler characteristic of Rf * F.
1.1.7. The main goal of this paper is to define the characteristic class for constructible motives and study its properties. Given the six functors formalism in the motivic context, analogous to the classical one, we would like to use the very definition of [SGA5,III] to define the Verdier pairing. For this, a major input is the proof of some Künneth formulas for motives.
1.2. Künneth formulas for motives.
1.2.1. In Section 2, we prove several general Künneth formulas for motives that would lead to the Verdier pairing, summarized as follows:
Theorem 1.2.2 (see Theorem 2.4.6). Let $f_1: X_1 \to Y_1$ and $f_2: X_2 \to Y_2$ be two morphisms between separated schemes of finite type over a field k, with the following commutative diagram
$$\begin{array}{ccccc}
X_1 & \xleftarrow{\;p_1\;} & X_1 \times_k X_2 & \xrightarrow{\;p_2\;} & X_2 \\
{\scriptstyle f_1}\big\downarrow & & {\scriptstyle f}\big\downarrow & & \big\downarrow{\scriptstyle f_2} \\
Y_1 & \xleftarrow{\;p'_1\;} & Y_1 \times_k Y_2 & \xrightarrow{\;p'_2\;} & Y_2.
\end{array} \qquad (1.2.2.1)$$
Let $T_c$ be the category of constructible motivic spectra $SH_c$ or the category of constructible cdh-motives $DM_{cdh,c}$ (more generally, $T_c$ can be the subcategory of constructible objects in a motivic triangulated category, see Definition 2.0.1). We assume resolution of singularities (by blowups or by alterations, see the condition (RS) in 2.1.12 below). For $i = 1, 2$, consider objects $L_i \in T_c(X_i)$ and $M_i, N_i \in T_c(Y_i)$. Then there are canonical isomorphisms
$$p'^*_1 f_{1*} L_1 \otimes p'^*_2 f_{2*} L_2 \to f_*(p_1^* L_1 \otimes p_2^* L_2); \qquad (1.2.2.2)$$
$$p'^*_1 f_{1!} L_1 \otimes p'^*_2 f_{2!} L_2 \to f_!(p_1^* L_1 \otimes p_2^* L_2); \qquad (1.2.2.3)$$
$$p_1^* f_1^! M_1 \otimes p_2^* f_2^! M_2 \to f^!(p'^*_1 M_1 \otimes p'^*_2 M_2); \qquad (1.2.2.4)$$
$$p'^*_1 \mathrm{Hom}(M_1, N_1) \otimes p'^*_2 \mathrm{Hom}(M_2, N_2) \to \mathrm{Hom}(p'^*_1 M_1 \otimes p'^*_2 M_2, p'^*_1 N_1 \otimes p'^*_2 N_2). \qquad (1.2.2.5)$$

1.2.3. The proof of Theorem 2.4.6 is quite different from the classical case: the main ingredient of the proof is the strong devissage property (Definition 2.1.10), which says that under resolution of singularities, the category of constructible motives is generated by (relative) Chow motives as a thick subcategory; we therefore reduce to the case of Chow motives, in which case a careful manipulation of the functors gives the desired isomorphisms. The isomorphism (1.2.2.3) involving $f_!$ is quite formal, and holds more generally when we replace the base field by any base scheme, while the other ones fail to hold in general. We will see later in Section 6 that under some assumptions they also hold in the relative case.
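For orientation (a standard specialization of the theorem; the notation $a_1$, $a_2$, $a$ for the structural morphisms is ours): taking $Y_1 = Y_2 = \operatorname{Spec} k$ in (1.2.2.2) gives a Künneth formula for external products:

```latex
% With Y_1 = Y_2 = Spec k, the projections p'_i are identities, f is the
% structural morphism a of X_1 x_k X_2, and (1.2.2.2) becomes
a_{1*} L_1 \otimes a_{2*} L_2 \;\xrightarrow{\;\sim\;}\;
a_*\big(L_1 \boxtimes L_2\big),
\qquad L_1 \boxtimes L_2 := p_1^* L_1 \otimes p_2^* L_2,
% a motivic analogue of the classical Künneth formula for étale sheaves.
```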
1.3. The Verdier pairing and the characteristic class.
1.3.1. In Section 3, we use the Künneth formulas to define the Verdier pairing, following [SGA5, III]. Let $X_1$ and $X_2$ be two separated schemes of finite type over k. Let $c: C \to X_1 \times_k X_2$ and $d: D \to X_1 \times_k X_2$ be two morphisms. We denote by $E = C \times_{X_1 \times_k X_2} D$. For $i = 1, 2$, we denote by $p_i: X_1 \times_k X_2 \to X_i$ the projections, $c_i = p_i \circ c: C \to X_i$, $d_i = p_i \circ d: D \to X_i$, and let $L_i \in T_c(X_i)$.
Then given two maps $u: c_1^* L_1 \to c_2^! L_2$ and $v: d_2^* L_2 \to d_1^! L_1$, the Verdier pairing $\langle u, v \rangle$ is an element of the bivariant group (or Borel-Moore theory group) $H_0(E/k)$ (see Definition 5.1.3), seen as a map
$$\langle u, v \rangle: \mathbb{1}_E \to K_E \qquad (1.3.1.1)$$
where $K_E = D(\mathbb{1}_E)$ is the dualizing object (Definition 3.1.8). The Lefschetz-Verdier formula (Proposition 3.1.6) states that this pairing is compatible with proper direct images. In Proposition 3.2.5 we show that this pairing can always be reduced to a generalized trace map, which we explicitly identify in Proposition 3.2.8.

1.3.2. When $C = D = X_1 = X_2 = X$ and $c = d$ is the diagonal, the pairing applied to an endomorphism $u$ of $M \in T_c(X)$ yields the characteristic class $C_X(M, u) \in H_0(X/k)$, where the second map in its defining composition is deduced from the Künneth formulas. We denote $C_X(M) = C_X(M, 1_M)$. The bivariant group $H_0(X/k)$ in which it lives can be computed in many cases:
• If M is a constructible cdh-motive in $DM_{cdh,c}(X, \mathbb{Z}[1/p])$, the characteristic class $C_X(M)$ is a 0-cycle in the Chow group $CH_0(X)[1/p] = CH_0(X) \otimes_{\mathbb{Z}} \mathbb{Z}[1/p]$ of X up to p-torsion.
• If M is a constructible element in the homotopy category of KGL-modules over X, the characteristic class $C_X(M)$ is an element of the 0-th algebraic G-theory group $G_0(X)$.
In other words, the characteristic class associates to every constructible motive a concrete object, which is realized as either a 0-cycle, a formal sum of coherent sheaves or a Milnor-Witt 0-cycle. It lifts the ℓ-adic characteristic class to the cycle-theoretic level, therefore giving an illustration of the general philosophy of mixed motives.
1.3.3. The results in Section 3 are rather transcriptions of classical results in our context, but will be important for the next sections. A particular case of the construction is already given in [Ols16]. For h-motives in $DM_h$, the characteristic class is also defined independently in [Cis19].

1.4. Additivity of traces.

1.4.1. It is known that in a general triangulated category, the trace map fails to be additive along distinguished triangles ([Fer05]). This failure may be explained by the defect of the axioms of a triangulated category, where the cone of a morphism exists uniquely only up to isomorphism but not up to unique isomorphism; more concretely, a commutative diagram between distinguished triangles in the derived category does not always reflect a commutative diagram in the category of complexes, which would mean to be "truly commutative". Behind such phenomena lies the idea of higher category theory as illustrated by the vast theory of (∞, 1)-categories ([Lur09]).
1.4.2. A landmark breakthrough in this direction was made in [May01], where it is shown that the trace map is additive along distinguished triangles for triangulated categories satisfying some extra axioms that arise naturally from topology; in the same spirit, the additivity of traces is generalized to stable derivators in [GPS14], where it is shown that additivity holds for an endomorphism of distinguished triangles if the diagram commutes in the strong sense of derivators. A derivator, in some sense, lies between a 1-category and an (∞, 1)-category: in it one can define left and right homotopy Kan extensions using only 1-categorical language, and it carries enough information to characterize homotopy limits and colimits by 1-categorical universal properties. In particular the axioms of a stable derivator produce functorial cone objects, fixing the problem above for triangulated categories.
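A first consequence worth recording (our sketch; this is the classical May-type additivity, recovered from the additivity of traces applied to identity maps):

```latex
% Sketch: any distinguished triangle L -> M -> N -> L[1] in SH_c(k)
% lifts to a coherent biCartesian square in the underlying derivator,
% and applying additivity of traces to the identity endomorphisms gives
\chi(M) \;=\; \chi(L) + \chi(N) \quad \text{in } GW(k).
% For instance, the triangle
% \mathbb{1} \to \Sigma^\infty_+ \mathbb{P}^1 \to \mathbb{1}(1)[2] \to
% \mathbb{1}[1] recovers
% \chi(\mathbb{P}^1) = \langle 1 \rangle + \langle -1 \rangle.
```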
Theorem 1.4.4 (see Theorem 4.2.8). With the notation of 1.3.1, assume that $T_c$ underlies a constructible motivic derivator, and for $i = 1, 2$ let
$$\begin{array}{ccc}
L_i & \longrightarrow & M_i \\
\big\downarrow & {\scriptstyle \Gamma_i} & \big\downarrow \\
{*} & \longrightarrow & N_i
\end{array} \qquad (1.4.4.1)$$
be a coherent biCartesian square $\Gamma_i$ in $T_c(X_i, \square)$. Let $f: c_1^* \Gamma_1 \to c_2^! \Gamma_2$ and $g: d_2^* \Gamma_2 \to d_1^! \Gamma_1$ be morphisms of coherent squares in $T_c(C, \square)$ and $T_c(D, \square)$. Then the Verdier pairing satisfies
$$\langle f_M, g_M \rangle = \langle f_L, g_L \rangle + \langle f_N, g_N \rangle \qquad (1.4.4.2)$$
where $f_M: c_1^* M_1 \to c_2^! M_2$
is the restriction of f , and similarly for the other maps. 1.4.5. The above result corresponds to [SGA5, III (4.13.1)] which claims the additivity of the Verdier pairing in the filtered derived category. The strategy of the proof is to first use Proposition 3.2.5 to reduce the pairing to a generalized trace map with one single entry. Then we follow closely the same steps of proof as in [GPS14], where we need to check the same axioms for the local duality functor instead of the usual duality functor. Note that all the usual examples such as SH c or DM cdh,c arise from constructible motivic derivators, so working with derivators is not a restriction in practice.
1.5. A characterization of the characteristic class of a motive.
1.5.1. There has been an extensive study in the literature around the Euler characteristic of étale sheaves via ramification theory, see for example [AS07], [KS08] and [Sai17]. In this paper, we use a different approach to give a description of the characteristic class for cdh-motives in $DM_{cdh,c}$. In Section 5 we start with the study of some elementary properties of the characteristic class, using the (Fulton-style) intersection theory developed in [DJK18]. The main result is the following characterization of the characteristic class for cdh-motives over a perfect field: Theorem 1.5.2 (see Theorem 5.2.6). Assume that the base field k is perfect, and let X be a scheme. Then the map
$$DM_{cdh,c}(X, \mathbb{Z}[1/p]) \to CH_0(X)[1/p], \qquad M \mapsto C_X(M) \qquad (1.5.2.1)$$
is the unique map satisfying the following properties:
(1) For any distinguished triangle $L \to M \to N \to L[1]$ in $DM_{cdh,c}(X)$, $C_X(M) = C_X(L) + C_X(N)$.
(2) Let $f: Y \to X$ be a proper morphism with Y smooth of dimension d over k and let M be the direct summand of the Chow motive $f_* \mathbb{1}_Y(n)$ defined by an endomorphism u. Then u is identified with a cycle $u' \in CH_d(Y \times_X Y)[1/p]$, and we have
$$C_X(M) = C_X(f_* \mathbb{1}_Y(n), u) = f_* \Delta^! u' \in CH_0(X)[1/p] \qquad (1.5.2.2)$$
where $f_*: CH_0(Y)[1/p] \to CH_0(X)[1/p]$ is the proper push-forward and $\Delta^!: CH_d(Y \times_X Y)[1/p] \to CH_0(Y)[1/p]$ is the refined Gysin morphism ([Ful98, 6.2]) associated to the Cartesian square
$$\begin{array}{ccc}
Y & \xrightarrow{\;\delta_{Y/X}\;} & Y \times_X Y \\
\big\downarrow & {\scriptstyle \Delta} & \big\downarrow \\
Y & \xrightarrow{\;\delta_{Y/k}\;} & Y \times_k Y.
\end{array} \qquad (1.5.2.3)$$
There is an alternative description using the Euler class (i.e. top Chern class), see Proposition 5.1.15 below.
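To illustrate the formula in (2) (a sketch; the self-intersection formula for the diagonal of a smooth variety is classical [Ful98]): take $M = f_* \mathbb{1}_Y$ with $u = \mathrm{id}$, so that $u'$ is the class of the relative diagonal.

```latex
% Sketch: with u = id, u' = [\delta_{Y/X}(Y)] \in CH_d(Y \times_X Y)[1/p].
% When X = Spec k the two diagonals coincide, and \Delta^! computes the
% self-intersection of the diagonal of the smooth variety Y:
\Delta^! [\delta_{Y/k}(Y)] \;=\; c_d(T_Y) \cap [Y] \;\in\; CH_0(Y)[1/p],
% so the characterization gives
C_{\operatorname{Spec} k}\big(f_* \mathbb{1}_Y\big)
   \;=\; \deg\big(c_d(T_Y) \cap [Y]\big) \;=\; \chi(Y),
% the Euler characteristic of Y, in agreement with 1.1.6 (Gauss-Bonnet).
```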
1.5.3. The idea is as follows: Bondarko's theory of weight structures ([Bon10], [BI15]) implies that $DM_{cdh,c}$ is generated by Chow motives not only as a thick triangulated category but also as a triangulated category, and therefore by additivity of traces it suffices to compute the characteristic class for Chow motives, which can be achieved using intersection theory. This description also holds when we replace $DM_{cdh,c}$ by the homotopy category of KGL-modules, since the Chow weight structure also exists by the results of [BL16]. In general, the characterization holds over the sub-triangulated category generated by direct summands of Chow motives.
1.5.4. While our result gives an abstract characterization of the characteristic class, we expect it to be related to the Grothendieck-Ogg-Shafarevich type results in [AS07]. When the base field is not perfect, there is a similar characterization by perfection using the work of [EK18], see Remark 5.2.7 below.
1.5.5. In Section 5.3 we show the compatibility between the characteristic class and Riemann-Roch transformations. If X is a scheme and $M \in SH_c(X)$, then we can canonically associate to M a constructible element in the homotopy category of KGL-modules over X, as well as an element in $DM_{cdh,c}(X)$ (see 5.3.1). Then the Riemann-Roch transformation
$$\tau_X: G_0(X) \to \bigoplus_{i \in \mathbb{Z}} CH_i(X)_{\mathbb{Q}}$$
constructed in [Ful98, Theorem 18.3] sends the characteristic class of the former to that of the latter. In Corollary 5.3.4 we prove a more general version of such a result.
1.6. The relative case.
1.6.1. In Section 6 we prove some relative Künneth formulas, following the approach in [YZ18]. We first introduce the transversality conditions (Definition 6.1.3), which are closely related to the notion of purity in [DJK18] (see 6.1.5); instead of making use of the geometric notion of singular support as in [YZ18], our definition is a more categorical one extracted from the spirit of [Sai17]. We show that under such conditions and some smoothness assumptions, the Künneth formulas (1.2.2.4) and (1.2.2.5) still hold over a general base scheme (see Theorem 6.2.7). The proof uses the Künneth formulas over a field in Section 2. As a special case, we obtain the following result: Corollary 1.6.2 (see Corollary 6.2.4). Let $T_c$ be the subcategory of constructible objects in a motivic triangulated category which satisfies resolution of singularities (see condition (RS) in 2.1.12). Let S be a smooth k-scheme, let $\pi: X \to S$ be a smooth morphism and let $F \in T_c(X)$. If $\pi$ is universally F-transversal (see Definition 6.1.3 below), then there is a canonical isomorphism
$$p_1^* F \otimes p_2^* \mathrm{Hom}(F, \pi^! \mathbb{1}_S) \xrightarrow{\;\sim\;} \mathrm{Hom}(p_2^* F, p_1^! F) \qquad (1.6.2.1)$$
where $p_i: X \times_S X \to X$ is the projection for $i = 1, 2$.
1.6.3. These Künneth formulas are sufficient to define the relative Verdier pairing, as well as the relative characteristic class (Definition 6.2.13), in the same way as in the absolute case. In the case of $DM_{cdh}$, if S is a smooth scheme of dimension n, then the relative characteristic class is given by an n-cycle up to p-torsion which, via Fulton's specialization of cycles ([Ful98, Section 10.1]), specializes to the 0-cycles given by the characteristic classes of its fibers. We prove a more general version in Proposition 6.2.16.
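Schematically, writing $C_{X/S}(M)$ for the relative class and $\mathrm{sp}_s$ for Fulton's specialization at a point $s \in S$ (our notation; the precise statement is Proposition 6.2.16):

```latex
% Assumed notation: \pi: X -> S with S smooth of dimension n, s a point
% of S with fiber X_s, and sp_s the specialization map of [Ful98,
% Section 10.1] (an iterated Gysin pullback lowering dimension by n).
% Then
\mathrm{sp}_s: CH_n(X)[1/p] \longrightarrow CH_0(X_s)[1/p],
\qquad
\mathrm{sp}_s\big(C_{X/S}(M)\big) = C_{X_s}\big(M|_{X_s}\big),
% i.e. the relative characteristic class specializes to the absolute
% characteristic classes of the fibers.
```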
1.6.4. In Section 6.3 we establish an equivalence between several notions of local acyclicity and transversality conditions (Proposition 6.3.5 and Proposition 6.3.8). We also give an application using the Fulton-style specialization map in [DJK18, 4.5.6] (Corollary 6.3.10).

Acknowledgements. ... interaction between motives and ramification theory, and we would like to thank him for helpful discussions. The first named author would like to thank Frédéric Déglise and Adeel Khan for the collaboration in [DJK18] in which we constructed the intersection-theoretic tools in motivic homotopy theory that are used in this paper. We would like to thank Marc Levine for suggesting the formulation of Theorem 4.2.8. We would like to thank Lie Fu for telling us the trick in Lemma 2.2.3. We would like to thank Oliver Röndigs and Jakob Scholbach for helpful comments on a preliminary version. The first named author is partially supported by the DFG Priority Programme SPP 1786. The second named author is supported by Peking University's Starting Grant Nr.7101302006. Part of this work was done while both authors were members of SFB 1085 Higher Invariants at Universität Regensburg.
Notation and Conventions.
(1) Throughout the paper, we denote by k a field, and a scheme stands for a separated scheme of finite type over k. The category of schemes is denoted by Sch.
(2) For any pair of adjoint functors (F, G) between two categories, we denote by $ad_{(F,G)}: 1 \to GF$ and $ad'_{(F,G)}: GF \to 1$ the unit and counit maps of the adjunction.
(3) We say that a morphism of schemes $f: X \to Y$ is local complete intersection (abbreviated as "lci") if it factors as the composition of a regular closed immersion followed by a smooth morphism. We denote by $L_f$ or $L_{X/Y}$ its virtual tangent bundle in $K_0(X)$.
(4) ... the unit and counit maps of the monoidal structure.
2. KÜNNETH FORMULAS FOR MOTIVES
In this section we prove several Künneth formulas for motives whose analogues for ℓ-adic étale sheaves are proven in [SGA4.5] and [SGA5]. We start by recalling the axioms of a motivic triangulated category in the sense of [CD19, Definition 2.4.45]. We denote by SMTR the 2-category of symmetric monoidal triangulated categories with (strong) monoidal functors.
Definition 2.0.1. A motivic triangulated category is a (non-strict) 2-functor $(T, \otimes): Sch^{op} \to SMTR$ satisfying the following properties:
(1) The value of T at the empty scheme T(∅) is the zero category.
(2) For every morphism of schemes $f: Y \to X$, the functor $f^*: T(X) \to T(Y)$ is monoidal.
(3) For every smooth morphism $f: Y \to X$, the functor $f^*: T(X) \to T(Y)$ has a left adjoint $f_\#$,
such that
(a) For any commutative square of schemes
$$\begin{array}{ccc}
Y & \xrightarrow{\;q\;} & X \\
{\scriptstyle g}\big\downarrow & {\scriptstyle \Delta} & \big\downarrow{\scriptstyle f} \\
T & \xrightarrow{\;p\;} & S
\end{array} \qquad (2.0.1.1)$$
the following natural transformation is an isomorphism:
$$q_\# g^* \xrightarrow{\;ad_{(p_\#, p^*)}\;} q_\# g^* p^* p_\# = q_\# q^* f^* p_\# \xrightarrow{\;ad'_{(q_\#, q^*)}\;} f^* p_\#. \qquad (2.0.1.2)$$
(b) For any smooth morphism $f: Y \to X$ and any objects $M \in T(Y)$, $N \in T(X)$, the following transformation is an isomorphism:
$$f_\#(M \otimes f^* N) \xrightarrow{\;ad_{(f_\#, f^*)}\;} f_\#(f^* f_\# M \otimes f^* N) \simeq f_\# f^*(f_\# M \otimes N) \xrightarrow{\;ad'_{(f_\#, f^*)}\;} f_\# M \otimes N. \qquad (2.0.1.3)$$
(4) For every morphism of schemes $f: Y \to X$, the functor $f^*: T(X) \to T(Y)$ has a right adjoint $f_*$.
(5) For any scheme S with $p: \mathbb{A}^1_S \to S$ the canonical projection, the unit map $1 \xrightarrow{\;ad_{(p^*, p_*)}\;} p_* p^*$ is an isomorphism.
In other words, for any scheme X we have a triangulated category T(X), and such a formation satisfies the six functors formalism by [CD19, Theorem 2.4.50]. Examples are given by the stable motivic homotopy category SH, the category of cdh-motivic complexes $DM_{cdh}$ or the category of modules over a motivic ring spectrum. We gradually recall the formal properties we need in this section.
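To illustrate how the axioms are used (a standard adjunction argument, our sketch): passing to right adjoints in the isomorphism (2.0.1.2) of axiom (3)(a) yields smooth base change in its more familiar form:

```latex
% For the square (2.0.1.1) with p (hence q) smooth: the right adjoint of
% the composite q_# g^* is g_* q^*, and the right adjoint of f^* p_# is
% p^* f_*, so the isomorphism (2.0.1.2) transposes to a natural
% isomorphism
p^* f_* \;\simeq\; g_* q^*,
% i.e. pullback along the smooth morphism p commutes with pushforward
% along f (smooth base change).
```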
For any scheme X, we denote by $\mathbb{1}_X \in T(X)$ the unit object. We are mainly interested in constructible motives, see the discussion in 2.1.12 below.
2.1. Local acyclicity and Künneth formula for $f_*$. In this section, we work with a motivic triangulated category T.

2.1.1. For any commutative square of schemes
$$\begin{array}{ccc}
Y & \xrightarrow{\;q\;} & X \\
{\scriptstyle g}\big\downarrow & {\scriptstyle \Delta} & \big\downarrow{\scriptstyle f} \\
T & \xrightarrow{\;p\;} & S
\end{array} \qquad (2.1.1.1)$$
there is a canonical natural transformation
$$f^* p_* \xrightarrow{\;ad_{(q^*, q_*)}\;} q_* q^* f^* p_* = q_* g^* p^* p_* \xrightarrow{\;ad'_{(p^*, p_*)}\;} q_* g^*. \qquad (2.1.1.2)$$

2.1.4. For any morphism $q: Y \to X$ and objects $K \in T(X)$, $K' \in T(Y)$, there is a canonical natural transformation
$$K \otimes q_* K' \xrightarrow{\;ad_{(q^*, q_*)}\;} q_* q^*(K \otimes q_* K') \simeq q_*(q^* K \otimes q^* q_* K') \xrightarrow{\;ad'_{(q^*, q_*)}\;} q_*(q^* K \otimes K'). \qquad (2.1.4.1)$$
The map (2.1.4.1) is an isomorphism for any proper morphism q.
2.1.5. For any commutative square $\Delta$ as in (2.1.1.1), any $K \in T(X)$ and any $L \in T(T)$, there is a canonical natural transformation
$$Ex(\Delta^*_*, \otimes): K \otimes f^* p_* L \to q_*(q^* K \otimes g^* L) \qquad (2.1.5.1)$$
defined as the composition
$$K \otimes f^* p_* L \xrightarrow{\;(2.1.1.2)\;} K \otimes q_* g^* L \xrightarrow{\;(2.1.4.1)\;} q_*(q^* K \otimes g^* L). \qquad (2.1.5.2)$$

2.1.6. For any Cartesian square $\Delta$ as in (2.1.1.1) with p proper, the map (2.1.5.1) is an isomorphism. Our aim is to study the cases where the map (2.1.5.1) is an isomorphism for arbitrary p. The following definition is inspired by [SGA4.5, Th. finitude, Définition 2.12]:
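The proper case 2.1.6 can be seen directly from the factorization (2.1.5.2) (a sketch, assuming proper base change and the proper projection formula recalled above):

```latex
% In (2.1.5.2), when \Delta is Cartesian and p is proper (so q, the
% base change of p, is proper as well):
K \otimes f^* p_* L \xrightarrow{\;(2.1.1.2)\;} K \otimes q_* g^* L
  \xrightarrow{\;(2.1.4.1)\;} q_*(q^* K \otimes g^* L),
% the first map is invertible by proper base change and the second by
% the projection formula for the proper morphism q (see 2.1.4), so
% Ex(\Delta^*_*, \otimes) is an isomorphism.
```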
Definition 2.1.7. Let $f: X \to S$ be a morphism of schemes and $K \in T(X)$. We say that f is strongly locally acyclic relatively to K if for any morphism $p: T \to S$ with a Cartesian square $\Delta$ as in (2.1.1.1) and any object $L \in T(T)$, the map (2.1.5.1) is an isomorphism. We say that f is universally strongly locally acyclic relatively to K if for any Cartesian square
$$\begin{array}{ccc}
X' & \xrightarrow{\;\varphi\;} & X \\
{\scriptstyle f'}\big\downarrow & & \big\downarrow{\scriptstyle f} \\
S' & \longrightarrow & S
\end{array} \qquad (2.1.7.1)$$
the base change f' of f is strongly locally acyclic relatively to $K_{X'} = \varphi^* K$.
Lemma 2.1.8. Let S be a class of morphisms which is stable by base change and suppose that T satisfies S-base change. Let $f: X \to S$ be a morphism and $\varphi: W \to X$ a proper morphism such that the composition $f \circ \varphi: W \to S$ lies in S. Then f is universally strongly locally acyclic relatively to the object $\varphi_* \mathbb{1}_W$.
Proof. Consider the following commutative diagram with Cartesian squares
V r / / ψ W ′ / / φ ′ W φ Y q / / g ∆ X ′ ξ / / f ′ X f T p / / S ′ / / S (2.1.8.1)
and let L ∈ T(T ). We want to show that the map
(2.1.8.2) Ex(∆ * * , ⊗) ∶ ξ * φ * 1 W ⊗ f ′ * p * L → q * (q * ξ * φ * 1 W ⊗ g * L)
is an isomorphism. On the left hand side, by assumptions we have
(2.1.8.3) ξ * φ * 1 W ⊗ f ′ * p * L ≃ φ ′ * 1 W ′ ⊗ f ′ * p * L ≃ φ ′ * φ ′ * f ′ * p * L ≃ φ ′ * r * ψ * g * L where
we use the fact that the morphism f ′ ○ φ ′ lies in S. For the right hand side of (2.1.8.2), we have
(2.1.8.4) q * (q * ξ * φ * 1 W ⊗ g * L) ≃ q * (ψ * 1 V ⊗ g * L) ≃ q * ψ * ψ * g * L ≃ φ ′ * r * ψ * g * L.
It is not hard to check that the composition of (2.1.8.3) and the inverse of (2.1.8.4) agrees with the map (2.1.8.2), and therefore (2.1.8.2) is an isomorphism.
2.1.9. The category T has a family of Tate twists 1(n), which are ⊗-invertible objects that form a Cartesian section. By [FHM03, 3.2], we have a canonical isomorphism f * (M ⊗ 1(n)) ≃ f * (M ) ⊗ 1(n), and therefore Tate twists commute with both f * and f * in a canonical way. More generally, all the six functors commute with Tate twists ([CD19, Section 1.1.d]) via canonical isomorphisms, and therefore it is safe to ignore them in the proof of Künneth formulas.
Definition 2.1.10.
(1) For any scheme X, a projective motive in T(X) is an object of the form ϕ_* 1_W(n), where ϕ ∶ W → X is a projective morphism, and a primitive Chow motive is a projective motive with W smooth over a finite purely inseparable extension of k.²
(2) We say that T satisfies weak devissage if for any scheme X, the category T(X) agrees with the smallest thick triangulated subcategory which is stable by direct sums and contains all projective motives.
(3) We say that T satisfies strong devissage if for any scheme X, the category T(X) agrees with the smallest thick triangulated subcategory which is stable by direct sums and contains all primitive Chow motives. (4) We denote by S k the family of morphisms p 2 ∶ X × k Y → Y where X, Y are schemes and p 2 is the projection onto the second factor. We say that T satisfies S k -strong local acyclicity if any morphism of the form f ∶ X → k is universally strongly locally acyclic relatively to any object in T(X).
We denote by R the family of finite surjective radicial morphisms, namely the family of universal homeomorphisms.
Proposition 2.1.11. We suppose that T satisfies weak devissage. Then (1) T satisfies S k -strong local acyclicity if and only if it satisfies S k -base change.
(2) If T satisfies strong devissage and one of the following conditions holds: (a) k is perfect;
(b) T satisfies R-base change.
Then T satisfies S k -strong local acyclicity.
Proof.
(1) The S k -base change property is a particular case of S k -strong local acyclicity since strong local acyclicity relative to the unit object is equivalent to base change property. The other direction follows from weak devissage by applying Lemma 2.1.8.
(2) Since strong local acyclicity is stable under distinguished triangles, direct summands, direct sums and Tate twists, the result is straightforward from Lemma 2.1.8.
2.1.12. Following [BD17,2.4.1], we consider the following conditions on resolution of singularities:
(RS 1) The field k is perfect, over which the strong resolution of singularities holds, in the sense that (a) For every separated integral scheme X of finite type over k, there exists a proper birational surjective morphism X ′ → X with X ′ regular; (b) For every separated integral regular scheme X of finite type over k and every nowhere dense closed subscheme Z of X, there exists a proper birational surjective
morphism b ∶ X ′ → X such that X ′ is regular, b induces an isomorphism b −1 (X − Z) ≃ X − Z, and b −1 (Z) is a strict normal crossing divisor in X ′ . (RS 2)
The category T is Z[1/p]-linear, where p is the characteristic exponent of k, and there exists a premotivic adjunction SH ⇌ T.
(RS) We say that the category T satisfies (RS) if it satisfies one of the above conditions (RS 1) and (RS 2).
2.1.13. We recall the following facts about devissage:
(1) If T is motivic, then T satisfies weak devissage;
(2) If T is motivic and satisfies the condition (RS) in 2.1.12, then T satisfies strong devissage.

2.1.15. For any scheme X, we denote by T_c(X) the subcategory of constructible objects, which is the thick triangulated subcategory of T(X) generated by elements of the form f_# 1_Y(n), where f ∶ Y → X is a smooth morphism ([CD19]). Correspondingly:
(1′) If T is motivic, then for any scheme X, the category T_c(X) agrees with the smallest thick triangulated subcategory which contains all projective motives;
(2′) If T is motivic and satisfies the condition (RS) in 2.1.12, then for any scheme X, the category T_c(X) agrees with the smallest thick triangulated subcategory which contains all primitive Chow motives.
Remark 2.1.16.
(1) S_k-strong local acyclicity can be generalized to quasi-compact quasi-separated schemes over a field by passing to the limit ([SGA4.5, Th. finitude, Corollaire 2.16], [Hoy15, Appendix C]).
(2) In the derived category of étale sheaves, S_k-base change is a particular case of Deligne's generic base change theorem ([SGA4.5, Th. finitude, Théorème 2.13]). This theorem is proved for h-motives in [Cis19] using a similar method. (3) Under the assumption (RS), it follows from strong devissage that every object in T_c(k) is (strongly) dualizable, which is a well-known result (see for example [Hoy15, Section 3] or [EK18, Theorem 3.2.1]).
The following notation will be used repeatedly in the study of Künneth formulas:
Notation 2.1.17. Let S be a scheme and let
f_1 ∶ X_1 → Y_1, f_2 ∶ X_2 → Y_2 be two S-morphisms. Denote by p_i ∶ X_1 ×_S X_2 → X_i, p′_i ∶ Y_1 ×_S Y_2 → Y_i the projections, and f_1 ×_S f_2 ∶ X_1 ×_S X_2 → Y_1 ×_S Y_2 the morphism induced on fiber products.
We have the following commutative diagram
X 1 f 1 ∆ 1 X 1 × S X 2 p 1 o o p 2 / / f ∆ 2 X 2 f 2 Y 1 Y 1 × S Y 2 p ′ 1 o o p ′ 2 / / Y 2 .
(2.1.17.1)
For K 1 ∈ T(X 1 ) and K 2 ∈ T(X 2 ), we denote
K 1 ⊠ S K 2 ∶= p * 1 K 1 ⊗ p * 2 K 2 (2.1.17.2)
which is an object of T(X 1 × S X 2 ).
2.1.18. The first Künneth formula is related to the functor f * . For any morphism f ∶ X → S and any A, B ∈ T(X), there is a canonical map
(2.1.18.1) f * A ⊗ f * B → f * (A ⊗ B)
defined as the composition
f_* A ⊗ f_* B —ad(f^*, f_*)→ f_* f^*(f_* A ⊗ f_* B) = f_*(f^* f_* A ⊗ f^* f_* B) —ad′(f^*, f_*) ⊗ ad′(f^*, f_*)→ f_*(A ⊗ B). (2.1.18.2)

2.1.19. For i = 1, 2, let L_i be an element of T(X_i). We have a canonical map
(2.1.19.1) Kun_* ∶ p′^*_1 f_{1*} L_1 ⊗ p′^*_2 f_{2*} L_2 → f_*(p^*_1 L_1 ⊗ p^*_2 L_2)
defined as the composition
p′^*_1 f_{1*} L_1 ⊗ p′^*_2 f_{2*} L_2 → f_* p^*_1 L_1 ⊗ f_* p^*_2 L_2 —(2.1.18.1)→ f_*(p^*_1 L_1 ⊗ p^*_2 L_2). (2.1.19.2)
By a classical argument (see [SGA5, III 1.6.4]), the following is a consequence of S k -strong local acyclicity by Proposition 2.1.11:
Proposition 2.1.20 (Künneth formula for f * ). Let T be a motivic triangulated category. If X i → S is universally strongly locally acyclic relatively to L i for i = 1, 2, then the map (2.1.19.1) is an isomorphism.
In particular, if S = Spec(k) and if T satisfies the condition (RS) in 2.1.12, the map (2.1.19.1) is an isomorphism for any L i ∈ T(X i ), i = 1, 2.
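Proposition 2.1.20 is the motivic form of the classical Künneth theorem; over S = Spec(k) its decategorified shadow is that Poincaré polynomials multiply under products. A toy numerical sketch (Python; the Betti numbers of S¹ and S² are standard textbook values entered by hand, not computed from the categorical statement):

```python
# Künneth over a point with field coefficients:
# H^n(X x Y) = sum over i+j=n of H^i(X) tensor H^j(Y),
# so Poincaré polynomials multiply (polynomial convolution).

def kunneth(bx, by):
    """bz[n] = sum_{i+j=n} bx[i] * by[j] (product of Poincaré polynomials)."""
    bz = [0] * (len(bx) + len(by) - 1)
    for i, a in enumerate(bx):
        for j, b in enumerate(by):
            bz[i + j] += a * b
    return bz

circle = [1, 1]        # S^1: b_0 = b_1 = 1
sphere2 = [1, 0, 1]    # S^2: b_0 = b_2 = 1

torus = kunneth(circle, circle)
print(torus)                     # [1, 2, 1]: the 2-torus S^1 x S^1
print(kunneth(circle, sphere2))  # [1, 1, 1, 1]: S^1 x S^2
```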
2.2. Künneth formula for f_!.
2.2.1. The second Künneth formula is concerned with the exceptional direct image functor. As part of the six functors formalism, for any morphism of schemes f ∶ X → S, there is an exceptional direct image functor (or direct image with compact support)
f ! ∶ T(X) → T(S) (2.2.1.1) which is compatible with compositions, such that f * = f ! if f is proper. We also have
(1) For any morphism f ∶ X → S, any object K ∈ T(X) and any L ∈ T(S), there is an invertible natural transformation
(2.2.1.2) Ex(f * ! , ⊗) ∶ (f ! K) ⊗ L → f ! (K ⊗ f * L) which agrees with the map (2.1.4.1) if f is proper.
(2) For any Cartesian square
(2.2.1.3) Y q / / g ∆ X f T p / / S there is an invertible natural transformation (2.2.1.4) Ex(∆ * ! ) ∶ f * p ! → q ! g *
which is compatible with horizontal and vertical compositions of squares, and agrees with the map (2.1.1.2) if p is proper.
2.2.2. We now state a Künneth formula for the functor f ! . We use the assumptions and notation as in Notation 2.1.17, with the following diagram
X 1 f 1 X 1 × S X 2 p 1 o o p 2 / / f X 2 f 2 Y 1 Y 1 × S Y 2 p ′ 1 o o p ′ 2 / / Y 2 . (2.2.2.1) Lemma 2.2.3. For i = 1, 2, let L i be an element of T(X i ). There is a canonical isomorphism (2.2.3.1) Kun S,(f 1 ,f 2 ),! (L 1 , L 2 ) ∶ p ′ * 1 f 1! L 1 ⊗ p ′ * 2 f 2! L 2 ≃ f ! (p * 1 L 1 ⊗ p * 2 L 2 ).
Proof. We have the following commutative diagram
(2.2.3.2) f factors as X_1 ×_S X_2 —(f_1 ×_S id_{X_2})→ Y_1 ×_S X_2 —(id_{Y_1} ×_S f_2)→ Y_1 ×_S Y_2; here ∆_1 denotes the Cartesian square formed by p_1, f_1 ×_S id_{X_2}, p′_1 and f_1, and ∆_2 the Cartesian square formed by the projection Y_1 ×_S X_2 → X_2, id_{Y_1} ×_S f_2, p′_2 and f_2,
and the following composition:
p ′ * 1 f 1! L 1 ⊗ p ′ * 2 f 2! L 2 id⊗Ex(∆ * 2! ) → p ′ * 1 f 1! L 1 ⊗ (id × f 2 ) ! p * 2 L 2 Ex((id×f 2 ) * ! ,⊗) → (id × f 2 ) ! (p * 2 L 2 ⊗ (id × f 2 ) * p ′ * 1 f 1! L 1 ) = (id × f 2 ) ! (p * 2 L 2 ⊗ p ′ * 1 f 1! L 1 ) Ex(∆ * 1! ) → (id × f 2 ) ! (p * 2 L 2 ⊗ (f 1 × id) ! p * 1 L 1 ) Ex((f 1 ×id) * ! ,⊗) → (id × f 2 ) ! (f 1 × id) ! (p * 1 L 1 ⊗ (f 1 × id) * p * 2 L 2 ) = f ! (p * 1 L 1 ⊗ p * 2 L 2 ). (2.2.3.3)
All the maps involved are isomorphisms, and the result follows.
Remark 2.2.4.
(1) The Künneth formula for f ! is very formal and is valid over a general base S, while all the other ones are valid only when S is the spectrum of a field.
(2) By definition, the map (2.2.3.1) agrees with the composition
p ′ * 1 f 1! L 1 ⊗ p ′ * 2 f 2! L 2 Kun S,(id Y 1 ,f 2 ),! (f 1! L 1 ,L 2 ) → (id Y 1 × S f 2 ) ! (p ′ * 1 f 1! L 1 ⊗ p * 2 L 2 ) Kun S,(f 1 ,id X 2 ),! (L 1 ,L 2 ) → f ! (p * 1 L 1 ⊗ p * 2 L 2 ).
(2.2.4.1)
In other words, the map (2.2.3.1) is the composition of two maps of the same type where one of the f i 's equals identity.
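A decategorified consequence of the f_! Künneth formula (2.2.3.1), applied to structure morphisms over S = Spec(k), is the multiplicativity of the compactly supported Euler characteristic, χ_c(X ×_k Y) = χ_c(X) · χ_c(Y). A minimal sketch (Python; the cell decomposition of P^n and the Betti numbers of P¹ × P¹ are standard inputs entered by hand, not derived from the formula):

```python
# chi_c is additive on strata and multiplicative on products; the latter
# is the decategorified content of the f_! Künneth formula.

def chi_c_projective_space(n):
    # P^n decomposes into cells A^0, A^1, ..., A^n and chi_c(A^i) = 1.
    return sum(1 for _ in range(n + 1))

# Multiplicativity check: chi_c(P^1 x P^1) = chi_c(P^1)^2 = 4, which
# matches the Betti numbers 1, 2, 1 of P^1 x P^1 in degrees 0, 2, 4.
betti_p1xp1 = [1, 0, 2, 0, 1]
assert chi_c_projective_space(1) ** 2 == sum(
    (-1) ** i * b for i, b in enumerate(betti_p1xp1)
)
print(chi_c_projective_space(1) ** 2)  # 4
```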
2.3. Künneth formula for f^!.
2.3.1. The third Künneth formula is concerned with the exceptional inverse image functor. We recall the following facts from the six functors formalism:
(1) For any morphism of schemes f ∶ X → S, the functor f ! has a right adjoint, with the following pair of adjoint functors
f_! ∶ T(X) ⇌ T(S) ∶ f^!, (2.3.1.1)
such that f^! = f^* if f is étale.
(2) For any closed immersion i with complementary open immersion j, the functor i * is conservative, and there is a canonical distinguished triangle
(2.3.1.2) i_* i^! —ad′(i_*, i^!)→ 1 —ad(j^*, j_*)→ j_* j^* → i_* i^![1].
(3) For any Cartesian square
(2.3.1.3) Y q / / g ∆ X f T p / / S
which will be denoted as Y -X-S-T (this notation will be used in the proof of Proposition 2.3.5), there is a canonical invertible natural transformation
(2.3.1.4) Ex(∆ ! * ) ∶ q * f ! ≃ g ! p * .
When p and q are proper, the map (2.3.1.4) agrees with the composition
q * f ! = q ! f ! ad (f ! ,f ! ) → f ! f ! q ! g ! ≃ f ! p ! g ! g ! ad ′ (g ! ,g ! ) → f ! p ! = f ! p * . (2.3.1.5) (4) For any Cartesian square as (2.3.1.3), we deduce from the map Ex(∆ * ! ) in (2.2.1.4) a canonical natural transformation (2.3.1.6) Ex(∆ * ! ) ∶ g * p ! ad (q ! ,q ! ) → q ! q ! g * p ! (Ex(∆ * ! )) −1 ≃ q ! f * p ! p ! ad ′ (p ! ,p ! ) → q ! f * ,
which is an isomorphism when f is smooth.

Lemma 2.3.2. Suppose that T satisfies base change along f, in the sense that for any Cartesian square as (2.3.1.3) the map (2.1.1.2) is invertible. Then the map (2.3.1.6) is an isomorphism for any separated morphism p of finite type.

Proof. By localizing p we may suppose p quasi-projective, and therefore we only need to deal with the cases where p is either a closed immersion or a smooth morphism. The smooth case follows from purity, and the case of a closed immersion follows from the localization sequence (2.3.1.2) and the base change property.
In particular, if T is a motivic triangulated category which satisfies the condition (RS) in 2.1.12, by Proposition 2.1.11, Lemma 2.3.2 applies whenever f ∈ S k (see Definition 2.1.10).
2.3.3. We now state a Künneth formula for the functor f ! . We use the assumptions and notation as in Notation 2.1.17, with the following diagram
X_1 ←p_1− X_1 ×_S X_2 −p_2→ X_2 over Y_1 ←p′_1− Y_1 ×_S Y_2 −p′_2→ Y_2, as in (2.2.2.1). (2.3.3.1)

For i = 1, 2, let M_i be an element of T(Y_i). There is a canonical map
(2.3.3.2) Kun^! ∶ p^*_1 f_1^! M_1 ⊗ p^*_2 f_2^! M_2 → f^!(p′^*_1 M_1 ⊗ p′^*_2 M_2).

Lemma 2.3.4. The map (2.3.3.2) agrees with the composition of the two maps (2.3.3.5) and (2.3.3.6) of the same type, in which f_1, respectively f_2, is the identity.

Proof. This follows from Remark 2.2.4 and the naturality of the functors f ↦ f_! and f ↦ f^!.

Proposition 2.3.5. Suppose that T is a motivic triangulated category which satisfies the condition (RS) in 2.1.12. Then for S = Spec(k), the map (2.3.3.2) is an isomorphism.
Proof. By Lemma 2.3.4, it suffices to show that the maps (2.3.3.5) and (2.3.3.6) are isomorphisms. By symmetry, it suffices to show that (2.3.3.5) is an isomorphism. In other words we can assume that X 1 = Y 1 and f 1 = id X 1 . By weak devissage we may suppose that M 1 = p * 1 W , where p ∶ W → Y 1 = X 1 is a proper morphism. The map (2.3.3.2) then becomes the following
(2.3.5.1) Kun ! ∶ p * 1 p * 1 W ⊗ p * 2 f ! 2 M 2 → (id X 1 × k f 2 ) ! (p ′ * 1 p * 1 W ⊗ p ′ * 2 M 2 ).
We have the following commutative diagram
W ←(p_{1W})− W ×_k X_2 −(p_{2W})→ X_2, with p ∶ W → X_1, p ×_k id ∶ W ×_k X_2 → X_1 ×_k X_2 and id_W ×_k f_2 ∶ W ×_k X_2 → W ×_k Y_2, from which the maps (2.3.5.3) and (2.3.5.4) are constructed,
using the fact that p is proper and p 2W ∶ W × X 2 → X 2 belongs to S k . 3 Therefore we are reduced to show that the following diagram is commutative:
(2.3.5.5)
p^*_1 p_* 1_W ⊗ p^*_2 f_2^! M_2
—(2.3.3.2)→ (id_{Y_1} ×_k f_2)^!(p′^*_1 p_* 1_W ⊗ p′^*_2 M_2)
—(2.3.5.4)→ (id_{Y_1} ×_k f_2)^!(p ×_k id_{Y_2})_* p′^*_{2W} M_2,
where the direct map from the top object to the last one is (2.3.5.3).
This follows from a diagram chase of the following form:
Starting from (id_{X_1} ×_k f_2)^!(p^*_1 p_* 1_W ⊗ p^*_2 f_2^! M_2), one rewrites p^*_1 p_* 1_W as (id_{X_1} ×_k f_2)^*(p ×_k id_{Y_2})_* 1_{W ×_k Y_2} via Ex(∆^*_{1!}) and as (p ×_k id_{X_2})_* 1_{W × X_2} via Ex(∆′^*_{3!}); one then moves the proper direct images (p ×_k id_{X_2})_* and (p ×_k id_{Y_2})_* across ⊗ and across (id ×_k f_2)_! using the projection formulas Ex((p ×_k id_{X_2})^*_!, ⊗), Ex((p ×_k id_{Y_2})^*_!, ⊗) and (Ex((id_{X_1} ×_k f_2)^*_!, ⊗))^{−1} together with the exchange isomorphisms (Ex(∆^*_{3!}))^{−1}, (Ex(∆^*_{4!}))^{−1} and (Ex(∆^*_{5!}))^{−1}, and finally uses the adjunction maps ad((id_W ×_k f_2)_!, (id_W ×_k f_2)^!) and ad′((id_W ×_k f_2)_!, (id_W ×_k f_2)^!) to identify the two composites through (p ×_k id_{Y_2})_* p′^*_{2W} f_{2!} f_2^! M_2.
The commutativity of the hexagon follows from the following general fact: for any Cartesian diagram of the form
Y q / / g ∆ X f T p / / S (2.3.5.6)
and any object M ∈ T(X), we have a commutative diagram
f ! (q ! 1 Y ⊗ M ) Ex(q * ! ,⊗) ≀ f ! (f * p ! (1 T ⊗ M )) Ex(∆ ′ * ! ) ∼ o o p ! 1 T ⊗ f ! M Ex(f * ! ,⊗) ∼ o o Ex(p * ! ,⊗) ≀ f ! q ! q * M ∼ / / p ! g ! q * M p ! p * f ! M Ex(∆ * ! ) ≀ o o (2.3.5.7)
where the square ∆ ′ is the transpose of the square ∆, oriented as Y -T -S-X. The rest of the diagram follows from standard adjunctions of functors and the fact that the maps of the form Ex(∆ * ! ) are compatible with horizontal and vertical compositions of Cartesian squares, given that • The square ∆ 2 is the composition of ∆ 1 and ∆ ′ 3 ; • The square ∆ 4 is the composition of ∆ 3 and ∆ 5 .
³ Alternatively, we could also use strong devissage and suppose that p_{2W} is smooth.

2.4. Künneth formula for Hom.
2.4.1. The last Künneth formula is concerned with the internal Hom functor, the remaining one of the six functors. If T is motivic, its monoidal structure is closed, namely the tensor product on T has a right adjoint given by the internal Hom functor Hom(⋅, ⋅), such that we have natural bijections
Hom(A ⊗ B, C) = Hom(A, Hom(B, C)). (2.4.1.1)
For any A, we denote by η A ∶ B → Hom(A, A ⊗ B) the unit (or coevaluation) map, and ǫ A ∶ A ⊗ Hom(A, B) → B the counit (or evaluation) map. We have the following exchange structures: for any morphism f ∶ X → Y and objects K ∈ T(X), L, M ∈ T(Y ) we have the following natural isomorphisms:
(2.4.1.2) Ex(f ! , Hom) ∶ Hom(f ! K, L) ∼ → f * Hom(K, f ! L); (2.4.1.3) Ex(f ! , Hom) ∶ f ! Hom(L, M ) ∼ → Hom(f * L, f ! M ).
We deduce the following isomorphism:
(2.4.1.4) Hom(f ! 1 X , L) ∼ → f * Hom(1 X , f ! L) = f * f ! L.
When f is proper, the isomorphism (2.4.1.4) fits into the two following commutative diagrams:
(2.4.1.5)
Hom(f ! 1 X , L) (2.4.1.4) ≀ / / Hom(f ! 1 X , L) ⊗ f ! 1 X ǫ f ! 1 X f ! f ! L ad ′ (f ! ,f ! ) / / L (2.4.1.6) f ! M η f ! 1 X / / ad (f ! ,f ! ) Hom(f ! 1 X , f ! M ⊗ f ! 1 X ) (2.4.1.4) ≀ f ! f ! f ! M / / f * f ! (f ! M ⊗ f ! 1 X )
where the two unlabeled horizontal maps are induced by the unit 1 Y → f * 1 X = f ! 1 X .
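In the toy case where T is the category of finite-dimensional vector spaces over a point, the closed structure (2.4.1.1) is ordinary currying, and the counit ǫ is evaluation of a linear map. A small self-contained sketch of this decategorified picture (Python, with nested lists standing in for linear maps; the dimensions and entries are illustrative):

```python
# Currying in Vect_k: a linear map A (x) B -> C, stored as t[c][a][b],
# is the same datum as a map A -> Hom(B, C); the counit (evaluation)
# eps: B (x) Hom(B, C) -> C recovers the original map.
dimA, dimB, dimC = 2, 3, 2
t = [[[a + 2 * b - c for b in range(dimB)] for a in range(dimA)]
     for c in range(dimC)]          # t[c][a][b]: a map A (x) B -> C

def curry(t):
    # A -> Hom(B, C): for each basis vector of A, a (C x B)-matrix.
    return [[[t[c][a][b] for b in range(dimB)] for c in range(dimC)]
            for a in range(dimA)]

def evaluate(m, a, b):
    # Evaluation on basis vectors: apply the matrix m[a] to e_b.
    return [m[a][c][b] for c in range(dimC)]

m = curry(t)
assert all(evaluate(m, a, b) == [t[c][a][b] for c in range(dimC)]
           for a in range(dimA) for b in range(dimB))
print("currying and evaluation agree")
```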
2.4.2. Let Y_1 and Y_2 be two S-schemes. For i = 1, 2, we denote by p′_i ∶ Y_1 ×_S Y_2 → Y_i the canonical projection, and let M_i and N_i be objects of T(Y_i). We have a canonical map
(2.4.2.1) p′^*_1 M_1 ⊗ p′^*_1 Hom(M_1, N_1) ⊗ p′^*_2 M_2 ⊗ p′^*_2 Hom(M_2, N_2) → p′^*_1 N_1 ⊗ p′^*_2 N_2
which induces a map
(2.4.2.2) p′^*_1 Hom(M_1, N_1) ⊗ p′^*_2 Hom(M_2, N_2) → Hom(p′^*_1 M_1 ⊗ p′^*_2 M_2, p′^*_1 N_1 ⊗ p′^*_2 N_2).

Proposition 2.4.3. Suppose that T is a motivic triangulated category which satisfies the condition (RS) in 2.1.12 and that S = Spec(k). Then for M_i ∈ T_c(Y_i), the map (2.4.2.2) is an isomorphism.

Proof. By weak devissage we may suppose that M_i = f_{i!} 1_{X_i}, where f_i ∶ X_i → Y_i is a proper morphism.
We use the following notations:
X 1 f 1 X 1 × k X 2 p 1 o o p 2 / / f X 2 f 2 Y 1 Y 1 × k Y 2 p ′ 1 o o p ′ 2 / / Y 2 .
(2.4.3.1)
Then the map (2.4.2.2) becomes
p ′ * 1 Hom(f 1! 1 X 1 , N 1 ) ⊗ p ′ * 2 Hom(f 2! 1 X 2 , N 2 ) → Hom(p ′ * 1 f 1! 1 X 1 ⊗ p ′ * 2 f 2! 1 X 2 , p ′ * 1 N 1 ⊗ p ′ * 2 N 2 ).
(2.4.3.2)
As in the proof of Proposition 2.3.5, we transform this map into other isomorphisms. By Lemma 2.2.3 and Proposition 2.3.5, we have the following two composition maps, where all arrows are isomorphisms:
p ′ * 1 Hom(f 1! 1 X 1 , N 1 ) ⊗ p ′ * 2 Hom(f 2! 1 X 2 , N 2 ) (2.4.1.4) ≃ p ′ * 1 f 1 * f ! 1 N 1 ⊗ p ′ * 2 f 2 * f ! 2 N 2 (2.2.3.1) ≃ f * (p * 1 f ! 1 N 1 ⊗ p * 2 f ! 2 N 2 ) (2.3.3.2) ≃ f * f ! (p ′ * 1 N 1 ⊗ p ′ * 2 N 2 ); (2.4.3.3) Hom(p ′ * 1 f 1! 1 X 1 ⊗ p ′ * 2 f 2! 1 X 2 , p ′ * 1 N 1 ⊗ p ′ * 2 N 2 ) (2.2.3.1) ≃ Hom(f ! 1 X 1 × k X 2 , p ′ * 1 N 1 ⊗ p ′ * 2 N 2 ) (2.4.1.4) ≃ f * f ! (p ′ * 1 N 1 ⊗ p ′ * 2 N 2 ).
(2.4.3.4)
Therefore we are reduced to show that the following diagram is commutative:
p′^*_1 Hom(f_{1!} 1_{X_1}, N_1) ⊗ p′^*_2 Hom(f_{2!} 1_{X_2}, N_2)
—(2.4.2.2)→ Hom(p′^*_1 f_{1!} 1_{X_1} ⊗ p′^*_2 f_{2!} 1_{X_2}, p′^*_1 N_1 ⊗ p′^*_2 N_2)
—(2.4.3.4)→ f_* f^!(p′^*_1 N_1 ⊗ p′^*_2 N_2),
where the direct map from the first object to the last one is (2.4.3.3).
(2.4.3.5)
By the same argument as in the proof of Proposition 2.3.5, we can assume that X 1 = Y 1 and f 1 = id X 1 . The result then follows from a diagram chase, using the following general fact: for any Cartesian diagram of the form
Y g / / q ∆ X p T f / / S (2.4.3.6)
with f and g proper, and any objects A ∈ T(X), B ∈ T(S), there is a canonical isomorphism
A ⊗ p * f ! f ! B Ex(∆ * ! ) ∼ / / A ⊗ g ! q * f ! B Ex(g * ! ,⊗) ∼ / / g ! (g * A ⊗ q * f ! B) (2.4.3.7)
and a commutative diagram of the form
A ⊗ p * Hom(f ! 1 T , B) η p * f ! 1 T / / ≀ (2.4.1.4) Hom(p * f ! 1 T , A ⊗ p * Hom(f ! 1 T , B) ⊗ p * f ! 1 T ) ǫ f ! 1 T / / Ex(∆ * ! ) ≀ Hom(p * f ! 1 T , A ⊗ p * B) Ex(∆ * ! ) ≀ A ⊗ p * f * f ! B ≀ (2.4.3.7) Hom(g ! 1 Y , A ⊗ p * Hom(f ! 1 T , B) ⊗ p * f ! 1 T ) ǫ f ! 1 T / / (2.4.1.4) ≀ Hom(g ! 1 Y , A ⊗ p * B) (2.4.1.4) g ! (g * A ⊗ q * f ! B) ad (g ! ,g ! ) g * g ! (A ⊗ p * Hom(f ! 1 T , B) ⊗ p * f ! 1 T ) ǫ f ! 1 T / / ≀ (2.4.1.4) g * g ! (A ⊗ p * B) g ! g ! g ! (g * A ⊗ q * f ! B) g * g ! (A ⊗ p * f * f ! B ⊗ p * f ! 1 T ) g ! g ! (A ⊗ p * f * f ! B) ad ′ (f ! ,f ! ) O O o o ∼ (2.M, N ∈ T(X), the canonical map f * Hom(M, N ) → Hom(f * M, f * N ) is an isomorphism.
2.4.5. We end this section by summarizing the Künneth formulas we have obtained:
Theorem 2.4.6. Let S be a scheme and let
f_1 ∶ X_1 → Y_1, f_2 ∶ X_2 → Y_2 be two S-morphisms. Denote by f ∶ X_1 ×_S X_2 → Y_1 ×_S Y_2 their product. Let T be a motivic triangulated category. For i = 1, 2, consider objects L_i ∈ T(X_i) and M_i, N_i ∈ T(Y_i).
Then with the notations of 2.1.17 there are canonical maps
f_{1*} L_1 ⊠_S f_{2*} L_2 → f_*(L_1 ⊠_S L_2) (2.4.6.1)
f_{1!} L_1 ⊠_S f_{2!} L_2 → f_!(L_1 ⊠_S L_2) (2.4.6.2)
f_1^! M_1 ⊠_S f_2^! M_2 → f^!(M_1 ⊠_S M_2) (2.4.6.3)
Hom(M_1, N_1) ⊠_S Hom(M_2, N_2) → Hom(M_1 ⊠_S M_2, N_1 ⊠_S N_2) (2.4.6.4)
such that
(1) The map (2.4.6.2) is an isomorphism.
(2) If T satisfies the condition (RS) in 2.1.12 and S = Spec(k), then the maps (2.4.6.1) and (2.4.6.3) are isomorphisms, and the map (2.4.6.4) is an isomorphism whenever M_1 and M_2 are constructible.
3. THE VERDIER PAIRING
In this section, we define the Verdier pairing using the Künneth formulas in Section 2, following [SGA5,III]. This pairing is compatible with proper direct images, which implies the Lefschetz-Verdier formula. We assume that T c is a motivic triangulated category of constructible objects which satisfies the condition (RS) in 2.1.12.
3.1. The pairing.
3.1.1. Let X 1 and X 2 be two schemes. We denote by X 12 = X 1 × k X 2 and p i ∶ X 12 → X i the projections. Let L i ∈ T c (X i ) and let q i ∶ X i → Spec(k) be the structure map for i = 1, 2. We denote by
D(L_i) = Hom(L_i, K_{X_i}) = Hom(L_i, q_i^! 1_k) its Verdier dual; see [CD19]. There is a canonical isomorphism
(3.1.1.1) p^*_1 D(L_1) ⊗ p^*_2 L_2 ≃ Hom(p^*_1 L_1, p_2^! L_2),
given by the composition
p * 1 D(L 1 ) ⊗ p * 2 L 2 = p * 1 Hom(L 1 , K X 1 ) ⊗ p * 2 L 2 (2.4.2.2) ≃ Hom(p * 1 L 1 , p * 1 K X 1 ⊗ p * 2 L 2 ) (2.3.3.2) ≃ Hom(p * 1 L 1 , p ! 2 L 2 ).
(3.1.1.2)
By symmetry, we also get an isomorphism
(3.1.1.3) p * 2 D(L 2 ) ⊗ p * 1 L 1 ≃ Hom(p * 2 L 2 , p ! 1 L 1 ).
The particular case of (3.1.1.1) for L 1 = 1 X 1 and L 2 = K X 2 gives an isomorphism
(3.1.1.4) p * 1 K X 1 ⊗ p * 2 K X 2 ≃ K X 12 .
3.1.2. Now let c ∶ C → X 12 and d ∶ D → X 12 be two morphisms, and form the following Cartesian square
E c ′ / / d ′ ∆ D d C c / / X 12 (3.1.2.1)
Denote by e ∶ E → X 12 the canonical morphism. For any two objects P, Q ∈ T(X 12 ), we have a canonical map
d ′ * c ! P ⊗ c ′ * d ! Q ad (e ! ,e ! ) → e ! e ! (d ′ * c ! P ⊗ c ′ * d ! Q) (2.2.3.1) → e ! (c ! c ! P ⊗ d ! d ! Q) ad ′ (c ! ,c ! ) ⊗ad ′ (d ! ,d ! ) → e ! (P ⊗ Q) (3.1.2.2)
where we use Lemma 2.2.3 relative to the base scheme X 12 and the morphisms c and d. Therefore we deduce a canonical map
(3.1.2.3) c * c ! P ⊗ d * d ! Q → e * e ! (P ⊗ Q)
which is given by the composition
c * c ! P ⊗ d * d ! Q ad (e * ,e * ) → e * e * (c * c ! P ⊗ d * d ! Q) = e * (e * c * c ! P ⊗ e * d * d ! Q) = e * (d ′ * c * c * c ! P ⊗ c ′ * d * d * d ! Q) ad ′ (c * ,c * ) ⊗ad ′ (d * ,d * ) → e * (d ′ * c ! P ⊗ c ′ * d ! Q) (3.1.2.2)
→ e * e ! (P ⊗ Q).
3.1.3. For i = 1, 2, we denote by c_i = p_i ∘ c ∶ C → X_i, d_i = p_i ∘ d ∶ D → X_i, and let L_i ∈ T_c(X_i)
. By 3.1.1 and the projection formula for Hom we have an isomorphism
Hom(c * 1 L 1 , c ! 2 L 2 ) = Hom(c * p * 1 L 1 , c ! p ! 2 L 2 ) (2.4.1.3) ≃ c ! Hom(p * 1 L 1 , p ! 2 L 2 ) (3.1.1.1) ≃ c ! (p * 1 D(L 1 ) ⊗ p * 2 L 2 ).
(3.1.3.1)
By symmetry we have
(3.1.3.2) Hom(d^*_2 L_2, d_1^! L_1) ≃ d^!(p^*_2 D(L_2) ⊗ p^*_1 L_1).
We define a canonical map
(3.1.3.3) c_* Hom(c^*_1 L_1, c_2^! L_2) ⊗ d_* Hom(d^*_2 L_2, d_1^! L_1) → e_* K_E
as the composition
c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ d * Hom(d * 2 L 2 , d ! 1 L 1 ) ≃c * c ! (p * 1 D(L 1 ) ⊗ p * 2 L 2 ) ⊗ d * d ! (p * 2 D(L 2 ) ⊗ p * 1 L 1 ) (3.1.2.3) →e * e ! (p * 1 D(L 1 ) ⊗ p * 2 L 2 ⊗ p * 2 D(L 2 ) ⊗ p * 1 L 1 ) ≃e * e ! (p * 1 L 1 ⊗ p * 1 D(L 1 ) ⊗ p * 2 L 2 ⊗ p * 2 D(L 2 )) ǫ L 1 ⊗ǫ L 2 →e * e ! (p * 1 K X 1 ⊗ p * 2 K X 2 ) (3.1.1.4) ≃ e * e ! K X 12 = e * K E .
(3.1.3.4)

3.1.4. We now construct a proper functoriality of the map (3.1.3.3). Let X′_1 and X′_2 be two schemes, and denote by X′_12 = X′_1 ×_k X′_2. Let f_1 ∶ X_1 → X′_1 and f_2 ∶ X_2 → X′_2 be two morphisms, and denote by f = f_1 ×_k f_2 ∶ X_12 → X′_12 their product. For i = 1, 2, we denote by p′_i ∶ X′_12 → X′_i the projection, and let L_i ∈ T_c(X_i). There is a canonical isomorphism
(3.1.4.1) f_* Hom(p^*_1 L_1, p_2^! L_2) ≃ Hom(p′^*_1 f_{1!} L_1, p′^!_2 f_{2*} L_2)
given by the composition
f_* Hom(p^*_1 L_1, p_2^! L_2) —(3.1.1.1)≃ f_*(p^*_1 D(L_1) ⊗ p^*_2 L_2) —(2.1.19.1)≃ p′^*_1 f_{1*} D(L_1) ⊗ p′^*_2 f_{2*} L_2 ≃ p′^*_1 D(f_{1!} L_1) ⊗ p′^*_2 f_{2*} L_2 —(3.1.1.1)≃ Hom(p′^*_1 f_{1!} L_1, p′^!_2 f_{2*} L_2) (3.1.4.2)
where we use the canonical duality isomorphism D(
f 1! L 1 ) ≃ f 1 * D(L 1 ) ([CD19,E ′ / / D ′ C ′ / / X ′ 12 (3.1.5.1)
such that there is a commutative cube
with vertices E, D, C, X_12, E′, D′, C′, X′_12, front face the square (3.1.2.1), back face the square (3.1.5.1), and connecting morphisms f_E ∶ E → E′, f_D ∶ D → D′, f_C ∶ C → C′ and f ∶ X_12 → X′_12
(3.1.5.2)
Assume that f 1 , f 2 , c, c ′ , d, d ′ are proper. Consider the following commutative diagram
C —c̄→ C′ ×_{X′_12} X_12 —c′_f→ X_12, with the Cartesian square

C′ ×_{X′_12} X_12 —c′_f→ X_12
        |f_{C′}                  |f
       C′ ————c′————→ X′_12
(3.1.5.3)

in which c′_f ∘ c̄ = c and f_{C′} ∘ c̄ = f_C,
where c̄ is necessarily proper. There is a natural transformation of functors
(3.1.5.4) f_* c_* c^! = c′_* f_{C′*} c̄_* c̄^! c′^!_f —ad′(c̄_*, c̄^!)→ c′_* f_{C′*} c′^!_f —(2.3.1.4)≃ c′_* c′^! f_*.
For i = 1, 2, denote by c′_i = p′_i ∘ c′ ∶ C′ → X′_i, d′_i = p′_i ∘ d′ ∶ D′ → X′_i. Then there is a canonical map
(3.1.5.5) f_* c_* Hom(c^*_1 L_1, c_2^! L_2) → c′_* Hom(c′^*_1 f_{1*} L_1, c′^!_2 f_{2*} L_2)
given by the composition
f * c * Hom(c * 1 L 1 , c ! 2 L 2 ) (2.4.1.3) ≃ f * c * c ! Hom(p * 1 L 1 , p ! 2 L 2 ) (3.1.5.4) → c ′ * c ′! f * Hom(p * 1 L 1 , p ! 2 L 2 ) (3.1.4.1) ≃ c ′ * c ′! Hom(p ′ * 1 f 1! L 1 , p ′! 2 f 2 * L 2 ) (2.4.1.3) ≃ c ′ * Hom(c ′ * 1 f 1 * L 1 , c ′! 2 f 2 * L 2 ).
(3.1.5.6) By symmetry we have a map
(3.1.5.7) f * d * Hom(d * 2 L 2 , d ! 1 L 1 ) → d ′ * Hom(d ′ * 2 f 2 * L 2 , d ′! 1 f 1 * L 1 ).
Proposition 3.1.6. The square
f * c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ f * d * Hom(d * 2 L 2 , d ! 1 L 1 ) / / f * e * K E c ′ * Hom(c ′ * 1 f 1 * L 1 , c ′! 2 f 2 * L 2 ) ⊗ d ′ * Hom(d ′ * 2 f 2 * L 2 , d ′! 1 f 1 * L 1 ) (3.1.3.3) / / e ′ * K E ′ (3.1.6.1)
is commutative, where the left vertical map is given by the maps (3.1.5.5) and (3.1.5.7), and the upper horizontal row is the composition
f * c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ f * d * Hom(d * 2 L 2 , d ! 1 L 1 ) (2.1.18.1) → f * (c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ d * Hom(d * 2 L 2 , d ! 1 L 1 )) (3.1.3.3) → f * e * K E . (3.1.6.2)
The proof is identical to the one given in [SGA5,III 4.4], see also [YZ18, Theorem 3.3.2].
Remark 3.1.7. Following [DJK18, 4.2.1] (see also 6.1.5 below), there is a natural transformation f^* → f^! for any lci morphism f equipped with an isomorphism Th_X(L_f) ≃ 1_X, where the left-hand side is the Thom space of the trivial virtual tangent bundle.⁴ For example, étale morphisms satisfy this property, but the class of such morphisms is bigger. The map (3.1.3.3) is also compatible with pull-backs along these morphisms, which we do not develop here.
Definition 3.1.8. With the notations of (3.1.2), for two maps u ∶ c^*_1 L_1 → c_2^! L_2 and v ∶ d^*_2 L_2 → d_1^! L_1, which are called (cohomological) correspondences, we define the Verdier pairing
(3.1.8.1) ⟨u, v⟩ ∶ 1_E → K_E
as the map corresponding by adjunction to the composition
1 X 12 → c * 1 C ⊗ d * 1 D c * η L 1 ⊗d * η L 2 → c * Hom(c * 1 L 1 , c * 1 L 1 ) ⊗ d * Hom(d * 2 L 2 , d * 2 L 2 ) u * ⊗v * → c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ d * Hom(d * 2 L 2 , d ! 1 L 1 ) (3.1.3.3) → e * K E . (3.1.8.2)
Remark 3.1.9.
(1) The map ⟨u, v⟩ can be seen as an element of the bivariant group H_0(E/k), and the map (3.1.10.3) below corresponds to the natural proper functoriality (see [DJK18]). We will come back to this point of view later in Section 5.
(2) In the case where every scheme is equal to Spec(k), the assumptions imply that every object L ∈ T c (k) is dualizable and D(L) = L ∨ is the dual object. In this case, given two maps u ∶ L 1 → L 2 , v ∶ L 2 → L 1 , the map ⟨u, v⟩ is the composition
1 k η L 1 ⊗η L 2 →L ∨ 1 ⊗ L 1 ⊗ L ∨ 2 ⊗ L 2 u⊗1⊗v⊗1 → L ∨ 1 ⊗ L 2 ⊗ L ∨ 2 ⊗ L 1 ≃L 1 ⊗ L ∨ 1 ⊗ L 2 ⊗ L ∨ 2 ǫ L 1 ⊗ǫ L 2 → 1 k . (3.1.9.1)
It is not hard to check that ⟨u, v⟩ agrees with the trace of the composition v ○ u ∶ L 1 → L 1 . We will see a more general result in Proposition 3.2.5 below.
(3) For T c = DM cdh,c , we recover the construction in [Ols16, 5.8] as a particular case of the Verdier pairing.
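Remark 3.1.9(2) can be checked by hand in the toy case of finite-dimensional vector spaces: written in coordinates, the coevaluation and evaluation maps of (3.1.9.1) contract the matrix entries of u and v, and the pairing ⟨u, v⟩ equals tr(v ∘ u). A sketch (Python; matrices as nested lists, dimensions and entries illustrative):

```python
# Verdier pairing over a point, decategorified: for u: L1 -> L2 and
# v: L2 -> L1 between finite-dimensional spaces, the composition of
# coevaluation and evaluation maps computes tr(v o u).

def matmul(v, u):
    return [[sum(v[i][k] * u[k][j] for k in range(len(u)))
             for j in range(len(u[0]))] for i in range(len(v))]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def pairing(u, v):
    # Coevaluations produce sums over dual bases; the evaluations then
    # contract the remaining indices: sum over i, j of u[j][i] * v[i][j].
    return sum(u[j][i] * v[i][j]
               for i in range(len(u[0])) for j in range(len(u)))

u = [[1, 2, 0], [3, -1, 4]]    # u: L1 (dim 3) -> L2 (dim 2)
v = [[2, 1], [0, 1], [5, -2]]  # v: L2 -> L1
assert pairing(u, v) == trace(matmul(v, u))
print(pairing(u, v))  # -4
```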
3.1.10. Consider the setting of 3.1.4. We can define the proper direct image of correspondences as follows: given two correspondences u ∶ c * 1 L 1 → c ! 2 L 2 and v ∶ d * 2 L 2 → d ! 1 L 1 , using the maps (3.1.5.5) and (3.1.5.7), we obtain the following maps
f * u ∶ c ′ * 1 f 1 * L 1 → c ′! 2 f 2 * L 2 , (3.1.10.1) f * v ∶ d ′ * 2 f 2 * L 2 → d ′! 1 f 1 * L 1 (3.1.10.2)
regarded as correspondences between f 1 * L 1 and f 2 * L 2 . On the other hand, since the canonical morphism f E ∶ E → E ′ is proper, for any map w ∶ 1 E → K E we define the proper direct image as
(3.1.10.3) f E * w ∶ 1 E ′ → f E * 1 E w → f E * K E = f E * f ! E K E ′ ad ′ (f E * ,f ! E ) → K E ′ .
With the conventions above, the following is a consequence of Proposition 3.1.6:
Corollary 3.1.11. The Verdier pairing is compatible with proper direct images, i.e. we have
⟨f_* u, f_* v⟩ = f_{E*}⟨u, v⟩ ∶ 1_{E′} → K_{E′}. (3.1.11.1)

When X′_1 = X′_2 = C′ = D′ = Spec(k), the maps f_* u ∶ f_{1*} L_1 → f_{2*} L_2 and f_* v ∶ f_{2*} L_2 → f_{1*} L_1 are maps between dualizable objects in T_c(Spec(k)), and the map f_{E*} w is known as the degree map and is traditionally written as ∫_E w. As a particular case of Corollary 3.1.11, we obtain the following Lefschetz-Verdier formula ([SGA5, III 4.7]):

Corollary 3.1.12 (Lefschetz-Verdier formula). When X′_1 = X′_2 = C′ = D′ = Spec(k), we have the equality Tr(f_* v ∘ f_* u) = ∫_E ⟨u, v⟩.

3.2. Composition of correspondences and generalized traces. In this section, we study compositions of correspondences and show that the general Verdier pairing (3.1.8.1) can be reduced to a generalized trace map, which will be a key ingredient for the general form of additivity of traces in Section 4.
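In the classical topological shadow of this corollary, the global term is a Lefschetz number L(f) = Σ_i (−1)^i tr(f^* | H^i), and its nonvanishing forces the local term (a fixed-point contribution) to be nonzero. A sketch (Python; the H^*-action of a degree-d self-map of S² is a standard input entered by hand):

```python
# Lefschetz number L(f) = sum_i (-1)^i tr(f* | H^i), with the action of
# f* in each degree given as a matrix (illustrative data).

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def lefschetz_number(action):
    # action[i] is the matrix of f* on H^i (an empty list means H^i = 0).
    return sum((-1) ** i * trace(m) for i, m in enumerate(action))

# A degree-d self-map of S^2 acts by 1 on H^0 and by d on H^2 (H^1 = 0),
# so L(f) = 1 + d, which is nonzero (hence forces a fixed point) unless
# d = -1, the degree of the antipodal map.
d = 3
sphere_action = [[[1]], [], [[d]]]
print(lefschetz_number(sphere_action))  # 1 + d = 4
```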
3.2.1. Let X 1 , X 2 and X 3 be three schemes. Denote by X ijl = X i × k X j × k X l , and p ijl i ∶ X ijl → X i and p i ∶ X i → k the canonical projections, and we use similar notations for other schemes and morphisms.
For i ∈ {1, 2, 3}, let L i be an object of T c (X i ). We have a canonical map (3.2.1.1) p 123 * 12 Hom(p 12 * 1 L 1 , p 12! 2 L 2 ) ⊗ p 123 * 23 Hom(p 23 * 2 L 2 , p 23! 3 L 3 ) → p 123! 13 Hom(p 13 * 1 L 1 , p 13! 3 L 3 )
given by the composition

p^{123*}_{12} Hom(p^{12*}_1 L_1, p^{12!}_2 L_2) ⊗ p^{123*}_{23} Hom(p^{23*}_2 L_2, p^{23!}_3 L_3) → ⋯ → p^{123!}_{13} Hom(p^{13*}_1 L_1, p^{13!}_3 L_3).

3.2.2. Now consider two morphisms c_{12} ∶ C_{12} → X_{12} and c_{23} ∶ C_{23} → X_{23}. Let C_{13} = C_{12} ×_{X_2} C_{23}, together with a canonical morphism c^{123}_{13} ∶ C_{13} → X_{123}. We denote by c_{13} = p^{123}_{13} ∘ c^{123}_{13} ∶ C_{13} → X_{13}, and c^{ij}_i = p^{ij}_i ∘ c_{ij} ∶ C_{ij} → X_i, etc. Consider the following diagram

(3.2.2.1) C_{13} —q′_1→ C′_{12} —q_{12}→ C_{12} and C_{13} —q′_3→ C′_{23} —q_{23}→ C_{23}, with c′_{12} ∶ C′_{12} → X_{123}, c′_{23} ∶ C′_{23} → X_{123}, c^{123}_{13} ∶ C_{13} → X_{123}, c_{12} ∶ C_{12} → X_{12}, c_{23} ∶ C_{23} → X_{23}, and composites q_1 ∶ C_{13} → C_{12}, q_3 ∶ C_{13} → C_{23},

where all the four squares are Cartesian. For any objects P ∈ T_c(X_{12}) and Q ∈ T_c(X_{23}), we have a canonical map

(3.2.2.2) q^*_1 c^!_{12} P ⊗ q^*_3 c^!_{23} Q = q′^*_1 q^*_{12} c^!_{12} P ⊗ q′^*_3 q^*_{23} c^!_{23} Q —(2.3.1.6)→ q′^*_1 c′^!_{12} p^{123*}_{12} P ⊗ q′^*_3 c′^!_{23} ⋯

From this we deduce a canonical map

(3.2.2.3) q^*_1 Hom(c^{12*}_1 L_1, c^{12!}_2 L_2) ⊗ q^*_3 Hom(c^{23*}_2 L_2, c^{23!}_3 L_3) → Hom(c^{13*}_1 L_1, c^{13!}_3 L_3)

given by the composition

q^*_1 Hom(c^{12*}_1 L_1, c^{12!}_2 L_2) ⊗ q^*_3 Hom(c^{23*}_2 L_2, c^{23!}_3 L_3) —(2.4.1.3)≃ q^*_1 c^!_{12} Hom(p^{12*}_1 L_1, p^{12!}_2 L_2) ⊗ q^*_3 c^!_{23} ⋯ ≃ Hom(c^{13*}_1 L_1, c^{13!}_3 L_3). (3.2.2.4)
Definition 3.2.3. Given two correspondences u ∶ c 12 * 1 L 1 → c 12! 2 L 2 and v ∶ c 23 * 2 L 2 → c 23! 3 L 3 , we deduce from the map (3.2.2.3) a correspondence from L 1 to L 3 vu ∶ c 13 * 1 L 1 → c 13! 3 L 3 (3.2.3.1) which we call the composition of the correspondences u and v.
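A finite toy model of Definition 3.2.3: when the X_i are finite sets and a correspondence from L_1 to L_2 is recorded as a kernel (a matrix indexed by X_2 × X_1), composition over the middle factor X_2, as in C_12 ×_{X_2} C_23, is exactly matrix multiplication. A sketch (Python; the kernels are illustrative data):

```python
# Composition of correspondences as composition of integral kernels:
# (vu)[x3][x1] = sum over the middle variable x2, mirroring the fiber
# product over X_2 in Definition 3.2.3.

def compose(v, u):
    return [[sum(v[x3][x2] * u[x2][x1] for x2 in range(len(u)))
             for x1 in range(len(u[0]))] for x3 in range(len(v))]

u = [[1, 0, 2], [0, 1, 1]]    # kernel from X_1 (3 points) to X_2 (2 points)
v = [[1, 1], [2, 0], [0, 3]]  # kernel from X_2 back to X_3 = X_1
vu = compose(v, u)
print(vu)                               # [[1, 1, 3], [2, 0, 4], [0, 3, 3]]
print(sum(vu[i][i] for i in range(3)))  # trace of the composition: 4
```

In this model, pairing the composed correspondence vu against the identity returns its trace, which is the decategorified content of Proposition 3.2.5 below.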
Now assume that
X 1 = X 3 and L 1 = L 3 . Consider two morphisms c ∶ C → X 12 , d ∶ D → X 21 ≃ X 12 . For i ∈ {1, 2}, we denote by c i = p 12 i ○ c ∶ C → X i and d i = p 21 i ○ d ∶ D → X i the canonical maps. Let F = C × X 2 D and E = C × X 12 D, then there is a Cartesian diagram of the form E f ′ / / δ ′ X 12 p 12 1 / / δ 1 X 1 δ F f / / X 121 p 121 11 / / X 11 (3.2.4.1)
and E is canonically identified with the fiber product F × X 11 X 1 via the diagonal map δ ∶ X 1 → X 11 . As in 3.2.2 we denote by q 1 ∶ F → C and q 3 ∶ F → D the canonical maps, and f 1 , f 3 the compositions
f_1 ∶ F —q_1→ C —c→ X_12 —p^{12}_1→ X_1 (3.2.4.2)
f_3 ∶ F —q_3→ D —d→ X_21 —p^{21}_1→ X_1. (3.2.4.3)
Suppose that u ∶ c^*_1 L_1 → c_2^! L_2 and v ∶ d^*_2 L_2 → d_1^! L_1 are two correspondences.

Proposition 3.2.5. We have
⟨u, v⟩ = ⟨vu, 1⟩ ∶ 1_E → K_E (3.2.5.1)
where 1 is the identity correspondence id ∶ 1 X 1 → 1 X 1 .
Proof. Denote by p^{11}_1, p^{11}_3 ∶ X_{11} → X_1 the projections to the first and the second factors. We have a canonical map
(3.2.5.2) 1 X 1 η L 1 → Hom(L 1 , L 1 ) (2.4.1.3) ≃ δ ! Hom(p 11 * 3 L, p 11! 1 L) (3.1.1.1) ≃ δ ! (p 11 * 3 D(L) ⊗ p 11 * 1 L), from which we deduce a canonical map (3.2.5.3) δ ′ * f * f * Hom(f * 1 L 1 , f ! 3 L 1 ) → f ′! p 12! 1 (D(L 1 ) ⊗ L 1 ) given by the composition δ ′ * f * f * Hom(f * 1 L 1 , f ! 3 L 1 ) (2.4.1.3) ≃ δ ′ * f * f * f ! p 121! 11 Hom(p 11 * 1 L 1 , p 11! 3 L 1 ) (2.4.1.3) ≃ δ ′ * f * f * f ! p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 3 L 1 ) (3.2.5.2) → δ ′ * f * (f * f ! p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 3 L 1 ) ⊗ δ 1 * p 12 * 1 δ ! (p 11 * 3 D(L) ⊗ p 11 * 1 L)) (2.3.1.6) → δ ′ * f * (f * f ! p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 3 L 1 ) ⊗ δ 1 * δ ! 1 p 121 * 11 (p 11 * 3 D(L) ⊗ p 11 * 1 L)) (3.1.2.3) → δ ′ * f * f * δ ′ * δ ′! f ! (p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 3 L 1 ) ⊗ p 121 * 11 (p 11 * 3 D(L) ⊗ p 11 * 1 L))) ad ′ ((f ○δ ′ ) * ,(f ○δ ′ ) * )
→ δ ′! f ! (p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 3 L 1 ) ⊗ p 121 * 11 (p 11 * 3 D(L) ⊗ p 11 * 1 L))) (6.1.1.1)
→ δ ′! f ! p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 3 L 1 ⊗ p 11 * 3 D(L) ⊗ p 11 * 1 L)
ǫ L 1 → δ ′! f ! p 121! 11 (p 11 * 1 D(L 1 ) ⊗ p 11 * 1 L ⊗ p 11 * 3 K X 1 ) (2.3.3.2) ≃ δ ′! f ! p 121! 11 p 11! 1 (D(L 1 ) ⊗ L 1 ) = f ′! p 12! 1 (D(L 1 ) ⊗ L 1 ).
(3.2.5.4) Note that we have natural transformations f ′ * c * → f * f * q * 1 and f ′ * d * → f * f * q * 3 . We want to show that both the maps ⟨u, v⟩ and ⟨vu, 1⟩ are equal to the following composition
1 E → δ ′ * f * f * 1 F vu → δ ′ * f * f * Hom(f * 1 L 1 , f ! 3 L 1 ) (3.2.5.3) → f ′! p 12! 1 (D(L 1 ) ⊗ L 1 ) ǫ L 1 → f ′! p 12! 1 K X 1 = K E .
This follows from the commutativity of the following diagram
1 E / / + + ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ f ′ * (c * 1 C ⊗ d * 1 D ) u⊗v / / f ′ * (c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ d * Hom(d * 2 L 2 , d ! 1 L 1 )) (3.1.3.3) r r δ ′ * f * p 121 * 11 p 121 11 * f * 1 F / / vu δ ′ * f * f * 1 F vu δ ′ * f * f * (q * 1 Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ q * 3 d * Hom(d * 2 L 2 , d ! 1 L 1 )) (3.2.2.3) r r ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡ δ ′ * f * p 121 * 11 p 121 11 * f * Hom(f * 1 L 1 , f ! 3 L 1 ) / / 1 1 δ ′ * f * f * Hom(f * 1 L 1 , f ! 3 L 1 ) (3.2.5.3) / / f ′! p 12! 1 (D(L 1 ) ⊗ L 1 )
where each subdiagram follows either from definition or from a straightforward check.
Remark 3.2.6. Proposition 3.2.5 says that the Verdier pairing (3.1.8.1) can always be reduced to the case where X 1 = X 2 , L 1 = L 2 and one of the correspondences is the identity. As such we reduce the pairing with two entries to a generalized trace map, thereby making it much easier to deal with additivity along distinguished triangles.
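To make the reduction concrete, here is the bookkeeping (our paraphrase, assuming the additivity of the generalized trace established later in Section 4 and the compatibility of composition with restriction from Lemma 4.2.7), for correspondences u, v compatible with a distinguished triangle L → M → N → ΣL:

```latex
% Additivity of the two-entry Verdier pairing, reduced via Prop. 3.2.5
% to additivity of the generalized trace <-,1> (proved in Section 4).
\langle u_M, v_M \rangle
  = \langle (vu)_M, 1 \rangle                                   % Proposition 3.2.5
  = \langle (vu)_L, 1 \rangle + \langle (vu)_N, 1 \rangle       % trace additivity
  = \langle u_L, v_L \rangle + \langle u_N, v_N \rangle .       % Prop. 3.2.5 again
```

This is exactly the route taken in the additivity theorem of Section 4.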
3.2.7. We now give a more explicit description of the map ⟨u, 1⟩ (see [AS07, Proposition 2.1.7] 5 ). Let X be a scheme and c ∶ C → X × k X be a morphism. We use the notation in 3.1.2, with D = X, d = δ ∶ X → X × k X the diagonal morphism and the Cartesian diagram
E c′→ X
δ′↓ δ↓
C c→ X ×_k X   (3.2.7.1)

where the Cartesian square is denoted by ∆.

Proposition 3.2.8. Let L ∈ T_c(X) and u ∶ c^*_1 L → c^!_2 L be a correspondence. Denote by 1 = id_L ∶ L → L the identity correspondence, and by u′ the following map:
1 C η c * 1 L → Hom(c * 1 L, c * 1 L) u * → Hom(c * 1 L, c ! 2 L) (2.4.1.3) ≃ c ! Hom(p * 1 L, p ! 2 L) (3.1.1.1) ≃ c ! (p * 1 D(L) ⊗ p * 2 L).
(3.2.8.1)
Then the map ⟨u, 1⟩ ∶ 1 E → K E is obtained by adjunction from the map
c ′ ! 1 E (Ex(∆ * ! )) −1 ≃ δ * c ! 1 C u ′ → δ * c ! c ! (p * 1 D(L) ⊗ p * 2 L) ad ′ (c ! ,c ! ) → δ * (p * 1 D(L) ⊗ p * 2 L) = D(L) ⊗ L ≃ L ⊗ D(L) ǫ L → K X . (3.2.8.2)
Proof. Similarly to the map (3.2.5.2), we denote by η ′ the following map
1 X η L →Hom(L, L) = Hom(δ * p * 2 L, δ ! p ! 1 L) (2.4.1.3) ≃ δ ! Hom(p * 2 L, p ! 1 L) (3.1.1.1) ≃ δ ! (p * 2 D(L) ⊗ p * 1 L).
(3.2.8.3)
We are reduced to show the commutativity of the following diagram:
1 E ad (c ′ ! ,c ′! ) / / u ′ c ′! c ′ ! 1 E (Ex(∆ * ! )) −1 ∼ / / c ′! δ * c ! 1 C u ′ δ ′ * c ! (p * 1 D(L) ⊗ p * 2 L) (2.3.1.6) / / η ′ c ′! δ * (p * 1 D(L) ⊗ p * 2 L) η ′ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ ❳ c ′! δ * c ! c ! (p * 1 D(L) ⊗ p * 2 L) ad ′ (c ! ,c ! ) o o δ ′ * c ! (p * 1 D(L) ⊗ p * 2 L) ⊗ c ′ * δ ! (p * 2 D(L) ⊗ p * 1 L) (2.3.1.6) / / (3.1.2.2) c ′! δ * (p * 1 D(L) ⊗ p * 2 L) ⊗ c ′ * δ ! (p * 2 D(L) ⊗ p * 1 L) (6.1.1.1) r r ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡❡ ❡ c ′! (D(L) ⊗ L) ≀ c ′! δ ! (p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L) ⊗ p * 1 L) ∼ / / c ′! δ ! (p * 1 L ⊗ p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L)) ǫ L ⊗ǫ L c ′! (L ⊗ D(L)) ǫ L c ′! δ ! (p * 1 K X ⊗ p * 2 K X ) ∼ / / c ′! K X . 5
The statement in loc. cit. holds for c a closed immersion, but our modified version holds in general.
We show the commutativity of the octagon, while the rest of the diagram follows from a straightforward check. We are reduced to the following diagram:
δ * (p * 1 D(L) ⊗ p * 2 L) η ′ ad (δ ! ,δ ! ) + + ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ D(L) ⊗ L ∼ / / L ⊗ D(L) ad (δ ! ,δ ! ) ǫ L | | δ * (p * 1 D(L) ⊗ p * 2 L) ⊗ δ ! (p * 2 D(L) ⊗ p * 1 L) (6.1.1.1) δ ! δ ! δ * (p * 1 D(L) ⊗ p * 2 L) ∼ / / δ ! δ ! (L ⊗ D(L)) ad (q ! ,q ! ) δ ! (p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L) ⊗ p * 1 L) ≀ δ ! (p * 1 D(L) ⊗ p * 2 L ⊗ δ ! 1 X ) Ex(δ * ! ,⊗) ≀ O O η ′ s s ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ ❣ p ! p ! (L ⊗ D(L)) ǫ L δ ! (p * 1 L ⊗ p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L)) ǫ L ⊗ǫ L / / ǫ L 1 1 ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ ❞ δ ! (p * 1 K X ⊗ p * 2 K X ) (3.1.1.4) ∼ / / K X
where p ∶ X → Spec(k) and q ∶ X × k X → Spec(k) are structural morphisms. We are reduced to the commutativity of the pentagon, namely the following diagram:
δ ! δ * (p * 1 D(L) ⊗ p * 2 L) ∼ / / δ ! (L ⊗ D(L)) ad (q ! ,q ! ) / / q ! q ! δ ! (L ⊗ D(L)) p * 1 D(L) ⊗ p * 2 L ⊗ δ ! 1 X η ′ ad (q ! ,q ! ) / / Ex(δ * ! ,⊗) ≀ O O q ! q ! (p * 1 D(L) ⊗ p * 2 L ⊗ δ ! 1 X ) η ′ ∼ 3 3 ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ p * 1 D(L) ⊗ p * 2 L ⊗ δ ! δ ! (p * 2 D(L) ⊗ p * 1 L) ad (q ! ,q ! ) / / ad ′ (δ ! ,δ ! ) q ! q ! (p * 1 D(L) ⊗ p * 2 L ⊗ δ ! δ ! (p * 2 D(L) ⊗ p * 1 L)) ad (δ ! ,δ ! ) q ! (p ! (L ⊗ D(L)) ⊗ p ! K X ) (2.2.3.1) ≀ ad ′ (p ! ,p ! ) O O p * 1 L ⊗ p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L) ad (q ! ,q ! ) / / q ! q ! (p * 1 L ⊗ p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L)) ǫ L / / q ! q ! (p * 1 (L ⊗ D(L)) ⊗ p * 2 K X ).
By considering the hexagon in the diagram above, we are reduced to the following diagram:
q ! (p * 1 D(L) ⊗ p * 2 L ⊗ δ ! 1 X ) ∼ / / η ′ q ! δ ! (L ⊗ D(L)) ad ′ (δ ! ,δ ! ) q ! (p * 1 D(L) ⊗ p * 2 L ⊗ δ ! δ ! (p * 2 D(L) ⊗ p * 1 L)) ad ′ (δ ! ,δ ! ) q ! p ! 1 (L ⊗ D(L)) p ! (L ⊗ D(L)) ⊗ p ! K X ad ′ (p ! ,p ! ) j j ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ ❯ (2.2.3.1) ∼ t t ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ ✐ q ! (p * 1 D(L) ⊗ p * 2 L ⊗ p * 2 D(L) ⊗ p * 1 L) ǫ L / / q ! (p * 1 (L ⊗ D(L)) ⊗ p * 2 K X ). (2.3.3.2) ≀ O O
The right part is a straightforward check, and the left part is reduced to the following diagram:
p * 2 L ⊗ δ ! 1 X η ′ / / Ex(δ * ! ,⊗) ≀ p * 2 L ⊗ p * 2 D(L) ⊗ p * 1 L ǫ L / / p * 1 L ⊗ p * 2 K X (2.3.3.2) ≀ δ ! L ad ′ (δ ! ,δ ! ) / / p ! 1 L which commutes since the composition δ ! L = δ ! δ * p * 2 L → δ ! (δ * p * 2 L ⊗ Hom(δ * p * 2 L, δ ! p ! 1 L)) → δ ! δ ! p ! 1 L = δ ! L (3.2.8.4)
is the identity map.
Remark 3.2.9. In the special case where c = δ ∶ X → X × k X is the diagonal map, the Künneth formulas indeed produce a map (3.2.9.1)
1 X = δ * δ ! 1 X η ′ → δ * (p * 2 D(L) ⊗ p * 1 L) = D(L) ⊗ L.
Together with the map ǫ L ∶ L ⊗ D(L) → K X , they can be seen as the counit and unit maps of a duality formalism similar to the usual (strong) duality, where the usual dualizing functor is replaced by the local duality functor D, which gives rise to trace maps without requiring L to be strongly dualizable; the general Verdier pairing is a more general form of the trace map in such a duality formalism. In Section 4 we will combine this point of view with the approach in [May01] in the study of additivity of traces. In particular, as mentioned in the introduction, the characteristic class of L is the composition
1 X (3.2.9.1) → D(L) ⊗ L ≃ L ⊗ D(L) ǫ L → K X (3.2.9.2)
which will be studied in more detail in Section 5.
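For comparison (a standard fact, not specific to this text): when L is strongly dualizable with dual L^∨, the composition (3.2.9.2) specializes to the classical categorical trace, with the map (3.2.9.1) playing the role of the coevaluation:

```latex
% Classical trace of u \in End(L) for strongly dualizable L:
\operatorname{tr}(u)\colon\;
\mathbf{1} \xrightarrow{\;\eta\;} L \otimes L^{\vee}
           \xrightarrow{\;u \otimes \operatorname{id}\;} L \otimes L^{\vee}
           \xrightarrow{\;\cong\;} L^{\vee} \otimes L
           \xrightarrow{\;\epsilon\;} \mathbf{1}.
% In the local duality formalism, L^{\vee} is replaced by D(L) and the
% target 1 by K_X, so no strong dualizability of L is required.
```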
ADDITIVITY OF THE VERDIER PAIRING
In this section we prove the additivity of the Verdier pairing following [May01] and [GPS14], using the language of derivators ([Ayo07], [GPS14]).
4.1. May's axioms in stable derivators.
In this section we recall the notion of closed symmetric monoidal stable derivators and obtain May's axioms following [GPS14]; the statements we need are slightly different from loc. cit. and can be obtained with very minor changes from the original proofs. Since we are mostly interested in constructible motives, we only consider finite diagrams for convenience.
Notation 4.1.1. We denote by FinCat the 2-category of finite categories, by CAT the 2-category of categories and by TR the subcategory of CAT of triangulated categories and triangulated functors.
We denote by ∅ the empty category, 0 the terminal category and 1 = (0 → 1) the category with two objects and one non-identity morphism between them. We denote by ◻ the category 1 × 1 (a commutative square), by ⌜ the full subcategory of ◻ obtained by removing the final object and by ⌟ the one obtained by removing the initial object, with inclusions i_⌜ ∶ ⌜ → ◻ and i_⌟ ∶ ⌟ → ◻.

Definition 4.1.2. A (strong) stable derivator is a (non-strict) 2-functor T_c ∶ FinCat^op → CAT satisfying the following properties:
(1) T c sends coproducts to products. In particular T c (∅) = 0.
(2) For any I, J ∈ F inCat, the canonical functor T c (I × J) → F un(I op , T c (J)) is conservative.
(3) For any A ∈ FinCat, the canonical functor T_c(A × 1) → Fun(1, T_c(A)) is full and essentially surjective.
(4) For any functor u ∶ A → B in FinCat, the functor u^* ∶ T_c(B) → T_c(A) admits a left adjoint u_# and a right adjoint u_*.
(5) For any u ∶ A → B and any object b of B, with j_{b∖A} ∶ b∖A → A the canonical functor from the comma category and p_{b∖A} ∶ b∖A → 0 the projection, the base change maps

b^* u_* → p_{b∖A *} p_{b∖A}^* b^* u_* → p_{b∖A *} j_{b∖A}^* u^* u_* → p_{b∖A *} j_{b∖A}^*   (4.1.2.1)

p_{b∖A #} j_{b∖A}^* → p_{b∖A #} j_{b∖A}^* u^* u_# → p_{b∖A #} p_{b∖A}^* b^* u_# → b^* u_#   (4.1.2.2)

are invertible.
(6) For any I ∈ FinCat the category T_c(I) has a zero object, i.e. an object which is both initial and terminal.
(7) For any I ∈ FinCat and any object X of T_c(◻ × I), X is Cartesian (i.e. the canonical map X → i_{⌟ *} i_⌟^* X is invertible) if and only if X is coCartesian (i.e. the canonical map i_{⌜ #} i_⌜^* X → X is invertible). We also say that X is biCartesian.
An object in T c (A) is called an (A-shaped) coherent diagram, which has an underlying incoherent diagram in F un(A, T c (0)).
The cofiber functor cof ∶ T c (1) → T c (1) is the composition
T c (1) (0,⋅) * → T c (⌜) (i⌜) # → T c (◻) (⋅,1) * → T c (1). (4.1.2.3)
If T c is a stable derivator, for any I ∈ F inCat we define a functor Σ ∶ T c (I) → T c (I) by setting
Σ = (1, 1)^* (i_⌜)_# (i_⌜)^* (0, 0)_*. For a biCartesian X ∈ T_c(◻ × I) depicted as the square with top row x f→ y, right column y g→ z and bottom row 0 → z,   (4.1.2.4)

with x, y, z ∈ T_c(I), we define a canonical map z h→ Σx as follows:

h ∶ z = (1, 1)^* X ∼← (1, 1)^* (i_⌜)_# (i_⌜)^* X → (1, 1)^* (i_⌜)_# (i_⌜)^* (0, 0)_* (0, 0)^* X = Σ(0, 0)^* X = Σx.   (4.1.2.5)
Then the category T c (I) has the structure of a triangulated category by considering Σ as the shift functor and letting distinguished triangles to be the ones isomorphic to a triangle of the form
x f → y g → z h → Σx. (4.1.2.6)
Therefore we can also see stable derivators as 2-functors T c ∶ F inCat op → T R.
Notation 4.1.3. We denote by SM T R the 2-category of symmetric monoidal triangulated categories with (strong) monoidal functors.
As in [GPS14], if ⊙ ∶ T_{c,1} × T_{c,2} → T_{c,3} is a two-variable morphism of stable derivators, it induces an internal product ⊙_A ∶ T_{c,1}(A) × T_{c,2}(A) → T_{c,3}(A), together with an external product given by the composition

T_{c,1}(A) × T_{c,2}(B) π_A^* × π_B^*→ T_{c,1}(A × B) × T_{c,2}(A × B) ⊙_{A×B}→ T_{c,3}(A × B).

Definition 4.1.4. A symmetric monoidal stable derivator is a stable derivator T_c equipped with a two-variable morphism ⊗ ∶ T_c × T_c → T_c, subject in particular to the following condition:

(2) For any A, B, C ∈ FinCat, u ∶ A → B, X ∈ T_c(A), Y ∈ T_c(C), the canonical map of external products given by the composition

(u × 1)_#(X ⊗ Y) → (u × 1)_#(u^* u_# X ⊗ Y) ≃ (u × 1)_#(u × 1)^*(u_# X ⊗ Y) → u_# X ⊗ Y   (4.1.4.1)

is an isomorphism.

It is closed if the functor ⊗ has a right adjoint Hom(⋅, ⋅).
The following is [GPS14, Corollary 4.5]:
Proposition 4.1.5 (TC1). Let T_c be a symmetric monoidal stable derivator and denote by 1 the unit in T_c(0). Then for any object x in T_c(0) there is a natural equivalence α ∶ Σx ≃ x ⊗ Σ1 such that the composition

ΣΣ1 α→ Σ1 ⊗ Σ1 s≃ Σ1 ⊗ Σ1 α⁻¹→ ΣΣ1   (4.1.5.1)

is multiplication by −1, where s is the isomorphism that exchanges the two factors.
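The sign in (4.1.5.1) is the familiar Koszul sign. For instance, in the derived category of abelian groups (a closed symmetric monoidal stable derivator), Σ1 = Z[1] and the symmetry on Z[1] ⊗ Z[1] ≅ Z[2] is:

```latex
% Koszul sign rule: the symmetry s on A[p] \otimes B[q] carries (-1)^{pq}.
s \colon \mathbf{Z}[1] \otimes \mathbf{Z}[1] \longrightarrow
         \mathbf{Z}[1] \otimes \mathbf{Z}[1],
\qquad s = (-1)^{1 \cdot 1} \cdot \operatorname{id} = -\operatorname{id},
% which is the content of (TC1) in this example.
```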
The following is [GPS14, Theorems 4.8 and 9.12]:
Proposition 4.1.6 (TC2). Let T c be a closed symmetric monoidal stable derivator. Then for any distinguished triangle
x f → y g → z h → Σx (4.1.6.1)
in T c (0) and any t ∈ T c (0), the following triangles are distinguished:
x ⊗ t f ⊗1 → y ⊗ t g⊗1 → z ⊗ t h⊗1 → Σ(x ⊗ t). (4.1.6.2) t ⊗ x 1⊗f → t ⊗ y 1⊗g → t ⊗ z 1⊗h → Σ(t ⊗ x).→ b 1 to a 2 f 2 → b 2 are pairs of morphisms b 1 h → b 2 and a 2 g → b 1 such that f 2 = hf 1 g. There is a canonical map (t op , s op ) ∶ tw(A) op → A op × A sending a f → b to (b, a).
If T c is a stable derivator and X ∈ T c (A op × A), the coend of X is defined as
A X ∶= (π tw(A) op ) # (t op , s op ) * X ∈ T c (0) (4.1.8.1) where π tw(A) op ∶ tw(A) op → 0 is the canonical map.
If ⊙ ∶ T c,1 × T c,2 → T c,3 is a two-variable morphism of stable derivators and X ∈ T c,1 (A op ), Y ∈ T c,2 (A), the cancelling tensor product of X and Y is defined as
X ⊙ [A] Y ∶= A (X ⊙ Y ) ∈ T c,3 (0). (4.1.8.2)
If T c is a closed symmetric monoidal derivator, we have two natural two-variable morphisms ⊗ ∶ T c × T c → T c and Hom ∶ T op c × T c → T c . We denote by X ⊗ [A] Y and Hom [A] (X, Y ) respectively the corresponding cancelling tensor products.
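As a sanity check of the definitions (our computation, following the shape of the constructions in [GPS14]): for A = 1 the twisted arrow category tw(1) is the span id_0 → f ← id_1, so tw(1)^op has the ⌜-shape and the coend is a homotopy pushout. Explicitly, for X ∈ T_{c,1}(1^op) and Y ∈ T_{c,2}(1):

```latex
% (t^{op}, s^{op}) sends the objects of tw(1)^{op} to
%   id_0 \mapsto (0,0), \quad f \mapsto (1,0), \quad id_1 \mapsto (1,1),
% so the coend is the homotopy colimit over the resulting span:
X \odot_{[1]} Y \;\simeq\; \operatorname{hocolim}\Bigl(
  X(0) \odot Y(0) \longleftarrow X(1) \odot Y(0) \longrightarrow X(1) \odot Y(1)
\Bigr),
% i.e. the pushout glueing the two external products along X(1) \odot Y(0).
```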
Proposition 4.1.9 (TC3). Let T c be a closed symmetric monoidal derivator and let x f → y and x ′ f ′ → y ′ be two maps which give rise to distinguished triangles in T c (0)
x f → y g → z h → Σx. (4.1.9.1) x ′ f ′ → y ′ g ′ → z ′ h ′ → Σx ′ . (4.1.9.2)
Then the following properties hold:
(1) For v ∶= Hom [1] (cof(f ), f ′ ) there are distinguished triangles
Hom(y, x′) p_1→ v j_1→ Hom(z, z′) Hom(g,h′)→ ΣHom(y, x′)   (4.1.9.3)

Σ⁻¹Hom(x, z′) p_2→ v j_2→ Hom(y, y′) −Hom(f,g′)→ Hom(x, z′)   (4.1.9.4)

Hom(z, y′) p_3→ v j_3→ Hom(x, x′) Hom(h,f′)→ ΣHom(z, y′)   (4.1.9.5)

together with a coherent diagram relating these triangles, involving the maps Hom(g, 1_{z′}) and −Hom(1_z, h′) out of Hom(z, z′) and the objects Hom(x, y′), ΣHom(z, x′) and Hom(y, z′).
(2) For w ∶= Hom_[1](cof(f), cof(f′)), there are distinguished triangles

Hom(z, z′) k_1→ w q_1→ Hom(x, y′) Hom(h,g′)→ ΣHom(z, z′)   (4.1.9.6)

Hom(y, y′) k_2→ w q_2→ ΣHom(z, x′) −ΣHom(g,f′)→ ΣHom(y, y′)   (4.1.9.7)

Hom(x, x′) k_3→ w q_3→ Hom(y, z′) Hom(f,h′)→ ΣHom(x, x′)   (4.1.9.8)

with a similar coherent diagram.

(3) For u ∶= f ⊗_[1] cof(f′), there are distinguished triangles

x ⊗ z′ l_1→ u r_1→ z ⊗ y′ h⊗g′→ Σx ⊗ z′   (4.1.9.9)

y ⊗ y′ l_2→ u r_2→ Σx ⊗ x′ −Σ(g⊗f′)→ Σy ⊗ y′   (4.1.9.10)

z ⊗ x′ l_3→ u r_3→ y ⊗ z′ f⊗h′→ Σz ⊗ x′   (4.1.9.11)

with a similar coherent diagram. We call these statements respectively (TC3D), (TC3D′) and (TC3′).
Proof. The proof of (TC3D) is the same as that of [GPS14, Theorem 6.2], where we replace everywhere ⊗ by Hom. The statement of (TC3′) is proved in [GPS14, Section 7], and (TC3D′) follows from a similar argument.
The following has the same proof as [GPS14, Theorem 7.3]: for any distinguished triangle

x f→ y g→ z h→ Σx   (4.1.11.1)

in T_c(0) and any t ∈ T_c(0), the t-dual triangle

Hom(z, t) Hom(g,t)→ Hom(y, t) Hom(f,t)→ Hom(x, t) Hom(Σ⁻¹h,t)→ ΣHom(z, t)   (4.1.11.2)

is distinguished. By [GPS14, Lemma 7.1], we have

Hom(f, t) ⊗_[1] f ≃ Hom(g, t) ⊗_[1] g ≃ Hom(h, t) ⊗_[1] h   (4.1.11.3)

and we denote by u this object. The following follows from the proof of [GPS14, Theorem 10.3]:
Proposition 4.1.12 (TC5a). With the notations above, there is a map ǭ ∶ u → t in T_c(0) such that the following incoherent diagrams commute: the composites

Hom(x, t) ⊗ x l_1→ u ǭ→ t,   Hom(y, t) ⊗ y l_2→ u ǭ→ t,   Hom(z, t) ⊗ z l_3→ u ǭ→ t

are equal to the evaluation maps ǫ_x, ǫ_y and ǫ_z respectively.

Definition 4.1.14. We say that an object t ∈ T_c(0) is dualizing if for any x ∈ T_c(0), the following canonical map is an isomorphism:

x → Hom(Hom(x, t), t).   (4.1.15.1)

In this case we write D_t(−) = Hom(−, t).

Lemma 4.1.15. Let t ∈ T_c(0) be a dualizing object. Then for any objects a, b ∈ T_c(0) there is a canonical isomorphism

Hom(a, b) → D_t(a ⊗ D_t(b)).

Proof. The proof is the same as [CD19, Corollary 4.4.24]: we have a canonical isomorphism

D_t(a ⊗ c) ≃ Hom(a, D_t(c))

and since t is dualizing, the result follows by replacing c by D_t(b) in the previous map.
Thanks to Lemma 4.1.15, the proof of the following is similar to [GPS14, Theorem 11.12]:
Proposition 4.1.16 (TC5b). Consider a distinguished triangle
x f → y g → z h → Σx (4.1.16.1)
in T c (0) and let t ∈ T c (0) be a dualizing object. Then the (TC3') diagram specified in (TC5a) for the triangles (D t g, D t f, D t Σ −1 h) and (f, g, h) is isomorphic to the t-dual of the (TC3D) diagram for the triangles (f, g, h) and (f, g, h).
Remark 4.1.17. The original (TC5b) statement also requires the (TC4) axiom for the dual diagram up to an involution, which is also true in our case if we write down the corresponding (TC3) and (TC3D') diagrams; but such a fact is not used in [GPS14] for the proof of the additivity of traces.
The following is obtained by taking the t-dual of (TC5a); it is the statement dual to Proposition 4.1.12.

4.2. Stable algebraic derivators.

4.2.1. We denote by DiaSch the 2-category of diagrams of schemes: an object is a pair (F, J) with J ∈ FinCat and F ∶ J → Sch a functor, and a 1-morphism (f, α) ∶ (F, J) → (G, J′) consists of a functor α ∶ J′ → J together with, for every object i of J′, a morphism of schemes F(α(i)) → G(i), compatible with the transition morphisms, i.e. fitting into commutative squares of the form G(j) → G(i), F(α(j)) → F(α(i)).   (4.2.1.1)

If X ∈ Sch and J ∈ FinCat, we denote by (X, J) the object in DiaSch with constant value X.
The following definition is almost identical to [Ayo07, Definition 2.4.13]:
Definition 4.2.2. A stable algebraic derivator is a (non-strict) 2-functor T c ∶ (DiaSch) op → T R satisfying the following properties:
(1) T c sends coproducts to products.
(2) For any 1-morphism (f, α) ∶ (F, J) → (G, J′) in DiaSch, the functor (f, α)^* has a right adjoint (f, α)_*.
(3) For any 1-morphism (f, α) ∶ (F, J) → (G, J′) in DiaSch which is termwise smooth, the functor (f, α)^* has a left adjoint (f, α)_#.
(4) If f ∶ G → F is a morphism of J-diagrams of schemes and α ∶ J′ → J is a functor in FinCat,
then the exchange 2-morphism
α * f * → (f J ′ ) * α * (4.2.2.1)
associated to the following Cartesian square in DiaSch is invertible:
(4.2.2.2) (G ○ α, J ′ ) α / / f J ′ (G, J) f (F ○ α, J ′ ) α / / (F, J).
(5) In the situation of (4), if f is Cartesian and termwise smooth, then the following exchange 2morphism associated the square (4.2.2.2) is invertible:
(f J ′ ) # α * → α * f # .
(6) For any X ∈ Sch, the 2-functor
T c (X, ⋅) ∶ F inCat op → T R J ↦ T c (X, J) (4.2.2.3)
is a stable derivator (Definition 4.1.2).
(7) The 2-functor T c (⋅, 0) ∶ Sch op → T R X ↦ T c (X, 0) (4.2.2.4)
is the subcategory of constructible objects in a motivic triangulated category T ([CD19, Definition 4.2.1]). We call T the underlying motivic triangulated category.
Note that by [CD19, Corollary 4.4.24] and [CD15, Theorem 7.3], if the underlying motivic triangulated category satisfies (RS), then for any scheme X, K_X is a dualizing object of T_c(X, 0). We will then write D for D_{K_X}, consistently with the previous sections.

4.2.3. By [Ayo07, Section 2.4.4], given a stable algebraic derivator T_c, one can extend the four functors f^*, f_*, f_!, f^! to diagrams of schemes in the following form:
• For any 1-morphism (f, α) ∶ (F, J) → (G, J ′ ) in DiaSch, there is a pair of adjoint functors (f, α) * ∶ T c (G, J ′ ) ⇌ T c (F, J) ∶ (f, α) * . (4.2.3.1) • For any J ∈ F inCat and any Cartesian J-shaped 1-morphism f ∶ (F, J) → (G, J) in DiaSch,
there is a pair of adjoint functors
f ! ∶ T c (F, J) ⇌ T c (G, J) ∶ f ! . (4.2.3.2)
If f ∶ X → Y is a morphism of schemes, then the four functors associated to morphisms of the form (f, J) ∶ (X, J) → (Y, J) commute with finite limits and colimits along diagrams, and therefore commute with the coend construction (Definition 4.1.8): we have a commutative diagram
T c (Y, A op × A) f * / / ∫ A T c (X, A op × A) ∫ A T c (Y, 0) f * / / T c (X, 0) (4.2.3.3)
and similarly for the other functors f * , f ! and f ! .
We now deal with the monoidal structure. A constructible motivic derivator is a stable algebraic derivator T_c which is closed symmetric monoidal, in the sense that each T_c(F, J) is closed symmetric monoidal, the functors (f, α)^* are monoidal, and for f termwise smooth the projection formula map

f_#(f^* A ⊗_{G,J} B) → A ⊗_{F,J} f_# B   (4.2.4.2)

is invertible.

Proposition 4.2.6. Let T_c be a constructible motivic derivator whose underlying motivic triangulated category satisfies (RS). In the setting of 3.2.7, let Γ be a biCartesian square in T_c(X, ◻) with vertices L, M, N and ∗ (so that L → M → N → ΣL is a distinguished triangle), and let φ ∶ c_1^* Γ → c_2^! Γ be a morphism of squares, with restrictions u_L, u_M, u_N. Then

⟨u_M, 1⟩ = ⟨u_L, 1⟩ + ⟨u_N, 1⟩ ∶ 1_E → K_E.

Proof. One checks the commutativity of a large diagram built out of: the maps δ^* c^! v → δ^* c^! Hom(c_1^* M, c_1^* M) and δ^* c^! v → δ^* c^! w induced by the triangle and by φ_M; the map δ^* c^! w → δ^* c^! w′ induced by φ; the direct sum δ^* c^! (Hom(c_1^* L, c_2^! L) ⊕ Hom(c_1^* N, c_2^! N)) mapping to δ^* c^! w′ and receiving δ^* c^! Hom(c_1^* M, c_2^! M); the exchange isomorphisms identifying δ^* c^! c_! w″ with δ^* c^! w′; the counit δ^* c^! c_! w″ → δ^* w″; the Künneth isomorphisms identifying (L ⊗ D(L)) ⊕ (N ⊗ D(N)) and M ⊗ D(M) with δ^* of the corresponding Hom objects; the symmetry isomorphisms s; the objects u, u′; and the evaluation maps ǫ_L + ǫ_N and ǫ_M into K_X,
where the objects v, w, w ′ , w ′′ , u, u ′ are specified by May's axioms in Section 4.1:
• v ∶= Hom [1] (c * 1 g, c * 1 f ) is specified in the (TC3D) diagram for the triangles (c * 1 f, c * 1 g, c * 1 h) and (c * 1 f, c * 1 g, c * 1 h). • w ∶= Hom [1] (c * 1 g, c * 1 g) is specified in the (TC3D') diagram for the triangles (c * 1 f, c * 1 g, c * 1 h) and (c * 1 f, c * 1 g, c * 1 h).
• w ′ ∶= Hom [1] (c * 1 g, c ! 2 g) is specified in the (TC3D') diagram for the triangles (c * 1 f, c * 1 g, c * 1 h) and (c ! 2 f, c ! 2 g, c ! 2 h). • w ′′ ∶= Hom [1] (p * 1 g, p ! 2 g) is specified in the (TC3D') diagram for the triangles (p * 1 f, p * 1 g, p * 1 h) and (p ! 2 f, p ! 2 g, p ! 2 h). • u ∶= g⊗ [1] Dg is specified in the (TC3D') diagram for the triangles (f, g, h) and (Dg, Df, DΣ −1 h).
• u ′ ∶= Dg⊗ [1] g is specified in the (TC3') diagram for the triangles (Dg, Df, DΣ −1 h) and (f, g, h). The commutativity of the diagram follows from the following facts:
• The two triangles on the top commute by (TC5b), and the two on the bottom commute by (TC5a); the quadrilateral involving δ * c ! v and δ * c ! w commutes by (TC4). • Since φ is a map of biCartesian squares, the functoriality of the (TC3D') diagram gives rise to a dotted map w → w ′ making the two adjacent trapezoids commute. • By 4.2.3, the functor c ! commutes with the coend construction, and the functoriality of the (TC3D') diagram together with the exchange isomorphisms (2.4.1.3) give rise to an isomorphism c ! w ′′ ≃ w ′ making the two adjacent squares commute. • Similarly, the functor δ * commutes with the coend construction, and the functoriality of the (TC3D') diagram together with the Künneth formulas (3.1.1.1) give rise to an isomorphism u ≃ δ * w ′′ making the two adjacent squares commute. • The map δ * c ! c ! w ′′ → δ * w ′′ is simply the counit of the adjunction, which clearly makes the two squares in the middle commute. • By [GPS14, Lemma 6.9], there exists an isomorphism u ≃ u ′ making the two adjacent squares commute.
Lemma 4.2.7. Consider the setting in 3.2.2. For i ∈ {1, 2, 3}, let
L i / / Γ i M i * / / N i (4.2.7.1)
be a biCartesian square in T c (X i , ◻). Let φ ∶ c 12 * 1 Γ 1 → c 12! 2 Γ 2 and ψ ∶ c 23 * 2 Γ 2 → c 23! 3 Γ 3 be morphisms of squares in T c (C 12 , ◻) and T c (C 23 , ◻). Then the composition of correspondences (Definition 3.2.3) can be lifted to a morphism of squares
ψφ ∶ c 13 * 1 Γ 1 → c 13! 3 Γ 3 . (4.2.7.2)
Proof. This is because we can lift the six functors to the level of diagrams in T c (⋅, ◻), and consequently the same is true for the composition of correspondences.
From Proposition 3.2.5, Proposition 4.2.6 and Lemma 4.2.7 we deduce the following

Theorem 4.2.8. Let T_c be a constructible motivic derivator whose underlying motivic triangulated category T satisfies (RS). We use the notations in 3.1.2. For i ∈ {1, 2}, let Γ_i be a biCartesian square in T_c(X_i, ◻) of the form

L_i → M_i, ∗ → N_i   (4.2.8.1)

with L_i in the initial corner and N_i in the final one. Let φ ∶ c_1^* Γ_1 → c_2^! Γ_2 and ψ ∶ d_2^* Γ_2 → d_1^! Γ_1 be morphisms of squares. Then

⟨φ_M, ψ_M⟩ = ⟨φ_L, ψ_L⟩ + ⟨φ_N, ψ_N⟩ ∶ 1_E → K_E

where φ_M ∶ c_1^* M_1 → c_2^! M_2 is the restriction of φ, and similarly for the other maps.
Remark 4.2.9. One could also formulate the additivity of traces using the language of motivic (∞, 1)-categories developed in [Kha16]. Given our result for derivators, it suffices to check that the homotopy derivator of a motivic (∞, 1)-category of coefficients ([Kha16, Chapter 2 3.5.2]) satisfies the axioms of a constructible motivic derivator. As remarked in [GPS14], it is expected but is yet to be verified that monoidal structures are carried through this construction.
THE CHARACTERISTIC CLASS OF A MOTIVE
In this section we come back to 1-categorical concerns. Let T be the underlying motivic triangulated category of a constructible motivic derivator which satisfies the condition (RS) in 2.1.12.
5.1. The characteristic class and first properties.
5.1.1. We recall briefly the formalism we need from [DJK18]: 6
Recall 5.1.2. For any scheme X, the Thom space construction is a well-defined group homomorphism Th_X ∶ K_0(X) → Pic(T(X)) ([DJK18, 2.1.4]).
Let f ∶ X → S be a morphism of schemes and let V be a virtual vector bundle over X. The (twisted) bivariant group is defined as 7
H 0 (X S, V ) = Hom Tc(X) (T h X (V ), f ! 1 S ). (5.1.2.1)
For any proper morphism p ∶ Y → X, there is a proper covariant functoriality
p * ∶ H 0 (Y k, p * V ) → H 0 (X k, V ). (5.1.2.2)
If V is a vector bundle over X, the Euler class e(V) ∶ 1_X → Th_X(V) is an analogue of the top Chern class in the classical setting ([DJK18, Definition 3.1.2]). When V is the trivial virtual bundle, we use the notation H_0(X k) = H_0(X k, 0). If X is a smooth scheme, the class e(L_{X k}) ∶ 1_X → Th_X(L_{X k}) ≃ K_X is an element of H_0(X k). 8

Definition 5.1.5. Let X be a scheme, M ∈ T_c(X) and u an endomorphism of M. The characteristic class C_X(M, u) ∈ H_0(X k) is the Verdier pairing ⟨u, 1⟩ associated to the diagonal correspondence, that is, the composition 1_X u→ Hom(M, M) ≃ D(M) ⊗ M ≃ M ⊗ D(M) ǫ_M→ K_X (see Remark 3.2.9). We write C_X(M) = C_X(M, id_M).

Remark 5.1.6.
(1) For every scheme X, the additivity of traces yields a well-defined homomorphism of abelian groups
K_0(T_c(X)) → H_0(X k), [M] ↦ C_X(M)   (5.1.6.1)

where the left hand side is the Grothendieck group of the triangulated category T_c(X). For T_c = DM_{cdh,c}, we have H_0(X k) ≃ CH_0(X), the Chow group of zero-cycles over X (up to p-torsion). It is conjectured that the map (5.1.6.1) is related to the 0-dimensional part of the closure of the characteristic cycle ([Sai17, Conjecture 6.8]).

6 The formalism in loc. cit. is constructed for the stable motivic homotopy category SH, but since the construction is quite formal, it can also be done in any motivic triangulated category.
7 This is a particular case of the general formalism; see [DJK18, Definition 2.12].
8 Note that for T_c = DM_{cdh,c}, the group H_0(X k, V) is the Chow group of algebraic cycles of dimension the virtual rank of V, and we recover the formalism in [Ful98]. The Euler class in Chow groups is the top Chern class.
(2) One can also relate the characteristic class with the Grothendieck group of varieties over a base.
Composing the map (5.1.6.1) with the obvious ring homomorphism
K 0 (V ar X) → K 0 (T c (X)) [f ∶ Y → X] ↦ [f ! 1 Y ] (5.1.6.2)
where K 0 (V ar X) is the Grothendieck group of varieties over X, we obtain a well-defined homomorphism of abelian groups
(5.1.6.3) K 0 (V ar X) → H 0 (X k)
which gives an additive invariant on K_0(Var X). In the case X = Spec(k), the group H_0(X k) is equal to the endomorphism ring End(1_k), and the map (5.1.6.3) is a ring homomorphism, which defines a motivic measure on K_0(Var k). When T = SH and k has characteristic 0, this agrees with the construction in [Rön16].
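For illustration (standard facts from motivic homotopy theory, not taken from this text): when T = SH and k is a perfect field of characteristic ≠ 2, Morel's theorem identifies End(1_k) with the Grothendieck-Witt ring GW(k), so the motivic measure above takes values in quadratic forms; combined with χ(P^1_k) = ⟨1⟩ + ⟨−1⟩ and additivity one gets:

```latex
% Quadratic Euler characteristics in GW(k) \cong \operatorname{End}_{SH(k)}(\mathbf{1}_k):
\chi_c(\mathbf{P}^1_k) = \langle 1 \rangle + \langle -1 \rangle = h
  \quad (\text{the hyperbolic form}),
\qquad
\chi_c(\mathbf{A}^1_k) = \chi_c(\mathbf{P}^1_k) - \chi_c(\operatorname{Spec} k)
  = \langle -1 \rangle .
```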
5.1.7. It follows from Proposition 3.1.6 that the characteristic class is compatible with the proper functoriality:
Corollary 5.1.8. Let f ∶ X → Y be a proper morphism. Then we have f * C X (M, u) = C Y (f * M, f * u).
5.1.9. The following lemma shows a relation between the characteristic class and the trace map:

Lemma 5.1.10. Let p ∶ X → Spec(k) be a proper morphism, M ∈ T_c(X) and u ∈ End(M). Then the trace of the induced endomorphism of p_* M equals the image of C_X(M, u) under the pushforward p_* ∶ H_0(X k) → End(1_k).

5.1.11. In particular, for X proper over k, the Euler characteristic of M (the trace of the identity of p_* M) is the degree p_* C_X(M) of the characteristic class.

5.1.12. We denote by ⟨−1⟩_k = χ(1_k(1)) ∈ End(1_k) the Euler characteristic of the Tate twist, defined as the trace of the identity map. 9 Then for any morphism f ∶ X → Spec(k), we have χ(1_X(n)) = ⟨−1⟩^n_X, where ⟨−1⟩_X = f^* ⟨−1⟩_k ∈ End(1_X). As a consequence we obtain

Corollary 5.1.13. Let X be a scheme and let M ∈ T_c(X). Let u ∈ End(M) be an endomorphism and denote by u(n) the corresponding endomorphism of M(n). Then C_X(M(n), u(n)) = ⟨−1⟩^n_X ⋅ C_X(M, u).

5.1.14. We can compute characteristic classes for endomorphisms of primitive Chow motives as follows:
Proposition 5.1.15. Let X be a smooth k-scheme with cotangent bundle L_{X k} and let p ∶ X → S be a proper morphism. Then for any endomorphism u of p_* 1_X, the characteristic class C_X(p_* 1_X, u) is given by the composition

1_S → p_* 1_X u→ p_* 1_X p_* e(L_{X k})→ p_* K_X ad′(p_*, p^!)→ K_S.   (5.1.15.1)
Note that formula (5.1.15.1) is quite similar to the motivic Gauss-Bonnet formula ([DJK18, Theorem 4.4.1]); when S = Spec(k) and u = id the two formulas are the same.
Proof. We need to show that the following two composites agree, the first computing C_X(p_* 1_X, u) and the second being (5.1.15.1):

1_S u→ Hom(p_* 1_X, p_* 1_X) → D(p_* 1_X) ⊗ p_* 1_X ≃ p_* 1_X ⊗ D(p_* 1_X) ǫ→ K_S,   (5.1.15.2)

1_S → p_* 1_X u→ p_* 1_X p_* e(L_{X k})→ p_* K_X ad′(p_*, p^!)→ K_S.
While the two squares on the left and on the right are straightforward, it remains to show the commutativity of the square in the middle. Denote by δ ∶ X → X × k X the diagonal morphism and p 1 , p 2 ∶ X × k X → X the projections. By the self-intersection formula ([DJK18, Example 3.2.9]) as explained in the proof of [DJK18, Theorem 4.6.1], the Euler class e(L X k ) ∶ 1 X → K X agrees with the composition
1 X → δ ! p * 1 K X → δ * p * 1 K X ≃ K X (5.1.15.3)
where the first map is induced by the fundamental class of the morphism δ, and the second map is induced by the natural transformation δ ! → δ * given by δ ! ≃ δ * δ * δ ! ad ′ (δ * ,δ ! ) → δ * . The commutativity then follows from a standard diagram chase.
Example 5.1.16. As a particular case of Proposition 5.1.15, for any smooth k-scheme X we have C_X(1_X) = e(L_{X k}). The class e(L_{X k}) can be understood as the class of the "self-intersection of the diagonal", and by 5.1.11 we recover the slogan "The degree of the self-intersection of the diagonal is the Euler characteristic" for smooth and proper schemes.

5.1.17. Alternatively, there is a more geometric description of the characteristic class C_X(p_* 1_X, u) using the refined Gysin map: denote by p_1 ∶ X ×_S X → X the projection onto the first factor. Then base change and purity isomorphisms induce a canonical isomorphism

End(p_* 1_X) ≃ H_0(X ×_S X k, p_1^{−1} L_{X k}).   (5.1.17.1)

Also consider the Cartesian diagram
X δ_{X S}→ X ×_S X
‖ ∆↓
X δ_{X k}→ X ×_k X   (5.1.17.2)

Since δ_{X k} ∶ X → X ×_k X is a regular closed immersion (X being smooth over k), this square induces a refined Gysin map

∆^! ∶ H_0(X ×_S X k, p_1^{−1} L_{X k}) → H_0(X k).   (5.1.17.3)
Then for any endomorphism u of p * 1 X we have
C X (p * 1 X , u) = p * ∆ ! u ′ (5.1.17.4) where u ′ ∈ H 0 (X × S X k, p −1 1 L X k )
is the image of u by the map (5.1.17.1).
5.2. A characterization.
In this section we give a characterization of the characteristic class for constructible motives.
5.2.1. Let T be a motivic triangulated category and let X be a scheme. We denote by T Chow (X) the idempotent completion of the additive subcategory generated by all primitive Chow motives over X, and ⟨T Chow (X)⟩ the triangulated subcategory of T(X) generated by T Chow (X). (1) The map (1) Alternatively, we can use [Bon10,5.3.1] instead of Lemma 5.2.4 in the proof. (2) When the field k is not perfect, the following description is suggested to us by D.-C. Cisinski:
⟨T Chow (X)⟩ → H 0 (X k) M ↦ C X (M ) (5.2.6.1)
Assume that T is the underlying motivic triangulated category of a constructible motivic derivator which satisfies the condition (RS 2) in 2.1.12. Assume in addition that T is extended to noetherian k-schemes and is continuous ([CD19, Definition 4.3.2]). Let k ′ be the perfect closure of k. Then for any scheme X, the canonical morphism φ X ∶ X k ′ = X × k k ′ → X is a universal homeomorphism, and by [EK18, Theorem 2.1.1] the functor φ * X ∶ T c (X) → T c (X k ′ ) is an equivalence of categories. By [EK18, Remark 2.1.13] for any finite surjective radicial morphism f ∶ Y → X we have a canonical identification f * = f ! , and therefore the Verdier pairing is contravariant for such morphisms (see Remark 3.1.7). If the collection of primitive Chow motives over X is negative, we conclude that the characteristic class of elements in T c (X) is uniquely determined by the functor φ * X and the description in Theorem 5.2.6 for the perfect field k s .
5.3. The E-valued characteristic class. 5.3.1. Let E be a motivic ring spectrum with unit map φ ∶ 1 → E, and let f ∶ S → Spec(k) be a scheme. Composition with φ induces a map

φ_* ∶ H_0(S k) → E_0(S k)   (5.3.1.2)

where E_0(S k) = Hom_{SH(S)}(1_S, f^! E). For M ∈ SH_c(S) with an endomorphism u, we write E_S = f^* E, M ⊗ E_S for the induced E_S-module and u_E for the induced endomorphism; the E-valued characteristic class C^E_S is then defined by the same formula as before, computed in E_S-modules.

Proposition 5.3.3. With the notations above, we have

φ_* C^{SH}_S(M, u) = C^E_S(M ⊗ E_S, u_E)   (5.3.3.1)

in E_0(S k).
Proof. We denote by Hom E the internal Hom functor in Ho(E S − M od) and Hom the internal Hom functor in SH. We have a canonical identification Hom E (A ⊗ E S , B) ≃ Hom (A, B). The result then follows from the following commutative diagram:
[Large commutative diagram comparing the two characteristic classes: the top row is 1 S → Hom(M, M ) → M ⊗ D(M ) → K S (via u and ǫ M ), and the remaining rows express the same composition after tensoring with E S , ending in p ! E.]

φ * C E S (M ⊗ E S , u E ) = C F S (M ⊗ F S , u F ) (5.3.4.1)
where φ * ∶ E 0 (S k) → F 0 (S k) is the map induced by φ.
Example 5.3.5. Let E = KGL be the algebraic K-theory spectrum, F = ⊕ i∈Z HQ(i)[2i] be the periodized rational motivic Eilenberg-Mac Lane spectrum, and φ = ch ∶ KGL → ⊕ i∈Z HQ(i)[2i] be the Chern character. Then for any scheme X, the map ch * = τ X ∶ G 0 (X) → ⊕ i∈Z CH i (X) Q is the Riemann-Roch transformation in [Ful98, Theorem 18.3] ([Dég18, Example 3.3.12]).
If X is a smooth k-scheme of dimension d, then the KGL-valued characteristic class C KGL X (1 X ) ∈ G 0 (X) = K 0 (X) can be written as
C KGL X (1 X ) = ∑ d i=0 (−1) i [Λ i L ∨ X k ]. (5.3.5.1)
On the other hand we have the HQ-valued characteristic class C HQ X (1 X ) ∈ CH 0 (X) given by the top Chern class c d (L X k ). By [Ful98, Example 3.2.5] we have
ch(C KGL X (1 X )) = ∑ d i=0 (−1) i ch(Λ i L ∨ X k ) = c d (L X k ) ⋅ T d(L X k ) −1 = C HQ X (1 X ) ⋅ T d(L X k ) −1 . (5.3.5.2)
In other words we have ch(C KGL X (1 X )) ⋅ T d(L X k ) = C HQ X (1 X ),
where the left hand side is nothing but τ X (C KGL X (1 X )), and we recover a particular case of Corollary 5.3.4.
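As a sanity check (our own computation, not taken from the text), the identity above can be verified by hand for X = P^1, assuming as in the example that L_{X/k} is the tangent bundle, here O(2):

```latex
% Worked check of ch(C^{KGL}_X(1_X)) \cdot Td(L_{X/k}) = C^{HQ}_X(1_X) for X = \mathbb{P}^1_k.
% Assumption (ours): L_{X/k} = T_{\mathbb{P}^1} \cong \mathcal{O}(2); h = hyperplane class, h^2 = 0.
C^{KGL}_X(1_X) = [\mathcal{O}] - [\mathcal{O}(-2)], \qquad
\operatorname{ch}\bigl([\mathcal{O}] - [\mathcal{O}(-2)]\bigr) = 1 - e^{-2h} = 2h,
\qquad
\operatorname{Td}(\mathcal{O}(2)) = 1 + \tfrac{1}{2}\,c_1(\mathcal{O}(2)) = 1 + h.
% Hence
\operatorname{ch}\bigl(C^{KGL}_X(1_X)\bigr)\cdot \operatorname{Td}(L_{X/k})
  = 2h\,(1+h) = 2h = c_1(\mathcal{O}(2)) = C^{HQ}_X(1_X),
% and the degree of this class is 2 = \chi(\mathbb{P}^1), consistent with 5.1.11.
```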
6. KÜNNETH FORMULAS OVER GENERAL BASES
In this section, we study transversality conditions following the method of [YZ18], and generalize the Künneth formulas (2.4.6.3) and (2.4.6.4) to a general base scheme S under these conditions, which allows us to define the relative characteristic class. We consider T a motivic triangulated category which satisfies the condition (RS) in 2.1.12.
6.1. The transversality conditions. In this Section 6.1, we introduce the transversality conditions and prove some elementary properties, which will be used in the formulation of Künneth formulas over a general base scheme in Section 6.2. 6.1.1. Let f ∶ X → S be a morphism of schemes. For two objects A and B of T(S), there is a canonical natural transformation
(6.1.1.1) f * B ⊗ f ! A → f ! (B ⊗ A)
given by the composition
f * B ⊗ f ! A ad (f ! ,f ! ) → f ! f ! (f * B ⊗ f ! A) (2.2.1.2) ≃ f ! (B ⊗ f ! f ! A) ad ′ (f ! ,f ! ) → f ! (B ⊗ A). (6.1.1.2)
In particular when A = 1 S , the map (6.1.1.1) becomes
(6.1.1.3) f * B ⊗ f ! 1 S → f ! B.
6.1.2. If f ∶ X → S is a smooth morphism, or if B ∈ T(S) is dualizable, then the map (6.1.1.1) is an isomorphism: the first case follows from purity, and the second case is [FHM03,5.4].
Definition 6.1.3. Let f ∶ X → S be a morphism of schemes.
(1) ([Sai17, Definition 8.5]) Let B be an object of T(S). We say that the morphism f ∶ X → S is B-transversal if the map (6.1.1.3) is an isomorphism.
(2) Let C be an object of T(X). We say that the morphism f ∶ X → S is C-transversal if the graph morphism Γ f ∶ X → X × k S of f is C ⊠ k D-transversal for any object D ∈ T c (S). We say that f is universally C-transversal if this property holds after any base change (cf. Definition 2.1.7).
Remark 6.1.4.
(1) It is easy to see that in Definition 6.1.3 (2), f is C-transversal if and only if Γ f is C ⊠ k D-transversal for any object D ∈ T(S).
(2) Let T c be a constructible motivic derivator and let f ∶ X → S be a morphism of schemes. Then there is a stable derivator F un ex (T c (S), T c (X)) given by triangulated functors from T c (S) to T c (X). We lift the natural transformation f * B ⊗ f ! 1 S → f ! B to a coherent morphism in F un ex (T c (S), T c (X))(1); the target of its cofiber (Definition 4.1.2) is a functor f ∆ in F un ex (T c (S), T c (X))(0), seen as a functor T c (S) → T c (X). In the underlying triangulated category T c (X, 0) we have a canonical distinguished triangle
f * B ⊗ f ! 1 S → f ! B → f ∆ B → f * B ⊗ f ! 1 S [1].
By definition, f is B-transversal if and only if f ∆ B = 0.
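For instance (our remark, combining the remark above with 6.1.2), purity makes the obstruction functor vanish for smooth morphisms:

```latex
% If f is smooth, (6.1.1.3) is an isomorphism by purity (6.1.2), so its cofiber vanishes:
f \ \text{smooth} \;\Longrightarrow\; f^{\Delta} B = 0 \quad\text{for every } B \in T_c(S),
% i.e. a smooth morphism is B-transversal for every B.
```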
6.1.5. Recall that following [DJK18,4.3.7], for any lci morphism f ∶ X → S with virtual tangent bundle T f and any object B ∈ T(S), there is a natural transformation called the purity transformation
(6.1.5.1) f * B ⊗ T h X (L f ) → f ! B,
which is deduced from the transformation (6.1.1.1), and the object B is said to be f -pure if the map (6.1.5.1) is an isomorphism. If f ∶ X → S is a lci morphism such that 1 S is f -pure, then for any object B ∈ T(S), f is B-transversal if and only if B is f -pure.
By [DJK18, 4.3.10], if there exists a scheme S ′ such that f is an S ′ -morphism between smooth S ′ -schemes, or if f is a lci morphism between regular schemes over a field, then 1 S is f -pure for any motivic triangulated category T.
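To make 6.1.5 concrete (our illustration, under the hypotheses just quoted), consider a closed immersion of regular k-schemes, whose virtual tangent bundle is minus the normal bundle:

```latex
% Illustration: i\colon Z \hookrightarrow X a closed immersion of regular k-schemes,
% so i is lci with virtual tangent bundle -N_{Z/X}, and 1_X is i-pure by [DJK18, 4.3.10]:
i^{!}\, 1_X \;\simeq\; i^{*}1_X \otimes \mathrm{Th}_Z(-N_{Z/X}) \;=\; \mathrm{Th}_Z(-N_{Z/X}).
% By the dichotomy of 6.1.5, i is then B-transversal if and only if the purity map
i^{*}B \otimes \mathrm{Th}_Z(-N_{Z/X}) \;\longrightarrow\; i^{!}B
% is an isomorphism, which holds for instance for every dualizable B (6.1.2).
```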
Lemma 6.1.6. Let f ∶ X → Y and g∶ Y → Z be morphisms of schemes such that g is lci and 1 Z is g-pure.
Let F ∈ T(Z) be such that g is F -transversal. Then the following two conditions are equivalent: (1) f is g * F -transversal; (2) g ○ f is F -transversal.
Proof.
(1) By 6.1.5, we need to show that G is f -pure if and only if D(G) is f -pure. By duality, the map (6.1.7.1)
f * G ⊗ T h X (L f ) (6.1.5.1) → f ! G is an isomorphism if and only if its dual D(f ! G) → D(f * G ⊗ T h X (L f ))
is an isomorphism. This is equivalent to say that the canonical map
(6.1.7.2) f * D(G) (6.1.5.1) → f ! D(G) ⊗ T h X (−L f ) is an isomorphism, i.e. D(G) is f -pure.
(2) We know that the graph Γ f ∶ X → X × k Y is lci and 1 X× k Y is Γ f -pure. By (1) and duality, the following statements are equivalent:
(a) For any H ∈ T c (Y ), Γ f is F ⊠ k H-transversal. (b) For any H ∈ T c (Y ), Γ f is D(F ⊠ k H)-transversal. (c) For any H ∈ T c (Y ), Γ f is D(F ⊠ k D(H))-transversal. Denote by p 1 ∶ X × k Y → X and p 2 ∶ X × k Y → Y the projections.
Then p 2 is a smooth morphism, and we have the following isomorphism:
D(F ⊠ k D(H)) (4.1.15.1) ≃ Hom(p * 1 F, p * 2 H) ≃ Hom(p * 1 F, p ! 2 H) ⊗ T h X× k Y (−L p 2 ) (3.1.1.1) ≃ (D(F ) ⊠ H) ⊗ T h X× k Y (−L p 2 ). (6.1.7.3) It follows that Γ f is F ⊠ k H-transversal for any H ∈ T c (Y ) if and only if Γ f ∶ X → X × k Y is D(F ) ⊠ k H-transversal for any H ∈ T c (Y )
. This proves (2).

Lemma 6.1.8. Let X be a smooth k-scheme and let F 1 and F 2 be two objects of T c (X). If the diagonal morphism δ∶ X → X × k X is D(F 1 ) ⊠ k F 2 -transversal, then the following canonical map is an isomorphism:
Hom(F 1 , 1) ⊗ F 2 → Hom(F 1 , F 2 ). (6.1.8.1)
Proof. Since X is smooth over k, by purity we have K X ≃ T h X (L X k ) and δ ! 1 X× k X ≃ T h(−L X k ). For i = 1, 2, denote by p i ∶ X × k X → X the i-th projection. Since δ is D(F 1 ) ⊠ k F 2 -transversal, we have the following isomorphism:
Hom(F 1 , 1) ⊗ F 2 ≃ Hom(F 1 , K X ) ⊗ F 2 ⊗ δ ! 1 X× k X = δ * (Hom(F 1 , K X ) ⊠ k F 2 ) ⊗ δ ! 1 X× k X (6.1.1.3) ≃ δ ! (Hom(F 1 , K X ) ⊠ k F 2 ) (3.1.1.1) ≃ δ ! Hom(p * 1 F 1 , p ! 2 F 2 ) ≃ Hom(δ * p * 1 F 1 , δ ! p ! 2 F 2 ) = Hom(F 1 , F 2 ).
(6.1.8.2)
One can check that the map (6.1.8.2) agrees with the canonical map, and the result follows.
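For comparison (our remark), when F 1 is dualizable the isomorphism (6.1.8.1) holds with no transversality hypothesis, by the standard argument in a closed symmetric monoidal category; the lemma trades dualizability of F 1 for a transversality condition on the diagonal:

```latex
% If F_1 is dualizable with dual F_1^{\vee} = \mathrm{Hom}(F_1, 1_X), then for any F_2:
\mathrm{Hom}(F_1, 1_X) \otimes F_2 \;=\; F_1^{\vee} \otimes F_2
  \;\xrightarrow{\ \sim\ }\; \mathrm{Hom}(F_1, F_2),
% the inverse being induced by the coevaluation 1_X \to F_1 \otimes F_1^{\vee}.
% Lemma 6.1.8 recovers this when \delta\colon X \to X \times_k X is
% D(F_1) \boxtimes_k F_2-transversal, which by 6.1.2 is automatic when
% D(F_1) \boxtimes_k F_2 is dualizable.
```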
Lemma 6.1.9. Let f ∶ X → Y be a lci morphism and let F and G be two objects of T(Y ). If both G and Hom(F, G) are f -pure, then the following canonical map is an isomorphism:
f * Hom(F, G) → Hom(f * F, f * G). (6.1.9.1)
≃ Hom(f * F, f ! G) (6.1.5.1) ≃ Hom(f * F, f * G ⊗ T h X (L f )) ≃ Hom(f * F, f * G) ⊗ T h X (L f ).
(6.1.9.2) It is straightforward to check that (6.1.9.2) is induced by the canonical map, and the result follows.
Since p is proper, we have canonical isomorphisms: …

Let F 2 be an object of T(X 2 ). Since π 1 ∶ X 1 → S is universally F 1 -transversal, the morphism p 2 ∶ X 1 × S X 2 → X 2 is also universally p * 1 F 1 -transversal. Consequently Γ p 2 is p * 1 F 1 ⊠ k F 2 = p * 13 (F 1 ⊠ k F 2 )-transversal. By assumption p 13 is smooth, and by 6.1.2, p 13 is universally F 1 ⊠ k F 2 -transversal. By Lemma 6.1.6, the composition ι = p 13 ○ Γ p 2 is F 1 ⊠ k F 2 -transversal, which finishes the proof.

[Factorization: X 1 × S X 2 → X 1 × S X 2 × k X 2 → X 1 × k X 2 via Γ p 2 and p 13 .]
Proposition 6.2.3 ([YZ18, Proposition 3.1.3]). We use the notations of 6.2.1, and let E i and F i be objects of T c (X i ) for i = 1, 2. Assume that the following conditions are satisfied:
(1) The morphisms π 1 and π 2 are smooth, and both X 1 and X 2 are smooth k-schemes.
(2) For i = 1, 2, the diagonal morphism
X i → X i × k X i is D(E i ) ⊠ k F i -transversal.
(3) For i = 1, 2, π i is universally E i -transversal and universally F i -transversal. Then the following canonical map is an isomorphism:
Hom(E 1 , F 1 ) ⊠ S Hom(E 2 , F 2 ) (2.4.2.2) → Hom(E 1 ⊠ S E 2 , F 1 ⊠ S F 2 ). (6.2.3.1) Proof. We use the following notation: if X is a scheme and F ∈ T(X), we denote F ∨ ∶= Hom(F, 1 X ).
By assumption the diagonal morphism X i → X i × k X i is D(E i ) ⊠ k F i -transversal, and by Lemma 6.1.8 the following canonical map is an isomorphism:
(6.2.3.2) F i ⊗ E ∨ i = F i ⊗ Hom(E i , 1 X i ) ∼ → Hom(E i , F i ).
Hence we have the following isomorphism:
Hom(E 1 , F 1 ) ⊠ S Hom(E 2 , F 2 ) (6.2.3.2)
≃ (F 1 ⊗ E ∨ 1 ) ⊠ S (F 2 ⊗ E ∨ 2 ) ≃ (F 1 ⊠ S F 2 ) ⊗ (E ∨ 1 ⊠ S E ∨ 2 ).
(6.2.3.3)
Denote by ι the canonical closed immersion ι∶ X 1 × S X 2 → X 1 × k X 2 . By Proposition 2.4.3, Corollary 6.1.10 and Lemma 6.2.2, we have the following isomorphism:
E ∨ 1 ⊠ S E ∨ 2 = ι * (E ∨ 1 ⊠ k E ∨ 2 )
(2.4.2.2) ≃ ι * (E 1 ⊠ k E 2 ) ∨ (6.1.10.1) ≃ (ι * (E 1 ⊠ k E 2 )) ∨ = (E 1 ⊠ S E 2 ) ∨ . (6.2.3.4)
By assumption (2), Lemma 6.1.6, Lemma 6.1.7 and Lemma 6.2.2, the diagonal morphism X i → X i × S X i is D(E i ) ⊠ S F i -transversal for i = 1, 2. By Proposition 6.1.13, the diagonal morphism X 1 × S X 2 → (X 1 × S X 2 ) × k (X 1 × S X 2 ) is D(E 1 ⊠ S E 2 ) ⊠ k (F 1 ⊠ S F 2 )-transversal. Thus by Lemma 6.1.8, the following canonical map is an isomorphism:
(F 1 ⊠ S F 2 ) ⊗ (E 1 ⊠ S E 2 ) ∨ → Hom(E 1 ⊠ S E 2 , F 1 ⊠ S F 2 ). (6.2.3.5)
We deduce from (6.2.3.3), (6.2.3.4) and (6.2.3.5) that the map (6.2.3.1) is an isomorphism, which finishes the proof.
Corollary 6.2.4. We use the notations of 6.2.1, and let F i be an object of T c (X i ) for i = 1, 2. Assume that the following conditions are satisfied:
(1) The morphisms π 1 and π 2 are smooth, and both X 1 and X 2 are smooth k-schemes.
(2) For i = 1, 2, π i ∶ X i → S is universally F i -transversal. Then the map F 1 ⊠ S Hom(F 2 , π ! 2 1 S ) → Hom(p * 2 F 2 , p ! 1 F 1 ) (6.2.4.1)
given by the composition F 1 ⊠ S Hom(F 2 , π ! 2 1 S ) → Hom(p * 2 F 2 , p * 1 F 1 ⊗ p * 2 π ! 2 1 S ) (2.3.1.6) →Hom(p * 2 F 2 , p * 1 F 1 ⊗ p ! 1 1 X 1 ) → Hom(p * 2 F 2 , p ! 1 F 1 ).
(6.2.4.2)
is an isomorphism.
Proof. Since p 1 is smooth, by Proposition 6.2.3, we have the following isomorphism F 1 ⊠ S Hom(F 2 , π ! 2 1 S ) (6.2.3.1) ≃ Hom(p * 2 F 2 , p * 1 F 1 ⊗ p * 2 π ! 2 1 S ) ≃ Hom(p * 2 F 2 , p ! 1 F 1 ), (6.2.4.3) which shows that the map (6.2.4.1) is an isomorphism.
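For orientation (our observation), taking S = Spec(k) in Corollary 6.2.4 recovers the absolute Künneth isomorphism used to construct the Verdier pairing, since then π_2^! 1_S is the dualizing object K_{X_2} and Hom(F_2, K_{X_2}) = D(F_2):

```latex
% Specializing (6.2.4.1) to S = \mathrm{Spec}(k):
F_1 \boxtimes_k D(F_2) \;=\; F_1 \boxtimes_k \mathrm{Hom}(F_2, \pi_2^{!} 1_k)
  \;\xrightarrow{\ \sim\ }\; \mathrm{Hom}(p_2^{*}F_2,\; p_1^{!}F_1),
% which is the form of the Künneth isomorphism (cf. (2.4.6.4) and (3.1.1.1))
% underlying the absolute Verdier pairing of Definition 3.1.8.
```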
Proposition 6.2.5 ([YZ18, Proposition 3.1.9]). For i = 1, 2, consider a commutative diagram of S-morphisms of the form
[Commutative triangle (6.2.5.1): f i ∶ X i → Y i , π i ∶ X i → S, q i ∶ Y i → S, with q i ○ f i = π i .]
We denote X ∶= X 1 × S X 2 , Y ∶= Y 1 × S Y 2 and f ∶= f 1 × S f 2 ∶ X → Y . Let M i be objects of T c (Y i ) for i = 1, 2. Assume that the following conditions are satisfied:
(1) The morphisms π i and q i are smooth, and both X i , Y i are smooth k-schemes.
(2) For i = 1, 2, q i ∶ Y i → S is universally M i -transversal. Then the following canonical map is an isomorphism:
f ! 1 M 1 ⊠ S f ! 2 M 2 → f ! (M 1 ⊠ S M 2 ). (6.2.5.2)
Proof. By Lemma 2.3.4, we may assume that X 2 = Y 2 and f 2 = id X 2 , i.e. it suffices to show that the following canonical map is an isomorphism:
f ! 1 M 1 ⊠ S M 2 → (f 1 × id X 2 ) ! (M 1 ⊠ S M 2 ). (6.2.5.3)
6.2.9. Given Corollary 6.2.4, we are now ready to define the relative Verdier pairing in the same way as we have done in Section 3.1. We fix a base scheme S, and for any morphism h ∶ X → S we denote K X S = h ! 1 S . Let X 1 and X 2 be two smooth S-schemes which are also smooth over k. We denote by X 12 = X 1 × S X 2 and p i ∶ X 12 → X i the projections. Let L i ∈ T c (X i ) and let q i ∶ X i → S be the structure map for i = 1, 2. Let c ∶ C → X 12 and d ∶ D → X 12 be two morphisms, and let E = C × X 12 D with e ∶ E → X 12 the canonical morphism. For i = 1, 2 denote by c i = p i ○ c ∶ C → X i and d i = p i ○ d ∶ D → X i . Assume that for i = 1, 2, q i is universally L i -transversal. By Corollary 6.2.4, we produce the following map in the same way as the map (3.1.3.3):
(6.2.9.1) c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ d * Hom(d * 2 L 2 , d ! 1 L 1 ) → e * K E S

Definition 6.2.10. In the situation above, for two maps u ∶ c * 1 L 1 → c ! 2 L 2 and v ∶ d * 2 L 2 → d ! 1 L 1 , we define the relative Verdier pairing
⟨u, v⟩ ∶ 1 E → K E S (6.2.10.1)
obtained by adjunction from the composition
1 X 12 → c * 1 C ⊗ d * 1 D → c * Hom(c * 1 L 1 , c * 1 L 1 ) ⊗ d * Hom(d * 2 L 2 , d * 2 L 2 ) u * ⊗ v * → c * Hom(c * 1 L 1 , c ! 2 L 2 ) ⊗ d * Hom(d * 2 L 2 , d ! 1 L 1 ) (6.2.9.1) → e * K E S . (6.2.10.2)

6.2.11. The relative Verdier pairing satisfies a proper covariance property similar to Proposition 3.1.6 (see [YZ18, Theorem 3.3.2]). It satisfies an additivity property along distinguished triangles similar to Theorem 4.2.8.

6.2.12. We can define the relative characteristic class as in Definition 5.1.3:

Definition 6.2.13. Let X be a smooth S-scheme which is also smooth over k. Let M ∈ T c (X) be such that the structure morphism X → S is universally M -transversal. The Verdier pairing in Definition 6.2.10 in the particular case where C = D = X 1 = X 2 = X and L 1 = L 2 = M is a pairing
⟨ , ⟩ ∶ Hom(M, M ) ⊗ Hom(M, M ) → H 0 (X S), (6.2.13.1)
where H 0 (X S) = Hom Tc(X) (1 X , K X S ). For any endomorphism u ∈ Hom(M, M ), the relative characteristic class of u is defined as the element C X S (M, u) ∶= ⟨u, 1 M ⟩ ∈ H 0 (X S). The relative characteristic class of M is the characteristic class of the identity C X S (M ) ∶= C X S (M, 1 M ).
The following result is similar to Proposition 5.1.15: Proposition 6.2.14. Let X be a smooth S-scheme which is also smooth over k, with tangent bundle L X S . Then we have C X S (1 X (n)) = (⟨−1⟩ X ) n e(L X S ).
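To illustrate (our sketch, granting the compatibility with fibers stated in Proposition 6.2.16 below), the relative class of 1 X restricts fiberwise to the absolute class:

```latex
% For X \to S smooth (and X smooth over k), take n = 0 in Proposition 6.2.14:
C_{X/S}(1_X) \;=\; e(L_{X/S}) \;\in\; H_0(X/S).
% Pulling back along a k-rational point s of S via the specialization map (6.2.15.2),
% and assuming L_{X/S} restricts to L_{X_s/k} on the fiber X_s:
\Delta^{*} C_{X/S}(1_X) \;=\; e(L_{X_s/k}) \;=\; C_{X_s}(1_{X_s}) \;\in\; H_0(X_s/k),
% the absolute characteristic class of the fiber (cf. Proposition 5.1.15);
% when X_s is proper its degree is the Euler characteristic \chi(X_s), by 5.1.11.
```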
6.2.15. We now establish a link between the relative characteristic class and the (absolute) characteristic class via specialization of cycles ([DJK18, 4.5.1]). Let S be a smooth k-scheme and let s ∶ Spec(k) → S be a k-rational point. Let f ∶ X → S be a smooth morphism, and form the Cartesian square
h V 0 ∶ V ′ 0 ∶= h −1 (V 0 ) → V 0
is an isomorphism (resp. is finite lci with trivial virtual tangent bundle), and form the following Cartesian squares: (6.3.8.7)
[Cartesian squares (6.3.8.7): top row V ′ 0 → V ′ → Ṽ with vertical maps h V 0 , h V , h; bottom row V 0 → V → V̄ , with open immersions j ′ 2 , j ′ 1 above and j 2 , j 1 below.]
It follows that j 2 * j * 2 N is equal to j 2 * h V 0 * h * V 0 j * 2 N = h V * j ′ 2 * j ′ * 2 h * V N (resp. is a direct summand of h V * j ′ 2 * j ′ * 2 h * V N by [EK18, Proposition 2.2.2]). By (6.3.8.2) the canonical map (6.3.8.8)
K Y ⊗ f * V j 1 * h V * h * V N (2.1.5.1) → j 1Y * (j * 1Y K YV ⊗ f * V h V * h * V N )
is an isomorphism. Given the localization distinguished triangle (6.3.8.9)
h V * i ′ 2! i ′! 2 h * V N → h V * h * V N → h V * j ′ 2 * j ′ * 2 h * V N → h V * i ′ 2! i ′! 2 h * V N [1] where i ′ 2 ∶ Z ′ 0 → V ′
is the reduced closed complement of j ′ 1 , and since h V i ′ 2 factors through h(Z ′ 0 ) which is a proper closed subscheme of V , we conclude using the induction hypothesis.
6.3.9. We now establish a link between strong local acyclicity and the Fulton-style specialization map defined in [DJK18]. Consider Cartesian squares of schemes (6.3.9.1)
[Cartesian squares (6.3.9.1): top row X Z → X ← X U with vertical maps f Z , f , f U ; bottom row Z → S ← U , where i ∶ Z → S is the regular closed immersion and j ∶ U → S the complementary open immersion, with i X ∶ X Z → X and j X ∶ X U → X the base changes.]
where i is a regular closed immersion and j the complementary open immersion. Assume that the Euler class e(−L i ), namely the Euler class of the normal bundle of i, is zero. Recall from [DJK18,4.5.6] that for A ∈ T(X) there is a natural transformation of functors (6.3.9.2) i X * (i * X A ⊗ f * Z T h Z (L i )) → j X! j ! X A. By Lemma 6.3.4, if 1 S is i-pure and if f is strongly locally acyclic relative to A, the following refined purity transformation (6.3.3.2) is an isomorphism:
(6.3.9.3) i * X A ⊗ f * Z T h Z (L i ) → i ! X A.
In particular from the construction of (6.3.9.2) we deduce the following: Corollary 6.3.10. We use the notations in (6.3.9.1) and assume that (1) The Euler class e(−L i ) is zero.
(2) 1 S is i-pure.
(3) f is strongly locally acyclic relative to A ∈ T(X). Then there exists a canonical map (6.3.10.1)
i X * i ! X A → j X! j ! X A such that the canonical map i X * i ! X A ad ′ (i X! ,i ! X )
→ A agrees with the following composition (6.3.10.2) i X * i ! X A (6.3.10.1)
→ j X! j ! X A ad ′ (j X! ,j ! X ) → A.
Equivalently, there exists a canonical map
(6.3.10.3) j X * j * X A → i X * i * X A
Date: April 1, 2019. 2010 Mathematics Subject Classification.
The Euler characteristic of M Y (X) can be computed as the degree of the (motivic) Euler class of the tangent bundle of f ([Lev18a, Theorem 1], [DJK18, Theorem 4.6.1]), and is a refinement of the classical Gauss-Bonnet formula ([SGA5, VII 4.9]). There are other examples of dualizable motives ([Lev18b]), and more generally if k is a perfect field which has resolution of singularities, then every constructible object in SH(k) is dualizable.

1.1.5. The Lefschetz trace formula ([SGA4.5, Cycle]) plays an important role in Grothendieck's approach to the Weil conjectures via a cohomological interpretation of the L-functions ([SGA4.5, Rapport]); in
1.3.2. Let X be a scheme, M be a motive over X and u ∶ M → M be an endomorphism of M . We define the characteristic class C X (M, u) ∶= ⟨u, 1 M ⟩ as a particular case of the Verdier pairing (Definition 5.1.3). Explicitly, the characteristic class is the composition 1 X u → Hom(M, M ) → D(M ) ⊗ M ≃ M ⊗ D(M )
• If M is a constructible motivic spectrum in SH c (X) and when we apply the A 1 -regulator map with values in the Milnor-Witt spectrum ([DJK18, Example 4.4.6]), then the Milnor-Witt-valued characteristic class C M W X (M ) is an element in the Chow-Witt group CH 0 (X).
(4) If A, B are objects in a closed symmetric monoidal category, we denote by
η A ∶ 1 → Hom(A, A) (1.6.4.1)
ǫ A ∶ A ⊗ Hom(A, B) → B (1.6.4.2)
(6) For every proper morphism f ∶ Y → X, the functor f * ∶ T(Y ) → T(X) has a right adjoint f ! . (7) For every smooth morphism f ∶ X → S with a section s ∶ S → X, the functor f # s * ∶ T(S) → T(S) is an equivalence of categories. (8) For any closed immersion i ∶ Z → X with open complement j ∶ U → X, the pair of functors (j * , i * ) is conservative, and the counit map i * i * ad (i * ,i * ) → 1 is an isomorphism.
Proposition 2.4.3. Suppose that T is a motivic triangulated category which satisfies the condition (RS) in 2.1.12 and M 1 , M 2 are constructible. Then for S = Spec(k), the map (2.4.2.2) is an isomorphism.
satisfies the condition (RS) in 2.1.12 and S = Spec(k), then the maps (2.4.6.1), (2.4.6.3) are isomorphisms. (3) If T satisfies the condition (RS) in 2.1.12, S = Spec(k) and M 1 , M 2 are constructible, then the map (2.4.6.4) is an isomorphism.
3.1.3. For i = 1, 2 denote by c
⟨u, v⟩ ∶ 1 E → K E obtained by adjunction from the composition
an endomorphism of 1 k .

Remark 3.1.13. The formula in [Hoy15, Theorem 1.3] has a similar appearance, but is indeed of a different nature.
p 123 * 1 D(L 1 ) ⊗ p 123 * 2 K X 2 ⊗ p 123 * 3 L 3 ≃ p 123 * 2 K X 2 ⊗ p 123 * 1 D(L 1 ) ⊗ p 123 * 3 L 3 (2.3.3.2) ≃ p 123! 13 (p 13 * 1 D(L 1 ) ⊗ p 13 * 3 L 3 ) (3.1.1.1) ≃ p 123! 13 Hom(p 13 * 1 L 1 , p 13! 3 L 3 ).
We denote by ⌜ and respectively ⌟ the full subcategories ◻ ∖ {(1, 1)} and ◻ ∖ {(0, 0)}, with i ⌜ ∶ ⌜ → ◻ and i ⌟ ∶ ⌟ → ◻ the inclusions.
(4) For every functor u ∶ A → B in F inCat, the functor u * ∶ T c (B) → T c (A) has a right adjoint u * and a left adjoint u # . (5) For any functor u ∶ A → B and object b of B, denote by j A b ∶ A b → A, j b∖A ∶ b∖A → A, p A b ∶ A b → 0 and p b∖A ∶ b∖A → 0 the canonical projections. Then the following canonical transformations are invertible:
We define the external product ⊙ ∶ T c,1 (A) × T c,2 (B) → T c,3 (A × B) as the composition
where π A ∶ A × B → A and π B ∶ A × B → B are the canonical projections.
Definition 4.1.4. A symmetric monoidal stable derivator is a 2-functor (T c , ⊗) ∶ F inCat op → SM T R such that (1) The composition F inCat op Tc → SM T R → T R is a stable derivator.
→ ΣHom(t, z).
The following notions are specific to derivators and play a key role in the additivity of traces:

Definition 4.1.8. Let A be a small category. The twisted arrow category tw(A) is defined as follows: its objects are morphisms a f → b in A, and morphisms from a 1 f 1 …
Proposition 4.1.10 (TC4). With the notations in Proposition 4.1.9, there is a biCartesian square [diagram: Hom(z, z ′ ) ⊕ Hom(x, x ′ ) → w via k 1 + k 3 , …].
(4.1.10.1)

Note that when x, y, z are dualizable, up to taking their duals, Propositions 4.1.9 and 4.1.10 are precisely [GPS14, Theorems 6.2 and 7.3].

4.1.11. Let T c be a closed symmetric monoidal derivator. Consider a distinguished triangle in T c
4.1.13. We now discuss the last of May's axioms, where we work with local duality instead of the usual duality. The following definition is standard ([CD19, Definition 4.4.4]):

We denote by D t ∶= Hom(⋅, t) ∶ T c op → T c the t-dual functor. We have clearly D t ○ D t = id.
Lemma 4.1.15. If t ∈ T c (0) is a dualizing object, then for any a ∈ T c (A) and b ∈ T c (B), the following canonical map is an isomorphism in T c (A op × B):
Definition 4.2.4. A constructible motivic derivator is a (non-strict) 2-functor (T c , ⊗) ∶ DiaSch op → SM T R satisfying the following properties:
(1) The composition DiaSch op Tc → SM T R → T R is a stable algebraic derivator (Definition 4.2.2), and the monoidal structure agrees with the one on the underlying motivic triangulated category.
(2) For any scheme X, the 2-functor (T c (X, ⋅), ⊗) ∶ F inCat op → SM T R, J ↦ (T c (X, J), ⊗) (4.2.4.1) is a closed symmetric monoidal stable derivator (Definition 4.1.4).
(3) For any J ∈ F inCat, any Cartesian J-shaped 1-morphism f ∶ (F, J) → (G, J) in DiaSch and any pair of objects (A, B) ∈ T c (G, J) × T c (F, J), the following canonical map is an isomorphism:
morphisms of squares in T c (C, ◻) and T c (D, ◻). Then the pairing (3.1.8.1) satisfies ⟨φ M , ψ M ⟩ = ⟨φ L , ψ L ⟩ + ⟨φ N , ψ N ⟩, (4.2.8.2)
Definition 5.1.3. Let X be a scheme and M ∈ T c (X). The Verdier pairing in Definition 3.1.8 in the particular case where C = D = X 1 = X 2 = X and L 1 = L 2 = M is a pairing ⟨ , ⟩ ∶ Hom(M, M ) ⊗ Hom(M, M ) → H 0 (X k). (5.1.3.1) For any endomorphism u ∈ Hom(M, M ), the characteristic class of u is defined as the element C X (M, u) ∶= ⟨u, 1 M ⟩ ∈ H 0 (X k). The characteristic class of a motive M is the characteristic class of the identity C X (M ) ∶= C X (M, 1 M ).

We now list some elementary properties of the characteristic class.

5.1.4. Since identity maps are particularly good choices of morphisms of distinguished triangles, Theorem 4.2.8 implies the additivity of the characteristic class:

Corollary 5.1.5. Let X be a scheme and let L → M → N → L[1] be a distinguished triangle in T c (X). Then C X (M ) = C X (L) + C X (N ).
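As a concrete instance of the additivity (our example), one can apply it to the localization triangle of a closed immersion i ∶ Z → X with open complement j ∶ U → X:

```latex
% Localization triangle j_! 1_U \to 1_X \to i_* 1_Z \to j_! 1_U[1], with additivity:
C_X(1_X) \;=\; C_X(j_{!}\, 1_U) \;+\; C_X(i_{*}\, 1_Z).
% Since a closed immersion is proper, proper covariance of the pairing
% (cf. Proposition 3.1.6) identifies the second term with i_* C_Z(1_Z), so the
% characteristic class of X decomposes along any closed/open partition.
```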
Lemma 5.1.10. Let X be a scheme and let M, N be two objects in T c (X) such that M is dualizable. Let u be an endomorphism of M and let v be an endomorphism of N. Then C X (M ⊗ N, u ⊗ v) = T r(u) ⋅ C X (N, v).

Proof. This follows from Proposition 3.2.8, using the fact that the canonical map M ∨ ⊗ D(N ) → D(M ⊗ N ) is an isomorphism.

5.1.11. By Corollary 5.1.8 and Lemma 5.1.10, if f ∶ X → Spec(k) is a proper morphism, M ∈ T c (X) and u is an endomorphism of M , then the degree of the class C X (M, u) is the trace of the map f * u ∶ f * M → f * M . In particular, the degree of the class C X (M ) is the Euler characteristic of M .
is the unique map satisfying the following properties: (a) For any distinguished triangle L → M → N → L[1] in ⟨T Chow (X)⟩, C X (M ) = C X (L) + C X (N ). (b) If p ∶ Y → X is a proper morphism with Y smooth over k and M is the direct summand of a primitive Chow motive p * 1 Y (n) defined by an endomorphism u, then C X (M ) = C X (p * 1 Y (n), u) is described by Corollary 5.1.13 and Proposition 5.1.15 (or 5.1.17). (2) If the collection of primitive Chow motives over X is negative, then we can replace ⟨T Chow (X)⟩ by T c (X) in the statement above.

Proof. On the one hand, the characteristic class satisfies these properties by Corollary 5.1.5, Corollary 5.1.13 and Proposition 5.1.15. On the other hand, the second property determines uniquely the characteristic class of all primitive Chow motives, and the uniqueness extends to ⟨T Chow (X)⟩ by additivity of traces. The second part follows from Lemma 5.2.4.
5.3.2. Let M ∈ SH c (S) be a constructible motivic spectrum, and let u ∶ M → M be an endomorphism of M in SH c (S). Then u induces an endomorphism u E ∶ M ⊗ E S → M ⊗ E S in Ho(E S − M od) c . By Definition 5.1.3, we have the characteristic class C SH S (M, u) ∈ H 0 (S k) in SH, as well as the characteristic class C E S (M ⊗ E S , u E ) ∈ E 0 (S k) in Ho(E S − M od).

Proposition 5.3.3. Via the map (5.3.1.2), the two classes above satisfy the identity
Proof. By hypothesis, we have the following isomorphism:
f * Hom(F, G) ⊗ T h X (L f ) (6.1.5.1) ≃ f ! Hom(F, G) (2.4.1.3)
(2.3.3.2) → f ! (M 1 ⊠ S M 2 ). (6.2.5.2)
[DJK18, 2.2.7(1)], there is a canonical specialization map induced by the base change ∆ * ∶ H 0 (X S) → H 0 (X s k). (6.2.15.2)

Proposition 6.2.16. Let M ∈ T c (X) be such that f ∶ X → S is universally M -transversal, and denote by M s ∶= M X s = s * X M ∈ T c (X s ). Let u ∈ Hom(M, M ) be an endomorphism of M , and denote by u s ∈ Hom(M s , M s ) the induced endomorphism of M s . Then via the specialization map (6.2.15.2), the
Given a distinguished triangle L → M → N → L[1] of motives, one can naturally ask about the relations between the characteristic classes of L, M and N . It is well known that for monoidal triangulated categories, the trace map fails to be additive along distinguished triangles in general ([…]). Note that since the étale realization functor is compatible with the six functors, our constructions are compatible with the ones in [AS07] and [SGA5, III].
1.4. Additivity of traces.
1.4.1.
1.4.3. We use a very similar approach for the generalized trace map: in Section 4, we prove the additivity of the characteristic class using the language of derivators in the motivic setting ([Ayo07, Section 2.4.5]). Using the same notations as 1.3.1, the main result is the following additivity for the Verdier pairing:

Theorem 1.4.4 (see Theorem 4.2.8). Let T c be a constructible motivic derivator (see Definition 4.2.4) whose underlying motivic triangulated category T satisfies resolution of singularities (see condition (RS) in 2.1.12). For i ∈ {1, 2}, let
Remark 2.4.4. When T = DM h is the category of h-motives, Proposition 2.4.3 also holds when M 1 and M 2 are locally constructible ([CD16, Definition 6.3.1]). This is because to show that the map (2.4.2.2) is an isomorphism it suffices to check it étale locally, and for any étale morphism f ∶ Y → X and objects … using the diagrams (2.4.1.5) and (2.4.1.6).
Section 4.4] for the detailed duality formalism related to this functor. By Proposition 2.3.5 and Proposition 2.4.3, we have the following isomorphism
[Commutative diagram (3.2.2.1): X 123 with projections p 123 12 ∶ X 123 → X 12 and p 123 23 ∶ X 123 → X 23 ; the correspondence c 12 ∶ C 12 → X 12 ; and projections p 12 2 ∶ X 12 → X 2 , p 23 2 ∶ X 23 → X 2 .]
1 L 1 are two correspondences. By Definition 3.2.3, there is a composition vu ∶ f * 1 L 1 → f ! 3 L 3 . The following is stated in [SGA5, III (5.2.10)] without proof:

Proposition 3.2.5. The Verdier pairing (3.1.8.1) satisfies the following equality
5.3. The characteristic class and Riemann-Roch transformations. In this section we study the compatibility between the characteristic class and Riemann-Roch transformations.

5.3.1. Assume that the pair (SH, k) satisfies condition (RS) in 2.1.12. Let E ∈ SH(k) be a ring spectrum endowed with a unital associative commutative multiplication. Let f ∶ S → k be a morphism. Following [CD19, 7.2.2], the homotopy category of modules over E S = f * E, Ho(E S − M od), is a motivic triangulated category, and the functor SH(S) → Ho(E S − M od), M ↦ M ⊗ E S (5.3.1.1) is a left adjoint of the forgetful functor Ho(E S − M od) → SH(S), which preserves constructible objects. The unit map φ ∶ 1 S → E S induces the A 1 -regulator map ([DJK18, Definition 4.1.2])
Corollary 5.3.4. Let E, F ∈ SH(k) be two ring spectra endowed with unital associative commutative multiplication. Let φ ∶ E → F be a morphism of ring spectra. With the notations in 5.3.2, we have
This notion is called "smoothable lci" in [DJK18].
In [Jin16] it is shown that for T = DM cdh,c and X quasi-projective over a perfect field, the idempotent completion of the additive subcategory generated by primitive Chow motives is equivalent to the category of relative Chow motives over X defined by Corti and Hanamura, whence the terminology.
In particular this condition means that f has relative dimension 0. If T c is oriented (i.e. endowed with a trivialization of Thom spaces of all vector bundles), then any lci morphism of relative dimension 0 satisfies this property.
For T c = SH c it is well-known that ⟨−1⟩ k ∈ End(1 k ) = GW (k) corresponds to the quadratic form x ↦ −x 2 ([Hoy15, Example 1.7]). This element is reduced to the identity in DM cdh,c or in ℓ-adic étale cohomology.
Acknowledgments. This work is inspired by a project initiated by Denis-Charles Cisinski about the …

Corollary 4.1.18. Assume that T c (0) has a dualizing object and consider a distinguished triangle in T c (0) (4.1.18.1). Let v be the element specified in the (TC3D) diagram for the triangles (f, g, h) and (f, g, h). Then there is a map η ∶ 1 → v in T c (0) such that the following incoherent diagrams commute: [diagram residue: … → Hom(y, y) and v → Hom(z, z) via j 1 .]

Definition 4.2.1. The 2-category DiaSch is defined as follows:
• An object of DiaSch is a pair (F, J) where J ∈ F inCat and F ∶ J → Sch is a covariant functor.
• A 1-morphism from (G, J ′ ) to (F, J) is the data of a functor α ∶ J ′ → J together with a natural transformation of functors f ∶ G → F ○ α.
• A 2-morphism from (f, α) to (f ′ , α ′ ) as above is a natural transformation t ∶ α → α ′ such that f ′ = t ○ f .
We say that a 1-morphism (f, α) ∶ (G, J ′ ) → (F, J) is Cartesian if α is an equivalence of categories and for any morphism i → j in J ′ , the following square is Cartesian:

Proposition 4.2.6. Let T c be a constructible motivic derivator whose underlying motivic triangulated category T satisfies (RS). We use the notations in 3.2.7. Let …

Proof. It suffices to construct the following incoherent diagram in T c (X, 0) …

Proof (of Lemma 6.1.6). We show that the first condition implies the second, the converse being similar. Since g is F -transversal and f is g * F -transversal, the following maps are isomorphisms: … Since g is lci and 1 Z is g-pure, it follows that g ! 1 Z is dualizable, and by 6.1.2 the following canonical maps are isomorphisms: … Therefore we have the following isomorphism (6.1.6.5). It is straightforward to check that the map (6.1.6.5) is induced by the map (6.1.1.3), and therefore g ○ f is F -transversal.

Lemma 6.1.7 ([Sai17, Proposition 8.7]). Let f ∶ X → Y be a k-morphism of schemes. Then the following statements hold: (1) If f is lci and 1 Y is f -pure, then for any …

Corollary 6.1.10. Let f ∶ X → Y be a morphism between smooth k-schemes.
Let F be an object of T c (Y ) such that f is F -transversal. Then the following canonical map is an isomorphism: …

Proof. … We know that f is also Hom(F, 1 Y )-transversal. By hypothesis and 6.1.5, both F and Hom(F, 1 Y ) are f -pure. We conclude by applying Lemma 6.1.9.

Lemma 6.1.11. Let p ∶ X → Y and g ∶ Y → S be two morphisms of schemes and let f = g ○ p ∶ X → S be their composition. Denote by Γ p , Γ g and Γ f the graph morphisms of p, g and f respectively. Let F be an object of T(X). Then the following statements hold:
(1) If g is smooth and p is F -transversal, then f is F -transversal.
(2) Assume that p is proper, f is F -transversal and the canonical map … → Γ ! f 1 X× k S (6.1.11.1) associated to the Cartesian square of schemes … is an isomorphism. Then g is p * (F )-transversal.

Proof. (1) We have a commutative diagram of schemes (6.1.11.3). Let H be an object of T c (S). Since g is smooth, we have canonical isomorphisms … Since Γ p is F ⊠ k g ! H-transversal, the following canonical map is an isomorphism: … (6.1.11.6) By (6.1.11.4), (6.1.11.5) and (6.1.11.6), the following canonical map is an isomorphism: … In other words Γ f is F ⊠ k H-transversal. Since this is true for any H, by definition f is F -transversal.
(2) Let H be an object of T c (S). Then Γ f is F ⊠ k H-transversal, and the following canonical map is an isomorphism: … (6.1.11.10) By (6.1.11.8), (6.1.11.9) and (6.1.11.10), the following canonical map is an isomorphism: … In other words Γ g is p * F ⊠ k H-transversal. Since this is true for any H, by definition g is p * F -transversal.

Remark 6.1.12. Note that the map (6.1.11.1) is an isomorphism if p is smooth, or if g factors through an open subscheme of S which is smooth over k.

Proof. By assumption and Proposition 2.3.5, we have the following isomorphism (6.1.13.1). It is straightforward to check that the map (6.1.13.1) agrees with (6.1.1.3), and therefore f is G 1 ⊠ k G 2 -transversal.

Conjecture 6.1.14.
Let f 1 ∶ X 1 → Y 1 and f 2 ∶ X 2 → Y 2 be two morphisms of schemes, and let f ∶Then for any k-scheme X 2 and any objectProof. Denote by p 1 , p 2 , p 3 the projections of X 1 × k Y 1 × k X 2 to its components. By definition, we need to show that for any G ∈ T(Y 1 ), the graph of fnamely the following canonical map is an isomorphism:Since Γ f 1 is F 1 ⊠ k G-transversal, the following canonical map is an isomorphism:By Proposition 2.3.5, we have the following isomorphism:It is straightforward to check that the map (6.1.15.4) agrees with (6.1.15.2), and therefore the latter is an isomorphism, which finishes the proof.6.2. Relative Künneth formulas and the relative Verdier pairing. In this section we use the results in Section 6.1 to extend the Künneth formulas to the relative setting, under some transversality assumptions.Using such results we define the relative Verdier pairing as in[YZ18].6.2.1. Let S be a scheme, and let π 1 ∶ X 1 → S and π 2 ∶ X 2 → S be two morphisms. For i = 1, 2, we denote by p i ∶ X 1 × S X 2 → X i the projections.Lemma 6.2.2. We use the notations of 6.2.1, and assume that the morphism π 2 is smooth. Let F 1 be an object of T(X 1 ) such that the morphism π 1 ∶ X 1 → S is universally F 1 -transversal. Then the canonicalProof. Denote by p 13 ∶ X 1 × S X 2 × k X 2 → X 1 × k X 2 the projection, and the graph of p 2Theorem 6.2.7. Let S be a scheme and let f 1 ∶ X 1 → Y 1 , f 2 ∶ X 2 → Y 2 be two S-morphisms. Denote by f ∶ X 1 × S X 2 → Y 1 × S Y 2 be their product. Let T be a motivic triangulated category. For i = 1, 2, consider objects L i ∈ T(X i ) and M i , N i ∈ T c (Y i ). Then the following maps in Theorem 2.4.6are such that (1) If for i = 1, 2, f i is universally strongly locally acyclic relatively to L i , then then map (6.2.7.1)is an isomorphism. (2) If for i = 1, 2, both X i and Y i are smooth over S and smooth over k, and the structure morphism Y i → S is universally M i -transversal, then the map (6.2.7.2) is an isomorphism. 
(3) For i = 1, 2, denote by q i ∶ Y i → S the structure morphism. Then the map (6.2.7.3) is an isomorphism if the following conditions hold: (a) The morphisms q 1 and q 2 are smooth, and Y 1 and Y 2 are smooth k-schemes.(c) For i = 1, 2, q i is universally M i -transversal and universally N i -transversal.Remark 6.2.8. By Proposition 6.3.5 below, the universal transversality conditions in Theorem 6.2.7 can be replaced by strong universal local acyclicity conditions. have H 0 (X S) ≃ CH n (X) is the Chow group of n-cycles over X (up to p-torsion), and the specialization map (6.2.15.2) is Fulton's specialization map of algebraic cycles ([Ful98, Section 10.1]). By Proposition 6.2.16, the relative characteristic class of a motive can be seen as an n-cycle spanned by a family of 0-cycles given by the characteristic classes of its fibers. It is conjectured that the relative characteristic class is related to the the relative characteristic cycle (see[6.3. Purity, local acyclicity and transversality. In this section we clarify the link between the notions of purity, local acyclicity and transversality conditions. We also study the relation with the Fulton-style specialization map in[DJK18].6.3.1. Let T be a motivic triangulated category which satisfies the condition (RS) in 2.1.12. Let f ∶ X → S be a morphism of schemes with S smooth over k. Recall from 6.1.5 that if 1 S is f -pure, then for any B ∈ T(X), f is B-transversal if and only if for any C ∈ T c (S),is the graph of f . This amounts to say that the following canonical map is an isomorphism:6.3.2. The map (6.3.1.1) is always an isomorphism when C is dualizable.Recall 6.3.3. Consider a Cartesian square of schemes (6.3.3.1)with p a lci morphism. 
For K ∈ T(X), there is a canonical map called refined purity transformation(1) If both S and T are smooth over k and f is K-transversal, then K is ∆-pure.(2) If 1 S is i-pure and f is strongly locally acyclic relatively to K, then K is ∆-pure.Proof.(1) We have an isomorphismwhere the first isomorphism follows from the transversality condition, and the last isomorphism follows from 6.3.2. The result follows by applying the functor i ′ * to the map (6.3.4.2).(2) Without loss of generality we can assume that i is a regular closed immersion. We have a canonical mapwhere the second isomorphism comes from purity, and the last map is the functor i ′ * applied to the map (6.3.3.2). It follows from the local acyclicity and the localization sequence that the map (6.3.4.3) is an isomorphism, which implies that K is ∆-pure. Proposition 6.3.5 ([BG02, Theorem B.2]). Assume that k is a perfect field. Let f ∶ X → S be a morphism of schemes which factors through an open subscheme S 0 of S which is smooth over k. Let K ∈ T(X). We consider Cartesian squares of the form (6.3.5.1)Then the following statements hold:(1) If for any Cartesian square (6.3.5.1) with p smooth, g is strongly locally acyclic relatively to q * K, then f is K-transversal.(2) If for any Cartesian square (6.3.5.1) with p smooth, g is q * K-transversal, then f is strongly locally acyclic relatively to K.Proof.By hypothesis, f factors through a morphism f 0 ∶ X → S 0 . It is easy to see that f is strongly locally acyclic relatively to K (resp. K-transversal) if and only if f 0 is strongly locally acyclic relatively to K (resp. K-transversal). Therefore by working with f 0 we can assume that S = S 0 is smooth over k.We consider a Cartesian square of schemes (6.3.5.2)where S ′ is smooth over k, and a fortiori s is a lci morphism. 
Then we have the following diagram (6.3.5.3)• The map (a) is (6.1.5.1).• The map (b) is (2.1.5.1).• The map (c) is an isomorphism induced by base change and projection formula.• The map (d) is an isomorphism deduced from (6.1.5.1) by 6.3.2.• The map (e) is (6.3.3.2). One can check that the diagram is commutative.(1) If f is strongly locally acyclic relatively to K, then the map (b) above is an isomorphism. If the local acyclicity condition holds after any smooth base change, then by Lemma 6.3.4 K is ∆-pure and the map (e) above is an isomorphism, which implies that the map (a) above is an isomorphism. It follows from strong devissage that f is K-transversal.(2) If f is K-transversal, then the map (a) above is an isomorphism. If the transversality condition holds after any smooth base change, then by Lemma 6.3.4 K is ∆-pure and the map (e) above is an isomorphism, which implies that the map (b) above is an isomorphism. It follows from strong devissage that f is strongly locally acyclic relatively to K.6.3.6. We now show that the strong local acyclicity is equivalent to the following property, similar to the one in [Sai17, Proposition 8.11]:Definition 6.3.7. Let f ∶ X → S be a morphism of schemes and let K ∈ T(X). We say that the morphism f is strongly fibrewise locally acyclic relatively to K if the following condition holds: For any schemes S ′ and S ′′ smooth over a finite extension k ′ of k, and for any cartesian diagram of schemeswhere p is proper and generically finite and i is a closed immersion, and for any L ∈ T(S ′ ) the following composition map is an isomorphism:We say that f is universally strongly fibrewise locally acyclic relatively to K if the condition above holds after any base change.The proof of the following statement is inspired by [CD15, Proposition 7.2]: Proposition 6.3.8. Let f ∶ X → S be a morphism of schemes and let K ∈ T(X). 
Then f is universally strongly locally acyclic relatively to K if and only if f is universally strongly fibrewise locally acyclic relatively to K.Proof. If f is universally strongly locally acyclic relative to F , then by an argument similar to Lemma 6.3.4 we know that f is universally strongly fibrewise locally acyclic relatively to F . Now assume that f is universally strongly fibrewise locally acyclic relatively to F . Then it follows that for any Cartesian diagram (6.3.8.1)T is a base change of f , q is proper and generically finite, j is an open immersion with complement a strict normal crossing divisor and both U and T ′ are smooth over a finite extension k ′ of k, and any M ∈ T(U ), the following canonical map is an isomorphism:. We need to prove that for any Cartesian square (6.3.8.3)T is a base change of f , and any N ∈ T(V ), the following canonical map is an isomorphism:. We prove this claim by noetherian induction on V . By the existence of a compactification, we can factor the morphism r ∶ V → T above as an open immersion with dense image j 1 ∶ V →V followed by a proper morphism p ∶V → T . (6.3.8.5)Since p is proper, it suffices to prove that under the assumption (RS 1) in 2.1.12 (respectively under the assumption (RS 2), for every prime number l different from the characteristic of k), there exists a non-empty open immersion j 2 ∶ V ′ → V such that the following canonical map is an isomorphism (resp. is an isomorphism with coefficients in Z (l) ): (6.3.8.6)K YV ⊗ f * V j 1 * j 2 * j * 2 N (2.1.5.1) → j 1Y * (j * 1Y K YV ⊗ f * V j 2 * j * 2 N ).We can assume thatV is integral. By the assumption (RS 1) (resp. by de Jong-Gabber alteration ([ILO14, X. Theorem 2.1])), there exists a proper surjective morphism h ∶Ṽ →V which is birational (respectively generically flat, generically finite with degree prime to l) such thatṼ is smooth over k (resp. 
smooth over a finite extension k′ of k of degree prime to l), and such that the inverse image of V̄ ∖ V in Ṽ is a strict normal crossing divisor. → i_{X*} i_X^* A. Note that by Lemma 6.3.4, a similar result holds for the transversality condition when Z and S are smooth over a field.

Remark 6.3.11. Corollary 6.3.10 is a consequence of the strong local acyclicity and therefore gives a criterion to detect it. In the usual derived category of étale sheaves, the vanishing cycle formalism ([SGA7 II, XIII]) gives an insightful interpretation of this phenomenon: the failure of local acyclicity is precisely described by the stalks of the vanishing cycle complex. We do not know how to realize such a picture in the motivic world.
A. Abbes, T. Saito, The characteristic class and ramification of an l-adic étale sheaf, Invent. Math. 168 (2007), no. 3, 567-612.
J. Ayoub, Les six opérations de Grothendieck et le formalisme des cycles évanescents dans le monde motivique, Astérisque No. 314-315 (2007).
J. Ayoub, La réalisation étale et les opérations de Grothendieck, Ann. Sci. Éc. Norm. Supér. (4) 47 (2014), no. 1, 1-145.
S. Bloch, Algebraic cycles and higher K-theory, Adv. in Math. 61 (1986), no. 3, 267-304.
M. Bondarko, Weight structures vs. t-structures; weight filtrations, spectral sequences, and complexes (for motives and in general), J. K-Theory 6 (2010), no. 3, 387-504.
M. Bondarko, F. Déglise, Dimensional homotopy t-structures in motivic homotopy theory, Adv. Math. 311 (2017), 91-189.
M. Bondarko, M. Ivanov, On Chow weight structures for cdh-motives with integral coefficients, Algebra i Analiz 27 (2015), no. 6, 14-40. Reprinted in St. Petersburg Math. J. 27 (2016), no. 6, 869-888.
M. Bondarko, A. Luzgarev, On relative K-motives, weights for them, and negative K-groups, arXiv:1605.08435, 2016.
A. Braverman, D. Gaitsgory, Geometric Eisenstein series, Invent. Math. 150 (2002), no. 2, 287-384.
D.-C. Cisinski, Cohomological methods in intersection theory, to appear in the Clay lecture notes from the LMS-CMI Research School "Homotopy Theory and Arithmetic Geometry: Motivic and Diophantine Aspects".
D.-C. Cisinski, F. Déglise, Integral mixed motives in equal characteristic, Documenta Math. Extra Volume: Alexander S. Merkurjev's Sixtieth Birthday (2015), 145-194.
D.-C. Cisinski, F. Déglise, Étale motives, Compos. Math. 152 (2016), no. 3, 556-666.
D.-C. Cisinski, F. Déglise, Triangulated categories of motives, to appear in the series Springer Monographs in Mathematics.
F. Déglise, Bivariant theories in stable motivic homotopy, to appear in Documenta Mathematica, 2018.
F. Déglise, F. Jin, A. Khan, Fundamental classes in motivic homotopy theory, arXiv:1805.05920, 2018.
E. Elmanto, A. Khan, Perfection in motivic homotopy theory, arXiv:1812.07506, 2018.
H. Fausk, P. Hu, J. P. May, Isomorphisms between left and right adjoints, Theory Appl. Categ. 11 (2003), no. 4, 107-131.
D. Ferrand, On the non additivity of the trace in derived categories, arXiv:math/0506589, 2005.
W. Fulton, Intersection theory, second edition, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, 2. Springer-Verlag, Berlin, 1998.
M. Groth, K. Ponto, M. Shulman, The additivity of traces in monoidal derivators, J. K-Theory 14 (2014), no. 3, 422-494.
M. Hoyois, A quadratic refinement of the Grothendieck-Lefschetz-Verdier trace formula, Algebraic & Geometric Topology 14 (2015), no. 6, 3603-3658.
L. Illusie, Y. Laszlo, F. Orgogozo, Travaux de Gabber sur l'uniformisation locale et la cohomologie étale des schémas quasi-excellents, Séminaire à l'École Polytechnique 2006-2008. With the collaboration of Frédéric Déglise, Alban Moreau, Vincent Pilloni, Michel Raynaud, Joël Riou, Benoît Stroh, Michael Temkin and Weizhe Zheng. Astérisque No. 363-364 (2014). Société Mathématique de France, Paris, 2014.
F. Jin, Borel-Moore motivic homology and weight structure on mixed motives, Math. Z. 283 (2016), no. 3, 1149-1183.
K. Kato, T. Saito, Ramification theory for varieties over a perfect field, Ann. of Math. (2) 168 (2008), no. 1, 33-96.
A. Khan, Motivic homotopy theory in derived algebraic geometry, Ph.D. thesis, Universität Duisburg-Essen, 2016, available at https://www.preschema.com/thesis/.
M. Levine, Toward an enumerative geometry with quadratic forms, arXiv:1703.03049, 2018.
M. Levine, Motivic Euler characteristics and Witt-valued characteristic classes, arXiv:1806.10108, 2018.
J. Lurie, Higher topos theory, Annals of Mathematics Studies, 170. Princeton University Press, Princeton, NJ, 2009.
J. P. May, The additivity of traces in triangulated categories, Adv. Math. 163 (2001), no. 1, 34-73.
F. Morel, V. Voevodsky, A^1-homotopy theory of schemes, Inst. Hautes Études Sci. Publ. Math. No. 90 (1999), 45-143 (2001).
M. Olsson, Motivic cohomology, localized Chern classes, and local terms, Manuscripta Math. 149 (2016), no. 1-2, 1-43.
O. Röndigs, The Grothendieck ring of varieties and algebraic K-theory of spaces, arXiv:1611.09327, 2016.
T. Saito, The characteristic cycle and the singular support of a constructible sheaf, Invent. Math. 207 (2017), no. 2, 597-695.
M. Artin, A. Grothendieck, J.-L. Verdier, Théorie des topos et cohomologie étale des schémas. Séminaire de Géométrie Algébrique du Bois-Marie 1963-1964 (SGA 4). Dirigé par M. Artin, A. Grothendieck et J.-L. Verdier. Avec la collaboration de N. Bourbaki, P. Deligne et B. Saint-Donat. Lecture Notes in Mathematics, Vol. 269, 270, 305. Springer-Verlag, Berlin-New York, 1972-1973.
P. Deligne, Cohomologie étale, Séminaire de géométrie algébrique du Bois-Marie SGA 4 1/2. Avec la collaboration de J. F. Boutot, A. Grothendieck, L. Illusie et J. L. Verdier. Lecture Notes in Mathematics, Vol. 569. Springer-Verlag, Berlin-New York, 1977.
A. Grothendieck, Cohomologie l-adique et fonctions L, Séminaire de géométrie algébrique du Bois-Marie 1965-66 (SGA 5). Avec la collaboration de I. Bucur, C. Houzel, L. Illusie, J.-P. Jouanolou et J.-P. Serre. Springer Lecture Notes. Springer-Verlag, Berlin-New York, 1977.
P. Deligne, N. Katz, Groupes de monodromie en géométrie algébrique II, Séminaire de géométrie algébrique du Bois-Marie 1967-1969 (SGA 7 II). Lecture Notes in Mathematics, Vol. 340. Springer-Verlag, Berlin-New York, 1973.
V. Voevodsky, A. Suslin, E. Friedlander, Cycles, transfers, and motivic homology theories, Annals of Mathematics Studies, 143. Princeton University Press, Princeton, NJ, 2000.
E. Yang, Y. Zhao, On the relative twist formula of ℓ-adic sheaves, arXiv:1807.06930, to appear in Acta Mathematica Sinica, English Series, 2018.

Fakultät für Mathematik, Universität Duisburg-Essen, Thea-Leymann-Strasse 9, 45127 Essen, Germany
E-mail address: [email protected]

School of Mathematical Sciences, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing, 100871, P. R. China
E-mail address: [email protected]
[
"Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS",
"Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS"
] | [
"Mahardhika Pratama ",
"Choiru Za'in ",
"Eric Pardede ",
"\nNanyang Technological University\n50 Nanyang Avenue639798Singapore\n",
"\nPlenty Rd & Kingsbury Dr\nBundoora VIC 3086, Australia\n"
] | [
"Nanyang Technological University\n50 Nanyang Avenue639798Singapore",
"Plenty Rd & Kingsbury Dr\nBundoora VIC 3086, Australia"
] | [] | The main challenge in large-scale data stream analytics lies in the ability of machine learning to generate largescale data knowledge in reasonable timeframe without suffering from a loss of accuracy. Many distributed machine learning frameworks have recently been built to speed up the large-scale data learning process. However, most distributed machine learning used in these frameworks still uses an offline algorithm model which cannot cope with the data stream problems. In fact, large-scale data are mostly generated by the non-stationary data stream where its pattern evolves over time. To address this problem, we propose a novel Evolving Large-scale Data Stream Analytics framework based on a Scalable Parsimonious Network based on Fuzzy Inference System (Scalable PANFIS), where the PANFIS evolving algorithm is distributed over the worker nodes in the cloud to learn large-scale data stream. Scalable PANFIS framework incorporates the active learning (AL) strategy and two model fusion methods. The AL accelerates the distributed learning process to generate an initial evolving large-scale data stream model (initial model), whereas the two model fusion methods aggregate an initial model to generate the final model. The final model represents the update of current large-scale data knowledge which can be used to infer future data. Extensive experiments on this framework are validated by measuring the accuracy and running time of four combinations of Scalable PANFIS and other Spark-based built in algorithms. The results indicate that Scalable PANFIS with AL improves the training time to be almost two times faster than Scalable PANFIS without AL. The results also show both rule merging and the voting mechanisms yield similar accuracy in general among Scalable PANFIS algorithms and they are generally better than Spark-based algorithms. 
In terms of running time, the Scalable PANFIS training time outperforms all Spark-based algorithms when classifying numerous benchmark datasets. | 10.1016/j.knosys.2018.12.028 | [
"https://arxiv.org/pdf/1807.06996v1.pdf"
] | 49,864,476 | 1807.06996 | 0044cfebe16945237e37e80d9b92ec494dfc404a |
Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS
18 Jul 2018
Mahardhika Pratama
Choiru Za'in
Eric Pardede
Nanyang Technological University
50 Nanyang Avenue639798Singapore
Plenty Rd & Kingsbury Dr
Bundoora VIC 3086Australia
Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS
18 Jul 2018
arXiv:1807.06996v1 [cs.AI]
Keywords: Large-scale Data Stream Analytics; Distributed Data Stream Mining; Parallel Data Stream Processing; Scalable Machine Learning; Big Data; Knowledge Integration (Fusion)
The main challenge in large-scale data stream analytics lies in the ability of machine learning to generate large-scale data knowledge in a reasonable timeframe without suffering a loss of accuracy. Many distributed machine learning frameworks have recently been built to speed up the large-scale data learning process. However, most of the distributed machine learning used in these frameworks is still based on an offline algorithm model which cannot cope with data stream problems. In fact, large-scale data are mostly generated by non-stationary data streams whose patterns evolve over time. To address this problem, we propose a novel Evolving Large-scale Data Stream Analytics framework based on a Scalable Parsimonious Network based on Fuzzy Inference System (Scalable PANFIS), where the PANFIS evolving algorithm is distributed over the worker nodes in the cloud to learn large-scale data streams. The Scalable PANFIS framework incorporates an active learning (AL) strategy and two model fusion methods. The AL accelerates the distributed learning process to generate an initial evolving large-scale data stream model (initial model), whereas the two model fusion methods aggregate the initial model to generate the final model. The final model represents the update of current large-scale data knowledge, which can be used to infer future data. Extensive experiments on this framework are validated by measuring the accuracy and running time of four combinations of Scalable PANFIS and other Spark-based built-in algorithms. The results indicate that Scalable PANFIS with AL improves the training time to be almost two times faster than Scalable PANFIS without AL. The results also show that both the rule merging and voting mechanisms yield similar accuracy in general among the Scalable PANFIS algorithms, and that they are generally better than the Spark-based algorithms. In terms of running time, the Scalable PANFIS training time outperforms all Spark-based algorithms when classifying numerous benchmark datasets.
Introduction
Large-scale data stream analytics has become one of the emerging areas in data science [1,2,3]. In the Internet era, a large volume of data in many forms (e.g. text, pictures, sound, video, signals) can be generated from numerous sources (e.g. IoT, Web 2.0, and social networks). This information is essential for many companies/corporations to support urgent decision-making and to ensure their competitive advantage. Extracting valuable knowledge from large-scale data streams is challenging due to their 4V characteristics: volume, velocity, variety and veracity. Large-scale data streams are mostly generated by real-world applications in which data arrive continuously in non-stationary environments. Given the velocity characteristic, it is important to obtain knowledge from large-scale data efficiently, i.e. in a reasonable timeframe without a reduction in the algorithm's accuracy [4].
1 Nanyang Technological University, Singapore
2 La Trobe University, Australia
The large-scale data stream analytics problem can be solved in two ways: 1) distributed computing; and 2) streaming algorithms [5]. Distributed computing focuses on how to distribute/parallelize data processing from a single-node, CPU-based process into a multi-node, cluster-based processing framework [6], thus accelerating the learning time. A streaming algorithm, also known as an evolving algorithm, processes/learns data at high speed, in a single pass and in an online manner. Its structure evolves as each incoming datum is learned. It does not require historical data, since each datum is discarded once the current information/pattern/model has been updated with it. This feature helps to reduce the storage requirement because historical data do not need to be retained.
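As a minimal illustration of the single-pass, constant-memory behaviour described above — a toy nearest-class-mean learner, not PANFIS itself — a streaming learner keeps only summary statistics and discards each datum once it has been learned:

```python
# Illustrative sketch (not PANFIS): a single-pass, constant-memory learner.
# It retains only per-class running sums and counts; every datum is
# discarded after being learned, the defining property of a streaming
# (evolving) algorithm.
class NearestClassMean:
    def __init__(self):
        self.sums = {}    # class label -> running sum of feature vectors
        self.counts = {}  # class label -> number of samples seen so far

    def learn_one(self, x, y):
        """Update summaries with one datum (x, y); x is then forgotten."""
        if y not in self.sums:
            self.sums[y] = [0.0] * len(x)
            self.counts[y] = 0
        self.counts[y] += 1
        self.sums[y] = [s + xi for s, xi in zip(self.sums[y], x)]

    def predict_one(self, x):
        """Assign x to the class whose running mean is closest."""
        def dist(label):
            center = [s / self.counts[label] for s in self.sums[label]]
            return sum((xi - ci) ** 2 for xi, ci in zip(x, center))
        return min(self.sums, key=dist)
```

Memory usage here is fixed by the number of classes and features, independent of how many data points stream through — the property that makes such learners attractive for unbounded streams.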
Recent work on large-scale data analytics was reported in [7], utilizing the MapReduce [8] method. In that work, the distributed algorithm used to learn each data partition was still an offline algorithm, namely Fuzzy Rule Based Classifiers (FRBCSs) [9], used to model complex problems. However, offline learning is not efficient, especially in handling rapidly varying and vast amounts of large-scale streaming data. On the other hand, processing large-scale data using a single-node evolving algorithm is limited by the memory and bandwidth of a single machine. This issue remains the main challenge for further developments in large-scale data analytics. Taking the benefits of both distributed processing and online data processing, a large-scale data stream analytics framework should combine the scalability of distributed learning with the efficiency of an evolving algorithm.
In this work, we propose an Evolving large-scale data stream analytics framework based on Scalable PANFIS, where PANFIS [10] is a seminal evolving algorithm based on a hybrid neuro-fuzzy system which has the capability to learn data streams in single-pass mode, so as to cope with high-speed and dynamically changing data streams.
Three methods are involved in the Scalable PANFIS framework: 1) active learning (AL); 2) rule merging; and 3) majority voting.
The training phase of this framework is conducted by distributing the PANFIS algorithm (with or without AL) across the worker nodes. AL is a method to accelerate the learning process by selecting the important instances of the training data. The blue box in Fig. 1 illustrates that PANFIS (with or without AL) learns a data stream partition in each worker node. Furthermore, rule merging is designed to aggregate several models from different data stream partitions to yield a single model for the inference task. The majority voting method is applied to obtain the output from the majority decision of the multiple classifiers in the system.
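The per-partition training and majority-voting stages can be sketched as follows. The local "model" here is a deliberately simple stand-in for one PANFIS instance, and the plain `map` over partitions simulates what a distributed engine would do on the worker nodes; the names and structure are illustrative assumptions, not the framework's actual code:

```python
# Hypothetical sketch of the framework's two phases: (1) each worker trains
# a local model on its own data-stream partition, (2) the ensemble output is
# obtained by majority voting over the local models' predictions.
from collections import Counter

def train_local(partition):
    """Toy per-worker training: per-class means as a stand-in for PANFIS."""
    sums, counts = {}, {}
    for x, y in partition:
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict_local(model, x):
    """One local classifier's vote: nearest class center."""
    return min(model, key=lambda y: sum((xi - ci) ** 2
                                        for xi, ci in zip(x, model[y])))

def majority_vote(models, x):
    """Final output: the label chosen by most local models."""
    votes = Counter(predict_local(m, x) for m in models)
    return votes.most_common(1)[0][0]

# Training across partitions (on Spark this would be a mapPartitions call):
# models = [train_local(p) for p in partitions]
```

In a real deployment the list comprehension over partitions would be replaced by the cluster engine's partition-wise map, but the vote aggregation logic stays the same.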
To summarize, the main contributions of this work are as follows:
• We present the Scalable PANFIS framework, an evolving large-scale data stream analytics framework and a distributed streaming algorithm, which can deal with large-scale data stream prediction problems. This framework is scalable/distributable and can cope with dynamic and evolving data streams.
• Our evolving large-scale data stream analytics framework is developed under four structures, using combinations of the active learning, rule merging and voting mechanisms. All four structures are capable of processing large-scale data streams in one-pass fashion.
• We present a robust rule merging method to solve the aggregation problem in large-scale distributed data stream training. Because different data partitions are fed to the worker nodes, possibly distorting the original data distribution, the rules generated from the training data partitions (the initial model) cannot be applied directly to merge the models of different data partitions. This issue is mainly due to poor rules generated from partitions whose distributions deviate from the original data distribution. Our rule merging method solves the aggregation problem and yields stable performance (accuracy) across all datasets.
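A hedged sketch of the aggregation idea behind such a rule-merging step: rules arriving from different partitions (reduced here to a centre vector plus a support count) are merged whenever their centres are closer than a distance threshold, using a support-weighted average. The rule representation and threshold are illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative rule merging: combine rules from different partitions whose
# centers are within `threshold` of each other, weighting by support so that
# well-populated rules dominate the merged center.
def merge_rules(rules, threshold=1.0):
    """rules: list of (center, support) pairs from all partitions."""
    merged = []
    for center, support in rules:
        for i, (c, s) in enumerate(merged):
            d = sum((a - b) ** 2 for a, b in zip(center, c)) ** 0.5
            if d < threshold:
                total = s + support
                # support-weighted average of the two centers
                merged[i] = ([(a * s + b * support) / total
                              for a, b in zip(c, center)], total)
                break
        else:
            merged.append((list(center), support))
    return merged
```

The support weighting is the key design choice: a rule backed by many samples in one partition should not be dragged far by a sparsely supported near-duplicate from another partition.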
The rest of the paper is organized as follows. Section 2 discusses the related research: the PANFIS algorithm and its learning policy. Section 3 explains the architecture of the large-scale data analytics framework. Section 4 describes our proposed approach, specifically the active learning strategy and the two aggregation methods. Section 5 discusses the numerical study (experimental setup, algorithm comparisons, and the numerical results of large-scale data analytics) together with a discussion, and Section 6 concludes the paper.
PANFIS
PANFIS is an evolving algorithm built on evolving neuro-fuzzy systems (ENFSs), an extension of the well-known classic neuro-fuzzy systems (NFSs) [11]. NFSs combine fuzzy systems, which follow the principles of human reasoning, with neural networks, which offer learning ability, parallelism, and robustness.
Basically, ENFSs are the evolving version of NFSs: they can evolve their structure (rules) to adapt to changing environments, which is essential for coping with non-stationary environments.
The PANFIS evolving algorithm can learn data without an initial structure. During learning, its structure (fuzzy rules and their parameters) evolves: new rules can be generated, updated, and pruned. In addition, a merging process identifies identical (or similar) fuzzy rule sets to simplify rule complexity.
The main feature of PANFIS is the construction of ellipsoids in arbitrary position to support multidimensional membership functions in the feature space. Although the inference process of PANFIS is carried out in the high-dimensional space, these ellipsoids in arbitrary position can be projected onto fuzzy sets to form the antecedent parts of the rules, which are interpretable to the user via two fuzzy set extraction methods.
The evolution of rules is controlled by the datum significance (DS) criterion, which represents the potential of the current datum being learned by the system. DS was initially proposed in [12] and [13] to identify the significance of a datum, measured by its statistical contribution to PANFIS' output. When its value is high, the datum is considered to have high descriptive power and is thus a good candidate for a new rule.
The rule adaptation policy is executed when the arriving datum falls within the current clusters. In this case, the winning rule's parameters are adjusted to determine the new coverage/span of the winning rule. Rule adaptation in the original PANFIS utilizes ESOM; however, this method suffers from instability, requiring re-inversion when the inverse covariance matrix is ill-conditioned (e.g., due to redundant input features). As a result, the adaptation formula of GENEFIS [14], pClass [15], and GEN-SMART-EFS [16] is adopted instead.
The three properties of the winning rule are updated as follows:
C_{win}^{update} = \frac{N_{win}^{prev}}{N_{win}^{prev}+1}\,C_{win}^{prev} + \frac{X - C_{win}^{prev}}{N_{win}^{prev}+1}, \tag{1}

\Sigma_{win}^{-1}(update) = \frac{\Sigma_{win}^{-1}(prev)}{1-\alpha} - \frac{\alpha}{1-\alpha}\,\frac{\left(\Sigma_{win}^{-1}(prev)\,(X-C_{win}^{update})\right)\left(\Sigma_{win}^{-1}(prev)\,(X-C_{win}^{update})\right)^{T}}{1+\alpha\,(X-C_{win}^{prev})\,\Sigma_{win}^{-1}(prev)\,(X-C_{win}^{prev})^{T}}, \tag{2}

N_{win}^{update} = N_{win}^{prev} + 1, \tag{3}

where C_{win}^{update}, \Sigma_{win}^{-1}(update), and N_{win}^{update} denote the updated focal point, inverse dispersion matrix, and population of the winning rule.
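As a concrete illustration, the three updates in Eqs. (1)-(3) can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: the factor `alpha` is assumed to be 1/(N+1), and the updated center is used throughout the rank-one update for simplicity.

```python
import numpy as np

def update_winning_rule(C, Sigma_inv, N, x):
    """One-pass update of the winning rule's focal point, inverse
    dispersion matrix, and population (sketch of Eqs. (1)-(3))."""
    alpha = 1.0 / (N + 1)                      # assumed learning factor
    # Eq. (1): shift the focal point toward the new sample x
    C_new = (N / (N + 1)) * C + (x - C) / (N + 1)
    # Eq. (2): rank-one update of the inverse dispersion matrix,
    # avoiding an explicit re-inversion (Sherman-Morrison style)
    d = x - C_new
    v = Sigma_inv @ d
    denom = 1.0 + alpha * (d @ Sigma_inv @ d)
    Sigma_inv_new = Sigma_inv / (1 - alpha) - (alpha / (1 - alpha)) * np.outer(v, v) / denom
    # Eq. (3): increment the rule population
    return C_new, Sigma_inv_new, N + 1

C = np.zeros(2); Sigma_inv = np.eye(2); N = 4
C_new, S_new, N_new = update_winning_rule(C, Sigma_inv, N, np.array([1.0, 0.0]))
```

Because only the inverse is maintained, the update stays O(u^2) per sample instead of the O(u^3) cost of re-inverting the dispersion matrix.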
Rule pruning in PANFIS is driven by an extended rule significance (ERS) concept, which represents the contribution of every rule to the system output. ERS is inspired by the concept in the SAFIS method [13] but is extended by integrating hyperplane consequents and generalizing to ellipsoids in arbitrary position, which enables rules to be pruned in the high-dimensional learning space.
The fuzzy set merging in PANFIS is carried out when some fuzzy sets overlap, i.e., when they have similar membership functions. This is done to reduce fuzzy rule redundancy and thus form an interpretable rule base. The similarity calculation between two fuzzy sets can be found in [17]; two fuzzy sets are merged if the similarity S_ker ≥ 0.8. Note that this mechanism is carried out only if one wishes to produce classical interpretable rules with t-norm operators as the AND parts of the rules. This step is required because the projection of a high-dimensional rule onto one-dimensional spaces often results in overlapping fuzzy sets.
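To illustrate the merging policy, the sketch below checks a 0.8 similarity threshold between two one-dimensional Gaussian fuzzy sets and merges them when it is met. The distance-based similarity used here is only a stand-in for the kernel-based measure S_ker defined in [17], and the merged-set formula is an assumption for illustration.

```python
import math

def gaussian_set_similarity(c1, s1, c2, s2):
    # Illustrative surrogate for S_ker: 1.0 for identical sets,
    # decaying as the centers drift apart relative to their spreads.
    return math.exp(-abs(c1 - c2) / (s1 + s2))

def merge_if_similar(c1, s1, c2, s2, threshold=0.8):
    """Merge two Gaussian fuzzy sets (center, spread) if they are
    sufficiently similar; otherwise keep both (return None)."""
    if gaussian_set_similarity(c1, s1, c2, s2) < threshold:
        return None                      # keep both fuzzy sets
    c = 0.5 * (c1 + c2)                  # midpoint center
    s = max(s1, s2) + 0.5 * abs(c1 - c2) # spread covering both supports
    return c, s
```

With this surrogate, nearly coincident sets collapse into one, while well-separated sets survive unchanged, which is the qualitative behaviour the paper's mechanism aims for.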
The adjustment of the fuzzy consequents of PANFIS is driven by the enhanced recursive least squares (ERLS) method, inspired by conventional recursive least squares (RLS) [18]. ERLS is designed to support the convergence of the system error, which is used for the weight vector updates.
Four Structures of Scalable PANFIS for Evolving Large-Scale Data Stream Analytics
The Scalable PANFIS framework covers three major phases: training, aggregation, and testing. In this section, we divide the discussion into two parts. The first part (subsection 3.1) discusses the Scalable PANFIS training mechanism, specifying how PANFIS (with or without AL) is parallelized on the Apache Spark (Spark) platform [19]. At the end of the training process, the driver node receives the results obtained by PANFIS learning on each data stream partition as the initial model. The second part (subsection 3.2) discusses how the initial model is processed further through the aggregation mechanism to deliver the final knowledge base or model.
Scalable PANFIS Framework
The Apache Spark platform [19] is regarded as the latest platform for distributed data processing. In comparison with older platforms such as Hadoop, Spark improves data processing speed significantly because it supports an in-memory rather than a disk-based programming model. The Spark ecosystem contains two parts: 1) the Spark core; and 2) the programming interface core. The Spark core lies in the lower-level library of the Spark ecosystem and serves the programming interface core. The programming interface core is integrated through Spark APIs, which support many programming languages such as Scala, Java, Python, and R. Furthermore, the Spark API also provides a machine learning library (Spark MLlib), GraphX for graph analysis, the Spark Streaming module for stream processing, and SQL for structured data processing. For large-scale data analytics, these Spark ecosystem components enable the framework to conduct parallel data processing and support real-time insight/knowledge generation from large-scale data streams.
The R language is chosen as the main programming language in the Scalable PANFIS framework, as it is a well-known language for data analysis. To bridge the operation between R and Spark, the SparkR library (a backend between R and the Java Virtual Machine (JVM)) is utilized to manipulate and process large-scale data in a parallel/distributed manner. The data type used for processing large-scale data in parallel mode is the Spark DataFrame (DataFrames), a unique Spark data abstraction stored in memory across the cluster; the overall data flow is shown in Fig. 1. The Spark platform consists of a driver node and several worker nodes, along with the data sources (e.g., Hadoop
Distributed File System (HDFS)). The training process of the Scalable PANFIS framework is initialized by acquiring the dataset from the data source, shown as step 1 in Fig. 1. The second step creates the DataFrames in the memory cluster, instructed by the driver node, as a result of loading data from the HDFS source; these DataFrames are then used in many subsequent Spark operations such as querying, subsetting, and feeding the data into machine learning algorithms. In our Scalable PANFIS framework, after the DataFrames are created in the memory cluster, they can be manipulated repeatedly. For example, DataFrames can be partitioned and distributed to be processed on the worker nodes, as shown in step 3. Then, the partitioned/chunked data are processed on the worker nodes using either the Spark machine learning library or a user-defined function (e.g., PANFIS with AL), as shown in step 4 in Fig. 1 (see the blue solid box with the dashed arrow). The resulting models are then sent back to the driver node as the initial model in step 5.
Step 6 is optional, depending on whether the models should be saved to stable storage for backup purposes or used directly for the next process (rule merging or majority voting). Please note that step 5 is the end of the Scalable PANFIS framework's training process (the initial model is generated in the driver node). The initial model is then aggregated into the final model using either the rule merging or the majority voting method.
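The partition-train-collect cycle (steps 3-5) can be mimicked outside Spark with a thread pool. The sketch below is a toy stand-in: `train_panfis_partition` is a hypothetical placeholder for PANFIS (with or without AL), not the real learner, and the round-robin split stands in for Spark's partitioning of a DataFrame.

```python
from concurrent.futures import ThreadPoolExecutor

def train_panfis_partition(partition):
    """Stand-in worker-node learner: the 'model' here is just the
    class counts seen in the partition (a hypothetical placeholder)."""
    model = {}
    for x, y in partition:
        model[y] = model.get(y, 0) + 1
    return model

def scalable_training(stream, n_partitions=4):
    # Step 3: split the stream into equally sized partitions
    partitions = [stream[i::n_partitions] for i in range(n_partitions)]
    # Step 4: each "worker" trains its partition in parallel;
    # step 5: the driver collects one model per partition (initial model)
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        return list(pool.map(train_panfis_partition, partitions))

stream = [((i, i), i % 2) for i in range(100)]
initial_model = scalable_training(stream)
```

Exactly one model comes back per partition, which is the shape of the initial model the aggregation phase then consumes.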
Structures of the Scalable PANFIS Framework
This subsection details the four structures of the Scalable PANFIS framework. The first and second structures utilize Scalable PANFIS (without AL) to generate the initial model; the first structure uses rule merging and the second uses majority voting to aggregate it. The third and fourth structures are the first two augmented with AL.
Scalable PANFIS Framework using the Rule Merging Method
This structure of the Scalable PANFIS framework uses distributed PANFIS to train the large-scale data stream and generate the initial model, which is then aggregated using the rule merging method to produce the final model. This structure is depicted in Fig. 2. The initial model comprises the classifiers generated in the training phase: it contains l models resulting from the training of l data stream partitions.
From the l models, r rules are extracted prior to rule merging, as depicted in Fig. 2; assuming that each model contains one or more rules, r ≥ l.
Unlike a single-classifier system, where rules can be used directly on the testing dataset, using all r rules extracted from the initial model directly for inference is impractical, because the resulting classification system contains many mutually overlapping rules. Such an over-complex rule base may deteriorate the generalization power of the classifier, since the rules are generated from different subsets of a big dataset and some of them may be poor due to low support or because they were crafted from a small picture of the true distribution. To overcome this drawback, the rule merging method is applied to reduce the classifier's complexity.
The rule merging method has previously been applied in big data processing with fuzzy systems, such as in [20] and [21]. However, applying rule merging directly results in performance degradation. We find that the decrease in accuracy is caused by the class overlapping problem; furthermore, outliers may appear and generate new rules with very small populations. To validate this hypothesis, we conducted an experiment on the HEPMASS dataset showing that initial rule selection (selecting the k best rules) and rule removal (eliminating rules with small populations) prior to merging influence classifier performance. The results are shown in Table 1. The empirical results in Table 1 show that outliers appear in the classification results and decrease classification performance: without rule removal, whenever rules stemming from outliers are mistakenly involved in the merging process, the accuracy drops from around 82 percent at k = 55 to around 62 percent. With rule removal, the classification performance still decreases, but to a lesser extent. We therefore conclude that the rule merging mechanism should be designed carefully, by choosing the optimal number of rules (only the k highest-confidence rules) and by removing outliers prior to the merging process, in order to obtain a high classification result.
The rule selection is inspired by the work in [7], where the rules with the highest weight among those sharing the same antecedent in the same partition are selected. The same procedure is then repeated across partitions: the selected rules from every partition are compared with the selected rules from the other partitions, and the rule with the highest weight among those with the same antecedent is retained. At the end of the process, only the clusters/rules with the highest weight for each unique antecedent remain, and these are used for the final large-scale data modeling.
In this framework, the k best models are selected by observing the highest classification accuracy among the training data partitions. The training accuracy reflects the confidence level of a model to be recruited as a base model.
Since each model is constructed from one or more fuzzy rules, each rule is assigned the weight of its corresponding model, determined from the training accuracy.
Suppose that the data stream is denoted as ds = {x_1, x_2, x_3, ..., x_n, ...} in the feature space ℜ^2, where n is the index of the n-th (current) datum. Thus, n training data are fed to one worker node; this training set is denoted as S. For l data partitions, the collection of l training set partitions and their corresponding models M are constructed as:
S_{Train} = \{S_1, S_2, ..., S_l\}, \tag{4}

M_{Train} = \{M_1, M_2, ..., M_l\}. \tag{5}
Prior to the merging process, several steps need to be carried out on the initial model: 1) extracting all rules from all models; 2) assigning weights to the r rules, where each rule weight is obtained from the weight of its corresponding model; 3) eliminating the rules with very small populations; and 4) selecting the k best rules with the highest classification accuracy.
The preliminary steps of the merging process are carried out by simply concatenating all rules in the system, resulting in r rules being extracted from l models, r ≥ l. The next step is to assign each rule a weight, acquired from the training accuracy of its model, and to remove low-population rules before selecting the k Dominant rules (see Fig. 3). The first loop of Algorithm 1 then assigns each Weaker rule to the closest Dominant rule, and discards the rule if the maximum similarity obtained is less than or equal to θ, where θ is set to 0.9 in this experiment; the assumption is that a higher threshold merges only highly similar rules and thus avoids a decrease in classification performance. The result is that p rules are assigned, where p ≤ m, since some rules are discarded.
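The preliminary steps above (rule extraction, weighting by training accuracy, low-population removal, and the Dominant/Weaker split) can be sketched as follows. The model and rule dictionaries are hypothetical stand-ins for PANFIS' internal structures, not the paper's data layout.

```python
def prepare_rules(models, k=5, min_pop_frac=0.05):
    """Preliminary steps before merging (a sketch): extract rules from
    all partition models, weight each rule by its model's training
    accuracy, drop low-population rules (likely outliers), and split
    the survivors into k Dominant rules and m Weaker rules."""
    # 1) extract all r rules; 2) each rule inherits its model's weight
    rules = [dict(rule, weight=m["train_acc"]) for m in models for rule in m["rules"]]
    # 3) remove rules supported by fewer than 5% of the total population
    total_pop = sum(r["pop"] for r in rules)
    rules = [r for r in rules if r["pop"] >= min_pop_frac * total_pop]
    # 4) the k highest-weight rules become Dominant, the rest Weaker
    rules.sort(key=lambda r: r["weight"], reverse=True)
    return rules[:k], rules[k:]

models = [
    {"train_acc": 0.9, "rules": [{"pop": 50}, {"pop": 1}]},
    {"train_acc": 0.8, "rules": [{"pop": 60}]},
]
dominant, weaker = prepare_rules(models, k=1)
```

In this toy run the single-sample rule is dropped as an outlier, and the remaining rules split into one Dominant and one Weaker rule, mirroring the o = m + k partition used in Algorithm 1.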
The similarity calculation between two rules is adopted from the method proposed in [16]. It is based on the degree of deviation of the hyperplanes' gradient information, where the deviation is measured by the dihedral angle of the two hyperplanes they span, calculated as follows:
\phi = \arccos\left(\frac{a^{T}b}{|a||b|}\right), \tag{6}
where a = (D_{i;1}, D_{i;2}, D_{i;3}, -1)^{T} and b = (W_{j;1}, W_{j;2}, W_{j;3}, +1)^{T} are the normal vectors of the two planes corresponding to rules D_i and W_j, pointing in opposite directions with respect to the target y (-1 and +1 in the last coordinate).
Thus, the similarity of two hyper-planes is formulated as follows:
Sim_{(D_i,W_j)} = \frac{\phi}{\pi}, \tag{7}
Two hyperplanes are regarded as similar if the similarity degree Sim_{(D_i,W_j)} ≥ θ.
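A literal sketch of Eqs. (6)-(7) for two rules with three input attributes: the coefficient lists are the hyperplane gradients of D_i and W_j, and the -1/+1 last coordinates follow the construction above, so the similarity always lands in [0, 1].

```python
import math

def hyperplane_similarity(D_coefs, W_coefs):
    """Dihedral-angle similarity of two rule hyperplanes (Eqs. (6)-(7)).
    Normals point in opposite directions w.r.t. the target, so phi/pi
    is 1 when the angle between a and b is pi."""
    a = list(D_coefs) + [-1.0]
    b = list(W_coefs) + [+1.0]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # clamp to guard against floating-point drift outside [-1, 1]
    phi = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    return phi / math.pi

sim_same = hyperplane_similarity([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```

For two flat consequents (all gradients zero) the vectors are exactly opposed, giving φ = π and similarity 1.0.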
The next loop in Algorithm 1 performs the merging of the p assigned rules into their associated Dominant rules. The parameters of the Dominant rules are updated iteratively, where each Dominant rule can only be updated with the list of Weaker rules assigned to it. However, before performing the merging process, in which the consequent and antecedent parameters of the Dominant and Weaker rules are merged, another criterion must be met to ensure that the two merged rules form a homogeneous shape and direction.
This also ensures an accurate representation of the two rules/clusters. Thus, the blow-up check is applied to trigger only homogeneous joint regions; the blow-up condition is formulated as:

V_{merged} \le n\,(V_{Dominant_i} + V_{Weaker_j}), \tag{8}
where V stands for volume and n is the dimension of the input attributes. After these two conditions are fulfilled (Eq. (7) and Eq. (8)), the parameters of the merged rule are updated according to the weighted average principle applied in [14], as follows:
N_{Dom_i}^{update} = N_{Dom_i}^{cur} + N_{Weak_i}^{cur}, \tag{10}

where C_{Dom_i}^{update} and \Sigma_{Dom_i}^{-1}(update) are the updated antecedent parameters of the merged rule, and w_{Dom_i}^{update} is the updated consequent parameter of the merged rule.
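As an illustration of the weighted-average principle, the sketch below merges the focal points and populations of a Dominant and a Weaker rule. Eq. (10) is the population sum; the population-weighted mean used for the center is an assumption consistent with [14], and the dispersion and consequent parameters would be merged analogously.

```python
import numpy as np

def merge_rules(C_dom, N_dom, C_weak, N_weak):
    """Merge a Weaker rule into its Dominant rule (a sketch):
    populations are summed (Eq. (10)) and the merged focal point is
    the population-weighted mean of the two centers."""
    N_new = N_dom + N_weak                              # Eq. (10)
    C_new = (N_dom * C_dom + N_weak * C_weak) / N_new   # weighted average
    return C_new, N_new

C, N = merge_rules(np.array([0.0, 0.0]), 30, np.array([1.0, 1.0]), 10)
```

The merged center is pulled toward the more populous rule, so a Dominant rule supported by many samples is only mildly perturbed by absorbing a small Weaker rule.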
Scalable PANFIS Framework using the Majority Voting Method
The initial model used in this second structure of Scalable PANFIS is the same as in the previous framework described in subsection 3.2.1. However, this initial model is aggregated using the majority voting method to generate the final predictive outcomes without any merging. This structure is depicted in Fig. 4.
Voting methods have become popular for combining multiple classifiers, as in the early works [22, 23, 24]. A voting method for fuzzy rule-based classifiers was pioneered by Ishibuchi et al. in [25]. In the realm of fuzzy rule-based classifiers, there are two kinds of voting schemes: 1) voting over multiple fuzzy if-then rules within a single fuzzy rule-based classification system; and 2) voting over multiple fuzzy rule-based classifiers. We adopt the second type of majority voting, in which the voting is conducted by multiple fuzzy rule-based classifiers at the model level instead of the rule level.
PANFIS was originally designed for regression problems. For classification, the output for a given instance p of the testing data is structured in MIMO (Multi-Input-Multi-Output) form. The output parameter w_i of the i-th rule in the MIMO form is expressed as follows:
w_i = \begin{bmatrix} w_{i0}^{1} & w_{i0}^{2} & \cdots & w_{i0}^{M} \\ w_{i1}^{1} & w_{i1}^{2} & \cdots & w_{i1}^{M} \\ \vdots & \vdots & \ddots & \vdots \\ w_{iu}^{1} & w_{iu}^{2} & \cdots & w_{iu}^{M} \end{bmatrix}, \tag{13}
where M and u denote the number of classes and the input dimension, respectively. Thus, the multiplication of the output parameters w_i ∈ ℜ^{((u+1)r)×M} by the firing strengths φ_i ∈ ℜ^{1×((u+1)r)} yields the output value y ∈ ℜ^{1×M}. The classification decision for a particular instance is determined by the highest activation degree of the output over all rules, which is expressed as follows:
O = \arg\max_{m=1,\dots,M} O_m \tag{14}
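Eqs. (13)-(14) amount to a matrix product followed by an argmax. A minimal sketch with toy firing strengths and output parameters (the numbers are illustrative, not from the paper):

```python
import numpy as np

def classify(phi, w):
    """Classification decision of Eqs. (13)-(14): multiply the firing
    strengths phi (1 x (u+1)r) by the MIMO output parameters w
    ((u+1)r x M) and pick the class with the highest activation."""
    O = phi @ w          # 1 x M activation vector
    return int(np.argmax(O))

phi = np.array([0.2, 0.8])        # toy firing strengths, (u+1)r = 2
w = np.array([[0.9, 0.1],         # toy output parameters, M = 2 classes
              [0.3, 0.7]])
label = classify(phi, w)
```

Here the activations are [0.42, 0.58], so the second class wins.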
Please note that the Scalable PANFIS training process is conducted on a particular node of the large-scale data analytics framework, which processes particular chunks/partitions of the large-scale data. In distributed learning there are many partitions to be processed, so the number of models generated depends on the number of partitions set before the training phase. In the majority voting procedure for the final classification decision, all models generated in the large-scale data training phase influence the classification decision for every instance in the testing dataset. A visual illustration of this structure is depicted in Fig. 5.
The voting mechanism is applied in the Scalable PANFIS framework because it adopts a multi-classifier technique, which often shows better performance than a single classifier [26].
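Model-level majority voting can be sketched in a few lines. Each `model` below is a hypothetical callable standing in for a trained PANFIS classifier, and the tie-breaking rule (first vote seen wins) is an assumption.

```python
from collections import Counter

def majority_vote(models, x):
    """Model-level majority voting (a sketch): each partition model
    casts one class vote for instance x; the most common vote wins,
    with ties broken by the order votes were first seen."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# three stand-in classifiers: two vote class 1, one votes class 0
models = [lambda x: 0, lambda x: 1, lambda x: 1]
```

Because voting happens at the model level, no rule parameters ever need to be reconciled across partitions; only the per-model predictions are combined.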
Scalable PANFIS Framework with AL and the Rule Merging Method
This structure of Scalable PANFIS uses distributed PANFIS with AL to train the large-scale data stream, which means that the number of samples trained in every worker node is smaller than the number of samples in the data partition processed by that node. From every worker node, the model is sent to the driver node as part of the initial model, which is then merged using the rule merging method. This structure is illustrated in Fig. 2.
The key feature of large-scale data stream analytics is the capability of the algorithm/method to train the data efficiently in terms of running time and accuracy. In addition to speeding up the training process, sample selection (also known as prototype reduction) reduces the labeling cost, because the sample evaluation procedure is done in an unsupervised mode. The sample selection in this work is inspired by the certainty-based active learning (AL) concept [27]. The main difference between AL and prototype reduction methods is that AL evaluates the data in an unsupervised mode.
The certainty-based AL scenario is developed by virtue of the Bayesian concept, where the Bayesian posterior probability determines the conflict level between the input and output spaces, following [27]. The Bayesian concept is preferred over the firing strength criterion because it is more robust to outliers.
In addition, the variable uncertainty strategy [28] counterbalances the effects of concept drift by adjusting the conflict threshold according to the up-to-date system dynamics. A substantial conflict in the output space is triggered by a datum occupying close proximity to the decision boundary. The classifier's truncated output defines the conflict in the output space, expressed as follows:
p(\hat{y}_o|X)_{output} = \min(\max(conf_{final}, 0), 1), \quad conf_{final} = \frac{\hat{y}_1}{\hat{y}_1+\hat{y}_2}, \tag{15}
where p(\hat{y}_o|X)_{output} represents the output posterior probability, and \hat{y}_1 and \hat{y}_2 denote the first and second most dominant outputs, respectively. The conflict in the output space is determined by the quality of the decision boundary, whereas the conflict in the input space is caused by an unclean cluster, i.e., a cluster containing samples of different classes. If the classifier exhibits strong confusion, the training sample is accepted to update the model.
This criterion is defined as follows:
p(\hat{y}_o|X)_{output} < \theta_{randomized} \quad \text{or} \quad p(\hat{y}_o|X)_{input} < \theta_{randomized}, \tag{16}
where θ represents the conflict threshold, and θ_{randomized} is generated by multiplying θ by a random value with an amplitude of 1 and a frequency of 1, inspired by the random sampling technique [28]. The budget B determines the maximum allowable number of samples to be annotated in the training process. Assuming the data are uniformly distributed, θ is initialized as θ = 1/m + B(1 − 1/m). When the confidence in the output space or in the input space is higher than the conflict threshold, the sample does not need to be trained, because the learner is confident in its own prediction; otherwise, the sample is trained. The value of θ changes dynamically: it increases when a sample is admitted for model updates and decreases whenever a sample is discarded from the training process.
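The budget-based selection rule can be sketched as follows. This is a simplified sketch: the threshold step size is an assumed constant, the randomization of θ described above is omitted, and the threshold rises on admitted samples and falls on discarded ones, as stated in the text.

```python
def select_samples(confidences, budget=0.3, n_classes=2, step=0.01):
    """Certainty-based AL with a variable threshold (sketch of Eq. (16)):
    a sample is trained only when the classifier's confidence falls
    below the current threshold theta."""
    # initialization under the uniform-distribution assumption
    theta = 1.0 / n_classes + budget * (1.0 - 1.0 / n_classes)
    selected = []
    for i, conf in enumerate(confidences):
        if conf < theta:
            selected.append(i)   # conflict detected: train on this sample
            theta += step        # threshold increases on admitted samples
        else:
            theta -= step        # learner is confident: skip and decay
    return selected

selected = select_samples([0.1, 0.9, 0.2, 0.95])
```

With two classes and B = 0.3 the threshold starts at 0.65, so only the low-confidence samples (indices 0 and 2) are admitted for training.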
Scalable PANFIS Framework with AL and the Majority Voting Method
The fourth structure of the Scalable PANFIS framework employs distributed PANFIS with AL to train the large-scale data stream and generate the initial model (the same kind of initial model as generated in subsection 3.2.1). This initial model contains j classifiers (PANFIS with AL). The final model is obtained by processing the initial model with the majority voting method explained in subsection 3.2.2. This fourth structure (using AL) is illustrated in Fig. 4.
Numerical Study
This section describes the numerical study of large-scale data stream analytics using Scalable PANFIS. Subsection 4.1 details the experimental procedure, including the environment setup, performance measures, datasets, and methods used in the experiment. The results are discussed in subsection 4.2.
Experiment Setup
In this work, all the experiments are performed on the Spark platform, under the NeCTAR Cloud, a flexible distributed machine learning computing environment. The Spark platform is built from one master node and 8 worker nodes, with NeCTAR Ubuntu 16.04 LTS (Xenial) amd64 installed on all nodes as the operating system. Each node has a maximum specification of 390GB disk capacity and 48GB RAM. Of the total memory, we allocate only 35GB per worker node to the memory cluster, leaving the rest for other operations; the total cluster memory is thus 280GB. For the driver node, we allocate 10GB for the Spark operation, leaving the rest (38GB) for other operations, considering that many variables may be stored in the local memory of the driver node.
In this experiment, eight algorithms are compared in terms of running time, accuracy, and the number of rules generated after merging. The first four are the Scalable PANFIS algorithms; the others are algorithms built into the Spark API library. For the sake of simplicity, we abbreviate the four structures of the Scalable PANFIS algorithm, which employ combinations of three techniques (active learning, rule merging, and majority voting), as shown in Table 2. For clarity, Fig. 2 and Fig. 4 show the Scalable PANFIS model sequence using the rule merging and majority voting methods, respectively, as the aggregation method applied after the initial model is generated. We utilize six datasets from the UCI repository [29] in our experiments: SUSY, HIGGS, HEPMASS, Poker Hand, RLCPS, and KDDCup, whose specifications are shown in Table 3. All datasets are divided into 80% training data and 20% testing data.
SUSY, HIGGS, HEPMASS, and RLCPS are commonly used for large-scale data classification problems, as in [30, 7]. Poker Hand and KDDCup are multiclass datasets with 10 and 22 classes, respectively; the KDDCup dataset can also be framed as a binary classification problem, "normal" versus "attack" [7]. Each dataset is divided into data partitions that are mapped onto the eight worker nodes and processed in parallel; for each data partition, one model is generated.
The number of data partitions (96) is chosen based on the consideration that our system has 96 cores across the 8 worker nodes, so that every worker node processes an equal number of partitions (12 each). Increasing the number of partitions generally speeds up the training process, as in the work conducted in [30]; however, our main concern in this experiment is to demonstrate the performance of Scalable PANFIS under the four structures. For each algorithm listed in Table 2, four performance metrics are applied:
1. Accuracy: The percentage of correctly classified data over all testing data.
2. Compression Rate: The performance measure of the AL method embedded in the PANFIS algorithm, representing the percentage of instances learned over all instances in the training data of a particular data partition. It is also usually called the compression ratio. Please note that the compression rate is only calculated for the Scalable PANFIS algorithms with the AL method embedded; otherwise, the compression rate is 1, meaning that all samples in the dataset are used in the training process.
3. Running Time: The running time required by each distributed machine learning algorithm in the training process.
4. Number of Rules: The number of rules generated after the merging process.
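The first two metrics are direct ratios and can be computed as a minimal sketch:

```python
def accuracy(y_true, y_pred):
    """Percentage of correctly classified testing data (metric 1)."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

def compression_rate(n_learned, n_total):
    """Fraction of training instances actually learned by PANFIS with
    AL in a partition (metric 2); 1.0 when AL is disabled."""
    return n_learned / n_total
```

For example, 3 correct predictions out of 4 gives 75.0 percent accuracy, and training 40 of 100 partition samples gives a compression rate of 0.4.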
Results
Two groups of experiments are conducted to measure the performance of the algorithms. The first group compares Scalable PANFIS without AL (Algorithms 1-2) and Scalable PANFIS with AL (Algorithms 3-4). The second group compares all the algorithms listed in Table 2 (the Scalable PANFIS algorithms and the Spark-based algorithms). For the first group, we measure: 1) accuracy; 2) compression rate; 3) running time; and 4) the number of rules generated after merging. The second group measures the performance of all the algorithms (both Scalable PANFIS and Spark-based) in terms of accuracy and running time, as illustrated in Table 7 and Table 8.
As previously discussed, the difference between Scalable PANFIS with AL and without AL lies in the generation of the collected model of all partitions after the training phase (the initial model), as shown in Fig. 2 and Fig. 4. Scalable PANFIS with AL trains only selected samples in each data partition, whereas Scalable PANFIS without AL trains all samples in each data partition; it is therefore important to compare the two. The numbers of rules generated by Scalable PANFIS with and without AL (initial model), and after merging, are given in Table 6. It is clear that the proposed rule merging method reduces the complexity of the rules in the system. The number of rules after merging is constant (5 rules) for all datasets, because we select the 5 best initial rules prior to the merging process. This shows that the method is robust for classifying large-scale data streams. Table 7 and Table 8 show the performance of all the algorithms in terms of accuracy and running time, respectively. In general, in terms of accuracy (
Summary Discussion
This section summarizes the methods, algorithms, and results obtained in this experiment. There are at least six points to discuss.
As discussed in the results section, firstly, the AL strategy is embedded in the PANFIS machine learning algorithm to speed up the training process by selecting the samples to be trained on the worker nodes of the Scalable PANFIS framework. In many cases of large-scale data stream processing, reducing the number of samples to be trained has no negative effect on accuracy, exemplifying the success of the AL method in identifying important samples. For smaller datasets, such as Poker Hand, Scalable PANFIS with AL using the merging technique yields a lower accuracy than the others (48.69 percent). This is due to the small size of the Poker Hand dataset: with around 800k total samples divided into 96 partitions, each partition holds around 8k samples, and with AL applied on each partition only around 3.2k samples are trained per chunk (assuming a compression rate of around 0.4). Hence, the training process in the partitions does not converge.
Secondly, the rule merging and voting techniques yield similar performance in terms of accuracy.
The voting mechanism discards the less supported decisions made by Weaker classifiers that produce false classification outputs, whereas the rule merging mechanism discards the classifiers with a lower confidence level (lower weight, i.e., lower training classification results), thus leading to better inference results. Furthermore, an over-complex rule base leads to overfitting and thus deteriorates generalization ability.
Thirdly, the PANFIS architecture is designed as a MIMO (multi-input multi-output) architecture. For binary classification problems, the Spark-based algorithms can process the data directly; for multiclass classification problems, however, they need to be modified into the One-Versus-All (OVA) form. Therefore, for the Poker Hand dataset, the Spark-based algorithms require a longer time to process the training data, as shown in Table 8.
Fourthly, among the Spark-based algorithms, Spark.GBT performs best. GBT is known as one of the most powerful techniques for building predictive models; however, it has the longest computational time of all the Spark-based algorithms, because the boosting mechanism iteratively searches for a suitable cost function over function space.
Fifthly, the large-scale data stream framework based on PANFIS accelerates the training process by processing all data partitions in parallel. From each partition, one model is generated by PANFIS with or without AL. To obtain a final model, model fusion methods such as rule merging and the voting mechanism are applied, because simply concatenating the partition models can cause overfitting and deteriorate the generalization ability of the concatenated model, as some rules may overlap.
Finally, we design a robust rule merging method by selecting as initial rules those with the highest weight and applying rule removal before the merging process, as can be seen in the performance (accuracy and number of rules after merging). The rule merging process is detailed in Algorithm 1. Rule removal is based on the consideration that rules with little support are regarded as outliers, and retaining them could reduce the generalization capability of the classifier. Note that our rule merging strategy is executed independently, in the absence of data samples, and is thus deployable in the single-pass learning context.
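The two preliminary steps described here (support-based outlier removal, then weight-based selection of the k best rules) can be sketched as below. The rule fields (`weight`, `population`) are hypothetical stand-ins for the PANFIS rule statistics; the 5 percent support threshold is the one quoted for the removal step.

```python
# Sketch of the two preliminary steps before merging:
# (1) drop "outlier" rules supported by < 5% of the cluster population,
# (2) sort survivors by weight and keep the k best as Dominant rules.
# Rule fields are hypothetical stand-ins for PANFIS rule parameters.

def select_dominant_rules(rules, k, min_support=0.05):
    total = sum(r["population"] for r in rules)
    kept = [r for r in rules if r["population"] / total >= min_support]
    kept.sort(key=lambda r: r["weight"], reverse=True)
    return kept[:k], kept[k:]          # (Dominant rules, Weaker rules)

rules = [
    {"name": "r1", "weight": 0.90, "population": 400},
    {"name": "r2", "weight": 0.70, "population": 350},
    {"name": "r3", "weight": 0.80, "population": 240},
    {"name": "r4", "weight": 0.99, "population": 10},   # outlier: 1% support
]
dominant, weaker = select_dominant_rules(rules, k=2)
print([r["name"] for r in dominant])   # ['r1', 'r3']
```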
Conclusion
Evolving large-scale data stream analytics based on Scalable PANFIS demonstrates parallel data stream processing using the PANFIS evolving algorithm, combining two ways to deal with large volumes of data: streaming algorithms and distributed computing. PANFIS, as an evolving algorithm, can cope with data streams whose patterns change over time, whereas distributed computing provides a platform that scales up the training phase. This combination, generating an initial model with Scalable PANFIS and then applying model fusion/aggregation, ensures that the large-scale data model is kept up to date. The embedded AL strategy further reduces the running time of the training process in large-scale data stream analytics, and is shown not to hurt, and in some cases even to improve, the accuracy.
Acknowledgement
This work is supported by NTU Start-up Grant and MOE Tier 1 Research Grant.
Figure 1: The data flow architecture of the Scalable PANFIS framework during the data stream training phase on the Spark platform.

Figure 2: The structure of the Scalable PANFIS framework using the rule merging method.
Figure 3: (a) The preliminary steps prior to merging, in visual representation. (b) The merging process, in visual representation, based on Algorithm 1.

The rule weight is acquired from the training accuracy of the model. The rule removal in step three aims to remove poor rules due to outliers; the effect of outliers can be observed in rules with low populations, which contribute little during their lifespan. We apply a threshold of 5 percent of the total cluster population: if a rule does not meet this threshold, it is removed from the system. This step extracts o rules, where l ≤ o ≤ r, giving o candidate rules to be fed into the merging process. The merging process is initialized by choosing the k most influential rules among the o candidates. We call these k rules the Dominant rules and the remaining m rules the Weaker rules, so o = m + k. The set of Dominant rules is denoted D = {D_1, .., D_j, ..., D_k} and the set of Weaker rules W = {W_1, .., W_i, ..., W_m}. The merging between rules follows the procedure illustrated in Algorithm 1; both the preliminary steps and the merging process are visualized in Fig. 3. The selected k Dominant rules become the reference for the other m Weaker rules, under the assumption that Dominant rules score better than Weaker ones. In Algorithm 1, the first loop aims to assign each Weaker rule to its closest Dominant rule.
Algorithm 1: Rule merging

Notation: k — number of Dominant rules; m — number of Weaker rules; D_i — the i-th Dominant rule; W_j — the j-th Weaker rule.

Phase 1 — assign Weaker rules to the closest Dominant rules:
  for j = 1 to m do
    for i = 1 to k do
      compute the similarity Sim(D_i, W_j) between W_j and D_i (Formula 7)
    end for
    determine the winner rule by maximum similarity: D_winner = arg max_{i=1,..,k} Sim(D_i, W_j)
    if Sim(D_winner, W_j) ≥ θ then
      assign rule W_j to rule D_winner
    else
      discard W_j (it has low similarity to all current Dominant rules)
    end if
  end for
  Some rules may be eliminated, leaving p assigned rules for the next phase.

Phase 2 — merge the assigned Weaker rules into the closest Dominant rules:
  for i = 1 to k do
    for j = 1 to p do
      if j is not in the assignment list of D_i then
        skip to the next D_i
      else if the blow-up condition is met (Formula 8) then
        D_i(update) = merge of D_i(current) and W_j (Formulas 9-12)
        D_i(current) = D_i(update)
      end if
    end for
  end for
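The control flow of Algorithm 1 can be sketched as below. The actual similarity measure (Formula 7) and parameter updates (Formulas 9-12) act on PANFIS rule premises and are not reproduced in the text, so a hypothetical exp(-distance) similarity on 1-D rule centres and a population-weighted centre update stand in for them; only the assign-then-merge control flow matches the algorithm. The threshold θ = 0.9 is the one quoted in the paper's flow diagram.

```python
import math

# Toy version of Algorithm 1: assign each Weaker rule to its most
# similar Dominant rule (discard if below the threshold), then merge.
# Similarity and merge update are hypothetical stand-ins; each rule is
# just a 1-D centre with a population.

THETA = 0.9  # similarity threshold from the algorithm

def similarity(a, b):
    return math.exp(-abs(a["centre"] - b["centre"]))

def merge_into(dominant, weaker):
    """Population-weighted update of the dominant rule's centre."""
    n = dominant["population"] + weaker["population"]
    dominant["centre"] = (dominant["centre"] * dominant["population"]
                          + weaker["centre"] * weaker["population"]) / n
    dominant["population"] = n

def run_merging(dominant_rules, weaker_rules):
    for w in weaker_rules:
        best = max(dominant_rules, key=lambda d: similarity(d, w))
        if similarity(best, w) >= THETA:     # else: discard w
            merge_into(best, w)
    return dominant_rules

doms = [{"centre": 0.0, "population": 90}, {"centre": 5.0, "population": 80}]
weak = [{"centre": 0.05, "population": 10}, {"centre": 2.5, "population": 10}]
final = run_merging(doms, weak)
print(len(final), round(final[0]["centre"], 3))  # second weak rule is discarded
```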
Figure 4: The structure of the Scalable PANFIS framework using the majority voting method.

Figure 5: The voting mechanism scheme in the Scalable PANFIS architecture.
Table 1: The accuracy of the HEPMASS testing dataset for different selections of the k best initial rules, with and without rule removal prior to rule merging.

k  | Accuracy (rules removal) | Accuracy (without rules removal)
1  | 83.87 | 83.57
3  | 83.47 | 83.68
5  | 83.15 | 83.37
45 | 83.63 | 83.46
50 | 83.52 | 82.82
55 | 77.28 | 62.16
60 | 71.11 | 61.64
[Diagram residue from Figs. 1-3: rule extraction (r rules); rule weights assigned from the training accuracy of the model; rule removal of outliers (leaving o rules); sorting and selection of the k best (Dominant) rules; the remaining m Weaker rules assigned to the closest Dominant rule when their similarity exceeds θ = 0.9, then merged (parameter update) or discarded, yielding the final model. Training runs in scalable mode in the worker nodes, with the initial model and aggregation phase in the driver node.]
Table 2: Algorithm description.

No | Algorithm | Description
1 | Scalable PANFIS Merging | Scalable PANFIS using the rule merging technique
2 | Scalable PANFIS Voting | Scalable PANFIS using the majority voting technique
3 | Scalable PANFIS with AL Merging | Scalable PANFIS with AL using the rule merging technique
4 | Scalable PANFIS with AL Voting | Scalable PANFIS with AL using the majority voting technique
5 | Spark.KMeans | K-Means
6 | Spark.GLM | Spark Generalized Linear Model
7 | Spark.GBT | Spark Gradient Boosted Tree
8 | Spark.RF | Spark Random Forest
Table 3: Dataset description.

Dataset | #Samples | #Atts | #Class
SUSY | 5000000 | 18 | 2
HIGGS | 11000000 | 28 | 2
HEPMASS | 10500000 | 28 | 2
RLCPS | 5174219 | 9 | 2
Poker Hand | 1025011 | 10 | 10
KDDCup | 4898431 | 41 | 2
96 data partitions are set in the Scalable PANFIS framework under the Spark environment, and every partition is trained in parallel by the evaluated algorithms (Table 2).
Table 4: The compression rate of Scalable PANFIS with AL, and the accuracy of Scalable PANFIS with and without AL (averaged over the merging and voting variants).

Dataset | Average compression rate | Average accuracy without AL (%) | Average accuracy with AL (%)
SUSY | 0.4012 | 76.46 | 76.50
HIGGS | 0.4008 | 63.68 | 63.82
HEPMASS | 0.4008 | 83.825 | 83.8
RLCPS | 0.4011 | 99.975 | 99.935
Poker Hand | 0.4008 | 51.06 | 48.69
KDDCup | 0.408 | 99.73 | 99.79
Table 4 shows that the average compression rate of Scalable PANFIS with AL is around 40 percent: on average, only around 40 percent of the samples in each data partition are used for training in the worker node. The average accuracies of Scalable PANFIS with and without AL are comparable, with no significant reduction across any dataset despite the reduction in trained samples. For the SUSY, HIGGS, and KDDCup datasets, Scalable PANFIS with AL is slightly better than without AL; conversely, for HEPMASS, RLCPS, and Poker Hand, Scalable PANFIS without AL slightly outperforms the AL variant. From this we conclude that AL can operate in the training process without a loss of accuracy. Note that the average accuracy is the classification result averaged over the rule merging and majority voting variants.

A similar comparison between Scalable PANFIS with and without AL is shown in Table 5 in terms of running time. The average compression rate is roughly linear with the speed of training a partition: for example, training the whole SUSY dataset requires 1349 seconds for Scalable PANFIS to generate the large-scale data model, whereas Scalable PANFIS with AL needs around half that time, 691 seconds. The trend is similar for the other datasets, where the running time of Scalable PANFIS with AL is around half that of Scalable PANFIS without AL.
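This "around half the time" observation can be sanity-checked directly against the Table 5 figures: with a compression rate of ~0.40, the AL running-time ratio lands near one half, a bit above the compression rate itself, consistent with some fixed overhead per partition.

```python
# Sanity check against the Table 5 figures: ratio of AL running time to
# the baseline running time, compared with the ~0.40 compression rate.

table5 = {  # dataset: (compression rate, t without AL [s], t with AL [s])
    "SUSY":       (0.4012, 1349, 691),
    "HIGGS":      (0.4008, 5671, 2852),
    "HEPMASS":    (0.4008, 5280, 2650),
    "RLCPS":      (0.4011, 630, 364),
    "Poker Hand": (0.4008, 218, 119),
    "KDDCup":     (0.4008, 2965, 1334),
}

ratios = {name: t_al / t_full for name, (_, t_full, t_al) in table5.items()}
for name, r in ratios.items():
    print(f"{name:10s} t_AL / t_full = {r:.2f}")  # all land near 0.5
```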
Table 5: The effect of the active learning method in the Scalable PANFIS training algorithm on running time.

Dataset | Average compression rate | Running time without AL (s) | Running time with AL (s)
SUSY | 0.4012 | 1349 | 691
HIGGS | 0.4008 | 5671 | 2852
HEPMASS | 0.4008 | 5280 | 2650
RLCPS | 0.4011 | 630 | 364
Poker Hand | 0.4008 | 218 | 119
KDDCup | 0.4008 | 2965 | 1334
Table 6: Number of rules generated before and after rule merging, for initial models generated with Scalable PANFIS (with and without AL).

Dataset | Before merging (#rules) | After merging (#rules) | With AL, before merging (#rules) | With AL, after merging (#rules)
SUSY | 107 | 5 | 103 | 5
HIGGS | 102 | 5 | 119 | 5
HEPMASS | 135 | 5 | 137 | 5
RLCPS | 96 | 5 | 96 | 5
Poker Hand | 126 | 5 | 125 | 5
KDDCup | 96 | 5 | 96 | 5
Table 7: Accuracy (%) for all algorithms on all datasets.

Algorithm | SUSY | HIGGS | HEPMASS | RLCPS | Poker Hand | KDDCup
Scalable PANFIS Merging | 76.70 | 63.66 | 83.47 | 99.98 | 51.06 | 99.72
Scalable PANFIS Voting | 76.22 | 63.70 | 84.18 | 99.97 | 53.7 | 99.74
Scalable PANFIS with AL Merging | 76.79 | 63.72 | 83.45 | 99.97 | 48.69 | 99.78
Scalable PANFIS with AL Voting | 76.20 | 63.92 | 84.15 | 99.9 | 52.8 | 99.82
Spark.KMeans | 50.04 | 48.34 | 50.66 | 99.63 | 50.21 | 77.21
Spark.GLM | 75.01 | 63.51 | 83.40 | 99.97 | 50.21 | 78.30
Spark.GBT | 75.11 | 59.49 | 81.83 | 99.97 | 53.12 | 99.88
Spark.RF | 76.81 | 59.65 | 82.43 | 99.63 | 50.49 | 99.61
As shown in Table 7, all Scalable PANFIS algorithms perform similarly across all datasets: for SUSY, HIGGS, HEPMASS, RLCPS, Poker Hand, and KDDCup they reach around 76, 63, 83, 99, 51, and 99 percent accuracy, respectively. Conversely, the Spark-based algorithms vary on some datasets: on SUSY and HIGGS, for example, Spark.KMeans is outperformed by its counterparts, achieving only 50.04 and 48.34 percent accuracy compared with around 75 and 60 percent for the other Spark-based algorithms. Table 7 also shows that on most datasets the Scalable PANFIS algorithms outperform the Spark-based algorithms in accuracy, except on KDDCup, where Spark.GBT achieves slightly better accuracy than the Scalable PANFIS algorithms while some Spark-based algorithms (Spark.KMeans and Spark.GLM) reach only around 78 percent.
Table 8 shows that most Spark-based algorithms are faster when training large-scale data. However, in one case (the Poker Hand dataset), all the Spark-based algorithms (Algorithms 5-8) are slower than the Scalable PANFIS algorithms (Algorithms 1-4). Among the Spark-based algorithms, Spark.GBT consumes the most time, and in the majority of cases it is also slower than the Scalable PANFIS with AL algorithms.
Table 8: Running time (s) for all datasets and algorithms. The merging and voting variants share the same training time (shown as merged rows in the original).

Algorithm | SUSY | HIGGS | HEPMASS | RLCPS | Poker Hand | KDDCup
Scalable PANFIS (Merging / Voting) | 1349 | 5671 | 5280 | 630 | 218 | 2965
Scalable PANFIS with AL (Merging / Voting) | 691 | 2852 | 2650 | 364 | 119 | 1334
Spark.KMeans | 290 | 1113 | 873 | 256 | 437 | 338
Spark.GLM | 457 | 2928 | 2092 | 606 | 967 | 833
Spark.GBT | 664 | 3401 | 3167 | 852 | 1308 | 1224
Spark.RF | 259 | 1713 | 1239 | 262 | 548 | 463
https://github.com/choiruzain/LargeScaleDataAnalytic
Assumptions of the primordial spectrum and cosmological parameter estimation

Arman Shafieloo (Department of Physics, University of Oxford, 1 Keble Road, OX1 3NP, Oxford, UK; Inter University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411007, India)
Tarun Souradeep (Inter University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411007, India)

arXiv:0901.0716, 6 Jan 2009 (Dated: January 21, 2009)
The observables of the perturbed universe, CMB anisotropy and large-scale structure, depend on a set of cosmological parameters as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best fit parameters and their relative confidence limits. In this letter, we demonstrate that a specific assumed form actually drives the best fit parameters into distinct basins of likelihood in the space of cosmological parameters, where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained with a free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises the concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of the PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS.

PACS numbers: 98.80.Es, 98.65.Dx, 98.62.Sb
INTRODUCTION
Precision measurements of anisotropy and polarization in the Cosmic Microwave Background (CMB), in conjunction with observations of the large scale structure, suggest that the primordial density perturbation is dominantly adiabatic and has a nearly scale invariant spectrum [1,2]. This is in good agreement with most simple inflationary scenarios which predict nearly power law or scale invariant forms of the primordial perturbation [3,4,5]. However, despite the strong theoretical appeal and simplicity of a featureless primordial spectrum, our results highlight that the determination of the shape of the primordial power spectrum directly from observations with minimal theoretical bias would be a critical requirement in cosmology.
The observables of the perturbed universe, such as, CMB anisotropy galaxy surveys and weak lensing etc., all depend on a set of cosmological parameters describing the current universe, as well as, the parameters characterizing the presumed nature of the initial perturbations. While certain characteristics of the initial perturbations, such as, the adiabatic nature, tensor contribution, can, and are, being tested independently, the shape of the primordial power spectrum remains, at best, a well motivated assumption.
It is important to distinguish between the cosmological parameters that describe the present universe, from that characterizing the initial conditions, specifically, the PPS, P (k). However, it is prevalent in cosmological parameter estimation to treat the two sets identically. Based on sampling on a coarse grid in the cosmological parameter space, we have already shown that the CMB data is sensitive to the PPS [6]. The best fit cosmological parameters with free form PPS have much enhanced likelihoods and the preferred regions significantly separated from the best fit parameters obtained with assumed power-law PPS. It is also known that specific features in the PPS can dramatically improve the fit to data (eg. ref. [7], and references therein).
In this letter, we bring forth another important issue introduced by our prior ignorance about the PPS. While, known correlations between cosmological parameters is always folded into parameter estimation, the analogous situation for P (k) is not as widely appreciated. An assumed functional form for the PPS, is equivalent to an analysis with a free form PPS, where, say, P (k), is estimated in separate k bins, but, then one imposes strong correlation between the power in different bins. As we show in this work, the assumed form (equivalently, the implied correlations in P (k) at different k,) drive the significant number of degrees of freedom available in the cosmological parameters to adjust into suitable specific combinations. Hence, the assumed form of PPS could be dominant in selecting the best fit regions (eg., ref. [8] where features in the PPS lead to very different best fit cosmological parameters). For specific functional forms, the corresponding best-fit models lie entrenched in distinct basins in the parameter space. Our results show that in these basins, the likelihood is remarkably robust to variations in the PPS. We conclude that there are sufficient degrees of freedom in the cosmological parameters to mould the fit around the constraints imposed by the assumed form of the PPS.
In this letter, we elucidate this issue in the context of CMB data from WMAP for three different well known assumed forms of the primordial spectrum, P (k), (i.) scale-invariant Harrison-Zeldovich (HZ), (ii.) scale-free Power-law (PL) and, (iii.) Power law with running(RN). The methodology and analysis is described in §2. The results are given in §3 and we give the conclusion of our work in §4.
METHOD AND ANALYSIS
The angular power spectrum of CMB anisotropy, C_l, is a convolution of the PPS, P(k), generated in the early universe with a radiative transport kernel, G(l,k), determined by the current values of the cosmological parameters. The precision measurements of C_l, and the concordance of cosmological parameters measured from other cosmological observations, allow the possibility of direct recovery of P(k) from the observations. In our analysis we use an improved (error-sensitive) Richardson-Lucy (RL) method of deconvolution to reconstruct the optimized primordial power spectrum at each point in the parameter space [6,9,10,11,12]. The RL based method has been demonstrated to be an effective way to recover P(k) from C_l measurements [6,11,12] (see ref. [13] for some other reconstruction methods).
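The deconvolution step can be illustrated with a generic Richardson-Lucy iteration for a linear positive system c = G p. This is only a toy stand-in: the kernel, data, and iteration count below are synthetic, and the error-sensitive variant used in the paper modifies the update, so nothing here reproduces the authors' pipeline.

```python
# Generic Richardson-Lucy iteration for c = G p: multiplicative updates
# that keep p non-negative and drive G p toward the observed c. Toy
# kernel and noiseless mock data; NOT the error-sensitive variant of
# refs. [6,9-12].

def matvec(G, p):
    return [sum(g * x for g, x in zip(row, p)) for row in G]

def richardson_lucy(G, c_obs, n_iter=5000):
    n_k = len(G[0])
    p = [1.0] * n_k                                  # flat initial guess
    norm = [sum(row[j] for row in G) for j in range(n_k)]
    for _ in range(n_iter):
        c_model = matvec(G, p)
        ratio = [co / cm for co, cm in zip(c_obs, c_model)]
        p = [p[j] * sum(row[j] * r for row, r in zip(G, ratio)) / norm[j]
             for j in range(n_k)]
    return p

G = [[1.0, 0.5, 0.2],          # made-up positive "radiative kernel"
     [0.4, 1.0, 0.5],
     [0.2, 0.4, 1.0],
     [0.6, 0.3, 0.8]]
p_true = [2.0, 1.0, 3.0]
c_obs = matvec(G, p_true)      # noiseless mock "C_l"
p_rec = richardson_lucy(G, c_obs)
print([round(x, 2) for x in p_rec])
```

For noiseless, consistent data and a full-column-rank kernel, the iteration converges to the exact positive solution; with noisy data the iteration is usually stopped early or regularized.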
In this letter, we study the improvement in likelihood allowed by an 'optimal' free-form PPS at points in the cosmological parameter space around the best-fit region for the three different assumed form of PPS, viz., Harrison-Zeldovich (HZ), power-law (PL) and power-law with running (RN). We apply our deconvolution method to reconstruct an 'optimal' form of the PPS at each point [6].
Markov-chain Monte-Carlo (MCMC) samples of parameters provide a fair sampling of the parameter space around the best-fit point. We use the MCMC chains generated based on 3 year data by the WMAP team for parameter estimation with HZ, PL and RN forms of the PPS. We reconstruct the optimized PPS for each point of these chains and obtain the 'optimal' PPS likelihood based on the reconstructed spectrum.
We limit our attention to the flat ΛCDM cosmological model and consider the four-dimensional parameter space H_0, τ, Ω_b and Ω_0m. This corresponds to a minimalistic "vanilla model", a flat ΛCDM parametrized by six parameters (n_s, A_s, H_0, τ, Ω_b, Ω_dm). In the case of the Harrison-Zeldovich (HZ) PPS assumption, n_s = 1, leaving only five parameters. The case of assuming a constant running of the spectral index (RN), n_run, leads to seven parameters. The dimensionality of the three models differs solely due to the parameters of the assumed PPS. Hence, in our analysis we always have a four-dimensional space of cosmological parameters (since we recover the optimal PPS).
In order to represent the likelihood in a four dimensional parameter space we find it convenient to define a normalized distance, ρ, between two points,
ρ(a, b) = Σ_i (P_i^a − P_i^b)^2 / (σ_i^b)^2 ,   (2.1)
where P_i^a and P_i^b are the values of the i-th cosmological parameter at points 'a' and 'b', respectively. To ensure that equal separations along different parameters have a similar meaning, we divide P_i^a − P_i^b by the standard deviation σ_i^b at point 'b'. We assign point 'b' to be a best fit point, where the σ_i^b are the 1σ confidence limits derived by the WMAP team from the corresponding MCMC chains. Since we are primarily interested in studying the region around the best-fit point, ρ provides a convenient definition of distances to other points with respect to it. (Note that the 'distance' ρ is asymmetric in 'a' and 'b' when σ_i^b ≠ σ_i^a and should be interpreted accordingly.)
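The distance of eqn. (2.1) is simple enough to compute directly; the best-fit values and 1σ limits below are made up for illustration, not WMAP numbers.

```python
# The normalized parameter distance of eqn. (2.1). All numbers below are
# hypothetical illustrations, not WMAP best-fit values.

def rho(p_a, p_b, sigma_b):
    """Sum of squared parameter differences, normalized by sigma at point 'b'."""
    return sum((a - b) ** 2 / s ** 2 for a, b, s in zip(p_a, p_b, sigma_b))

best_fit = [70.0, 0.09, 0.044, 0.26]    # hypothetical H0, tau, Omega_b, Omega_0m
sigma_b  = [2.0, 0.03, 0.002, 0.02]     # hypothetical 1-sigma limits at 'b'
point_a  = [72.0, 0.12, 0.046, 0.28]    # each parameter 1 sigma away

print(round(rho(point_a, best_fit, sigma_b), 6))  # each term contributes 1 -> 4.0
```

Because the normalization uses the σ at point 'b', swapping the roles of 'a' and 'b' (with different σ) gives a different value, which is the asymmetry noted in the text.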
RESULTS
The simplest characterization of the likelihood landscape, L(P_i), around the best-fit point is to study its behavior as ρ increases with separation from the best-fit point. The trend in the likelihood can then be compared for two cases: assuming a form of the primordial spectrum, or allowing an optimal free form. (We use the effective chi-square, χ² ≡ −2 ln L, instead of L.)
We now define ρ c as the distance from each point in the given MCMC samples to the best-fit point. For each point, we compute the effective χ 2 difference, ∆χ 2 (i.e., twice the relative log-likelihood) with respect to this best-fit point, both for the likelihood obtained under the assumed PPS and for a free form PPS (the optimal PPS recovered in our deconvolution). Fig. 1 shows scatterplots of ∆χ 2 vs ρ c for the cases of the power-law (PL) and running power-law (RN) assumptions for the primordial spectrum. Green crosses show the expected behavior that locally the likelihood worsens with ρ c as points depart from the best-fit parameters. On the other hand, the red pluses mark the same points in the cosmological parameter space, but for the ∆χ 2 obtained under the free-form optimal PPS. It is clear from Fig. 1 that a free-form optimal PPS can very markedly improve the likelihood relative to that under the assumed form of the PPS. What is more remarkable is that the improvement through an optimal free-form PPS is suppressed in a basin around ρ c < ∼ 1. This is apparent in the absence of red plus marks near the lower left corners of the plots.
It is also interesting to mention that the basins for the three assumed forms of the PPS are very distinct and non-overlapping. The parameter distances between the best-fit points assuming the HZ, power-law and power-law-with-running forms of the primordial spectrum are quite large. We have ρ (P L,HZ) = 14.56, ρ (RN,HZ) = 43.79, ρ (HZ,P L) = 6.89, ρ (RN,P L) = 4.85, ρ (HZ,RN ) = 11.09 and ρ (P L,RN ) = 2.44. It is also important to note that the best-fit point obtained under one assumed form of the PPS may be disfavored with high confidence by another assumption.
As mentioned above, in the basins around the best-fit points it is very difficult to get a significantly better likelihood by allowing a free form PPS. It is instructive to explore the nature of these basins and the trends of the likelihood under the free form PPS for each of the parameters Ω b h 2 , Ω 0m h 2 , h, and τ . To do so, for each parameter i we split the separation, ρ, between points a and b into a separation ∆P i = (P a i − P b i )/σ b i along the parameter and the 'perpendicular' distance
ρ i ⊥ = [ Σ j≠i (P a j − P b j ) 2 /(σ b j ) 2 ] 1/2
measuring the separation in the other three parameters. Fig. 2 shows, for the PL PPS case, a 2D surface representation of the optimized ∆χ 2 around the best-fit point plotted against ∆P i and ρ i ⊥ for each of the parameters. We have weighted the neighbouring sample points by their Euclidean distance in the parameter space to assign an average likelihood at each point. The color palette is chosen such that red (blue) regions have poorer (better) likelihood than the reference value of the best-fit model. The white regions have likelihood comparable to the best-fit value. In this representation, the figures clearly show that in all cases there is a plateau in the parameter space (the red regions) enclosing the best-fit point where a free form PPS does not improve the likelihood. The locations of the best-fit points are marked by red arrows. Outside these basins, there are blue regions where an optimal free form PPS leads to a very significant improvement in the likelihood. (Note, however, that these are far from being the global minima for the cosmological parameter estimates under an optimal PPS: as shown in ref. [6], there are models with much higher likelihood.)
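The (∆P i , ρ i ⊥) decomposition used above is easy to sketch in code. By construction, ∆P i ^2 + (ρ i ⊥)^2 = ρ^2 when ρ is the quadrature sum, which provides a handy sanity check; all numbers below are arbitrary illustrations.

```python
import math

def split_distance(a, b, sigma_b, i):
    """Decompose the normalized distance between a and b into the component
    Delta P_i along parameter i and the perpendicular part rho_i_perp over
    the remaining parameters."""
    comps = [(pa - pb) / s for pa, pb, s in zip(a, b, sigma_b)]
    dp_i = comps[i]
    rho_perp = math.sqrt(sum(c * c for j, c in enumerate(comps) if j != i))
    return dp_i, rho_perp

# Arbitrary illustrative point, reference point and errors:
a = [1.0, 2.0, 3.0, 4.0]
b = [0.0, 0.0, 0.0, 0.0]
s = [1.0, 1.0, 1.0, 1.0]
rho_total = math.sqrt(sum(((x - y) / e) ** 2 for x, y, e in zip(a, b, s)))
```

For every choice of i the along-parameter and perpendicular parts recombine to the full distance, so the split is exactly a projection in the error-normalized coordinates.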
The plots in Fig. 2 also supplement the PL plot in Fig. 1 (top) by indicating the directions in parameter space from the best-fit model along which the likelihood resists improvement through modifications to the PPS.
CONCLUSIONS
In this paper we have shown that the assumed form of the primordial power spectrum (PPS) plays a key role in the determination of cosmological parameters. In fact, the functional form of the PPS forces the best-fit cosmological parameters into specific preferred basins of high likelihood to the data. These estimated cosmological parameters are then significantly biased. The situation is similar to a case where, in an N-dimensional parameter space of a model, we fix the values of m parameters (m < N ) and vary the other N − m parameters to fit an observation. The resultant best-fit values of these N − m parameters can be very different depending on the values assigned to the fixed m parameters. If there is no good reason to select a particular set of fixed values, the determination of the rest of the parameters remains under question. Assuming a specific (say, power-law) form of the primordial spectrum can also be interpreted as imposing a very strong, specific correlation between P (k) at different k. This assumption is similar to setting values for the m parameters with specific correlations. We surmise that the assumed form of the PPS could be the dominant reason why, within the basin for each assumed form, it was not possible to achieve a marked improvement in χ 2 by allowing an optimal free-form PPS (see Fig. 2).
In summary, we show that the apparently 'robust' determination of cosmological parameters under an assumed form of P (k) may be misleading and could well largely reflect the inherent correlations in the power at different k implied by the assumed form of the PPS. We conclude that it is very important to allow for deviations from scale-invariant, scale-free, or simple phenomenological extensions of the PPS while estimating cosmological parameters. This provides strong motivation to pursue approaches that simultaneously determine both the cosmological parameters and the primordial power spectrum from observations. The rapid improvement in cosmological observations, such as the CMB polarization spectra, holds much promise towards this goal. It is not unlikely that early universe scenarios that produce features in the PPS could in fact be favored by the data.
FIG. 1: Comparative scatter-plots of the relative χ 2 , with and without the optimal P (k), versus the normalized distance ρ c (eqn. 2.1) in parameter space, for sub-samples of the MCMC chains generated by the WMAP team. Green crosses show the ∆χ 2 relative to the best-fit value. Red pluses mark the same points in the parameter space but with χ 2 derived after 'optimization' of the primordial power spectrum. The top and bottom panels correspond to MCMC chains assuming the 'power law' (PL) and 'running power law' (RN) forms of the PPS, respectively. The obvious absence of red points with significantly negative ∆χ 2 for ρ c < ∼ 1 marks the basin for each assumed PPS where no (or only minor) improvement in likelihood is seen even when invoking a free-form 'optimal' PPS. The basins for the three assumed PPS are non-overlapping. For comparison, in the PL case (upper panel) the distances to the best-fit HZ and RN models are ρ (HZ,P L) = 6.89 and ρ (RN,P L) = 4.85, respectively; in the RN case (lower panel), the distances to the best-fit HZ and PL models are ρ (HZ,RN) = 11.09 and ρ (P L,RN) = 2.44, respectively.
FIG. 2: A 2D surface representation of the optimized ∆χ 2 around the best-fit point for the power-law PPS case for the four parameters. For each parameter i, ∆P i measures the separation along the parameter, and ρ i ⊥ measures the separation in the three other parameters. Regions in the parameter space with ∆χ 2 > 0 are shown in red and are separated by a white band (representing ∆χ 2 ≈ 0) from the regions with ∆χ 2 < 0 shown in blue. Red plateaus represent the regions where allowing a free form primordial spectrum does not improve the likelihood. Red arrows show the position of the best-fit point assuming the PL form of the PPS.
Acknowledgments

We have used WMAP data and the likelihood code provided by the WMAP team in the Legacy Archive for Microwave Background Data Analysis (LAMBDA) web-site [14]. Support for LAMBDA is provided by the NASA Office of Space Science. In our method of reconstruction we have used a modified version of CMBFAST [15]. We acknowledge the use of HPC facilities at IUCAA. A.S. acknowledges BIPAC and the partial support of the European Research and Training Network MRTPN-CT-2006-035863-1 (UniverseNet).
U. Seljak et al., Phys. Rev. D 71, 103515 (2005).
D. Spergel et al., Astrophys. J. Suppl. 170, 377 (2007);
J. Dunkley et al., Astrophys. J. Suppl., in press (arXiv:0803.0586).
A. A. Starobinsky, Phys. Lett. B 117, 175 (1982).
A. H. Guth & S.-Y. Pi, Phys. Rev. Lett. 49, 1110 (1982).
J. M. Bardeen, P. J. Steinhardt & M. S. Turner, Phys. Rev. D 28, 679 (1983).
A. Shafieloo & T. Souradeep, Phys. Rev. D 78, 023511 (2008);
T. Souradeep & A. Shafieloo, Prog. Theor. Phys. Suppl. 172, 156 (2008).
R. Jain et al., JCAP, in press (arXiv:0809.3915).
P. Hunt & S. Sarkar, Phys. Rev. D 70, 103518 (2004).
B. H. Richardson, J. Opt. Soc. Am. 62, 55 (1972);
L. B. Lucy, Astron. J. 79, 6 (1974).
C. M. Baugh and G. Efstathiou, MNRAS 265, 145 (1993);
A. Shafieloo and T. Souradeep, Phys. Rev. D 70, 043523 (2004).
A. Shafieloo et al., Phys. Rev. D 75, 123502 (2007).
M. Tegmark and M. Zaldarriaga, Phys. Rev. D 66, 103508 (2002);
S. L. Bridle et al., MNRAS 342;
P. Mukherjee and Y. Wang, Astrophys. J. 599, 1 (2003);
D. Tocchini-Valentini, Y. Hoffman and J. Silk, MNRAS 367, 1095 (2006);
N. Kogo, M. Sasaki and J. Yokoyama, Prog. Theor. Phys. 114, 555 (2005);
S. Leach, MNRAS 372, 646 (2006);
M. Bridges et al., arXiv:0812.3541.
Legacy Archive for Microwave Background Data Analysis [http://lambda.gsfc.nasa.gov/].
U. Seljak & M. Zaldarriaga, Astrophys. J. 469, 437 (1996).
| [] |
[
"BOUNDS ON ∆B = 1 COUPLINGS IN THE SUPERSYMMETRIC STANDARD MODEL"
] | [
"Marc Sher [email protected] \nDepartment of Physics\nCollege of William and Mary Williamsburg\nDepartment of Physics\nHampton University\n12000 Jefferson Ave23185, 23668, 23606Hampton, Newport NewsVA, VA, VAUSA, USA, USA\n",
"J L Goity [email protected] \nDepartment of Physics\nCollege of William and Mary Williamsburg\nDepartment of Physics\nHampton University\n12000 Jefferson Ave23185, 23668, 23606Hampton, Newport NewsVA, VA, VAUSA, USA, USA\n"
] | [
"Department of Physics, College of William and Mary, Williamsburg, VA 23185, USA",
"Department of Physics, Hampton University, Hampton, VA 23668, USA",
"12000 Jefferson Ave, Newport News, VA 23606, USA"
] | [] | The most general supersymmetric model contains baryon number violating terms of the form λ ijk D i D j U k in the superpotential. We reconsider the bounds on these couplings, assuming that lepton number conservation ensures proton stability. These operators can mediate n − n oscillations and double nucleon decay. We show that neutron oscillations do not, as previously claimed, constrain the λ dsu coupling; they do provide a bound on the λ dbu coupling, which we calculate. We find that the best bound on λ dsu arises from double nucleon decay into two kaons. There are no published limits on this process; experimenters are urged to examine this nuclear decay mode. Finally, the other couplings can be bounded by the requirement of perturbative unification. | 10.1016/0370-2693(96)01076-3 | [
"https://arxiv.org/pdf/hep-ph/9503472v1.pdf"
] | 119,414,883 | hep-ph/9503472 | bff6ed22ba004ae0b3f5bf5d568b9e65e8a684e5 |
BOUNDS ON ∆B = 1 COUPLINGS IN THE SUPERSYMMETRIC STANDARD MODEL
arXiv:hep-ph/9503472v1 30 Mar 1995
Marc Sher [email protected]
Department of Physics, College of William and Mary, Williamsburg, VA 23185, USA

J. L. Goity [email protected]
Department of Physics, Hampton University, Hampton, VA 23668, USA
12000 Jefferson Ave, Newport News, VA 23606, USA
In the standard electroweak model, conservation of baryon number and lepton number arises automatically from gauge invariance. This is not the case in supersymmetric models, however. In the most general low-energy supersymmetric model, one has terms which violate lepton number and terms which violate baryon number. Since the presence of both of these may lead to unacceptably rapid proton decay, one or both must generally be suppressed by a discrete symmetry. In the most popular model, R-parity, given by (−1) 3B+L+F , is imposed, leading to baryon and lepton number conservation. However, there is no a priori reason that R-parity must be imposed; it is quite possible that only one of the quantum numbers is conserved. There has been extensive discussion of the possibility that lepton number is violated, but relatively little investigation of the possibility that lepton number is conserved and baryon number is violated.
In this case, a term will appear in the superpotential given by λ ijk D i D j U k where the indices give the generation number and the chiral superfields are all right-handed isosinglet antiquarks. Since the term is symmetric under exchange of the first two indices, and is antisymmetric in color, it must be antisymmetric in the first two flavor indices, leaving nine couplings which will be designated (in an obvious notation) as λ dsu , λ dbu , λ sbu , λ dsc , λ dbc , λ sbc , λ dst , λ dbt and λ sbt .
There are many models in which low energy baryon number is violated and yet lepton number is conserved; the most familiar are some left-right symmetric models. It is widely believed that the strongest bound on B-violating operators comes from neutron oscillations, which violate B by two units. The first discussion on the effects of some of these operators in supersymmetric models used neutron oscillations to bound λ dsu and λ dbu ; this result remains widely cited today.
In this talk, I will discuss the most stringent bounds that can be placed on these nine couplings. First, it will be pointed out, in contrast with previous claims, that neutron oscillations do not provide any significant bound at all on the λ dsu coupling, due to a suppression factor which was neglected in the original calculation. This suppression is less severe for the λ dbu coupling, and we will obtain a bound in that case. It will then be pointed out that the strongest bound on the λ dsu coupling comes from limits on double nucleon decay (in a nucleus) into two kaons of identical strangeness, and we will estimate the bound. Finally, the recent work of Brahmachari and Roy noted that the λ dbt and λ sbt couplings can be bounded by requiring perturbative unification; we will extend their work to cover all of the additional couplings. This work will appear in Physics Letters B in March or April of 1995; the reader is referred there for the details of the calculation and for a list of references (which are neglected here due to space limitations).
The first attempt to place a bound on some of the above operators considered the effect of the λ dsu and λ dbu terms on neutron oscillations through the process (udd → d̃ i d → g̃ → d̃ i d → u d d), where d̃ is a squark and g̃ is a gluino. The effects of intergenerational mixing were included by putting in an arbitrary mixing angle, assigned a value of 0.1. However, there is a much more severe suppression factor which results in this process giving no significant contribution to neutron oscillations.
Consider the term λ dsu D S U . It violates B by one unit and strangeness (S) by one unit. However, it conserves B-S. Since neutron oscillations violate B but not S, strangeness violation must appear elsewhere in the diagram. This means that there must be flavor-changing electroweak interactions (involving either a W or a charged Higgs or chargino) in the diagram. Since only isodoublets participate in the weak interactions, and the baryon number violating term has only isosinglets, there must be mass insertions (on the squark lines), at least two in the simplest case. Thus, there will be electroweak interactions in the diagram, and an additional suppression factor of m 2 s /m 2 W , where m s is the strange quark mass. This makes the contribution of the λ dsu term highly suppressed. (Note: off-diagonal gluino couplings are generally much smaller, and will not contribute significantly.)
The contribution involving the λ dbu terms will only be suppressed by m 2 b /m 2 W , which is not negligible. The leading contribution is from a box diagram in which a b̃ L and a d L turn into a b̃ L and a d L ; the b̃ L arises from the λ dbu coupling with a mass insertion on the squark line. We have calculated this contribution (see the Physics Letters paper for details), and find a bound on λ dbu which varies from 0.002 to 0.1 as the squark mass varies from 200 GeV to 600 GeV. For squark masses above a TeV, no useful bound emerges.
The best bound on the λ dsu term arises from double nucleon decay into two kaons, which violates both B and S, but not B-S. Thus, mass insertions and electroweak interactions are unnecessary. Here, the diagram simply involves two λ dsu interactions turning u and d quarks into s anti-squarks, which exchange a gluino and turn into s antiquarks; the spectators then form the two kaons. Taking the limit on the rate to be 10 30 years, we find the bound to be λ dsu < 10 −15 R −5/2 , where R is roughly the ratio of a hadronic scale to the squark mass; for a hadronic scale of 300 MeV and a squark mass of 300 GeV, this bound is 10 −7 . A similar bound was obtained by assuming that a neutron could oscillate into a Ξ, which then annihilates with another neutron in the nucleus to produce two kaons. Note that there is currently no published bound on this decay; experimenters at Kamiokande are currently analysing the possibility by considering the decay of the oxygen nuclei in their detector to carbon-14 and two charged kaons.
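As a quick arithmetic check of the quoted numbers (a sketch only: the prefactor 10^−15 and the R^−5/2 scaling are taken from the text, and R is treated simply as the ratio of the hadronic scale to the squark mass):

```python
def lambda_dsu_bound(hadronic_scale_gev, squark_mass_gev):
    """Evaluate the quoted bound lambda_dsu < 1e-15 * R**(-5/2),
    where R = (hadronic scale) / (squark mass)."""
    R = hadronic_scale_gev / squark_mass_gev
    return 1e-15 * R ** (-2.5)

# 300 MeV hadronic scale, 300 GeV squark mass => R = 1e-3
bound = lambda_dsu_bound(0.3, 300.0)
```

This gives roughly 3 × 10⁻⁸, i.e. of order 10⁻⁷ as quoted; note also that a heavier squark makes R smaller and hence the bound weaker, consistent with the R^−5/2 scaling.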
What about the other seven couplings? It was recently noted by Brahmachari and Roy that in a unified theory precise bounds can be obtained by requiring that the couplings be perturbative up to the unification scale. We have generalized their results (which applied only to two of the couplings) to all seven, and find an upper bound of 1.25 on each of them (the bound actually applies to the square root of the sum of the squares of combinations of three of the couplings, so is a bit stronger), with a slightly stronger bound for couplings involving the top quark. Note that if the fixed point is saturated for one of these couplings (as may be the case for the top quark Yukawa coupling), then this bound would, in fact, be the value of the coupling at low energy, in which case the scalar quarks would have to be heavier than the top quark in order to avoid the domination of top quark decays into a light quark and a squark.
We are grateful to Herbi Dreiner for many enlightening discussions and references, and to Rabi Mohapatra for helpful remarks concerning neutron oscillations. The talk was presented by Marc Sher.
| [] |
[
"HAUSDORFF DIMENSION OF LIMSUP SETS OF RECTANGLES IN THE HEISENBERG GROUP"
] | [
"Fredrik Ekström ",
"Esa Järvenpää ",
"Maarit Järvenpää "
] | [] | [] | The almost sure value of the Hausdorff dimension of limsup sets generated by randomly distributed rectangles in the Heisenberg group is computed in terms of directed singular value functions. | 10.7146/math.scand.a-119234 | [
"https://arxiv.org/pdf/1804.10405v2.pdf"
] | 119,311,269 | 1804.10405 | de7bfd0e61093b65074af6ce77092c689f10abfc |
HAUSDORFF DIMENSION OF LIMSUP SETS OF RECTANGLES IN THE HEISENBERG GROUP
27 Apr 2018
Fredrik Ekström
Esa Järvenpää
Maarit Järvenpää
Introduction
Dimensional properties of subsets of Heisenberg groups have attracted a lot of interest recently. Due to the non-trivial relation between the Hausdorff dimensions with respect to the Euclidean and the Heisenberg metrics [4,5], one cannot directly transfer dimensional results in Euclidean spaces into Heisenberg groups. Indeed, it turns out that some theorems concerning dimensions have a special flavour or even an essentially different form in the Heisenberg setting. These include, for example, dimensional properties of self-affine sets, projections and slices.
In the Heisenberg group, Hausdorff dimensions of self-similar and self-affine sets have been studied in [1,5,6]. Even though the class of affine iterated function systems is quite restrictive (every such system is a horizontal lift of an affine iterated function system on the plane), the dimension calculations involve some subtleties. The behaviour of the Hausdorff dimension under projections and slicing turns out to be interesting, see [2,3,18,20]. There are two kinds of natural projections (and slices) in Heisenberg groups: the horizontal and the vertical ones. The vertical projections possess an exceptional feature: they are not Lipschitz continuous. This indicates that the methods developed in the Euclidean setting cannot be utilised. For related questions concerning Sobolev maps and the foliations generated by the horizontal subspaces, see [7].
In this paper, we initiate a new direction of research in Heisenberg groups by investigating dimensions of limsup sets generated by rectangles. Let X be a space and let (A n ) be a sequence of subsets of X. The limsup set generated by the sequence (A n ) consists of those points of X which are covered by infinitely many of the sets A n , that is,
lim sup n A n = ⋂ n≥1 ⋃ k≥n A k .
Limsup sets are encountered in many fields of mathematics, one of the earliest appearances being the Borel-Cantelli lemma [9,10]. They play a central role in the study of Besicovitch-Eggleston sets concerning the k-adic expansions of real numbers [8,12] as well as in Diophantine approximation [21,24]. For more information on different aspects of limsup sets, we refer to [19] and the references therein.
Dimensional properties of random limsup sets have been actively studied, see for example [11,14,16,17,19,22,23,25,26,28]. Combining the results of these papers, the almost sure value of the dimension of random limsup sets is known in the following cases:

- the underlying space X is a Riemann manifold and the driving measure determining the randomness is not singular with respect to the Lebesgue measure,
- X is the Euclidean or the symbolic space, the generating sets (A n ) are balls and the driving measure has special properties, like being a Gibbs measure,
- X is an Ahlfors regular metric space, the randomness is given by the natural measure and (A n ) is a sequence of balls.

In [13] a dimension formula for limsup sets generated by rectangles in products of Ahlfors regular metric spaces is derived. In this paper, we address the problem of determining the Hausdorff dimension of random limsup sets generated by rectangles in the first Heisenberg group (see Theorem 1 below). In [13] the Lipschitz continuity of projections is utilised to a great extent, and because of that, the same methods cannot be used in our setting. Instead, we will extend some results known in the Euclidean setting to unimodular groups or to compact metric spaces, and make calculations specific to the Heisenberg group to complete the argument.

2010 Mathematics Subject Classification: 28A80, 60D05. We gratefully acknowledge the support of the Centre of Excellence in Analysis and Dynamics Research, funded by the Academy of Finland, and the hospitality of Institut Mittag-Leffler, where part of this work was carried out.
We proceed by introducing our notation. The Heisenberg group H is the set R 3 with the non-commutative group operation
pp ′ = (x + x ′ , y + y ′ , z + z ′ + 2(xy ′ − yx ′ )),
where p = (x, y, z) and p ′ = (x ′ , y ′ , z ′ ). The unit in H is (0, 0, 0) and the inverse of p is p −1 = (−x, −y, −z). There is a norm on H given by
‖p‖ = ((x 2 + y 2 ) 2 + z 2 ) 1/4 ,
which gives rise to a left-invariant metric
d H (p, p ′ ) = ‖p −1 p ′ ‖ = ( ((x ′ − x) 2 + (y ′ − y) 2 ) 2 + (z ′ − z − 2(xy ′ − yx ′ )) 2 ) 1/4 .
Both left and right translation in H move vertical lines to vertical lines in such a way that the Euclidean distance between lines is preserved, and the image of the Lebesgue measure on a vertical line under translation is the Lebesgue measure on the image line. Thus Fubini's theorem implies that the Lebesgue measure on R 3 is invariant under translations in H. It is easy to see that the Lebesgue measure of B(0, r) in H is proportional to r 4 , and by translation invariance the same is true for every ball of radius r. In particular, the metric space (H, d H ) has Hausdorff dimension 4.
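The group law, inverse, norm and metric above translate directly into a few lines of code. The following minimal sketch (illustration only, with arbitrary sample points) numerically checks the left invariance d H (gp, gp ′ ) = d H (p, p ′ ) and the lower bound of d H by the horizontal Euclidean distance, both stated in the text.

```python
import math
import random

def heis_mul(p, q):
    """Heisenberg group operation on R^3."""
    x, y, z = p
    u, v, w = q
    return (x + u, y + v, z + w + 2 * (x * v - y * u))

def heis_inv(p):
    """Group inverse: p^{-1} = (-x, -y, -z)."""
    return (-p[0], -p[1], -p[2])

def heis_norm(p):
    """||p|| = ((x^2 + y^2)^2 + z^2)^(1/4)."""
    x, y, z = p
    return ((x * x + y * y) ** 2 + z * z) ** 0.25

def d_H(p, q):
    """Left-invariant metric d_H(p, q) = ||p^{-1} q||."""
    return heis_norm(heis_mul(heis_inv(p), q))

random.seed(0)
pts = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(3)]
p, q, g = pts
```

Since d_H(p, q) = ‖p⁻¹q‖ by definition, left invariance is an algebraic identity; the numerical check below only confirms that the implementation is faithful.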
Let L(p) be the vertical line through p and
H(p) = {p ′ ; z ′ = z + 2(xy ′ − yx ′ )} ;
this is the plane through p that has slope 0 in the direction (x, y) and slope 2(x 2 + y 2 ) 1/2 in the orthogonal direction (−y, x). Then
d H (p, p ′ ) ≥ ((x ′ − x) 2 + (y ′ − y) 2 ) 1/2 ,
with equality if and only if p ′ ∈ H(p). It follows that the distance to L(p ′ ) from any point on L(p) equals the Euclidean distance from (x, y) to (x ′ , y ′ ), and vice versa by symmetry. Thus vertical lines are parallel in the Heisenberg metric, and the distance between them is the same as the Euclidean distance. The symmetry of the metric implies that p ′ ∈ H(p) if and only if p ∈ H(p ′ ), and the translation invariance of the metric implies that pH(p ′ ) = H(pp ′ ). If p ′ ∈ L(p) then H(p ′ ) is parallel in the Euclidean sense to H(p). A closed rectangle in H is a set of the form

R (p, r) = { p ′ ∈ H ; ((x ′ − x) 2 + (y ′ − y) 2 ) 1/2 ≤ r 1 and |z ′ − z − 2(xy ′ − yx ′ )| 1/2 ≤ r 2 },

where r = (r 1 , r 2 ). This is the set of points that can be reached from p by moving "horizontally" in H(p) a distance at most r 1 and then vertically a distance at most r 2 , or by moving first vertically a distance at most r 2 and then horizontally a distance at most r 1 . A rectangle centred at p is the left translation by p of a rectangle centred at 0, that is, R (p, r) = pR (0, r). Let p = (p n ) be a sequence of points in H and let r = (r n ) be a bounded sequence of pairs of positive numbers, and define

E r (p) = lim sup n R (p n , r n ) .
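For illustration, the description of rectangles can be turned into a membership test. Here we assume the explicit form R(0, r) = {(x, y, z) ; (x 2 + y 2 ) 1/2 ≤ r 1 and |z| 1/2 ≤ r 2 }, an assumption consistent with the text since the Heisenberg distance between two points on a vertical line is |z ′ − z| 1/2 , and we check numerically that defining membership in R(p, r) via p −1 q reproduces R(p, r) = pR(0, r).

```python
import random

def heis_mul(p, q):
    """Heisenberg group operation on R^3."""
    x, y, z = p
    u, v, w = q
    return (x + u, y + v, z + w + 2 * (x * v - y * u))

def heis_inv(p):
    return (-p[0], -p[1], -p[2])

def in_rect_origin(q, r1, r2):
    """Assumed form of R(0, r): horizontal radius r1, vertical radius r2."""
    x, y, z = q
    return (x * x + y * y) ** 0.5 <= r1 and abs(z) ** 0.5 <= r2

def in_rect(p, q, r1, r2):
    """R(p, r) = p R(0, r): q lies in R(p, r) iff p^{-1} q lies in R(0, r)."""
    return in_rect_origin(heis_mul(heis_inv(p), q), r1, r2)

random.seed(1)
p = (0.3, -0.2, 0.1)
samples = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(200)]
agree = all(in_rect(p, heis_mul(p, q), 0.5, 0.4) == in_rect_origin(q, 0.5, 0.4)
            for q in samples)
```

The check confirms that membership is preserved under left translation, which is exactly what R(p, r) = pR(0, r) asserts.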
The purpose of this article is to give a formula for the Hausdorff dimension of such a set when the centres of the rectangles are chosen randomly. Let λ be the Lebesgue measure on H and let W be a bounded open subset of H. Let λ W = λ(W )^−1 λ| W and define the probability space (Ω, P) by Ω = H N and P = λ W ^N . Then ω → E r (ω) can be considered as a random set defined on (Ω, P). The directed singular value function is defined as follows: for r = (r 1 ,
r 2 ), if r 1 ≤ r 2 let Φ t (r) = r 2 ^t if t ∈ [0, 2] and Φ t (r) = r 1 ^(t−2) r 2 ^2 if t ∈ [2, 4], and if r 1 ≥ r 2 let Φ t (r) = r 1 ^t if t ∈ [0, 3] and Φ t (r) = r 1 ^(6−t) r 2 ^(2(t−3)) if t ∈ [3, 4].
Theorem 1. Almost surely,
dim H E r = inf{ t ; Σ n Φ t (r n ) < ∞ } ∧ 4.
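To illustrate how the formula of Theorem 1 is used, here is a sketch implementing Φ t together with the dimension formula for radii of the power-law form r n = (n^−a , n^−b ); this particular family is our own illustrative choice, not taken from the text. For such radii Φ t (r n ) = n^−e(t) with a piecewise linear, increasing exponent e(t), and Σ n n^−e(t) converges exactly when e(t) > 1, so the dimension is the point where e crosses 1, capped at 4.

```python
def phi(t, r1, r2):
    """Directed singular value function Phi_t(r) for t in [0, 4]."""
    if r1 <= r2:
        return r2 ** t if t <= 2 else r1 ** (t - 2) * r2 ** 2
    return r1 ** t if t <= 3 else r1 ** (6 - t) * r2 ** (2 * (t - 3))

def dim_powerlaw(a, b, tol=1e-9):
    """dim_H E_r for r_n = (n**-a, n**-b), a, b > 0 (illustrative family).
    Phi_t(r_n) = n**(-e(t)); the series converges iff e(t) > 1, so the
    dimension is min(inf{t : e(t) > 1}, 4)."""
    def e(t):
        if a >= b:   # r1 <= r2
            return b * t if t <= 2 else a * (t - 2) + 2 * b
        return a * t if t <= 3 else (6 - t) * a + 2 * (t - 3) * b
    if e(4.0) <= 1.0:
        return 4.0
    lo, hi = 0.0, 4.0
    while hi - lo > tol:          # bisect on the increasing function e(t) - 1
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if e(mid) <= 1.0 else (lo, mid)
    return hi
```

For example, balls (a = b = 1) give dimension 1, while r n = (n^−1 , n^−1/2 ) gives dimension 2, and sufficiently slowly shrinking rectangles saturate the cap at 4.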
Let X be a metric space. The t-dimensional Hausdorff content of a subset A of X is defined by
H t ∞ (A) = inf{ Σ n |C n | t ; {C n } is a cover of A }.
If µ is a Borel measure on X and t > 0, the t-energy of µ is defined by
I t (µ) = ∫∫ d(x, y)^−t dµ(y) dµ(x) .
The t-capacity of a Borel subset A of X is defined by
Cap t (A) = sup{ 1/I t (µ) ; µ ∈ P(A) },
where P(A) denotes the set of Borel probability measures on X that give full measure to A. It can be shown that [19,Remark 3.3]).
(1) Cap t (A) ≤ H t ∞ (A) always holds (see
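To make the energy and capacity definitions concrete, the following sketch approximates the t-energy of the uniform measure on [0, 1] by a uniform discrete measure on a grid; the diagonal is excluded, so this is only a rough approximation (our own illustration, not part of any proof here). By the definition of Cap t , the reciprocal 1/I t (µ) of the energy of any probability measure supported on A is a lower bound for Cap t (A).

```python
def t_energy(points, weights, t):
    """Discrete t-energy: sum over i != j of w_i * w_j * d(x_i, x_j)**(-t),
    with the diagonal terms (infinite for atoms) left out."""
    total = 0.0
    n = len(points)
    for i in range(n):
        for j in range(n):
            if i != j:
                total += weights[i] * weights[j] * abs(points[i] - points[j]) ** (-t)
    return total

n = 200
pts = [i / (n - 1) for i in range(n)]     # grid approximating [0, 1]
w = [1.0 / n] * n                          # uniform probability weights
e_half = t_energy(pts, w, 0.5)             # approximate 1/2-energy of the uniform measure
cap_lower = 1.0 / e_half                   # lower bound for Cap_{1/2}([0, 1])
```

Since all pairwise distances are at most 1, the energy increases with t, mirroring the fact that larger t penalizes concentration of mass more strongly.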
The proof of Theorem 1 is based on estimating the Hausdorff content and capacity of rectangles in the Heisenberg group. It is shown in Section 4 that there is a constant c t such that
c t ^−1 H t ∞ (R (x, r)) ≤ Φ t (r) ≤ c t Cap t (R (x, r))
for every x and r. This immediately implies that if Σ n Φ t (r n ) < ∞ then H t (E r (ω)) = 0 for every ω, since every tail of the sequence of rectangles is a cover of E r (ω). The almost sure lower bound for dim H E r then follows from the estimate of the capacity of a rectangle together with Theorem 2 below.
A Borel measure µ on a metric space X is (c, d)-regular if c^−1 r^d ≤ µ(B(x, r)) ≤ c r^d
for every x ∈ X and r ∈ [0, |X|]. A locally compact group G is unimodular if the left invariant Haar measure is also right invariant, or equivalently, if it is invariant under inversion. For example, the Heisenberg group is (c, 4)-regular for some c and unimodular.
Theorem 2 is a variant of [19, Theorem 1.1(b)] and [25, Theorem 1]. We give a somewhat different proof even though the main philosophy follows the same lines as the proofs in [19,25].
Theorem 2. Let G be a unimodular locally compact group whose Haar measure λ is (c, d)-regular, let W be an open subset of G with λ(W ) = 1, and define the probability space (Ω, P) by Ω = G N and P = (λ| W ) N . Let (V n ) be a bounded sequence of open subsets of G (bounded meaning that there is a ball in G that contains V n for every n). For ω = (ω n ) ∈ Ω let E(ω) = lim sup n (ω n V n ). If t ∈ (0, d) and Σ n Cap t (V n ) = ∞ then almost every ω is such that H t (U ∩ E(ω)) = ∞ for every open subset U of W .
The assumption that λ(W ) = 1 makes the proof slightly easier to read, but is not essential since a constant multiple of a (c, d)-regular Haar measure is a (c ′ , d)-regular Haar measure for some c ′ .
The proof of Theorem 2 is based on the following deterministic lemma (compare [15,Lemma 7], [19,Proposition 4.6] and [27,Theorem 1]). If X is a compact metric space, the weak- * topology on the space of finite Borel measures on X is the topology generated by the maps {µ → µ(ϕ)}, where ϕ ranges over the continuous functions X → R. It is not difficult to see that lim n→∞ µ n = µ in the weak- * topology if and only if lim n→∞ µ n (ϕ) = µ(ϕ) for every continuous function ϕ.
Lemma 3. Let ν be a finite Borel measure on a compact metric space X and let (ϕ_n)_{n=1}^∞ be a sequence of non-negative continuous functions on X such that lim_{n→∞} ϕ_n dν = ν in the weak-* topology and lim inf_{n→∞} I_t(ϕ_n ρ dν) ≤ I_t(ρ dν) whenever ρ is a product of finitely many of the functions {ϕ_n}. Then for every t > 0,

Cap_t(supp ν ∩ lim sup_n (supp ϕ_n)) ≥ ν(X)^2 / I_t(ν).
Notation and conventions. All measures appearing below are Borel measures, but this will not be explicitly stated. Thus "measure" below means "Borel measure". If X is a metric space with a measure µ and ϕ is a non-negative continuous function on X, then I_t(ϕ) means I_t(ϕ dµ). Similarly, if A is a Borel subset of X then I_t(A) means I_t(µ|_A). If X is a compact metric space, then the space of finite measures on X is considered as a topological space under the weak-* topology.
Proof of Lemma 3
Let Y be a topological space. A function f : Y → (−∞, ∞] is lower semicontinuous if f^{-1}((a, ∞]) is open for every a ∈ R, or equivalently if f(y_0) ≤ lim inf_{y→y_0} f(y) for every y_0 ∈ Y. The following lemma is well known, but a proof is included for the convenience of the reader.

Lemma 4. Let X be a compact metric space and let t > 0. Then µ → I_t(µ) is lower semicontinuous.
Proof. Let M(X) be the space of finite measures on X. It will first be shown that the map M(X) → M(X) × M(X), µ → µ × µ is continuous. Let η be a continuous function on X × X and let µ 0 ∈ M(X). It suffices to show that for every ε > 0, the set
A = {µ ∈ M(X); |(µ × µ)(η) − (µ 0 × µ 0 )(η)| < 3ε}
contains an open neighbourhood of µ_0. Let K > µ_0(X). By the Stone-Weierstrass theorem, there are functions {η_i}_{i=1}^n of the form η_i = η_{i,1} × η_{i,2}, where the η_{i,j} are continuous, such that ‖η − Σ_{i=1}^n η_i‖_∞ < ε/K^2. The set

V = {µ ∈ M(X); |(µ × µ)(η_i) − (µ_0 × µ_0)(η_i)| < ε/n for every i, and µ(X) < K}

is open since (µ × µ)(η_i) = µ(η_{i,1})µ(η_{i,2}), and µ_0 ∈ V. If µ ∈ V then

|(µ × µ)(η) − (µ_0 × µ_0)(η)| ≤ |(µ × µ)(η) − (µ × µ)(Σ_{i=1}^n η_i)| + Σ_{i=1}^n |(µ × µ)(η_i) − (µ_0 × µ_0)(η_i)| + |(µ_0 × µ_0)(Σ_{i=1}^n η_i) − (µ_0 × µ_0)(η)| < 3ε,

so that V is a subset of A. Define

I_t^M(µ) = ∫∫ (d(x, y)^{-t} ∧ M) dµ(y) dµ(x).
Let a ∈ R and let µ 0 be a measure in M(X) such that I t (µ 0 ) > a. Then there exists M such that I M t (µ 0 ) > a, and the set µ; I M t (µ) > a is open, contains µ 0 and is contained in {µ; I t (µ) > a}.
Proof of Lemma 3. Let ε ∈ (0, ν(X)) and let (ε k ) be a sequence of positive numbers such that k ε k ≤ ε. Define recursively a sequence (n k ) ∞ k=1 of natural numbers as follows, using the notation ρ k = ϕ n k · . . . · ϕ n1 (in particular, ρ 0 = 1). For k ≥ 1, assume that n 1 , . . . , n k−1 are defined. Then since lim n→∞ ν(ϕ n ρ k−1 ) = ν(ρ k−1 ) and lim inf
n→∞ I t (ϕ n ρ k−1 ) ≤ I t (ρ k−1 ),
it is possible to find n k > n k−1 such that
ν(ρ k ) ≥ ν(ρ k−1 ) − ε k and I t (ρ k ) ≤ I t (ρ k−1 ) + ε k .
Let µ be an accumulation point of the sequence of measures (ρ k dν ) -then there is a strictly increasing sequence (k i ) such that µ = lim i→∞ ρ ki dν . Thus
µ(X) = lim i→∞ ν(ρ ki ) ≥ lim inf k→∞ ν(ρ k ) ≥ ν(X) − k ε k ≥ ν(X) − ε,
and by Lemma 4,
I_t(µ) ≤ lim inf_{i→∞} I_t(ρ_{k_i}) ≤ lim sup_{k→∞} I_t(ρ_k) ≤ I_t(ν) + Σ_k ε_k ≤ I_t(ν) + ε. Moreover,

supp µ ⊂ supp ν ∩ ⋂_k supp ϕ_{n_k} ⊂ supp ν ∩ lim sup_n (supp ϕ_n)
and thus
Cap_t(supp ν ∩ lim sup_n (supp ϕ_n)) ≥ µ(X)^2 / I_t(µ) ≥ (ν(X) − ε)^2 / (I_t(ν) + ε).
Letting ε → 0 concludes the proof.
Proof of Theorem 2
The proof of Theorem 2 is based on constructing a sequence (ϕ ω n ) of random continuous functions supported on a compact neighbourhood of W , satisfying the hypothesis of Lemma 3, such that
lim sup n (supp ϕ ω n ) ⊂ lim sup n (ω n V n ).
A few lemmas are needed. The first one is used in the proofs of Lemma 6 and Theorem 2.
Lemma 5. Let (X, λ) be a (c, d)-regular space and let t ∈ (0, d).
Then there is a constant C such that for every x ∈ X and r > 0,
I t (B(x, r)) ≤ Cr 2d−t .
Proof. If ϕ is a non-negative function on X then
ϕ dλ = ∞ 0 λ {z ∈ X; ϕ(z) > γ} dγ ,
since both sides equal the λ×Leb-measure of the set {(z, u) ∈ X × [0, ∞]; u ∈ (0, ϕ(z))} (note that the boundary of this set has measure 0). Thus for y ∈ X,
∫_{B(x,r)} d(y, z)^{-t} dλ(z) = ∫_0^∞ λ({z ∈ B(x, r); d(y, z)^{-t} > γ}) dγ = ∫_0^∞ λ(B(x, r) ∩ B(y, γ^{-1/t})) dγ ≤ c ∫_0^∞ min(r^d, γ^{-d/t}) dγ = c ∫_0^{r^{-t}} r^d dγ + c ∫_{r^{-t}}^∞ γ^{-d/t} dγ = (cd/(d − t)) r^{d−t},
and thus
I_t(B(x, r)) ≤ λ(B(x, r)) · (cd/(d − t)) r^{d−t} ≤ (c^2 d/(d − t)) r^{2d−t}.
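As an independent numerical sanity check (ours, not part of the paper), the scaling r^{2d−t} in Lemma 5 can be observed for a disc in the Euclidean plane, which is a (c, 2)-regular space: doubling the radius should multiply the t-energy by about 2^{2d−t} = 2^{4−t}.

```python
import math

def ball_energy(r, t, m=30):
    """Riemann-sum estimate of the t-energy I_t(B(0, r)) of a disc in R^2,
    using an m-by-m grid of cells of side 2r/m (off-diagonal terms only)."""
    h = 2.0 * r / m
    centers = [-r + (i + 0.5) * h for i in range(m)]
    pts = [(x, y) for x in centers for y in centers if x * x + y * y <= r * r]
    cell = h * h  # Lebesgue measure of one grid cell
    total = 0.0
    for i, (x1, y1) in enumerate(pts):
        for j, (x2, y2) in enumerate(pts):
            if i != j:
                total += math.hypot(x1 - x2, y1 - y2) ** (-t) * cell * cell
    return total

t = 1.0
ratio = ball_energy(2.0, t) / ball_energy(1.0, t)
print(ratio)  # close to 2^{2d - t} = 8 for d = 2, t = 1
```

Since the grid for radius 2r is an exact rescaling of the grid for radius r, the computed ratio matches the predicted power of two essentially to machine precision.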
The next lemma is a variant of [19, Proposition 3.8].
Lemma 6. Let (X, λ) be a compact (c, d)-regular space and let θ be a finite measure on X. Then there is a sequence (ϕ n ) of non-negative continuous functions on X such that lim n→∞ ϕ n dλ = θ and lim sup n→∞ I t (ϕ n ) ≤ I t (θ) for every t ∈ (0, d).
Proof. For each n, let {x n,i } Nn i=1 be a maximal 1/n-separated subset of X. Then {B(x n,i , 1/2n)} are disjoint and {B(x n,i , 1/n)} is a cover of X. Let Q n,i be the set of points x in X for which i is the first index such that d(x, x n,i ) = min j d(x, x n,j ), that is, Q n,i = x ∈ X; d(x, x n,i ) < d(x, x n,j ) for j = 1, . . . , i − 1 and d(x, x n,i ) ≤ d(x, x n,j ) for j = i, . . . , N n .
Then {Q n,i } Nn i=1 is a partition of X into Borel sets and
B x n,i , 1 2n ⊂ Q n,i ⊂ B x n,i , 1 n for every i. For each i, let ϕ n,i (x) = a n,i max (0, 1 − 4nd(x, x n,i )) ,
where a n,i is such that λ(ϕ n,i ) = 1. Then ϕ n,i is supported on B(x n,i , 1/4n), and a n,i ≤ 2 1+3d cn d since
B(xn,i, 1 4n ) (1 − 4nd(x, x n,i )) dλ(x) ≥ 1 2 λ B x n,i , 1 8n ≥ 2 −(1+3d) c −1 n −d . Let ϕ n = Nn i=1 θ(Q n,i )ϕ n,i .
Let η be a continuous function on X and let ε > 0. Since X is compact, η is uniformly continuous, and thus there is some n 0 such that
|η(x) − η(x n,i )| ≤ ε 2θ(X)
whenever n ≥ n 0 and x ∈ Q n,i . Then for n ≥ n 0 ,
|θ(η) − (ϕ n dλ )(η)| ≤ Nn i=1 Qn,i |η − η(x n,i )| dθ + θ(Q n,i ) Qn,i |η(x n,i ) − η|ϕ n,i dλ ≤ ε,
using that (ϕ n,i dλ )(Q n,i ) = 1 for every i. Thus lim n→∞ ϕ n dλ = θ. It remains to show that lim sup n→∞ I t (ϕ n ) ≤ I t (θ), and for this it may be assumed that I t (θ) < ∞.
Let n be a natural number and α > 1, and if ψ_1, ψ_2 are continuous functions on X let

J_t(ψ_1, ψ_2) = ∫∫ d(x, y)^{-t} ψ_1(x) ψ_2(y) dλ(x) dλ(y).
Then I_t(ϕ_n) = S_1 + S_2 + S_3, where

S_1 = Σ_{d(x_{n,i}, x_{n,j}) ≥ α/n} θ(Q_{n,i}) θ(Q_{n,j}) J_t(ϕ_{n,i}, ϕ_{n,j}),
S_2 = Σ_{d(x_{n,i}, x_{n,j}) < α/n, i ≠ j} θ(Q_{n,i}) θ(Q_{n,j}) J_t(ϕ_{n,i}, ϕ_{n,j}),
S_3 = Σ_i θ(Q_{n,i})^2 I_t(ϕ_{n,i}).
If x ∈ supp ϕ n,i and y ∈ supp ϕ n,j and d(x n,i , x n,j ) ≥ α/n, then
|d(x, y) − d(x n,i , x n,j )| ≤ 1 n ≤ d(x n,i , x n,j ) α , so that α − 1 α ≤ d(x, y) d(x n,i , x n,j ) ≤ α + 1 α .
Thus for i, j appearing in S 1 ,
θ(Q n,i )θ(Q n,j )J t (ϕ n,i , ϕ n,j ) ≤ α α − 1 t θ(Q n,i )θ(Q n,j )d(x n,i , x n,j ) −t ≤ α + 1 α − 1 t Qn,i×Qn,j d(x, y) −t dθ(x) dθ(y) ,
and it follows that
S 1 ≤ α + 1 α − 1 t I t (θ).
If x ∈ supp ϕ n,i and y ∈ supp ϕ n,j then i = j implies that d(x, y) ≥ 1/2n and if x ∈ Q n,i and y ∈ Q n,j then d(x n,i , x n,j ) ≤ α/n implies that d(x, y) ≤ (α + 2)/n. Thus
S 2 ≤ 2 t d(xn,i,xn,j)< α n i =j θ(Q n,i )θ(Q n,j )n t ≤ 2 t (α + 2) t d(xn,i,xn,j)< α n Qn,i×Qn,j d(x, y) −t dθ(x) dθ(y) ≤ 2 t (α + 2) t d(x,y)≤ α+2 n d(x, y) −t dθ(x) dθ(y) .
By Lemma 5 there is a constant C such that
I t (B(x, r)) ≤ Cr 2d−t
for every x ∈ X and r > 0. It follows that
I t (ϕ n,i ) ≤ a 2 n,i I t B x n,i , 1 4n ≤ C ′ n t ,
where C ′ = 4 1+3d c 2 C, so that
S 3 ≤ C ′ i θ(Q n,i ) 2 n t ≤ 2 t C ′ i Qn,i×Qn,i d(x, y) −t dθ(x) dθ(y) ≤ 2 t C ′ d(x,y)≤ 2 n d(x, y) −t dθ(x) dθ(y) .
Given ε > 0 it is possible to choose α large enough so that S 1 ≤ I t (θ) + ε, and then n 0 large enough so that S 2 + S 3 ≤ ε for every n ≥ n 0 . It follows that lim sup n→∞ I t (ϕ n ) ≤ I t (θ) + 2ε, and letting ε → 0 concludes the proof.
The following lemma is a modification of the argument from [19, p. 39].
Lemma 7. Let (X, λ) be a (c, d)-regular space and let (V n ) be a sequence of open subsets of an open ball
B in X such that n Cap t (V n ) = ∞. Then there is a sequence (V ′ n ) of open subsets of X such that i) V ′ n ⊂ V n for every n, ii) lim n→∞ |V ′ n | = 0, and iii) n Cap t (V ′ n ) = ∞.
Proof. Let B_1 = {x ∈ X; d(x, B) < 1} and for each n, let µ_n be a probability measure on V_n such that I_t(µ_n) ≤ 2 Cap_t(V_n)^{-1}. Consider any r ∈ (0, 1) and let A_n = {x ∈ X; µ_n(B(x, r)) > 0}, and for x ∈ A_n let µ_n^x = µ_n(B(x, r))^{-1} µ_n|_{B(x,r)}. Using the Cauchy-Schwarz inequality, Fubini's theorem, and then that B(y, r) ⊂ B_1 for µ_n-almost every y for every n,
B1 µ n (B(x, r)) 2 dλ(x) ≥ 1 λ(B 1 ) B1 µ n (B(x, r)) dλ(x) 2 = 1 λ(B 1 ) λ (B 1 ∩ B(y, r)) dµ n (y) 2 ≥ c −2 r 2d λ(B 1 ) .
Thus for any natural number a,
B1 ∞ n=a Cap t (V n ∩ B(x, r)) dλ(x) ≥ B1 n≥a x∈An 1 I t (µ x n ) dλ(x) ≥ ∞ n=a 1 I t (µ n ) B1∩An µ n (B(x, r)) 2 dλ(x) = ∞ n=a 1 I t (µ n ) B1 µ n (B(x, r)) 2 dλ(x) = ∞.
It follows that ∞ n=a Cap t (V n ∩ B(x, r)) is unbounded as a function of x, and hence there exist x ∈ B 1 and a natural number b such that b n=a Cap t (V n ∩ B(x, r)) ≥ 1.
It is now possible to recursively define a strictly increasing sequence (n k ) of natural numbers and a sequence (x k ) of points in B 1 , such that n 1 = 1 and for every k,
Σ_{n=n_k}^{n_{k+1}−1} Cap_t(V_n ∩ B(x_k, 2^{-k})) ≥ 1.

The sequence (V′_n) defined by V′_n = V_n ∩ B(x_k, 2^{-k}) for n = n_k, . . . , n_{k+1} − 1 has the properties i)-iii).

Lemma 8. Let (b_n) be a sequence of positive numbers bounded away from 0, such that Σ_n b_n^{-1} = ∞. Then there are non-negative numbers (a_{n,k})_{n,k∈N} such that i) (a_{n,k})_k has finite support for every n and lim_{n→∞} min{k; a_{n,k} ≠ 0} = ∞, ii) Σ_k a_{n,k} = 1 for every n and Σ_{n,k} a_{n,k}^2 < ∞, and iii) lim_{n→∞} Σ_k a_{n,k}^2 b_k = 0.
Proof. Let

a_{n,k} = a_n b_k^{-1} if M_n ≤ k ≤ N_n, and a_{n,k} = 0 otherwise, where a_n = (Σ_{k=M_n}^{N_n} b_k^{-1})^{-1}.
Since Σ_n b_n^{-1} = ∞ it is possible to choose (M_n) and (N_n) such that i) holds and Σ_n a_n < ∞. Then clearly Σ_k a_{n,k} = 1 for every n, and

Σ_{n,k} a_{n,k}^2 ≤ B Σ_n a_n Σ_k a_{n,k} = B Σ_n a_n < ∞,

where B = sup_k b_k^{-1}. Finally,

Σ_k a_{n,k}^2 b_k = (Σ_{k=M_n}^{N_n} b_k^{-1}) / (Σ_{k=M_n}^{N_n} b_k^{-1})^2 = a_n,
which converges to 0 when n → ∞.
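To make the construction in the proof concrete, the following sketch (our illustrative choice of b_k and of block lengths, not from the paper) builds the rows a_{n,k} and checks properties ii) and iii) numerically:

```python
def b(k):
    # a concrete sequence (b_k): bounded away from 0, with divergent sum of 1/b_k
    return 1.0 + (k % 3)

def row(n):
    """a_{n,k} = a_n / b_k on the n-th block M_n <= k <= N_n (block length n^2),
    following the construction in the proof of Lemma 8."""
    M = sum(m * m for m in range(1, n)) + 1  # consecutive blocks, so M_n -> infinity
    N = M + n * n - 1
    a_n = 1.0 / sum(1.0 / b(k) for k in range(M, N + 1))
    return {k: a_n / b(k) for k in range(M, N + 1)}

for n in (1, 5, 20):
    assert abs(sum(row(n).values()) - 1.0) < 1e-9   # property ii): each row sums to 1
third = sum(v * v * b(k) for k, v in row(30).items())  # equals a_30, cf. property iii)
print(third)
```

With block length n^2 the normalising constants a_n are summable, so property iii) (the printed quantity equals a_n and tends to 0) holds as in the proof.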
Lemma 9. Let µ be a probability measure on a compact metric space X, and define the probability space (Ω, P) by Ω = X^N and P = µ^N. Let (a_{n,k}) be non-negative numbers such that Σ_k a_{n,k} = 1 for every n and Σ_{n,k} a_{n,k}^2 < ∞. For ω ∈ Ω, let µ_n^ω = Σ_k a_{n,k} δ_{ω_k}.
Then almost surely lim n→∞ µ ω n = µ.
Proof. Let η be a continuous function on X. Then for every k,
E η(ω k ) = µ(η)
and
Var η(ω k ) = µ η 2 − µ(η) 2 ≤ η 2 ∞ , and hence E(µ ω n (η)) = µ(η) and
Var(µ ω n (η)) = k a 2 n,k Var η(ω k ) ≤ η 2 ∞ k a 2 n,k .
Let ε > 0. Then by Chebyshev's inequality,
n P{|µ ω n (η) − µ(η)| ≥ ε} ≤ n η 2 ∞ k a 2 n,k ε 2 < ∞, so that Borel-Cantelli's lemma implies that lim sup n→∞ |µ ω n (η) − µ(η)| ≤ ε
almost surely. Since the space of continuous functions on X is separable, it follows that lim_{n→∞} µ_n^ω = µ almost surely.

Lemma 10. Let (ξ_n) be a sequence of independent random variables. Then almost surely lim inf_{n→∞} ξ_n ≤ lim inf_{n→∞} E ξ_n.

Proof. By taking a subsequence it may be assumed that lim_{n→∞} E ξ_n exists. Let (ε_n) be a sequence of positive numbers converging to 0, such that Σ_n ε_n/(1 + ε_n) = ∞.
By Markov's inequality,
P (ξ n ≤ (1 + ε n ) E ξ n ) = 1 − P (ξ n > (1 + ε n ) E ξ n ) ≥ 1 − E ξ n (1 + ε n ) E ξ n = ε n 1 + ε n .
Then by Borel-Cantelli's lemma there is almost surely a strictly increasing sequence (n_k) of natural numbers such that ξ_{n_k} ≤ (1 + ε_{n_k}) E ξ_{n_k} for every k, and thus

lim inf_{n→∞} ξ_n ≤ lim inf_{k→∞} ξ_{n_k} ≤ lim inf_{k→∞} E ξ_{n_k} ≤ lim sup_{n→∞} E ξ_n = lim inf_{n→∞} E ξ_n.
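The only quantitative step in the proof is the Markov-inequality bound P(ξ_n ≤ (1 + ε_n) E ξ_n) ≥ ε_n/(1 + ε_n). A quick numerical check (ours) for an Exp(1) variable, for which E ξ = 1 and P(ξ ≤ s) = 1 − e^{−s}:

```python
import math

for eps in (0.01, 0.1, 0.5, 1.0, 5.0):
    lhs = 1.0 - math.exp(-(1.0 + eps))  # P(xi <= (1 + eps) E xi) for xi ~ Exp(1)
    rhs = eps / (1.0 + eps)             # lower bound coming from Markov's inequality
    assert lhs >= rhs
print("Markov bound verified for Exp(1)")
```

The bound is of course very far from sharp here; all the proof needs is that the probabilities are summable along no subsequence, i.e. that Σ ε_n/(1 + ε_n) diverges.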
Proof of Theorem 2. By Lemma 7 it may be assumed that lim n→∞ |V n | = 0. Define a sequence (ϕ ω n ) of random continuous functions on G in the following way. For each n, there is a probability measure θ n on V n such that I t (θ n ) ≤ 2 Cap t (V n ) −1 , and an open subset A n of V n such that d(A n , V c n ) > 0 and θ n (A n ) ≥ 1/2. By Lemma 6 there is then a non-negative continuous function ψ ′ n on G such that
(ψ ′ n dλ )(A n ) ≥ θ n (A n )/2 ≥ 1/4 and I t (ψ ′ n ) ≤ 2I t (θ n ) ≤ 4 Cap t (V n ) −1 .
Let ψ ′′ n be a continuous function on G such that χ An ≤ ψ ′′ n ≤ χ Vn and let ψ n = c n ψ ′ n ψ ′′ n , where c n is such that ψ n dλ is a probability measure. Then c n ≤ 4 and hence
I t (ψ n ) ≤ c 2 n I t (ψ ′ n ) ≤ 64 Cap t (V n ) −1
, so that the hypothesis of the theorem implies that n I t (ψ n ) −1 = ∞. Let (a n,k ) be as in Lemma 8 with respect to b n = I t (ψ n ) (≥ |V n | −t ), and let
ϕ ω n = k a n,k ψ ω k ,
where ψ ω k (x) = ψ k ω −1 k x . Let η be a non-negative continuous function on G such that supp η ⊂ W and let ν = η dλ . Property ii) of Lemma 8 together with Lemma 9 applied in the space (W , λ| W ) implies that for almost every ω, lim n→∞ k a n,k δ ω k = λ| W .
Since lim n→∞ |V n | = 0, it follows for such ω that lim n→∞ ϕ ω n dλ = λ| W and thus lim n→∞ ϕ ω n dν = η dλ| W = ν.
Let ρ be a continuous function on G with compact support. Using that ψ_i^ω(x) and ψ_j^ω(y) are independent for i ≠ j,

E I_t(ϕ_n^ω ρ dν) = Σ_k a_{n,k}^2 E I_t(ψ_k^ω ρ dν) + Σ_{i ≠ j} a_{n,i} a_{n,j} ∫∫ d(x, y)^{-t} E ψ_i^ω(x) E ψ_j^ω(y) ρ(x) ρ(y) dν(x) dν(y).
Since I t (ψ ω k ρ dν ) ≤ ρ 2 ∞ η 2 ∞ I t (ψ k )
, it follows from property iii) of Lemma 8 that the first sum converges to 0 when n → ∞. Next,
E ψ ω i (x) = ψ i ω −1 i x dλ| W (ω i ) ≤ ψ i ω −1 i x dλ(ω i ) = λ(ψ i ) = 1,
using that λ is invariant under right translation and inversion. Thus the integral in the second sum is less than or equal to I t (ρ dν ) and it follows by property ii) in Lemma 8 that the second sum is less than or equal to I t (ρ dν ) as well. Thus by Lemma 10, almost surely lim inf n→∞ I t (ϕ ω n ρ dν ) ≤ I t (ρ dν ).
Lemma 3 applied in the space (W , ν) together with (1) now implies that almost surely
H^t(supp η ∩ E(ω)) ≥ λ(η)^2 / I_t(η).
For m = 1, 2, . . ., let B m be a maximal collection of disjoint open balls in W of radius 2 −m . For each B ∈ m B m , let η B be a non-negative continuous function on G such that χ 1 2 B ≤ η B ≤ χ B , where 1 2 B is the ball concentric with B having half the radius. Since m B m is countable, almost every ω is such that whenever B ∈ B m then
H^t(B ∩ E(ω)) ≥ λ(η_B)^2 / I_t(η_B) ≥ C 2^{-tm},
where the last inequality holds by Lemma 5 for some constant C that is independent of m.
Let ω be such and let U be an open subset of W . Since (G, λ) is d-regular, there is a positive constant C ′ and some m 0 such that if m ≥ m 0 then
#{B ∈ B m ; B ⊂ U} ≥ C ′ 2 dm .
Thus for m ≥ m 0 ,
H t (U ∩ E(ω)) ≥ B∈Bm B⊂U H t (B ∩ E(ω)) ≥ CC ′ 2 (d−t)m ,
and letting m → ∞ shows that H t (U ∩ E(ω)) = ∞.
Hausdorff content and energy of rectangles in H
The purpose of this section is to estimate the Hausdorff content and energy of a rectangle R(x, r) in the Heisenberg group, up to multiplicative constants. Only upper bounds are provided, but it follows from (1) that they are the best possible ones. The multiplicative constants will mostly be implicit, using the following notation. If e_1 and e_2 are expressions depending on some parameters, then e_1 ≲ e_2 means that there is a constant C such that e_1 ≤ C e_2 for all parameter values. Often some of the parameters in e_1 and e_2 will be considered as constants; in that case C may depend on those parameters. For example, the implicit constants always depend on t.
Upper bound for the Hausdorff content of a rectangle.

Lemma 11. Let R = R(0, r). If r_1 ≤ r_2 then

H^t_∞(R) ≲ r_2^t if t ∈ [0, 2], and H^t_∞(R) ≲ r_1^{t−2} r_2^2 if t ∈ [2, 4],

and if r_1 ≥ r_2 then

H^t_∞(R) ≲ r_1^t if t ∈ [0, 3], and H^t_∞(R) ≲ r_1^{6−t} r_2^{2(t−3)} if t ∈ [3, 4],
where the implicit constants depend on t but not on r.
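A small consistency check (ours, not from the paper): at r_1 = r_2 = r all four branches of Lemma 11 reduce to r^t, the Hausdorff content bound for a ball of radius r.

```python
def content_upper(r1, r2, t):
    """Upper bound for H^t_inf(R(0, (r1, r2))) as stated in Lemma 11
    (implicit constants dropped)."""
    if r1 <= r2:
        return r2 ** t if t <= 2 else r1 ** (t - 2) * r2 ** 2
    return r1 ** t if t <= 3 else r1 ** (6 - t) * r2 ** (2 * (t - 3))

for t in (0.5, 1.5, 2.5, 3.5):
    for r in (0.1, 0.5, 2.0):
        assert abs(content_upper(r, r, t) - r ** t) < 1e-12
print("both cases of Lemma 11 agree at r1 = r2")
```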
Proof. For t ≥ 0,

H^t_∞(R) ≤ |R|^t ≲ max(r_1^t, r_2^t) = r_2^t if r_1 ≤ r_2, and r_1^t if r_1 ≥ r_2.

The vertical segment S = {0} × [−r_2^2, r_2^2] is r_1-dense in R, and D = {(0, k r_1^2); k = −⌈r_2^2/r_1^2⌉, . . . , ⌈r_2^2/r_1^2⌉} is r_1-dense in S, and thus D is 2r_1-dense in R. If r_1 ≤ r_2 then #D ≲ r_2^2/r_1^2, and hence

H^t_∞(R) ≲ (r_2^2/r_1^2) · r_1^t = r_1^{t−2} r_2^2.
It remains to show that H^t_∞(R) ≲ r_1^{6−t} r_2^{2(t−3)} for r_1 ≥ r_2 and t ∈ [3, 4]. This is done by estimating the Hausdorff content of annuli of the form
A_ρ = {(x, y, z); |(x^2 + y^2)^{1/2} − ρ| ≤ r_2^2/(2ρ), |z| ≤ r_2^2}.
Let
C ρ = {(x, y, 0); x 2 + y 2 = ρ 2 }.
Sublemma. For ρ ≥ r 2 , the set C ρ is √ 2r 2 2 /ρ-dense in A ρ . Proof. Let p = (ρ, 0, 0) and let
S = {(x, y, z); x ≥ 0, z = 2ρy};
this is the part of the plane H(p) defined in the introduction where the x-coordinate is non-negative. Let q = (x, y, z) be a point in A ρ ∩ S. Then
x ≤ ρ + r_2^2/(2ρ) and |y| = |z|/(2ρ) ≤ r_2^2/(2ρ),

and

x^2 + y^2 ≥ (ρ − r_2^2/(2ρ))^2,

so that

x ≥ ((ρ − r_2^2/(2ρ))^2 − y^2)^{1/2} ≥ ((ρ − r_2^2/(2ρ))^2 − (r_2^2/(2ρ))^2)^{1/2} = (ρ^2 − r_2^2)^{1/2} ≥ ρ − r_2^2/ρ

(the last inequality is proved by squaring both sides and using that ρ ≥ r_2). Thus

d_H(p, q) = ((x − ρ)^2 + y^2)^{1/2} ≤ √2 max(|x − ρ|, |y|) ≤ √2 r_2^2/ρ.
Let R α be the rotation by α around the vertical axis {0} × R. The statement follows since d H is invariant under R α and
C ρ = α R α (p) and A ρ = α R α (A ρ ∩ S) .
Sublemma. Let ε > 0. The set

D_ρ = {(ρ cos(kε^2/(2ρ^2)), ρ sin(kε^2/(2ρ^2)), 0); k = 0, . . . , ⌈4πρ^2/ε^2⌉}

is ε-dense in C_ρ.
Proof. The points p = (ρ, 0, 0) and q = (ρ cos α, ρ sin α, 0) satisfy
d_H(p, q) = ρ (((1 − cos α)^2 + sin^2 α)^2 + (2 sin α)^2)^{1/4} = 2^{3/4} ρ (1 − cos α)^{1/4} ≤ ρ |2α|^{1/2},
using that cos α ≥ 1 − α 2 /2. The same bound holds for any pair of points p, q on C ρ making an angle α, and taking α = ε 2 /2ρ 2 gives d H (p, q) ≤ ε.
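The trigonometric identity used above can be checked numerically (our check): ((1 − cos α)^2 + sin^2 α)^2 + (2 sin α)^2 = 8(1 − cos α), so the fourth root is 2^{3/4}(1 − cos α)^{1/4}, and cos α ≥ 1 − α^2/2 then gives the bound ρ(2α)^{1/2}.

```python
import math

def d_circle(rho, alpha):
    """Distance between p = (rho, 0, 0) and the point of C_rho at angle alpha,
    as computed in the displayed formula of the text."""
    h = (1.0 - math.cos(alpha)) ** 2 + math.sin(alpha) ** 2
    return rho * (h * h + (2.0 * math.sin(alpha)) ** 2) ** 0.25

for alpha in (0.01, 0.1, 0.5, 1.0):
    rho = 2.0
    closed_form = 2 ** 0.75 * rho * (1.0 - math.cos(alpha)) ** 0.25
    assert abs(d_circle(rho, alpha) - closed_form) < 1e-9   # the identity
    assert d_circle(rho, alpha) <= rho * math.sqrt(2.0 * alpha) + 1e-12  # the bound
print("identity and bound verified")
```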
Proof of Lemma 11 (continued). Take ε = r 2 2 /ρ in the definition of D ρ . Then for ρ ≥ r 2 the set D ρ is 3r 2 2 /ρ-dense in A ρ and #D ρ ρ 4 /r 4 2 , and thus
H^t_∞(A_ρ) ≲ (ρ^4/r_2^4) · (r_2^2/ρ)^t = r_2^{2t−4} ρ^{4−t}.
Let ρ_k = r_2 √k. Then

ρ_{k+1} − ρ_k = r_2 ∫_k^{k+1} (1/(2√u)) du ≤ r_2/(2√k) = r_2^2/(2ρ_k),

so that A_{ρ_k} and A_{ρ_{k+1}} overlap. Thus

R ⊂ R(0, (r_2, r_2)) ∪ ⋃_{k=1}^{⌈(r_1/r_2)^2⌉} A_{ρ_k}.
It follows that

H^t_∞(R) ≲ r_2^t + Σ_{k=1}^{⌈(r_1/r_2)^2⌉} r_2^{2t−4} ρ_k^{4−t} = r_2^t (1 + Σ_{k=1}^{⌈(r_1/r_2)^2⌉} k^{(4−t)/2}) ≲ r_2^t (1 + (r_1/r_2)^{6−t}) ≲ r_1^{6−t} r_2^{2(t−3)},

using in the last step that (r_1/r_2)^{6−t} ≥ 1.
Upper bound for the energy of a rectangle.
Lemma 12. Let R = R(0, r). If r_1 ≤ r_2 then

I_t(R) ≲ r_1^4 r_2^{4−t} if t ∈ (0, 2), and I_t(R) ≲ r_1^{6−t} r_2^2 if t ∈ (2, 4),

and if r_1 ≥ r_2 then

I_t(R) ≲ r_1^{4−t} r_2^4 if t ∈ (0, 3) \ {1}, and I_t(R) ≲ r_1^{t−2} r_2^{2(5−t)} if t ∈ (3, 4),
where the implicit constants depend on t but not on r.
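As with Lemma 11, the branches can be cross-checked (our check, not from the paper): at r_1 = r_2 = r every case of Lemma 12 reduces to r^{8−t} = r^{2d−t} with d = 4, which is the ball bound of Lemma 5 for the 4-regular Heisenberg group.

```python
def energy_upper(r1, r2, t):
    """Upper bound for I_t(R(0, (r1, r2))) as stated in Lemma 12
    (implicit constants dropped; t avoids the excluded exponents)."""
    if r1 <= r2:
        return r1 ** 4 * r2 ** (4 - t) if t < 2 else r1 ** (6 - t) * r2 ** 2
    return r1 ** (4 - t) * r2 ** 4 if t < 3 else r1 ** (t - 2) * r2 ** (2 * (5 - t))

for t in (0.5, 1.5, 2.5, 3.5):
    for r in (0.3, 1.0, 2.0):
        assert abs(energy_upper(r, r, t) - r ** (8 - t)) < 1e-9
print("all branches of Lemma 12 agree at r1 = r2")
```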
Proof. Let

R_t(p) = ∫_R d_H(p, q)^{-t} dλ(q),

so that

I_t(R) = ∫_R R_t(p) dλ(p).
Since R, d_H and λ are invariant under rotation around the vertical axis, the integral defining R_t(p) does not depend on the angle of p in the horizontal plane. To estimate R_t(p) it is therefore enough to consider p of the form p = (ρ, 0, z_0). Assume that ρ ∈ [0, r_1] and z_0 ∈ [−r_2^2, r_2^2], so that p ∈ R. Let

f_ρ(x, y, z) = max(|x|, |y|, |z − 2ρy|^{1/2}).
Then for q = (x, y, z),
d_H(p, q) = (((x − ρ)^2 + y^2)^2 + (z − z_0 − 2ρy)^2)^{1/4} ≈ f_ρ(x − ρ, y, z − z_0).
Define the Euclidean rectangles
A = [−2r_1, 2r_1] × [−2r_1, 2r_1] × [−2r_2^2, 2r_2^2], A′ = (ρ, 0, z_0) + A,
where + means Euclidean translation, and let B(a) = {q; f ρ (q) ≤ a}. Since R ⊂ A ′ ,
R_t(p) ≤ ∫_{A′} d_H(p, q)^{-t} dλ(q) ≲ ∫_{A′} f_ρ(x − ρ, y, z − z_0)^{-t} dλ(x, y, z) = ∫_A f_ρ(x, y, z)^{-t} dλ(x, y, z) = ∫_0^∞ λ({q ∈ A; f_ρ(q)^{-t} ≥ γ}) dγ (2) = ∫_0^∞ λ(A ∩ B(γ^{-1/t})) dγ ≈ ∫_0^∞ λ(A ∩ B(a)) a^{-(t+1)} da.
To estimate R t (p) it is useful to have an upper bound for λ (A ∩ B(a)).
The set B(a) is the intersection of the vertical cylinder [−a, a] × [−a, a] × R with the set of points having vertical Euclidean distance at most a 2 to the plane z = 2ρy. In particular, the projection of B(a) to the yz-plane is the intersection of the strips S 1 = {(y, z); −a ≤ y ≤ a}, S 2 = {(y, z); 2ρy − a 2 ≤ z ≤ 2ρy + a 2 }.
The projection of A to the yz-plane is the intersection of the strips S_3 = {(y, z); −2r_1 ≤ y ≤ 2r_1}, S_4 = {(y, z); −2r_2^2 ≤ z ≤ 2r_2^2}. It is used in the computations below that if u ≠ −1 and 0 ≤ v ≤ w ≤ ∞ then

(3) ∫_v^w a^u da ≲ max(v^{u+1}, w^{u+1}),

with the convention that 1/0 = ∞, and where the implicit constant depends on u but not on v, w.
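Estimate (3) is elementary; the exact antiderivative shows the implicit constant can be taken as 1/|u + 1|. A numerical spot check (ours):

```python
def power_integral(v, w, u):
    """Exact value of the integral of a^u over [v, w] for u != -1."""
    return (w ** (u + 1) - v ** (u + 1)) / (u + 1)

for u in (-3.0, -1.5, -0.5, 0.5, 2.0):
    for v, w in ((0.1, 0.2), (0.5, 3.0), (1.0, 10.0)):
        bound = max(v ** (u + 1), w ** (u + 1)) / abs(u + 1)
        value = power_integral(v, w, u)
        assert 0.0 <= value <= bound + 1e-12
print("estimate (3) verified on samples")
```

For u > −1 the maximum is attained at w and for u < −1 at v, which is exactly how (3) is applied below.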
Sublemma. It holds that λ(A ∩ B(a)) ≲ min(a^4, r_1^2 a^2, r_1^2 r_2^2).

Proof. The projection of A ∩ B(a) to the yz-plane is contained in each of the parallelograms S_1 ∩ S_2, S_3 ∩ S_4, S_2 ∩ S_3. The first two have area 4a^3 and 16 r_1 r_2^2, respectively, and the third has vertices (−2r_1, −4ρr_1 ± a^2) and (2r_1, 4ρr_1 ± a^2), and hence area 8 r_1 a^2. The extension of A in the x-direction is 4r_1 and the extension of B(a) in the x-direction is 2a. Thus

λ(A ∩ B(a)) ≲ min(a^3, r_1 r_2^2, r_1 a^2) · min(r_1, a) ≤ min(a^4, r_1^2 a^2, r_1^2 r_2^2).

Proof of Lemma 12 (continued). The case r_1 ≤ r_2. Let t ∈ (0, 4) \ {2}. Using (2), the sublemma and (3),

R_t(p) ≲ ∫_0^{r_1} a^{3−t} da + r_1^2 ∫_{r_1}^{r_2} a^{1−t} da + r_1^2 r_2^2 ∫_{r_2}^∞ a^{-(t+1)} da ≲ r_1^2 r_2^{2−t} if t ∈ (0, 2), and R_t(p) ≲ r_1^{4−t} if t ∈ (2, 4),

and hence I_t(R) ≲ λ(R) · sup_p R_t(p) ≈ r_1^2 r_2^2 · sup_p R_t(p), which gives the stated bounds in the case r_1 ≤ r_2.
Sublemma. It holds that λ(A ∩ B(a)) ≲ min(a^4, r_2^2 a^3/ρ, r_1 r_2^2 a, r_1^2 r_2^2).
For t ∈ (1, 4) \ {3}, the first part of the integral ∫_0^{r_1} g_t(ρ) ρ dρ is again estimated using (4).

Proof of Theorem 1

It is clear that dim_H E_r(ω) ≤ dim_H H = 4. Let t ∈ (0, 4) \ {1, 2, 3}. For every ω and n_0,

E_r(ω) ⊂ ⋃_{n=n_0}^∞ R(ω_n, r_n),

and by Lemma 11, H^t_∞(R(ω_n, r_n)) ≲ Φ_t(r_n). Thus for every n_0,

H^t_∞(E_r(ω)) ≲ Σ_{n=n_0}^∞ Φ_t(r_n).

It follows that if Σ_n Φ_t(r_n) < ∞ then H^t_∞(E_r(ω)) = 0, so that dim_H E_r(ω) ≤ t. By Lemma 12,

Cap_t(R(0, r_n)) ≥ λ(R(0, r_n))^2 / I_t(R(0, r_n)) ≳ Φ_t(r_n).

Thus if Σ_n Φ_t(r_n) = ∞ then Theorem 2, with λ replaced by λ(W)^{-1} λ, implies that dim_H E_r(ω) ≥ t for almost every ω.
Proof. The projection of A ∩ B(a) to the yz-plane is contained in each of the parallelograms S_1 ∩ S_2, S_3 ∩ S_4, S_2 ∩ S_4. The first two have area 4a^3 and 16 r_1 r_2^2, respectively, and the third has vertices ((−2r_2^2 ± a^2)/(2ρ), −2r_2^2) and ((2r_2^2 ± a^2)/(2ρ), 2r_2^2), and hence area 4 r_2^2 a^2/ρ. The extension of A in the x-direction is 4r_1 and the extension of B(a) in the x-direction is 2a. Thus

λ(A ∩ B(a)) ≲ min(a^3, r_2^2 a^2/ρ, r_1 r_2^2) · min(r_1, a) ≤ min(a^4, r_2^2 a^3/ρ, r_1 r_2^2 a, r_1^2 r_2^2).

Proof of Lemma 12 (continued). The case r_1 ≥ r_2. Again, R_t(p) can be estimated using (2), the sublemma and (3), but the estimate now depends on ρ. Let t ∈ (0, 4) \ {1, 3}. For t ∈ (1, 4) \ {3} and ρ ≥ (r_2^4/r_1)^{1/3}, the estimate can be made sharper. For such ρ, the interval [r_2^2/ρ, √(ρ r_1)] is non-empty, and when a lies in this interval the minimum in the expression given by the sublemma is achieved by the second option. Denoting the options in the maximum as above, the equality * follows using that the expressions in parentheses are greater than or equal to 1. Let g_t(ρ) = sup_{z_0} R_t(p) where p = (ρ, 0, z_0). For t ∈ (0, 1), the estimate (4) gives

I_t(R) ≲ r_2^2 ∫_0^{r_1} r_1^{2−t} r_2^2 ρ dρ ≈ r_1^{4−t} r_2^4.
References

[1] Zoltán M. Balogh, Reto Berger, Roberto Monti, and Jeremy T. Tyson. Exceptional sets for self-similar fractals in Carnot groups. Math. Proc. Cambridge Philos. Soc., 149(1):147-172, 2010.
[2] Zoltán M. Balogh, Estibalitz Durand-Cartagena, Katrin Fässler, Pertti Mattila, and Jeremy T. Tyson. The effect of projections on dimension in the Heisenberg group. Rev. Mat. Iberoam., 29(2):381-432, 2013.
[3] Zoltán M. Balogh, Katrin Fässler, Pertti Mattila, and Jeremy T. Tyson. Projection and slicing theorems in Heisenberg groups. Adv. Math., 231(2):569-604, 2012.
[4] Zoltán M. Balogh, Matthieu Rickly, and Francesco Serra Cassano. Comparison of Hausdorff measures with respect to the Euclidean and the Heisenberg metric. Publ. Mat., 47(1):237-259, 2003.
[5] Zoltán M. Balogh and Jeremy T. Tyson. Hausdorff dimensions of self-similar and self-affine fractals in the Heisenberg group. Proc. London Math. Soc. (3), 91(1):153-183, 2005.
[6] Zoltán M. Balogh, Jeremy T. Tyson, and Ben Warhurst. Sub-Riemannian vs. Euclidean dimension comparison and fractal geometry on Carnot groups. Adv. Math., 220(2):560-619, 2009.
[7] Zoltán M. Balogh, Jeremy T. Tyson, and Kevin Wildrick. Frequency of Sobolev dimension distortion of horizontal subgroups in Heisenberg groups. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 17(2):655-683, 2017.
[8] Abram Besicovitch. On the sum of digits of real numbers represented in the dyadic system. Ann. Math., 110(1):321-330, 1935.
[9] Emile Borel. Sur les séries de Taylor. Acta Math., 21(1):243-247, 1897.
[10] Francesco Cantelli. Sulla probabilitá come limite della frequenza. Atti Accad. Naz. Lincei, 26(1):39-45, 1917.
[11] Arnaud Durand. On randomly placed arcs on the circle. In Recent developments in fractals and related fields, Appl. Numer. Harmon. Anal., pages 343-351. Birkhäuser Boston Inc., 2010.
[12] H. G. Eggleston. The fractional dimension of a set defined by decimal properties. Quart. J. Math. Oxford, 20:31-36, 1949.
[13] Fredrik Ekström, Esa Järvenpää, Maarit Järvenpää, and Ville Suomala. Hausdorff dimension of limsup sets of random rectangles in products of regular spaces. Proc. Amer. Math. Soc., 146(6):2509-2521, 2018.
[14] Fredrik Ekström and Tomas Persson. Hausdorff dimension of random limsup sets. arXiv:1612.07110.
[15] Kenneth J. Falconer. Sets with large intersection properties. J. London Math. Soc. (2), 49(2):267-280, 1994.
[16] Ai-Hua Fan, Jörg Schmeling, and Serge Troubetzkoy. A multifractal mass transference principle for Gibbs measures with applications to dynamical Diophantine approximation. Proc. Lond. Math. Soc., 107(5):1173-1219, 2013.
[17] Ai-Hua Fan and Jun Wu. On the covering by small random intervals. Ann. Inst. Henri Poincaré Probab. Stat., 40(1):125-131, 2004.
[18] Katrin Fässler and Risto Hovila. Improved Hausdorff dimension estimate for vertical projections in the Heisenberg group. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 15(1):459-483, 2016.
[19] De-Jun Feng, Esa Järvenpää, Maarit Järvenpää, and Ville Suomala. Dimensions of random covering sets in Riemann manifolds. Ann. Probab., 46(3):1542-1596, 2018.
[20] Risto Hovila. Transversality of isotropic projections, unrectifiability, and Heisenberg groups. Rev. Mat. Iberoam., 30(2):463-476, 2014.
[21] Vojtěch Jarník. Zur Theorie der diophantischen Approximationen. Monatsh. Math. Phys., 39(1):403-438, 1932.
[22] Esa Järvenpää, Maarit Järvenpää, Henna Koivusalo, Bing Li, and Ville Suomala. Hausdorff dimension of affine random covering sets in torus. Ann. Inst. Henri Poincaré Probab. Stat., 50(4):1371-1384, 2014.
[23] Esa Järvenpää, Maarit Järvenpää, Henna Koivusalo, Bing Li, Ville Suomala, and Yimin Xiao. Hitting probabilities of random covering sets in tori and metric spaces. Electron. J. Probab., 22(1):1-18, 2017.
[24] Aleksandr Khintchine. Zur metrischen Theorie der diophantischen Approximationen. Math. Z., 24(1):706-714, 1926.
[25] Tomas Persson. A note on random coverings of tori. Bull. Lond. Math. Soc., 47(1):7-12, 2015.
[26] Tomas Persson. Inhomogeneous potentials, Hausdorff dimension and shrinking targets. arXiv:1711.04468.
[27] Tomas Persson and Henry W. J. Reeve. A Frostman-type lemma for sets with large intersections, and an application to Diophantine approximation. Proc. Edinb. Math. Soc. (2), 58(2):521-542, 2015.
[28] Stéphane Seuret. Inhomogeneous coverings of topological Markov shifts. To appear in Math. Proc. Cambridge Philos. Soc., https://doi.org/10.1017/S0305004117000512.
| [] |
[
"The Giant Proto-Galaxy cB58; an Artifact of Gravitational Lensing ? Publication Manuscript",
"The Giant Proto-Galaxy cB58; an Artifact of Gravitational Lensing ? Publication Manuscript"
] | [
"L L R Williams \nInstitute of Astronomy\nMadingley RoadCB3 0HACambridge\n",
"G F Lewis \nInstitute of Astronomy\nMadingley RoadCB3 0HACambridge\n"
] | [
"Institute of Astronomy\nMadingley RoadCB3 0HACambridge",
"Institute of Astronomy\nMadingley RoadCB3 0HACambridge"
] | [
"Mon. Not. R. Astron. Soc"
] | The proto-galaxy, cB58, was discovered in the CNOC survey of cluster redshifts. Absorption features reveal that this system is at a redshift of z = 2:72, implying an absolute magnitude of M v 26, and a star-formation rate of 4700M yr 1 , making it the most \active" star-forming galaxy. This proto-galaxy is observed to lie close ( 6 00 ) to a central cluster galaxy at z = 0:373. The X-ray properties of the cluster suggests that its mass, and therefore its lensing potential, could be greater than that found using a virial analysis. In this Letter we argue that the phenomenal properties of this proto-galaxy are due to the gravitational lensing e ect of the foreground cluster, and the unlensed properties of the source are typical of high-redshift star-forming systems. | 10.1093/mnras/281.3.l35 | [
"https://arxiv.org/pdf/astro-ph/9605062v1.pdf"
] | 14,392,384 | astro-ph/9605062 | 7af7a9cc5ccaecebf06d59c9cebc6fdc9ef0b276 |
The Giant Proto-Galaxy cB58; an Artifact of Gravitational Lensing ? Publication Manuscript
1996. May 1996
L L R Williams
Institute of Astronomy
Madingley RoadCB3 0HACambridge
G F Lewis
Institute of Astronomy
Madingley RoadCB3 0HACambridge
The Giant Proto-Galaxy cB58; an Artifact of Gravitational Lensing ? Publication Manuscript
Mon. Not. R. Astron. Soc
000, 1996. May 1996. astro-ph/9605062. 13. Gravitational Lensing, Star-forming Galaxies
The proto-galaxy, cB58, was discovered in the CNOC survey of cluster redshifts. Absorption features reveal that this system is at a redshift of z = 2.72, implying an absolute magnitude of M_v ≈ −26 and a star-formation rate of 4700 M⊙ yr⁻¹, making it the most "active" star-forming galaxy. This proto-galaxy is observed to lie close (≈6″) to a central cluster galaxy at z = 0.373. The X-ray properties of the cluster suggest that its mass, and therefore its lensing potential, could be greater than that found using a virial analysis. In this Letter we argue that the phenomenal properties of this proto-galaxy are due to the gravitational lensing effect of the foreground cluster, and that the unlensed properties of the source are typical of high-redshift star-forming systems.
INTRODUCTION
Gravitational lensing changes our view of the distant universe, distorting and magnifying the images of high-redshift galaxies. Spectral studies of lensed arcs in galaxy clusters (Ebbels et al. 1996) and optical Einstein rings in isolated galaxies (Warren et al. 1996) shed light on the physical processes underway in young, star-forming systems. Recently, the brightest IRAS source, F10214+4724 (Rowan-Robinson et al. 1991), was found to be a lensed system with several components (Broadhurst and Lehár 1995). When the effects of lensing magnification are removed, it is found that the source galaxy in this system has a luminosity typical of other IRAS sources. Yee et al. (1996), henceforth referred to as YEBCC, recently announced the serendipitous discovery of a high-redshift (z = 2.72) galaxy, designated cB58, in the Canadian Network of Observational Cosmology (CNOC) survey of cluster redshifts. This galaxy lies within 6″ of the centre of a low-redshift cluster (MS 1512+36, z = 0.373), is extended (2″ × 3″), and is extremely luminous with M_v ≈ −26, with an inferred star-formation rate (SFR) of 4700 M⊙ yr⁻¹ (h75 = 1, q0 = 0.1). The non-extinction-corrected value of the SFR is 400 M⊙ yr⁻¹, which, when compared to spectroscopic studies which find a mean SFR of ≈9.3 h75⁻² M⊙ yr⁻¹ (q0 = 0.1) (Steidel et al. 1996), suggests that cB58 is the most "active" star-forming system yet discovered.‡ (* Email: [email protected]; † Email: [email protected])
In this letter we argue that the phenomenal properties of this system are an artifact of gravitational lensing, induced by the low-redshift cluster through which cB58 shines. Utilizing a simple model to describe the mass distribution in the foreground cluster, we show that substantial magnification of a small elliptical source can be produced, giving the observed image configuration. Taking the effect of this lensing magnification into account, it is seen that the resultant SFR is consistent with that measured in spectroscopic surveys.
MOTIVATION
YEBCC investigated the possible effects of gravitational lensing on cB58. They concluded that, although this system is very near to the centre of a foreground cluster, the cluster itself is rather poor (Abell class 0), and that the regular morphology and low axis ratio of cB58 indicated that it receives little lensing magnification.
There are two main reasons to believe that cB58 is substantially magnified due to the gravitational lensing action of the foreground cluster. (‡ In their analysis, YEBCC applied an LMC-type extinction correction to their spectra before calculating the SFR. As other estimates of the SFR in high-redshift systems do not include such a correction (e.g. Steidel et al. 1996), we use only the non-extinction-corrected SFR to provide a fair comparison.) The first has to do with cB58 itself. In particular, it is surprising that no similarly luminous galaxy has been found in other imaging/spectroscopic surveys to date. YEBCC estimate the probability of detecting such a galaxy in the CNOC survey, and conclude that it is not unlikely to observe such a proto-galaxy given a total area of 1 square degree. However, if cB58 is not lensed, there is no a priori reason to find such a galaxy close to a foreground cluster. The combined area of radius 6″ around all 16 CNOC clusters amounts to 1809 square arcseconds. Each cluster field measures about 1000″ on a side, therefore the probability of finding the galaxy close to a cluster core is only 1.1 × 10⁻⁴.
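The areal probability quoted above is simple arithmetic and can be checked directly. A short sketch (Python, using only the survey numbers stated in the text):

```python
import math

n_clusters = 16        # CNOC clusters searched
r_sep = 6.0            # arcsec: radius around each cluster core
field_side = 1000.0    # arcsec: approximate side of one CNOC cluster field

core_area = n_clusters * math.pi * r_sep**2   # combined area within 6" of all 16 cores
total_area = n_clusters * field_side**2       # combined survey area

p = core_area / total_area
print(f"{core_area:.0f} arcsec^2, P = {p:.2e}")  # ~1810 arcsec^2 (the text quotes 1809), P ~ 1.1e-4
```

The small difference from the quoted 1809 arcsec² is rounding.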
The second reason concerns the cluster. The cluster, MS1512+36, was identified in the Einstein Observatory Extended Medium Sensitivity Survey (EMSS) (Gioia et al. 1990). It is associated with a 3.8 mJy radio source, is X-ray luminous with L_x = 4.81 × 10⁴⁴ erg s⁻¹, and strong [OII] emission indicates the presence of a cooling flow (Stocke et al. 1991; Donahue et al. 1992; Gioia and Luppino 1994). This cluster was re-observed as part of the CNOC survey, and Carlberg et al. (1996) conclude that the cluster velocity dispersion is σ = 690 km s⁻¹. One-sigma error bars on this measurement are ±100 km s⁻¹, consistent with uncertainties resulting from sampling, experimental, and projection errors (Danese et al. 1980). With this velocity dispersion, the mass within the virial radius of r_v = 2.22 Mpc is M(< 2.2 Mpc) = 7.3 × 10¹⁴ M⊙ (h75 = 1). The X-ray properties of the cluster may indicate a larger mass, since MS1512+36's bolometric X-ray luminosity (≈7.8 × 10⁴⁴ erg s⁻¹) implies a velocity dispersion of ≈900 km s⁻¹ (Edge and Stewart 1991). We suggest, therefore, that the cluster mass is possibly underestimated. In the next section we shall investigate whether a larger, but realistic, cluster mass can result in a substantial magnification, with no multiple images or distortion in the observed image of cB58.
METHOD
Central regions of galaxy clusters can be modelled with a single "cluster-scale" potential (Kneib et al. 1995). We model the cluster as a circularly symmetric isothermal sphere with a core radius, r_c, and asymptotic line-of-sight velocity dispersion, σ_k, such that the surface mass density at radius r is given by

Σ(r) = Σ₀ [1 + (r/r_c)²/2] / [1 + (r/r_c)²]^(3/2),   (1)

[see Schneider, Ehlers and Falco (1992), p. 244]. The central surface mass density, Σ₀, velocity dispersion, and core radius of the cluster are related by Σ₀ r_c = σ_k²/G, so there are only 2 free parameters in our cluster model. We take these to be Σ₀ and σ_k. We first consider a circular image, and lens it back into the source plane, using the standard lensing equation (Schneider et al. 1992). The radius of our hypothetical image is 2″, and it is located 6″ from the central cluster galaxy. The observed redshifts of the cluster and cB58 are z_l = 0.37 and z_s = 2.72 respectively, and we assume a standard Ω = 0.2 cosmology, with a Hubble constant of h75 = 1. With this, the position, shape, size, and magnification of the unlensed source can be determined.
The distortion of the source is calculated from the second moments, Q_ij, of its light distribution. In this case, we use the outer isophote only, which is quite sufficient for our purposes since, as we will see shortly, the distortions are rather small. The 2-component distortion is then

e₁ = (Q₁₁ − Q₂₂)/(Q₁₁ + Q₂₂),  e₂ = 2Q₁₂/(Q₁₁ + Q₂₂),  e = √(e₁² + e₂²),   (3)

the same as the distortion measures used in studies of weak lensing (Kaiser and Squires 1993).§ The quadrupole moments are calculated with respect to the center of the source. Note that for small distortions one can either calculate the ellipticity of the source whose image is circular, or the ellipticity of the image where the unlensed source is circular. The change in ellipticity, Δe = √((e₁,s − e₁,i)² + (e₂,s − e₂,i)²), is nearly the same in both cases. Here, the subscripts i and s stand for image and source.
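A minimal sketch (Python/NumPy) of the distortion measure of eq. (3): the quadrupole moments Q_ij of a light distribution give (e1, e2). The elliptical Gaussian test image and grid below are made up purely for illustration:

```python
import numpy as np

def ellipticity(img, x, y):
    """(e1, e2) from second moments of a light distribution, as in eq. (3)."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    w = img / img.sum()
    xc, yc = (w * X).sum(), (w * Y).sum()          # centroid
    q11 = (w * (X - xc) ** 2).sum()
    q22 = (w * (Y - yc) ** 2).sum()
    q12 = (w * (X - xc) * (Y - yc)).sum()
    return (q11 - q22) / (q11 + q22), 2.0 * q12 / (q11 + q22)

# Illustrative source: elliptical Gaussian with axis ratio 1.6, as for the source in Fig. 2
x = np.linspace(-3.0, 3.0, 201)
a, b = 1.0, 1.0 / 1.6
X, Y = np.meshgrid(x, x, indexing="ij")
img = np.exp(-0.5 * ((X / a) ** 2 + (Y / b) ** 2))

e1, e2 = ellipticity(img, x, x)
# for a Gaussian of axis ratio 1.6, e1 = (a^2 - b^2)/(a^2 + b^2) ~ 0.43 here;
# e2 = 0 for a source aligned with the axes
```

Δe then compares the (e1, e2) of the lensed image with those of the unlensed source.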
The dependence of the image magnification and distortion on the cluster parameters, κ₀ and σ_k, is presented in Figure 1. Here, the central surface mass density of the cluster is given in terms of the critical surface mass density for lensing, κ₀ = Σ₀/Σ_crit, where

Σ_crit = (c²/4πG) · D_os/(D_ol D_ls),   (4)
§ Note that the ellipticity parameter e is not the same as the …

Figure 2 shows an unlensed elliptical source, and the corresponding image, for κ₀ = 0.9 and σ_k = 1000 km s⁻¹. This velocity dispersion represents a 3σ upward deviation from the Carlberg et al. (1996) estimate, but is consistent with the cluster's X-ray bolometric luminosity. The magnification, averaged over the surface of the image, is μ = 38.5. The change in the image ellipticity compared to the unlensed source is Δe = 0.35, consistent with the observed image being elliptical. The cluster has a core radius of 92 kpc or 22.2 arcsec, and a total mass, within the "virial radius" of 2.22 Mpc, of 1.62 × 10¹⁵ M⊙. This mass is only a factor of 2 larger than the dynamical mass estimate derived by Carlberg et al. (1996) for MS1512+36. The surprisingly undisturbed morphology of the image is due to the fact that the image is well within the cluster core radius. The value of the core radius is, however, quite typical for galaxy clusters (see Fort and Mellier 1993). If cB58 is significantly magnified, the source must be correspondingly fainter, and located closer to the lens, as illustrated in Figure 2. Fainter galaxies are more numerous, but the probability of finding a galaxy very close to the lens is small. If a galaxy is magnified μ times, the unlensed sources are μ^(2.5 s) times more numerous, where s = d log N(m)/dm is the logarithmic slope of the galaxy luminosity function.
The impact parameter of a source galaxy magnified by μ is √μ times smaller compared to that of the image. Since s is quite steep brightward of L*, it is significantly more likely to find a fainter galaxy closer to the cD than a galaxy such as cB58 at 6″ separation. Since the probability of finding an unlensed source depends on the magnification, the lines of constant probability would be 'parallel' to the lines of constant μ in Figure 1.
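For the axisymmetric profile of eq. (1), direct integration gives the mean convergence inside radius x = r/r_c as κ̄(x) = κ₀/√(1 + x²), and the standard axisymmetric-lens eigenvalues (e.g. Schneider et al. 1992) give a point-image magnification μ = 1/[(1 − κ̄)(1 + κ̄ − 2κ)]. A short sketch (Python) with the Figure 2 parameters then recovers a magnification of the quoted size; note that 1 − κ̄ > 0 everywhere for κ₀ < 1, so the model is sub-critical and produces no multiple images:

```python
import math

kappa0 = 0.9     # central convergence Sigma_0 / Sigma_crit (sub-critical)
r_c = 22.2       # core radius in arcsec (92 kpc)
r_img = 6.0      # image position in arcsec

x = r_img / r_c
kappa = kappa0 * (1.0 + 0.5 * x**2) / (1.0 + x**2) ** 1.5  # eq. (1) in units of Sigma_crit
kappa_bar = kappa0 / math.sqrt(1.0 + x**2)                 # mean convergence inside x (analytic)

lam_t = 1.0 - kappa_bar                 # tangential eigenvalue
lam_r = 1.0 + kappa_bar - 2.0 * kappa   # radial eigenvalue
mu = 1.0 / (lam_t * lam_r)
print(f"mu = {mu:.1f}")  # ~40 at the image position; the quoted 38.5 is averaged over the finite image
```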
IMPLICATIONS
Considering the model presented in the previous section, and accounting for the lensing-induced magnification of a factor of 38.5, the galaxy cB58 would have an intrinsic brightness of M_v ≈ −22. The SFR, which is calculated from the flux at 1500 Å, is also to be corrected by this factor, and the intrinsic value of the non-extinction-corrected SFR is ≈10 M⊙ yr⁻¹ (h75 = 1, q0 = 0.1), which is similar to the 4–28 h75⁻² M⊙ yr⁻¹ (q0 = 0.1) found from spectroscopic studies of populations of galaxies at z > 3, and the value of 9 h75⁻² M⊙ yr⁻¹ (q0 = 0.1) from the studies of lensed arcs in galaxy clusters (Ebbels et al. 1996).
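Removing the magnification is a one-line correction: fluxes divide by μ, and magnitudes dim by 2.5 log₁₀ μ. A quick check (Python) of the numbers quoted above:

```python
import math

mu = 38.5          # adopted lensing magnification
M_obs = -26.0      # inferred absolute magnitude of cB58
sfr_obs = 400.0    # non-extinction-corrected SFR, in solar masses per year

M_intrinsic = M_obs + 2.5 * math.log10(mu)  # dimmer by ~4 mag, giving ~ -22
sfr_intrinsic = sfr_obs / mu                # ~10 solar masses per year
print(M_intrinsic, sfr_intrinsic)
```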
Similarly, taking account of the lensing magnification, the source would have intrinsic dimensions of 0″.5 × 0″.3. This value is consistent with the study of Giavalisco, Steidel & Macchetto (1996), who recently imaged several z > 3 star-forming galaxies, selected from Lyman-break studies. These were found to have half-light diameters of < 0″.7.
The sub-critical nature of the lens implies that there will be no counter-images of cB58 in the field of this cluster. If the cluster were super-critical, to produce such multiple images the source of cB58 would have to lie within, or straddle, the caustic network. This would produce more extreme distortions of the observed galaxy. The simple model presented in this Letter remains sub-critical for sources out to very high redshift, implying that no giant arcs can be formed in this cluster.
The magnified, undistorted nature of the image in this system provides an ideal opportunity for obtaining high signal-to-noise spectra of high-redshift galaxies. These will prove valuable in the study of population synthesis in young, star-forming systems.
CONCLUSIONS AND DISCUSSION
In this Letter we have discussed the hypothesis that the extraordinary properties of the star-forming galaxy cB58, discovered by YEBCC, are a consequence of gravitational lensing by the cluster MS1512+36. We demonstrated that a simple, circularly symmetric, sub-critical mass model can magnify a small, elliptical source by a large factor without significant distortion of the lensed image. In fact, the lack of observable distortion by itself provides no upper limit on the magnification. Our modelling is preliminary and non-unique, i.e. we do not argue for any particular value of magnification. However, for concreteness, we consider one possible case: if the unlensed source were 0″.5 × 0″.3, and had an SFR of 10 M⊙ yr⁻¹, the gravitational lensing effect reproduces the observed properties of cB58, namely, its size, undistorted elliptical nature, and apparently large SFR. In this case, the cluster mass is about twice that derived by Carlberg et al. (1996). Utilizing the cluster velocity dispersion and total mass as determined by Carlberg et al. (1996), cB58 is only weakly lensed, by at most 2 magnitudes. We note that it is possible, from other considerations, that Carlberg et al. have underestimated the cluster mass.
Since our modelling procedure is degenerate, an independent method to ascertain the lensing properties of this cluster is needed. ROSAT and ASCA data of the cluster have been acquired, although cluster mass estimates from these data have not yet been published. Also, a study of the weak lensing of background galaxies will provide a handle on the form of the lensing potential of this cluster (cf. Kaiser and Squires 1993).
To date, much work has been done on cluster lenses with observations of multiple images of background sources and extended arc systems. All of the clusters showing these features are super-critical, and have caustic structure (Schneider et al. 1992). (One can have multiple images without exceeding Σ_crit anywhere in the cluster; however, very large shear is required in such cases.) We speculate that there should be an equally large number of clusters with central surface density just below critical. Such clusters would significantly magnify background sources, but would not be readily recognized as important lenses because of the lack of arcs. We suggest that the best way to find such clusters is to look at the number counts of the faint background galaxies and compare them to galaxy counts in the field, as was proposed by Broadhurst (1996).
Zwicky, more than half a century ago, suggested that extragalactic nebulae could act as natural telescopes (Zwicky 1937). In the last decade, numerous examples of lensed systems have been observed, distorted and warped by the strength of their gravitational lens. The image of cB58, seen through the cluster MS1512+36, provides our first magnified, yet undistorted, view of the high-redshift universe through a gravitational lens, realizing Zwicky's insight.
Figure 1. Curves of constant magnification μ (solid lines), ellipticity change Δe (dashed lines), and cluster core radii r_c (dotted lines), for a range of cluster central surface mass densities, κ₀, and line-of-sight velocity dispersions, σ_k. Images with ellipticity changes Δe < 0.6 appear undistorted. The dot at σ_k = 1000 km s⁻¹ and κ₀ = 0.9 gives the cluster parameters used in Figure 2.
Figure 2. An example of a source/image geometry with σ_k = 1000 km s⁻¹ and κ₀ = 0.9. The cluster core radius r_c is 92 kpc, while its mass is 1.6 × 10¹⁵ M⊙. Note that the isophotes are completely undistorted in spite of a rather large magnification μ = 38.5. The source is located 0″.8 from the central cD and has an orientation of 115°, a semi-major axis of 0″.24, and an axis ratio of 1.6. The resultant image is aligned eastward, has a semi-major axis of 1″.5, and an axis ratio of 1.5. Compare to Fig. 2 of YEBCC.

Here D_ij are the observer (o), lens (l) and source (s) angular diameter distances entering eq. (4). In Figure 1, the solid lines are curves of equal magnification, the dashed lines are for constant distortion of the source, and the dotted lines represent constant cluster core radii, r_c. It is important to realize that none of the parameters considered in Figure 1 will generate either multiple images or arcs of a background source. In spite of that, magnifications for a small source tend to be quite large for large velocity dispersions and surface mass densities (upper right corner of the plot). The corresponding distortions are also large; although even an ellipticity change of Δe = 0.5 represents a very small distortion in the shape of an object, and would not appear like a "typical" sheared lensed image. In fact, no region of Figure 1, with the possible exception of the very top left corner (Δe > 0.6), can be ruled out based on the observed undistorted nature of cB58. To demonstrate the effect of lensing on an elliptical source, we pick a specific set of cluster parameters from Figure 1.
ACKNOWLEDGMENTS

We would like to thank Max Pettini for discussions about star-formation rates, Alastair Edge for enlightenment on X-ray studies of galaxy clusters, and Roberto Abraham for coffee and chats about central cluster galaxies. LLRW would like to thank Richard Ellis for bringing this intriguing object to her attention, and for several useful conversations on the subject; and Howard Yee for providing her with some information about cB58 and the cluster, prior to publication.
Broadhurst, T. J. and Lehár, J., 1995, Astrophys. J., 450, L41.
Broadhurst, T. J., 1995, Astrophys. J., In Press.
Carlberg, R. G., Yee, H. K. C., Ellingson, E., Abraham, R. G., Gravel, P., Morris, S., and Pritchet, C. J., 1996, Astrophys. J., In Press.
Danese, L., De Zotti, G., and di Tullio, G., 1980, Astr. Astrophys., 82, 322.
Donahue, M., Stocke, J. T. and Gioia, I. M., 1992, Astrophys. J., 385, 49.
Ebbels, T. et al., 1996, Preprint.
Edge, A. C. and Stewart, G. C., 1991, Mon. Not. R. astr. Soc., 252, 428.
Fort, B. and Mellier, Y., 1993, Astr. Astrophys. Rev., 5, 239.
Giavalisco, M., Steidel, C. C. and Macchetto, D. F., 1996, Preprint.
Gioia, I. M. and Luppino, G. A., 1994, Astrophys. J. Supp., 94, 583.
Gioia, I. M., Maccacaro, T., Schild, R. E., Wolter, A., Stocke, J. T., Morris, S. L. and Henry, J. P., 1990, Astrophys. J. Supp., 72, 567.
Kaiser, N. and Squires, G., 1993, Astrophys. J., 404, 441.
Kneib, J.-P., Mellier, Y., Pello, R., Miralda-Escudé, J., LeBorge, J.-P., Böhringer, H. and Picat, J.-P., 1995, Astron. and Astrophys., 303, 27.
Rowan-Robinson, M. et al., 1991, Nature, 351, 719.
Schneider, P., Ehlers, J. and Falco, E. E., 1992, Gravitational Lenses, Springer-Verlag Press.
Steidel, C. C., Giavalisco, M., Pettini, M., Dickinson, M. and Adelberger, K. L., 1996, Preprint.
Stocke, J. T., Morris, S. L., Gioia, I. M., Maccacaro, T., Schild, R., Wolter, A., Fleming, T. A. and Henry, J. P., 1991, Astrophys. J. Supp., 76, 813.
Warren, S. J., Hewett, P. C., Lewis, G. F., Iovino, A., Moller, P. and Shaver, P. A., 1996, Mon. Not. R. astr. Soc., 278, 139.
Yee, H. K. C., Ellingson, E., Bechtold, J., Carlberg, R. G. and Cuillandre, J. C., 1996, Astron. J., In Press. [YEBCC]
Zwicky, F., 1937, Phys. Rev., 51, 290.
| [] |
[
"Topological Weyl semimetals in Bi 1−x Sb x alloys",
"Topological Weyl semimetals in Bi 1−x Sb x alloys"
] | [
"Yu-Hsin Su \nMax Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany\n",
"Wujun Shi \nMax Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany\n\nSchool of Physical Science and Technology\nShanghaiTech University\n200031ShanghaiChina\n",
"Claudia Felser \nMax Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany\n",
"Yan Sun \nMax Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany\n"
] | [
"Max Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany",
"Max Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany",
"School of Physical Science and Technology\nShanghaiTech University\n200031ShanghaiChina",
"Max Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany",
"Max Planck Institute for Chemical Physics of Solids\nD-01187DresdenGermany"
] | [] | We have investigated the Weyl semimetal (WSM) phases in bismuth antimony (Bi1−xSbx) alloys through the combination of atomic composition and arrangement. Via first-principles calculations, we have found two WSM states, at Sb concentrations of x = 0.5 and x = 0.83, for specific inversion-symmetry-broken elemental arrangements. The Weyl points are close to the Fermi level in both of these WSM states. There is therefore a good opportunity to obtain Weyl points in Bi-Sb alloys. The WSM phase provides a reasonable explanation for the recent transport study of Bi-Sb alloy reporting a violation of Ohm's law [Dongwoo Shin et al., Nature Materials 16, 1096 (2017)]. This work shows that the topological phases in Bi-Sb alloys depend on both the elemental composition and its specific arrangement. | 10.1103/physrevb.97.155431 | [
"https://arxiv.org/pdf/1802.00288v1.pdf"
] | 119,415,272 | 1802.00288 | afb23267a8fcd1d527a0b7090e494e83d0227f53 |
Topological Weyl semimetals in Bi 1−x Sb x alloys
Yu-Hsin Su
Max Planck Institute for Chemical Physics of Solids
D-01187DresdenGermany
Wujun Shi
Max Planck Institute for Chemical Physics of Solids
D-01187DresdenGermany
School of Physical Science and Technology
ShanghaiTech University
200031ShanghaiChina
Claudia Felser
Max Planck Institute for Chemical Physics of Solids
D-01187DresdenGermany
Yan Sun
Max Planck Institute for Chemical Physics of Solids
D-01187DresdenGermany
Topological Weyl semimetals in Bi 1−x Sb x alloys
We have investigated the Weyl semimetal (WSM) phases in bismuth antimony (Bi1−xSbx) alloys through the combination of atomic composition and arrangement. Via first-principles calculations, we have found two WSM states, at Sb concentrations of x = 0.5 and x = 0.83, for specific inversion-symmetry-broken elemental arrangements. The Weyl points are close to the Fermi level in both of these WSM states. There is therefore a good opportunity to obtain Weyl points in Bi-Sb alloys. The WSM phase provides a reasonable explanation for the recent transport study of Bi-Sb alloy reporting a violation of Ohm's law [Dongwoo Shin et al., Nature Materials 16, 1096 (2017)]. This work shows that the topological phases in Bi-Sb alloys depend on both the elemental composition and its specific arrangement.
I. INTRODUCTION
Weyl semimetals (WSMs) are topological metallic states in which valence and conduction bands touch linearly in three-dimensional momentum space at Weyl points 1,2. These Weyl points are doubly degenerate and behave as monopoles of Berry curvature with positive and negative chiralities, resulting in non-zero topological charges. Similar to topological insulators (TIs), WSMs also have topologically protected surface states.
Owing to the non-zero Berry flux between a pair of Weyl points with opposite chirality, the surface states present as non-closed Fermi arcs connecting the pair of Weyl points; these have been observed in several WSMs via angle-resolved photoemission spectroscopy (ARPES) and scanning tunneling microscopy (STM) 3-13.
Besides Fermi arc surface states, WSMs also host exotic transport properties in the bulk, such as the chiral anomaly effect [14][15][16][17], large magnetoresistance (MR) [14][15][16][18][19][20][21], strong spin and anomalous Hall effects [22][23][24][25][26], the gravitational anomaly effect 27, and even a special catalytic effect 28.
Several WSMs have already been predicted, and some of them have been experimentally verified by the observation of Fermi arcs on the surface 3-13 and of negative MR in bulk transport [14][15][16][17]. So far, most studies of inversion-symmetry-broken WSMs have focused on space groups without an inversion center. Besides intrinsic space groups without inversion symmetry, alloying is another efficient way to break inversion symmetry. Owing to the alloy-ability of many semimetals, this offers a good opportunity to achieve WSMs in a large number of materials. That motivates us to look back at well-known topological semimetals and semiconductors based on alloys, a typical example of which is the Z2 TI Bi1−xSbx 29-31.
Bi1−xSbx was the first experimentally discovered 3D Z2 TI 30. Due to the similar lattice parameters of Bi and Sb, Bi1−xSbx alloys can be formed experimentally and their composition can be tuned artificially. Upon substitution of Bi by Sb, a non-smooth band inversion occurs, along with a topological phase transition at a Sb concentration of x ≈ 0.04 30. The non-trivial band order persists over a large range of Sb concentrations beyond the phase transition 32. Since there is no global band gap for some Sb concentrations, Bi1−xSbx presents as a topological semimetal with an inverted band gap 29. Previous theoretical studies of Bi1−xSbx were mainly based on the virtual crystal approximation, which keeps the inversion symmetry of pure Bi and Sb 32,33. However, owing to the chemical difference between the elements Bi and Sb, alloying them should break inversion symmetry. In combination with the inverted band order and the semimetallic character, an inversion-symmetry-broken arrangement in the alloy offers a good chance of realizing a WSM.
Recently, the chiral-anomaly-induced negative MR and a violation of Ohm's law were observed in transport measurements on Bi-Sb alloys, showing the emergence of a WSM phase 34,35. However, a full investigation of how the atomic arrangement in Bi-Sb alloys influences their topological properties is still lacking. In the present work, we systematically study the effect of composition and atomic arrangement on the topological nature of Bi-Sb alloys and provide a detailed classification of the resulting topological phases, based on quantitative first-principles calculations.
II. METHODS
The density functional theory (DFT)-based first-principles calculations were performed with the projector augmented wave (PAW) method as implemented in the Vienna Ab-initio Simulation Package (VASP) 36. The exchange-correlation energy was treated in the generalized gradient approximation (GGA), following the Perdew-Burke-Ernzerhof (PBE) 37 parametrization scheme. Van der Waals interactions were also taken into account owing to the layered lattice structure. In order to analyze the topological properties, we projected the Bloch wavefunctions onto maximally localized Wannier functions (MLWFs) 38.
The tight-binding model Hamiltonian parameters are determined from the MLWFs.

Both Bi and Sb have the rhombohedral A7 crystal structure with space group R-3m, as indicated in Fig. 1(a). Consistent with previous reports, Bi and Sb have similar electronic band structures due to their similar chemical properties, see Fig. 1(c-d). However, owing to their different strengths of spin-orbit coupling (SOC), they have different band orders and therefore different Z2 topological invariants, ν0 = 0 and 1, respectively 32,33, see Fig. 1(e-f). Based on the hexagonal lattice, we have investigated all atomic arrangements for the element compositions x = 0.17, 0.33, 0.5, 0.67, and 0.83. In our calculations, the band inversion between antisymmetric (L_a) and symmetric (L_s) orbitals occurs at the concentration x = 0.5, which agrees well with previous tight-binding analysis. Above that, the system presents as a topological insulator or semimetal; WSM states were found in one arrangement each for the compositions x = 0.5 and x = 0.83. For convenience, we take the WSM phase at the composition x = 0.5 as the example to introduce the electronic structure. Though its formation energy is slightly higher (∼3 meV per formula unit) than that of the ground-state arrangement, the energy difference is close to the accuracy limit of DFT itself. Therefore, it is reasonable to expect this atomic arrangement in experiments. Owing to the absence of inversion symmetry, the spin degeneracy is split for all bands, see Fig. 2(c), which also provides the possibility of Weyl points. The electronic band structure along high-symmetry lines exhibits a direct band gap. To check the topological phase, we calculated the Wannier center evolutions in the high-symmetry planes k_z = 0 and k_z = π, respectively. As shown in Fig.
2(d), the Wannier center curves calculated on the k_z = 0 plane show a discontinuity, while on the k_z = π plane the curves are always continuously connected. This indeed provides evidence of a topologically non-trivial phase in Bi0.5Sb0.5. Though there is a general gap along the high-symmetry lines, the density of states (DOS) is not zero at the Fermi level, implying a topological semimetal whose bands cut the Fermi level along some lower-symmetry directions. Indeed, we found one pair of linear band crossings away from the high-symmetry points, at (−0.031822, 0.592594, 0.245628) (1/Å) and (0.031808, 0.592380, 0.245608) (1/Å), as shown in Fig. 3. This pair of linear crossing points are Weyl cones behaving as the sink and source of Berry curvature. Considering the C3 rotation and time-reversal symmetry, there are 6 pairs of Weyl points in total. Since the Weyl points lie only around 30 meV below the Fermi level, Weyl-point-dominated phenomena should be easy to detect by both surface techniques and bulk transport measurements.
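The statement that the Weyl cones act as sink and source of Berry curvature can be made quantitative: the chiral charge of a Weyl point is the Berry flux through any small closed surface enclosing it, which for a two-band model H(k) = d(k)·σ equals the winding number of the unit vector d̂ over that surface. The sketch below (Python/NumPy) uses a hypothetical anisotropic Weyl Hamiltonian as a stand-in (not the Wannier model of the Bi-Sb alloy) to illustrate that a pair of opposite nodes carries charges ±1:

```python
import numpy as np

def chirality(d_of_k, k0=np.zeros(3), radius=1e-2, n=24):
    """Chiral charge of a two-band Weyl node H = d(k).sigma located at k0.

    Computed as the winding number of the unit vector d over a small sphere
    around k0, evaluated as a sum of signed solid angles over an oriented
    triangulation (Van Oosterom-Strackee formula)."""
    th = np.linspace(0.0, np.pi, n + 1)
    ph = np.linspace(0.0, 2.0 * np.pi, n + 1)
    T, P = np.meshgrid(th, ph, indexing="ij")
    k = k0 + radius * np.stack(
        [np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
    d = d_of_k(k)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)

    def solid_angle(a, b, c):
        num = np.einsum("...i,...i", a, np.cross(b, c))
        den = (1.0 + np.einsum("...i,...i", a, b)
                   + np.einsum("...i,...i", b, c)
                   + np.einsum("...i,...i", c, a))
        return 2.0 * np.arctan2(num, den)

    a, b = d[:-1, :-1], d[1:, :-1]
    c, e = d[1:, 1:], d[:-1, 1:]
    total = solid_angle(a, b, c).sum() + solid_angle(a, c, e).sum()
    return total / (4.0 * np.pi)

# Hypothetical anisotropic Weyl cones of opposite chirality
chi_p = chirality(lambda k: k * np.array([1.0, 2.0, 0.5]))
chi_m = chirality(lambda k: k * np.array([1.0, 2.0, -0.5]))
print(chi_p, chi_m)  # +1 and -1 up to floating-point error
```

Because the summed solid angle of a closed triangulated surface around the origin is exactly 4π times its winding number, even a coarse mesh returns the integer charge.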
An important characteristic of a WSM is the presence of surface Fermi arc states [1]. Considering the easily cleavable bonding, we have chosen the (001) surface for our analysis. Figures 4(a) and (b) illustrate the local DOS along two particular paths: one passing through the high-symmetry lines K-Γ-M, and the other crossing a pair of Weyl points with opposite chirality. Along K-Γ-M one can see that, besides the surface Dirac cone at the Γ point, there is another surface band on the Γ-M line connecting the bulk conduction and valence states. Since a pair of Weyl points with opposite chirality is located symmetrically on the two sides of the Γ-M line, these additional surface bands should be Fermi-arc-related states. To check this, we plot the surface energy dispersion along a k-path crossing one pair of Weyl points and perpendicular to Γ-M. As shown in Fig. 4(b), a surface band merges into the bulk at the Weyl points, which is the typical feature of Fermi-arc-related states. Fixing the energy at the Weyl points, one easily finds the Fermi arc starting from one Weyl point and ending at the other, see Fig. 4(c). Moreover, the Weyl points appear as linear touchings of the projected bulk states; hence Bi0.5Sb0.5 is a type-II WSM. From the energy dispersions along Γ-M and across one pair of Weyl points in Fig. 4(a-b), the Fermi-arc-related states span a large energy window from −0.1 to 0.2 eV, so the Fermi arcs can also be observed at the Fermi level, see Fig. 4(d). The WSM state in Bi0.5Sb0.5 is therefore further confirmed by the surface Fermi arcs.
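The surface local DOS shown in Fig. 4 is typically obtained from an iterative surface Green's function scheme of the kind cited in the computational details (Lopez Sancho et al.). Below is a minimal sketch of that iteration; it is tested on a hypothetical semi-infinite single-orbital chain, not on the actual Bi-Sb tight-binding Hamiltonian. For such a chain with hopping t the exact surface DOS at the band center is 1/(πt):

```python
import numpy as np

def surface_gf(E, H00, H01, eta=1e-3, n_iter=50):
    """Surface Green's function of a semi-infinite stack by the
    iterative decimation scheme of Lopez Sancho et al.

    H00: principal-layer Hamiltonian block, H01: inter-layer coupling.
    Each iteration effectively doubles the decimated chain length.
    """
    z = (E + 1j * eta) * np.eye(len(H00))
    eps_s = H00.copy()          # renormalized surface block
    eps = H00.copy()            # renormalized bulk block
    alpha = H01.copy()
    beta = H01.conj().T.copy()
    for _ in range(n_iter):
        g = np.linalg.inv(z - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha = alpha @ g @ alpha
        beta = beta @ g @ beta
    return np.linalg.inv(z - eps_s)

# Toy check: semi-infinite single-orbital chain with hopping t = 1.
H00 = np.zeros((1, 1), dtype=complex)
H01 = np.array([[1.0]], dtype=complex)
dos0 = -np.imag(surface_gf(0.0, H00, H01))[0, 0] / np.pi
print(dos0)   # close to 1/pi ~ 0.318
```

The local DOS at a given (k, E) point of a surface band structure plot is obtained the same way, with H00 and H01 the k-dependent principal-layer blocks.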
Besides Bi0.5Sb0.5, the composition x = 0.83 also shows a WSM state, with Weyl points lying at (−0.00436, 0.721258, 0.275386) (1/Å) and 0.1 eV below the Fermi energy. Based on the hexagonal cell there are, in total, five different compositions obtainable by atom substitution in the alloy, and two of them can host a WSM phase. There is therefore a large possibility of achieving Weyl points close to the Fermi level in Bi-Sb alloys, and our DFT calculations are in good agreement with the unusual transport properties observed in experiments [34,35].
IV. CONCLUSION
In conclusion, we investigated the influence of atomic composition and arrangement on the topology of the Bi1−xSbx alloy via ab initio calculations. As the Sb concentration increases, a topological phase transition occurs in Bi1−xSbx. Interestingly, the atomic arrangement at a given composition strongly affects the topological character, in contrast to previous studies that emphasized only the effect of composition. As a result, two WSM states were obtained, at x = 0.5 and x = 0.83, for specific atomic arrangements lacking inversion symmetry.
Our results are helpful for understanding the unusual transport properties recently observed in experiments. This work reveals the importance of combining elemental composition with the specific atomic arrangement for a comprehensive understanding of topological phases in Bi-Sb alloys.
FIG. 1. (a) The hexagonal conventional unit cell of Bi and Sb, which consists of a layered structure. The grey solid lines denote the intralayer bonding. (b) The Brillouin zone of the hexagonal unit cell and the k-point path, containing the high-symmetry points, along which the band structure is calculated. (c) and (d) The corresponding bulk energy bands. (e, f) The evaluated Wannier centers on the planes kz = 0 and kz = π, respectively.
FIG. 2. (a) The fully relaxed crystal structure of Bi0.5Sb0.5 with the specific arrangement of Bi and Sb atoms. (b) The 3D BZ of the hexagonal unit cell, showing the locations of the Weyl points with positive chirality (blue) and negative chirality (green), and the projected 2D BZ of the (001) surface. (c) Bulk energy band structure of this Bi0.5Sb0.5 arrangement. (d) The evaluated Wannier centers on the planes kz = 0 and kz = π, respectively.
Fig. 2(a) shows the lattice structure of Bi0.5Sb0.5 with the specific atomic arrangement, where Bi layers and Sb layers stack alternately along the c direction.
FIG. 3. (a) The energy band passing through the two Weyl points, where EF and Ewp denote the Fermi energy and the energy of the Weyl points, respectively. The bands colored red are the highest occupied and the lowest unoccupied bands. (b) The energy band structure around the two Weyl points A and B on the plane kz = 0.4620 (1/Å). (c) The Berry curvature calculated around the two Weyl points A and B, with normalization.
FIG. 4. (a) The local DOS of the (001) surface calculated along a path passing through the high-symmetry points K-Γ-M, where EF denotes the Fermi energy. (b) The local DOS of the (001) surface calculated along a path passing through the two Weyl points. (c) The surface DOS on the (001) plane at the energy of the Weyl points, Ewp; Weyl points with positive (negative) chirality are denoted by blue (green) circles, and the grey dashed line represents the first BZ of the surface. The inset is an enlarged region around a pair of Weyl points. (d) The surface DOS on the (001) plane at the Fermi energy.
* [email protected]
1 X. G. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011).
2 G. E. Volovik, The Universe in a Helium Droplet (Clarendon Press, Oxford, 2003).
3 S.-Y. Xu, I. Belopolski, N. Alidoust, M. Neupane, G. Bian, C. Zhang, R. Sankar, G. Chang, Y. Zhujun, C.-C. Lee, S.-M. Huang, H. Zheng, J. Ma, D. S. Sanchez, B. Wang, A. Bansil, F. Chou, P. P. Shibayev, H. Lin, S. Jia, and M. Z. Hasan, Science 349, 613 (2015).
4 B. Q. Lv, H. M. Weng, B. B. Fu, X. P. Wang, H. Miao, J. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen, Z. Fang, X. Dai, T. Qian, and H. Ding, Phys. Rev. X 5, 031013 (2015).
5 L. X. Yang, Z. K. Liu, Y. Sun, H. Peng, H. F. Yang, T. Zhang, B. Zhou, Y. Zhang, Y. F. Guo, M. Rahn, D. Prabhakaran, Z. Hussain, S. K. Mo, C. Felser, B. Yan, and Y. L. Chen, Nat. Phys. 11, 728 (2015).
6 Z. K. Liu, L. X. Yang, Y. Sun, T. Zhang, H. Peng, H. F. Yang, C. Chen, Y. Zhang, Y. F. Guo, D. Prabhakaran, M. Schmidt, Z. Hussain, S.-K. Mo, C. Felser, B. Yan, and Y. L. Chen, Nat. Mater. 15, 27 (2016).
7 S.-Y. Xu, N. Alidoust, I. Belopolski, Z. Yuan, G. Bian, T.-R. Chang, H. Zheng, V. N. Strocov, D. S. Sanchez, G. Chang, C. Zhang, D. Mou, Y. Wu, L. Huang, C.-C. Lee, S.-M. Huang, B. Wang, A. Bansil, H.-T. Jeng, T. Neupert, A. Kaminski, H. Lin, S. J. Jia, and M. Z. Hasan, Nat. Phys. 11, 748 (2015).
8 I. Belopolski, S.-Y. Xu, D. S. Sanchez, G. Chang, C. Guo, M. Neupane, H. Zheng, C.-C. Lee, S.-M. Huang, G. Bian, N. Alidoust, T.-R. Chang, B. Wang, X. Zhang, A. Bansil, H.-T. Jeng, H. Lin, S. Jia, and M. Z. Hasan, Phys. Rev. Lett. 116, 066802 (2016).
9 N. Xu, H. M. Weng, B. Q. Lv, C. E. Matt, J. Park, F. Bisti, V. N. Strocov, D. Gawryluk, E. Pomjakushina, K. Conder, N. C. Plumb, M. Radovic, G. Autès, O. V. Yazyev, Z. Fang, X. Dai, T. Qian, J. Mesot, H. Ding, and M. Shi, Nat. Commun. 7, 11006 (2016).
10 S. Souma, Z. Wang, H. Kotaka, T. Sato, K. Nakayama, Y. Tanaka, H. Kimizuka, T. Takahashi, K. Yamauchi, T. Oguchi, K. Segawa, and Y. Ando, Phys. Rev. B 93, 161112 (2016).
11 H. Inoue, A. Gyenis, Z. Wang, J. Li, S. W. Oh, S. Jiang, N. Ni, B. A. Bernevig, and A. Yazdani, Science 351, 1184 (2016).
12 R. Batabyal, N. Morali, N. Avraham, Y. Sun, M. Schmidt, C. Felser, A. Stern, B. Yan, and H. Beidenkopf, Sci. Adv. 2, e1600709 (2016).
13 H. Zheng, S.-Y. Xu, G. Bian, C. Guo, G. Chang, D. S. Sanchez, I. Belopolski, C.-C. Lee, S.-M. Huang, X. Zhang, R. Sankar, N. Alidoust, T.-R. Chang, F. Wu, T. Neupert, F. Chou, H.-T. Jeng, N. Yao, A. Bansil, S. Jia, H. Lin, and M. Z. Hasan, ACS Nano 10, 1378 (2016).
14 X. Huang, L. Zhao, Y. Long, P. Wang, D. Chen, Z. Yang, H. Liang, M. Xue, H. Weng, Z. Fang, X. Dai, and G. Chen, Phys. Rev. X 5, 031023 (2015).
15 C.-L. Zhang, S.-Y. Xu, I. Belopolski, Z. Yuan, Z. Lin, B. Tong, G. Bian, N. Alidoust, C.-C. Lee, S.-M. Huang, T.-R. Chang, G. Chang, C.-H. Hsu, H.-T. Jeng, M. Neupane, D. S. Sanchez, H. Zheng, J. Wang, H. Lin, C. Zhang, H.-Z. Lu, S.-Q. Shen, T. Neupert, M. Zahid Hasan, and S. Jia, Nat. Commun. 7, 10735 (2016).
16 Z. Wang, Y. Zheng, Z. Shen, Y. Zhou, X. Yang, Y. Li, C. Feng, and Z.-A. Xu, arXiv:1506.00924 (2015).
17 A. C. Niemann, J. Gooth, S.-C. Wu, S. Bäßler, P. Sergelius, R. Hühne, B. Rellinghaus, C. Shekhar, V. Suß, M. Schmidt, C. Felser, B. Yan, and K. Nielsch, Sci. Rep. 7, 43394 (2017).
18 C. Shekhar, A. K. Nayak, Y. Sun, M. Schmidt, M. Nicklas, I. Leermakers, U. Zeitler, Y. Skourski, J. Wosnitza, Z. Liu, Y. Chen, W. Schnelle, H. Borrmann, Y. Grin, C. Felser, and B. Yan, Nat. Phys. 11, 645 (2015).
19 N. J. Ghimire, Y. Luo, M. Neupane, D. J. Williams, E. D. Bauer, and F. Ronning, J. Phys.: Condens. Matter 27, 152201 (2015).
20 Y. Luo, N. J. Ghimire, M. Wartenbe, H. Choi, M. Neupane, R. D. McDonald, E. D. Bauer, J. Zhu, J. D. Thompson, and F. Ronning, Phys. Rev. B 92, 205134 (2015).
21 P. J. W. Moll, A. C. Potter, N. L. Nair, B. J. Ramshaw, K. A. Modic, S. Riggs, B. Zeng, N. J. Ghimire, E. D. Bauer, R. Kealhofer, F. Ronning, and J. G. Analytis, Nat. Commun. 7, 12492 (2016).
22 A. A. Burkov and L. Balents, Phys. Rev. Lett. 107, 127205 (2011).
23 G. Xu, H. Weng, Z. Wang, X. Dai, and Z. Fang, Phys. Rev. Lett. 107, 186806 (2011).
24 Y. Sun, Y. Zhang, C. Felser, and B. Yan, Phys. Rev. Lett. 117, 146403 (2016).
25 E. Liu, Y. Sun, L. Muechler, A. Sun, L. Jiao, J. Kroder, V. S, H. Borrmann, W. Wang, W. Schnelle, S. Wirth, S. T. B. Goennenwein, and C. Felser, arXiv:1712.06722 (2017).
26 W. Shi, L. Muechler, K. Manna, K. K. Zhang, Yang, R. Car, J. v. d. Brink, C. Felser, and Y. S. Sun, arXiv:1801.03273 (2018).
27 J. Gooth, A. C. Niemann, T. Meng, A. G. Grushin, K. Landsteiner, B. Gotsmann, F. Menges, M. Schmidt, C. Shekhar, V. Sueß, R. Huehne, B. Rellinghaus, C. Felser, B. Yan, and K. Nielsch, arXiv:1703.10682 (2017).
28 C. R. Rajamathi, U. Gupta, N. Kumar, H. Yang, Y. Sun, V. Suß, C. Shekhar, M. Schmidt, H. Blumtritt, P. Werner, B. Yan, S. Parkin, C. Felser, and C. N. R. Rao, Adv. Mater. 54, 1606202 (2017).
29 L. Fu and C. L. Kane, Phys. Rev. B 76, 045302 (2007).
30 D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Science 302, 92 (2003).
31 M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
32 J. C. Y. Teo, L. Fu, and C. L. Kane, Phys. Rev. B 78, 045426 (2008).
33 H.-J. Zhang, C.-X. Liu, X.-L. Qi, X.-Y. Deng, X. Dai, S.-C. Zhang, and Z. Fang, Phys. Rev. B 80, 085307 (2009).
34 D. Shin, Y. Lee, M. Sasaki, Y. H. Jeong, F. Weickert, J. B. Betts, H.-J. Kim, K.-S. Kim, and J. Kim, Nat. Mater. 16, 1096 (2017).
35 H.-J. Kim, K.-S. Kim, J.-F. Wang, M. Sasaki, N. Satoh, A. Ohnishi, M. Kitaura, M. Yang, and L. Li, Phys. Rev. Lett. 111, 246603 (2013).
36 G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
37 J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
38 A. A. Mostofi, J. R. Yates, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun. 178, 685 (2008).
39 M. P. L. Sancho, J. M. L. Sancho, and J. Rubio, J. Phys. F: Met. Phys. 14 (1984).
40 M. P. L. Sancho, J. M. L. Sancho, and J. Rubio, J. Phys. F: Met. Phys. 15 (1985).
Interaction of in-plane magnetic skyrmions with 90° magnetic domain walls: micromagnetic simulations

Pavel Baláž
FZU - Institute of Physics, Czech Academy of Sciences, Na Slovance 1999/2, 182 21 Prague 8, Czech Republic
(Dated: May 17, 2022)

DOI: 10.1103/PhysRevApplied.17.044031, arXiv:2205.06876
90° pinned magnetic domain walls can be observed in thin magnetic layers attached to a ferroelectric substrate. The main stabilization mechanism of the noncollinear magnetic texture is the strain transfer, which is responsible for imprinting of the ferroelectric domains into the uniaxial anisotropy of the ferromagnet. Here, we investigate by means of micromagnetic simulations how the interfacial Dzyaloshinskii-Moriya interaction influences the 90° domain wall structure. It is shown that the Dzyaloshinskii-Moriya interaction induces a large out-of-plane magnetization component, strongly dependent on the domain wall type. In particular, it is shown that this out-of-plane magnetization component is crucial for the transport of in-plane magnetic skyrmions, also known as bimerons, through the magnetic domain walls. Based on the results of micromagnetic simulations, a concept of an in-plane magnetic skyrmion valve based on two 90° pinned magnetic domain walls is introduced.
I. INTRODUCTION
Coupling of ferroelectric and ferromagnetic degrees of freedom is the key ingredient of novel multiferroic structures, which are intensively studied for their considerable potential for future applications in spintronics [1]. It has been demonstrated that when a thin ferromagnetic layer is attached to a ferroelectric one, the ferroelectric domain pattern can imprint into the structure of the uniaxial anisotropy in the ferromagnet. As a result, the magnetic anisotropy axis abruptly rotates by 90° at the anisotropy boundary, giving rise to 90° magnetic domain walls (DWs) strongly pinned to the position of the anisotropy boundary [2]. The physical mechanisms recognized behind this effect are the exchange coupling between the canted magnetic moment of the ferroelectric layer and the adjacent ferromagnetic one [3,4] and the strain transfer from the ferroelectric domains to a magnetostrictive film [5-8]. Depending on the in-plane magnetic texture, two types of 90° magnetic DWs can be distinguished in ferroelectric/ferromagnet bilayers: uncharged and charged. Both of them are strictly in-plane magnetic textures. In the center of an uncharged 90° magnetic DW, the magnetization is perpendicular to the DW. In contrast, the magnetization in the center of a charged 90° DW is aligned with the DW. While the first type is stabilized mainly by the interplay of the magnetic exchange interaction with the uniaxial anisotropy, the latter is usually also influenced by the magnetostatic dipolar interaction [2].
The technical merit of the 90° magnetic DWs in multiferroic heterostructures consists in a vast array of related physical mechanisms that can be deployed in future applications. Firstly, it has been demonstrated that the anisotropy boundaries in the ferromagnet dynamically follow changes in the ferroelectric domain structure [9]. This effect allows one to position the ferromagnetic DWs purely via the electric field. Secondly, the possibility of utilizing a 90° pinned DW as a highly efficient source of spin waves in a narrow range of wavelengths has been shown by means of micromagnetic simulations [10]. Moreover, a theoretical analysis has shown that the static and dynamic properties of the pinned 90° DWs can be tuned via an applied magnetic field [11]. Alternatively, short-wavelength spin waves can be emitted in multiferroic heterostructures with modulated magnetic anisotropy using an applied magnetic field [12]. Finally, it has recently been reported that individual magnetic domains in multiferroic composite bilayers can be manipulated via a laser pulse, with laser-induced changes of the magnetoelastic anisotropy leading to precessional switching [13].
In this paper, we extend the range of phenomena discussed in anisotropy-modulated multiferroic multilayers by examining the effect of the interfacial Dzyaloshinskii-Moriya interaction (DMI) [14,15] on the 90° pinned DWs by means of micromagnetic simulations. DMI can be observed not only in bulk crystals of low symmetry [14] but also in layered materials where inversion symmetry is broken at the interfaces [16-20]. It has been shown that DMI significantly affects the dynamics of 180° magnetic DWs by increasing their Walker field and velocity [21,22]. DMI also has a noticeable impact on the critical current density for DW depinning by spin transfer torque and on the DW resonance frequency [23]. Here, we show that interfacial DMI can substantially modify the structure of the 90° DWs by inducing an out-of-plane magnetization component. While this effect introduces only a minor variation of the magnetization texture of the uncharged DWs, it becomes conspicuous in the case of the charged DWs. Importantly, the direction of the out-of-plane magnetization induced by DMI depends on the sense of the in-plane magnetization rotation. Thus the out-of-plane magnetization components are opposite for the head-to-head and tail-to-tail charged DWs.
Recently, numerical simulations by Moon et al. have shown the possibility of in-plane skyrmions existing in thin magnetic layers with in-plane easy-axis magnetic anisotropy and interfacial DMI [24]. Similar topological magnetic structures in systems with in-plane magnetic anisotropy have been reported for various materials and models [25-28]. The existence of in-plane skyrmions opens new possibilities to study their interactions with other planar magnetic textures, such as in-plane magnetic DWs. Another advantage of in-plane skyrmions is that, unlike skyrmions in systems with perpendicular magnetic anisotropy, in-plane skyrmions of both topological charges (Q = ±1) can simultaneously exist in the same magnetic domain. Importantly, due to the skyrmion Hall effect [29,30], the current-induced trajectories of skyrmions of opposite charges are bent in opposite directions [24]. Although these skyrmions are in-plane textures, their core magnetization is perpendicular to the layer. We show here that the DMI-induced out-of-plane magnetization in the charged DWs can significantly influence the interaction of in-plane skyrmions with the DWs. Namely, using micromagnetic simulations we demonstrate that charged pinned 90° magnetic DWs can serve as topological-charge-selective skyrmion filters. Following the results of the micromagnetic simulations, we introduce a simple concept of an in-plane skyrmion valve based on two 90° pinned magnetic DWs.
The paper is organized as follows. In Sec. II we describe the studied system of 90° magnetic DWs in a layer with a modulated easy axis and present our results of micromagnetic simulations. Section III analyzes in-plane skyrmions and their dynamics in our system. Subsequently, in Section IV we describe the mutual interaction of in-plane magnetic skyrmions with the 90° magnetic DWs. In Sec. V we describe the concept of the in-plane skyrmion valve and discuss its functioning in practice. Finally, we conclude in Sec. VI. Additional material can be found in the Appendix.
II. 90 • MAGNETIC DOMAIN WALLS IN THE PRESENCE OF DMI
First, we shall analyze how DMI influences the 90° magnetic DWs pinned to the anisotropy boundaries. To this end, we employ micromagnetic simulations implemented in the MuMax3 framework [31]. We simulated a thin magnetic layer of thickness d = 1 nm, shown in Fig. 1. The lateral dimensions of the layer were Lx = 1024 Δx and Ly = 512 Δy, where Δx = Δy = 2.5 nm are the sizes of the discretization cell along the x and y axes, respectively. The discretization cell size along the z axis is equal to the layer thickness, Δz = d = 1 nm.
In the layer plane we defined two anisotropy boundaries, where the easy axis abruptly changes its direction by 90°. The anisotropy boundaries are located at the positions x = xL and x = xR, shown in Fig. 1. The distance between the anisotropy boundaries was set to |xR − xL| = 1 µm.
In the simulations we assumed periodic boundary conditions along the x and y axes. Therefore, the studied system is translationally symmetric along the y axis. The easy axis changes along the x direction as follows:

    ê_u = (1/√2, 1/√2, 0) for x ∈ (xL, xR),
    ê_u = (1/√2, −1/√2, 0) otherwise.  (1)
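For orientation, the piecewise easy-axis map of Eq. (1) can be written down directly on the discretization grid described above. The sketch below is illustrative only: the boundary positions xL and xR are hypothetical choices inside the simulated window (only their 1 µm separation is taken from the text), not values from the actual MuMax3 script:

```python
import numpy as np

nx = 1024
dx = 2.5e-9                       # cell size along x (m)
x = (np.arange(nx) + 0.5) * dx    # cell-center coordinates

# Hypothetical boundary positions, 1 um apart as in the text
xL = 0.78e-6
xR = xL + 1.0e-6

# Easy-axis unit vector per cell, Eq. (1)
eu = np.tile((1 / np.sqrt(2), -1 / np.sqrt(2), 0.0), (nx, 1))  # outer domains
inner = (x > xL) & (x < xR)
eu[inner] = (1 / np.sqrt(2), 1 / np.sqrt(2), 0.0)              # inner domain
```

The two axes differ by exactly 90°, so their dot product vanishes across each boundary.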
The uniaxial magnetic anisotropy parameter, Ku, is the same in the whole layer. Moreover, in the simulations we used the following parameters: exchange stiffness Aex = 1.3 × 10⁻¹¹ J/m and saturation magnetization Ms = 7.20 × 10⁵ A/m [24], corresponding to the range of parameters of cobalt- and iron-based magnetic thin films [32-35]. Furthermore, we assumed interfacial DMI, as described in Refs. [31] and [36], given by the effective field term
    H_DM = (2D / (µ0 Ms)) ( ∂mz/∂x , ∂mz/∂y , −∂mx/∂x − ∂my/∂y ),  (2)
where µ 0 is the vacuum permeability, m = (m x , m y , m z ) is a unit vector in the direction of local magnetization, M = M s m, and D is the strength of the DMI. In our simulations with nonzero DMI, we assumed D = 2 mJ/m 2 [24,37]. are shown by the arrows, while the color scale expresses the out-of-plane, m z , component. Fig. 1(a) depicts the uncharged DWs. Charged DWs can be seen in Fig. 1(b). In both cases we can see out-of-plane magnetization localized at the DWs. In case of the uncharged DWs the out-of-plane magnetization is rather small and changes sign when passing the anisotropy boundary. On the other hand, out-of-plane magnetization of the charged DW is about 10 times larger than the one of the uncharged DW. Moreover, we can see that the out-of-plane magnetization of the tail-to-tail DW, located at x L , has a sign opposite to the one of the head-to-head DW, at x R .
To understand the effect of the DMI, let us compare the domain wall profiles plotted in Fig. 2, showing the magnetization components at the anisotropy boundaries calculated without and with DMI. The magnetization components are plotted as functions of the position x (at constant y) with respect to the anisotropy boundary position, xc; xc = xL for the left DW and xc = xR for the right DW. Magnetization components calculated without and with DMI are plotted by the dashed and solid lines, respectively. Note that the local magnetization vector is normalized to 1. The in-plane magnetization components of the uncharged and charged DWs are shown in Fig. 2(a)-(d). The left column shows the magnetization components of the uncharged DWs, while the right one plots those of the charged DWs. First, the uncharged DWs appear almost unaffected by the DMI. The charged DW, however, is significantly affected by the DMI field, which reduces the DW width. Similar trends can be observed in the effect of DMI on the out-of-plane magnetization components plotted in Fig. 2(e) and Fig. 2(f). In the case of D = 0, the uncharged as well as the charged DWs are strictly planar structures with zero out-of-plane magnetization [11]. Thus, the out-of-plane magnetization observed in Fig. 1 is induced purely by the DMI. This effect is understandable, since the Dzyaloshinskii-Moriya field, H_DM, has a nonzero z component, which is proportional to ∂mx/∂x. Although DMI induces just a minor out-of-plane magnetization component in the uncharged DWs, in the case of the charged DWs the effect is about 10 times stronger. The reason why the uncharged DWs are almost unaffected by the DMI is that they are defined mainly by a variation of the my component with only a modest change of mx. In the case of the charged DWs, on the other hand, the dominant magnetization component varying along the x direction is mx.
Additionally, the charged DWs also have a relatively large net magnetic moment, which responds to the Dzyaloshinskii-Moriya field by tilting in the z direction. The direction of the DMI-induced out-of-plane magnetization of the charged DWs follows from Eq. (2), since ∂mx/∂x has opposite signs for the tail-to-tail and head-to-head DWs. In contrast, the out-of-plane components of the right and left uncharged DWs are the same, since the mx component varies in the same way in both cases.
Our simulations confirm that a charged magnetic DW has a higher energy than an uncharged one. The difference in the energy densities of the two systems shown in Figs. 1(a) and (b) is Δε ∼ 124 mJ/m². As the magnetic anisotropy increases, the total energy of both systems decreases linearly, keeping approximately the same energy difference between the two systems. The main reason for the elevated energy of the charged 90° magnetic DW is its relatively large magnetostatic field.
In our micromagnetic simulations, the distance between the DWs was set to 1 µm. In order to rule out that the dipolar interaction between the two DWs influences the final magnetic configuration, we simulated a longer sample with Lx = 2048 Δx, with the other parameters unchanged, where the distance between the DWs was set to 2 µm. The final magnetic configurations matched those presented in Figs. 1 and 2. Thus, we can exclude any effect of dipolar coupling between the charged DWs on their magnetic configurations.

Let us now inspect how the charged DW structure depends on the amplitude of the uniaxial anisotropy, Ku. Fig. 3(a) shows how the DMI-induced out-of-plane magnetization changes as a function of Ku. Fig. 3(b) compares the DW width with (solid circles) and without (open circles) DMI. The DW width has been determined from the results of micromagnetic simulations as
[2]

δ = ∫_{−∞}^{∞} dx cos²(φ̃(x)),  (3)
where
φ̃(x) = [φ(x) − ½ ∆φ] (π/∆φ),  (4)
with φ(x) being the magnetization angle calculated from the easy axis of the central domain of the simulated sample. Moreover, ∆φ is the magnetization rotation angle, calculated as the difference of the magnetization angles in the left and right domains far from the domain wall,

∆φ = φ_R − φ_L.
First, the out-of-plane magnetization induced by the DMI increases monotonically with K_u. This can be explained simply by the shrinking of the DW width as K_u rises, shown in Fig. 3(b): as the DW width decreases, the derivative ∂m_x/∂x at the anisotropy boundary grows and, consequently, the z-component of the Dzyaloshinskii-Moriya field, H_DM, becomes larger. Second, Fig. 3(b) demonstrates that the DW width decreases with K_u both for zero and for nonzero D. For D = 0, the width of a charged 90° DW can be approximated as δ ≈ π√(A/(2K_u)). Introducing the DMI leads to a significant reduction of the DW width, which is caused by the out-of-plane magnetization component.
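As a rough numerical cross-check (a sketch with the stated values, not the simulation code), the zero-DMI estimate δ ≈ π√(A/(2K_u)) gives about 80 nm for the parameters used here, and the width definition of Eqs. (3)-(4) can be evaluated for an assumed tanh-shaped 90° rotation profile:

```python
import numpy as np

A_ex, K_u = 1.3e-11, 1.0e4   # exchange stiffness (J/m) and anisotropy (J/m^3)

# Zero-DMI width estimate for a charged 90-degree wall
delta_est = np.pi * np.sqrt(A_ex / (2 * K_u))
print(delta_est)             # ~8.0e-8 m, i.e. about 80 nm

# Eqs. (3)-(4) evaluated for an assumed tanh-shaped 90-degree rotation
w = 20e-9                    # model wall-width parameter (an assumption)
x = np.linspace(-15 * w, 15 * w, 6001)
dphi = np.pi / 2
phi = 0.5 * dphi * (1 + np.tanh(x / w))          # rotates from 0 to dphi
phi_tilde = (phi - 0.5 * dphi) * np.pi / dphi    # Eq. (4)
dx = x[1] - x[0]
delta_num = np.sum(np.cos(phi_tilde) ** 2) * dx  # Eq. (3), rectangle rule
print(delta_num)             # ~1.2 w for this model profile
```

The definition (3) is insensitive to the far tails because cos²(φ̃) decays quickly away from the wall center, so the finite integration window suffices.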
B. Effect of magnetic field
Next, we analyze how the out-of-plane magnetization induced by the DMI depends on the applied magnetic field. When a magnetic field H_app = H_app ê_y with H_app > 0 is applied along the y direction, i.e., along the DW, the equilibrium magnetization rotation angle decreases [11], as shown on the right vertical axis of Fig. 4. Figure 4 plots the field dependence of the out-of-plane magnetization component of a charged DW at the anisotropy boundary. The decreasing rotation angle results in a reduction of the out-of-plane magnetization at the anisotropy boundary. The effect is the same for head-to-head and tail-to-tail DWs. As ∆φ vanishes, m_z(x_c) also tends to zero.
C. Effect of the spin transfer torque

A 90° magnetic DW can be excited by an electric current flowing in the magnetic layer [10]. It has also been shown by means of numerical simulations that an electric current induces a small out-of-plane magnetization in the vicinity of the anisotropy boundary due to the spin transfer torque [11]. In order to compare the current-induced and DMI-induced out-of-plane magnetization components, we carried out a number of simulations with DMI and with an electric current applied in the direction perpendicular to the DW. The effect of the spin transfer torque is simulated using the Zhang-Li term [38,39] as implemented in MuMax3 [31], taking into account the Gilbert damping parameter α = 0.3 and the spin-torque nonadiabaticity β = 0.6. The results are shown in Fig. 5, which compares the out-of-plane magnetization profiles of uncharged and charged DWs with and without an electric current flowing along the x direction. When the current density, I, was nonzero, we set I = ±10¹² A/m², which is a typical value used for current-induced DW dynamics. In all cases we assumed D = 2 mJ/m² and K_u = 10⁴ J/m³.
The spin transfer torque modifies the out-of-plane magnetization of both the uncharged and the charged DW. This change is, however, about one order of magnitude smaller than the one induced by the DMI. In the case of the charged DW, the out-of-plane magnetization component increases for I > 0 and decreases for the opposite current direction. Nevertheless, in the studied system the electric current has only a modest effect on the magnetization profile.
III. IN-PLANE SKYRMIONS
Recent numerical analysis of thin magnetic layers with DMI and uniaxial in-plane anisotropy has revealed the possibility of in-plane skyrmion existence [24]. A stable in-plane skyrmion can be obtained by rotating an out-of-plane magnetic texture containing a skyrmion into the easy-axis direction. After relaxing the magnetic configuration, one obtains an in-plane skyrmion with a bean-like shape and an out-of-plane core magnetization. In contrast to the out-of-plane skyrmions, in-plane skyrmions of both skyrmion numbers Q = ±1 can coexist in one magnetic layer. Figure 6 shows in-plane skyrmions of both topological charges obtained using micromagnetic simulations with DMI and K_u = 10⁴ J/m³. For higher resolution, we used discretization cells of size ∆x = ∆y = ∆z = 1 nm. Figures 6(a)-(f) plot the magnetization components of the in-plane skyrmions. In agreement with Fig. 1, we plot the skyrmions in a rotated coordinate system (x′, y′, z′) so that the easy axis and the outer magnetization are aligned with the horizontal axis, x′. The left column depicts the components of the in-plane skyrmion with topological charge Q = 1, while the right one plots the magnetization of the in-plane skyrmion with Q = −1. In contrast to skyrmions in systems with perpendicular magnetic anisotropy, in-plane skyrmions are not spherically symmetric; they show an axial symmetry with respect to the easy axis. Moreover, their orientation is linked with their topological charge. The in-plane skyrmions can be seen as a combination of a vortex and an antivortex [24], known also as bimerons. In the case of the skyrmion with Q = 1, the skyrmion core is formed by a vortex, while in the opposite case an antivortex takes the role of the skyrmion core. In Figs. 6(g)-(h) we map the topological charge density, defined as
ρ(x, y) = (1/4π) m · (∂m/∂x × ∂m/∂y).  (5)
The values of ρ in Fig. 6 are normalized to the range [−1, 1]. The maximum of the topological charge density is located between the vortex and the antivortex of the skyrmion. Unlike for the out-of-plane skyrmions, the topological charge density of the in-plane skyrmion has a drop-like shape elongated towards the skyrmion core. The topological charge is then defined as Q = ∫ dx dy ρ(x, y).
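Equation (5) is straightforward to evaluate on a discretized magnetization field. The sketch below (using a synthetic skyrmion texture as an assumption, not the simulated bimeron) computes Q by summing the finite-difference version of Eq. (5); reversing the in-plane winding flips the sign of Q:

```python
import numpy as np

def topological_charge(m):
    """Discretized Eq. (5) summed over the grid; m has shape (Nx, Ny, 3)."""
    dmdx = np.gradient(m, axis=0)
    dmdy = np.gradient(m, axis=1)
    rho = np.einsum('ijk,ijk->ij', m, np.cross(dmdx, dmdy)) / (4 * np.pi)
    return rho.sum()

# Synthetic skyrmion texture (an assumption): core m_z = +1 at r = 0,
# m_z -> -1 far away, in-plane winding number +1 or -1.
N, R = 200, 20.0
i = np.arange(N) - (N - 1) / 2          # symmetric grid, unit spacing
X, Y = np.meshgrid(i, i, indexing='ij')
r, psi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2 * np.arctan2(r, R)            # polar angle profile of the texture

def texture(winding):
    return np.stack([np.sin(theta) * np.cos(winding * psi),
                     np.sin(theta) * np.sin(winding * psi),
                     np.cos(theta)], axis=-1)

Q_plus = topological_charge(texture(+1))
Q_minus = topological_charge(texture(-1))
print(Q_plus, Q_minus)   # magnitudes close to 1 (the far field is truncated by the grid)
```

Because Q is dimensionless and scale-invariant, the cell size drops out and unit grid spacing can be used in the finite differences.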
Let us now inspect the current-induced motion of the in-plane skyrmions in our system. We assume an in-plane skyrmion nucleated in the middle of the sample shown in Fig. 1(b). To induce the skyrmion motion, we assume an electric current flowing along the x-axis. The resulting spin transfer torque is modeled using the Zhang-Li torque [38,39] implemented in MuMax3 [31]. First, we study the skyrmion velocity in the central magnetic domain of the sample as a function of K_u, shown in Fig. 7. We analyze the skyrmion velocity for both topological charges separately. The velocities in Fig. 7 are plotted in units of u, defined as
u = µ_B P I / [2 e M_s (1 − β²)],  (6)
where µ_B is the Bohr magneton, P is the spin current polarization, I is the current density, and e is the electron charge. The values used in the calculation of the skyrmion velocity are P = 0.56, I = 10¹² A/m², and β = 0.6, which gives |u| = 35.17 m/s. Apparently, the skyrmion velocities decrease with increasing K_u. More importantly, however, we notice a strong difference between the velocities of skyrmions with opposite topological charge. In order to elucidate this difference, we study the skyrmion velocities in more detail. In the previous study by Moon et al. [24] it has been shown that the skyrmion velocity depends on the current direction with respect to the easy axis. Moreover, it has been shown that when the current flows along the easy axis, skyrmions of both topological charges move at the same velocity. This is, however, not our case, since the current is applied under an angle of −45° with respect to the easy axis. Thus we studied the velocities of skyrmions of both topological charges as a function of the current direction.
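Equation (6) with the quoted parameters can be checked directly (standard CODATA values for µ_B and e are assumed):

```python
mu_B = 9.2740100783e-24   # Bohr magneton (J/T)
e = 1.602176634e-19       # elementary charge (C)
P, I, beta = 0.56, 1e12, 0.6
Ms = 7.2e5                # saturation magnetization (A/m), value used in the simulations

# Eq. (6): characteristic spin-drift velocity of the Zhang-Li torque
u = mu_B * P * I / (2 * e * Ms * (1 - beta ** 2))
print(u)                  # ~35.17 m/s, matching the |u| quoted in the text
```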
To estimate the skyrmion velocity with respect to the current direction, an electric current of constant density I = 10¹² A/m² is applied under an angle Φ with respect to the x′-axis (see Fig. 1). In order to separate the skyrmion Hall effect [29,30], we split the velocity into two components: the longitudinal component, v_∥, which is measured along the current direction, and the transverse one, v_⊥, which is perpendicular to the current direction. Triangles in Fig. 8 show the (a) longitudinal and (b) transverse skyrmion velocities calculated using micromagnetic simulations for skyrmions of both topological charges, Q = ±1. We notice that the longitudinal skyrmion velocity oscillates with the angle Φ with a period of π. The skyrmion velocity changes within approximately 10% of the average velocity. Moreover, the velocities of skyrmions with different topological charges are shifted in phase by π/2. As a result, skyrmions of both topological charges move with the same longitudinal velocity when the electric current is oriented parallel or perpendicular to the easy axis, and the longitudinal velocities are equal in both of these current orientations. On the other hand, the perpendicular skyrmion velocity is slightly higher when the electric current flows parallel to the easy axis.
In agreement with the theory of the skyrmion Hall effect, the perpendicular velocities differ in sign for opposite topological charges. Importantly, for any angle Φ different from kπ/2 (k = 0, 1, 2, ...), the skyrmion velocities for opposite Q differ from each other. The difference is maximal when the electric current is applied under an angle of 45° with respect to the easy axis, which is the case studied in Fig. 7 as well as in the next section. The skyrmion velocities described above are fully consistent with the Thiele equation [40], studied also in Ref. [24]. More details on the calculation of the skyrmion velocities based on the Thiele equation can be found in Appendix A. The black lines in Fig. 8 are the corresponding skyrmion velocities obtained from Eqs. (A4). Since the Thiele equation slightly overestimates the skyrmion velocities, we shifted the calculated velocities by a value of −0.07 in Fig. 8(a) and by ±0.02 in Fig. 8(b). The overestimation of the skyrmion velocities by the Thiele equation can be explained by additional dissipation mechanisms related to the dynamic variation of the skyrmion shape, which cannot be captured by the rigid-shape approximation used to derive the Thiele equation. However, the oscillations of the skyrmion velocities and their amplitude are well captured by the Thiele equation.
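These statements can be verified by solving the Thiele system of Appendix A, Eqs. (A1), numerically. The sketch below uses illustrative parameter values (an assumption; D_xx ≠ D_yy mimics the non-circular skyrmion shape, not values fitted to the simulations) and confirms that v_∥ is π-periodic in Φ, that skyrmions of opposite Q share the same v_∥ along the easy axis but not at intermediate angles, and that v_⊥ changes sign with Q:

```python
import numpy as np

def thiele_velocity(phi, Q, u=1.0, alpha=0.3, beta=0.6, Dxx=15.0, Dyy=25.0):
    """Solve Eqs. (A1) for (v_par, v_perp); phi is measured from the easy (x) axis."""
    G = -4.0 * np.pi * Q
    ux, uy = u * np.cos(phi), u * np.sin(phi)
    # Eqs. (A1) rearranged into a linear system for (vx, vy)
    A = np.array([[alpha * Dxx, G], [G, -alpha * Dyy]])
    b = np.array([beta * Dxx * ux + G * uy, G * ux - beta * Dyy * uy])
    vx, vy = np.linalg.solve(A, b)
    return (vx * np.cos(phi) + vy * np.sin(phi),      # v_par, along the current
            -vx * np.sin(phi) + vy * np.cos(phi))     # v_perp, transverse

# Along the easy axis: equal v_par for both Q, opposite-sign v_perp
vp1, vt1 = thiele_velocity(0.0, +1)
vp2, vt2 = thiele_velocity(0.0, -1)
print(vp1 - vp2, vt1 + vt2)        # both ~0
# At 45 degrees the longitudinal velocities of opposite Q differ
print(thiele_velocity(np.pi / 4, +1)[0] - thiele_velocity(np.pi / 4, -1)[0])
```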
IV. INTERACTION OF SKYRMIONS WITH DOMAIN WALLS
Up to now, we have studied the current-induced dynamics of in-plane skyrmions remaining in the central magnetic domain. Let us now inspect how the skyrmion motion is affected by the pinned 90° magnetic DWs. Recently, the interaction of skyrmions with a DW has been studied in a magnetic thin film with perpendicular magnetic anisotropy [41]. It has been shown that a chiral DW can be considered as a guide for skyrmion transport, preventing the skyrmions from annihilation at the sample boundary. Although we have not observed such an effect in the studied system with 90° DWs, we show that 90° DWs can have a significant effect on the transport of the in-plane skyrmions. It has been shown that the in-plane skyrmions can be efficiently moved by means of the spin transfer torque or the spin-orbit torque [24]. In Sec. II C we have demonstrated that an electric current applied in the direction perpendicular to the anisotropy boundaries has only a minor effect on the DW structure. Thus we can use the spin transfer torque to study the interaction between the 90° DWs and the in-plane skyrmions. First, we inspect how an in-plane skyrmion passes an uncharged 90° magnetic DW. As shown in the previous section, the magnetization in the vicinity of an uncharged DW remains mainly in the layer's plane. Considering the sample shown in Fig. 1(a), we created an in-plane skyrmion in the center of the sample, assuming D = 2 mJ/m² and K_u = 10⁴ J/m³. We move the skyrmions by applying a constant electric current of density I = 10¹² A/m² in the direction perpendicular to the DWs. For I > 0, the skyrmions move towards the left DW.
In Figure 9, we show the motion of the in-plane skyrmions passing an uncharged magnetic DW. The colors in Fig. 9 show the z-component of the reduced magnetization vector in the color scale of Fig. 6. The anisotropy boundary is marked by the vertical dashed line, and the local in-plane magnetization direction is indicated by the thick grey arrows. Each image shows the skyrmion positions at different times with a time step of approximately 7 ns, and the directions of the moving skyrmions are given by the green arrows. Fig. 9 covers 30 ns of the skyrmion dynamics. Apparently, the skyrmion trajectories deviate from the horizontal direction due to the skyrmion Hall effect [29,30]. Skyrmions with opposite topological charge are deflected in opposite directions. As shown in Fig. 6, the in-plane skyrmions are strictly oriented according to the local magnetization direction. Thus, when a skyrmion passes the domain wall, it rotates to the magnetization direction of the neighboring domain. Importantly, the topological charge of the skyrmion remains unchanged, and therefore the direction of the skyrmion motion after passing the DW is also the same. For the opposite current direction, I < 0, the skyrmions move in the opposite direction, towards the right DW (not shown), and the deflection of the skyrmion trajectories is also reversed. Otherwise, the skyrmion transport through the right DW is analogous to the one described for the left DW.
The situation is more complex in the case of a charged DW. The summary of the skyrmion interaction with the charged 90° magnetic DWs is shown in Fig. 10, analogously to Fig. 9. The simulation parameters are the same as in Fig. 9. In the left column of Fig. 10, we show how the in-plane skyrmions with different topological charges interact with the left DW when I > 0. While the skyrmion with Q = −1 passes the DW, changing its orientation, the skyrmion with Q = 1 is destroyed when it hits the DW. For I < 0, the skyrmions move towards the right DW, where the situation reverses: the skyrmion with Q = 1 passes through the DW, whereas the skyrmion with Q = −1 is annihilated. We attribute this selective skyrmion annihilation at the localized 90° DWs to the out-of-plane component induced by the DMI. As follows from our simulations, when the direction of the out-of-plane component of the DW agrees with the skyrmion core magnetization direction, the skyrmion passes through the DW. In contrast, when the skyrmion core magnetization is opposite to the DW out-of-plane magnetization component, the skyrmion becomes unstable: inside an opposing DW, the skyrmion core shrinks and the skyrmion annihilates. Moreover, we find that the charge-selective skyrmion annihilation at the charged DWs depends on the strength of the uniaxial anisotropy. Namely, in our simulations we observe the effect for K_u ≳ 7 × 10³ J/m³ at the same value of D = 2 mJ/m². Below this threshold, skyrmions of both charges can pass through both charged DWs. In Fig. 3 we show that the magnitude of the out-of-plane magnetization component in the DW center increases with increasing K_u. This suggests that the out-of-plane DW magnetization component is responsible for the skyrmion annihilation. It also explains why such an effect is not observed in the case of the uncharged DWs, where the out-of-plane magnetization is one order of magnitude smaller.
In addition, we examined the stability of our results with respect to a smaller Gilbert damping parameter. Simulations with α = 0.03 lead to the same qualitative results for skyrmion transmission and annihilation.
V. IN-PLANE SKYRMION VALVE
From the practical point of view, the topological-charge-dependent skyrmion annihilation can be utilized as a skyrmion filter for the construction of skyrmion-based logic gates. This raises the question of how to control the properties of the charged DWs and possibly how to switch their effect on skyrmions on and off. We have noticed that the out-of-plane magnetization of a charged DW must be large enough to annihilate a passing skyrmion. Thus, changing the magnitude of the out-of-plane magnetization might allow one to control the skyrmion flow through the DW. As we have shown, the out-of-plane magnetization component can be varied in several ways. On the one hand, it can be manipulated by changing the strength of the DMI or of the magnetic anisotropy. On the other hand, it can be changed by an applied magnetic field, which varies the magnetization angle between two neighboring domains and thus reduces the out-of-plane magnetization in the DW center. These methods, however, affect not only the 90° magnetic DWs but also the skyrmions, which can become unstable under distinct conditions [24].
Alternatively, one can consider the central magnetic domain in Fig. 1 as a valve for the in-plane skyrmion flow. In Fig. 11 we present a concept of an in-plane skyrmion valve based on two pinned 90° magnetic DWs. The cartoon in Fig. 11 shows a device consisting of three magnetic domains separated by two pinned 90° DWs. The positions of the DWs are given by the vertical dashed lines. The thick diagonal arrows correspond to the magnetization directions in the individual domains, and the smaller arrows located at the DW positions show the magnetization directions in the DW centers. Finally, the horizontal arrows passing through the structure stand for the flow of skyrmions with topological charge Q = 1 (top arrow) and Q = −1 (bottom arrow). Such a device has 2³ = 8 possible magnetic configurations; in Fig. 11 we show its four basic functionalities. Fig. 11(a) shows a magnetic configuration featuring two uncharged magnetic DWs, corresponding to Fig. 1(a). In this configuration, skyrmions of both topological charges can freely move from one side of the sample to the other. However, when the magnetization in the central domain is switched, both DWs become charged, as shown in Fig. 11(b). As a result, skyrmions of both topological charges are annihilated at the two DWs and none of them can pass the central domain. Additionally, if we want to introduce topological-charge selectivity into this scheme, we need to combine charged and uncharged DWs. Two examples are shown in Figs. 11(c) and (d). In the first one, the magnetization of the left domain has been switched, giving rise to a charged tail-to-tail domain wall. This configuration allows only skyrmions with Q = −1 to pass the central domain. Oppositely, if the right-domain magnetization is flipped, we obtain one charged head-to-head DW, and only skyrmions with Q = 1 can be transferred to the other side of the sample. Therefore, we suggest that such a device could fully control the flow of the in-plane magnetic skyrmions.
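The valve logic can be summarized in a few lines of Python. This is a toy model: the pass/annihilate rules are those read off from the simulations above (uncharged walls transmit both charges, a head-to-head wall only Q = +1, a tail-to-tail wall only Q = −1), and each domain is represented by the sign of its m_x component:

```python
from itertools import product

def wall_type(left, right):
    """Wall between two domains given by the signs of their m_x components."""
    if left == right:
        return 'uncharged'
    return 'head-to-head' if left > right else 'tail-to-tail'

# Pass rules observed in the simulations
PASSES = {'uncharged': {+1, -1}, 'head-to-head': {+1}, 'tail-to-tail': {-1}}

def transmitted(domains):
    """Set of topological charges that can cross all walls of the device."""
    allowed = {+1, -1}
    for a, b in zip(domains, domains[1:]):
        allowed &= PASSES[wall_type(a, b)]
    return allowed

for cfg in product((+1, -1), repeat=3):   # all 2^3 = 8 configurations
    print(cfg, sorted(transmitted(cfg)))
```

The four panels of Fig. 11 correspond to the configurations (+, +, +), (+, −, +), (−, +, +), and (+, +, −): both charges pass, none passes, only Q = −1 passes, and only Q = +1 passes, respectively.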
To make this scheme operate in practice, an effective method of switching the magnetization selectively in a specific magnetic stripe domain is necessary. To this end, one can employ laser-induced precessional magnetization switching, which has recently been demonstrated in the multiferroic composite BaTiO3/CoFeB by Shelukhin et al. [13] using a femtosecond pump-probe technique with micrometer spatial resolution. It has been shown that a femtosecond laser pulse can significantly reduce the magnetoelastic coupling in a given domain. Consequently, with the assistance of ultrafast demagnetization [42,43], magnetization precession can be triggered in the affected domain. Importantly, when an external magnetic field is applied along the anisotropy hard axis, precessional magnetization switching [44] can be achieved in the given domain without influencing its neighborhood. In contrast to the previously described approaches, the magnetic field is applied only for the period of the magnetization switching; thus it does not affect the skyrmions passing through the domain after the switching has been accomplished. Moreover, the precessional magnetization reversal happens on a subnanosecond time scale [13], which offers ultrafast manipulation of the in-plane skyrmion valve.
VI. CONCLUSIONS
We have studied the influence of the interfacial DMI on the localized 90° magnetic DWs formed in a thin magnetic layer with uniaxial in-plane magnetic anisotropy modulated by a ferroelectric substrate hosting stripe domains [2]. By means of micromagnetic simulations we have shown that, unlike in the uncharged magnetic DWs, the DMI induces a significant out-of-plane magnetization component in the charged magnetic DWs. The sign of the DMI-induced out-of-plane magnetization is opposite for the head-to-head and tail-to-tail DW types. Consequently, we have studied the magnetization dynamics of the recently proposed in-plane magnetic skyrmions [24] in a system of magnetic stripe domains separated by 90° magnetic DWs. In particular, we analyzed the longitudinal and transverse skyrmion velocities as a function of the applied current direction. We demonstrate that the longitudinal skyrmion velocity also depends on the topological charge, which results in different velocities for skyrmions with opposite topological charges when the electric current is applied under a general angle with respect to the easy axis.
Importantly, we have shown that the charged DWs have a strong effect on the transport of the in-plane skyrmions. Namely, when an in-plane skyrmion reaches a charged DW, its topological charge decides whether it passes or is destroyed. As follows from our simulations, a tail-to-tail magnetic DW allows the transmission of an in-plane skyrmion with topological charge Q = −1 but is fatal to a skyrmion with Q = 1, whereas a head-to-head DW is passable for a skyrmion with Q = 1 but annihilates a skyrmion with Q = −1. We attribute this skyrmion annihilation to the incompatibility of the out-of-plane magnetization of the charged DW with that of the skyrmion core. When a skyrmion passes a pinned DW with a large enough out-of-plane magnetization of opposite direction, the skyrmion core becomes unstable and the skyrmion vanishes.
To achieve the coexistence of pinned 90° magnetic domain walls and in-plane skyrmions, one needs a composite featuring both magnetic stripe domains and interfacial DMI. Although, to the best of our knowledge, such a device has not been reported yet, a suitable candidate might be a BaTiO3/FM/Pt trilayer, with FM being a thin magnetic layer made of CoFe or CoFeB [2, 6]. Once the material parameters fulfill the condition of in-plane skyrmion stability [24], the skyrmions could be nucleated by known methods making use of spin injection [45,46] or a laser pulse [47,48].
Finally, we introduced a concept of a device based on two pinned 90° magnetic DWs that allows one to fully control the in-plane skyrmion flow. We suggest that, making use of laser-induced spatially selective magnetization reversal [13], such a device could be switched between different magnetic states, acting as a valve for in-plane magnetic skyrmions.
ACKNOWLEDGMENT
This work was supported by the Czech Science Foundation (project no. 19-28594X).
Appendix A: Skyrmion velocities
Velocities of the in-plane skyrmions can be calculated using the Thiele equation [40,49]. Without any loss of generality, we assume here that the anisotropy easy axis is aligned with the x-axis. In the case of an in-plane skyrmion moved by the spin transfer torque [38,39], the Thiele equation leads to [24]

−G (u_y − v_y) = D_xx (β u_x − α v_x),  (A1a)
 G (u_x − v_x) = D_yy (β u_y − α v_y),  (A1b)
where v_x and v_y are the skyrmion velocities in the x and y directions, respectively. Assuming the current density vector I = I (cos Φ, sin Φ, 0), we obtain u_x = u cos Φ and u_y = u sin Φ, where u is given by Eq. (6). Here, Φ is the angle of the applied current with respect to the x-axis. Moreover, G = −4πQ, and D_ζξ = ∫ dx dy (∂_ζ m) · (∂_ξ m), with m = m(x, y) being the reduced magnetization vector. From (A1) we obtain
FIG. 1. 90° magnetic DWs resulting from micromagnetic simulations: (a) uncharged DWs, (b) charged DWs. The color scale shows the z-component of the reduced magnetization vector m = M/M_s. The parameters of the calculation were A_ex = 1.3 × 10⁻¹¹ J/m, M_s = 7.20 × 10⁵ A/m, K_u = 10⁴ J/m³, and D = 2 mJ/m². The distance between the domain walls is x_R − x_L = 1 µm.
Figure 1 shows stable 90° DWs, calculated using micromagnetic simulations under the effect of the DMI. The anisotropy parameter used in the simulations was K_u = 10⁴ J/m³.

FIG. 2. Magnetization profiles at the anisotropy boundaries without (dashed lines) and with (solid lines) DMI. Magnetization components of the uncharged DWs are shown in the left column, while those of the charged DWs are plotted in the right one. Components m_x, m_y, and m_z are plotted in panels (a)-(b), (c)-(d), and (e)-(f), respectively. The parameters of the calculations are the same as in Fig. 1. The magnetization components are plotted as a function of the position x with respect to the anisotropy boundary x_c; x_c = x_L for the left (L) DW, and x_c = x_R for the right (R) DW.
FIG. 3. Dependence of (a) the out-of-plane magnetization, m_z(x_c), and (b) the DW width of a charged 90° DW as a function of the anisotropy constant K_u. The parameters used in the calculations are the same as in Fig. 1. Calculations of m_z(x_c) were done with D = 2 mJ/m²; x_c is the position of the anisotropy boundary. The DW width has been calculated with and without DMI.
FIG. 4. Dependence of the DMI-induced magnetization on the magnetic field applied along the y-axis. The equilibrium magnetization rotation angle ∆φ is plotted on the right vertical axis.
FIG. 5. Effect of the spin transfer torque on the out-of-plane DW profile in the case of (a) an uncharged and (b) a charged 90° DW. The anisotropy boundary is located at x_c (x_c = x_L for the left (L) DW and x_c = x_R for the right (R) DW). In all the calculations we assumed D = 2 mJ/m². The electric current flows in the direction of the x-axis. When the current density I is nonzero, it has been set to I = ±10¹² A/m². The other simulation parameters are the same as in Fig. 1. In the case of the uncharged DWs, the current-induced effect is identical for the left and right DW.
FIG. 6. In-plane skyrmions obtained using micromagnetic simulations with the in-plane anisotropy constant K_u = 10⁴ J/m³. The quantities are plotted in a rotated coordinate system (x′, y′, z′) (see Fig. 1) so that the easy axis and the outer magnetization are aligned with the x′-axis. The first three rows of the left and right columns show the magnetization components of an in-plane skyrmion with topological charge Q = 1 and Q = −1, respectively: (a)-(b) m_x′, (c)-(d) m_y′, and (e)-(f) m_z′. The fourth row plots the topological charge density.
FIG. 7. Skyrmion velocities in the central magnetic domain of the sample shown in Fig. 1(b) as a function of K_u, calculated for both topological charges, Q = ±1, using micromagnetic simulations. The current is applied along the x-axis, hence under an angle of −45° with respect to the outer magnetization. The skyrmion velocities are given in units of u, defined by Eq. (6) with the parameters described in the text.
FIG. 8. Velocities of in-plane skyrmions in the central domain of the sample shown in Fig. 1(b) as a function of the current direction. The velocities are calculated for skyrmions of both topological charges, Q = ±1. Panel (a) plots the longitudinal velocity component (v_∥), and panel (b) shows the velocities perpendicular to the applied current direction (v_⊥). The magnetic anisotropy parameter was K_u = 10⁴ J/m³ and the current density I = 10¹² A/m², while the other parameters are the same as in Fig. 7. The current is applied under an angle Φ with respect to the x′-axis. The velocities are plotted in units of u given by Eq. (6). The points show the skyrmion velocities estimated from the micromagnetic simulations, while the black lines are the corresponding skyrmion velocities calculated using the Thiele equation (see Appendix A). Note that, since the Thiele equation overestimates the skyrmion velocities, we shifted the calculated values by a constant, −0.07 in the case of the longitudinal velocities and ±0.02 in the case of the transverse velocities.
FIG. 9. In-plane skyrmions with topological charge (a) Q = 1 and (b) Q = −1 passing an uncharged 90° DW under the influence of an applied current in the direction perpendicular to the domain wall. The colors show the magnetization m_z components in the color scale identical to Fig. 6. The current density I = 10¹² A/m² and anisotropy K_u = 10⁴ J/m³ have been used in the simulations. The single image shows the skyrmion positions with a time step of approximately 7 ns, while the green arrows give their direction. The vertical dashed line marks the anisotropy boundary. The thick grey arrows show the local magnetization directions.
FIG. 10. In-plane skyrmions with topological charge (a), (d) Q = 1 and (b), (c) Q = −1 passing a charged 90° DW under the influence of an applied current in the direction perpendicular to the domain wall. The applied current density was (a), (c) I = 10¹² A/m² and (b), (d) I = −10¹² A/m². The other parameters are the same as in Fig. 9. The colors show the magnetization m_z components in the color scale identical to Fig. 6. The single image shows the skyrmion positions with a time step of approximately 7 ns, while the green arrows give their direction. The vertical dashed lines mark the anisotropy boundaries. The thick grey arrows show the local magnetization directions.
FIG. 11. Cartoon of an in-plane skyrmion valve based on two pinned 90° magnetic DWs. The picture shows three neighboring magnetic domains separated by 90° magnetic DWs. The DWs are located at the grey dashed lines. The grey diagonal arrows show the magnetization directions in each domain. The smaller arrows show the magnetizations in the centers of the DWs. The horizontal lines passing through the structure depict the transmission of skyrmions with topological charge Q = 1 (red line) and Q = −1 (blue line). Figures (a)-(d) present different magnetic configurations allowing or blocking the transfer of in-plane skyrmions depending on their topological charge.
v_x = (u/∆) [ (G² + αβ D_xx D_yy) cos Φ + G D_yy (α − β) sin Φ ],
v_y = (u/∆) [ −G D_xx (α − β) cos Φ + (G² + αβ D_xx D_yy) sin Φ ],  (A2)-(A3)

and ∆ = G² + αβ D_xx D_yy. Applying the rotation transformation, we obtain the longitudinal, v_∥, and transverse, v_⊥, velocity components, which read

v_∥ = (u/∆) [ G² + αβ D_xx D_yy − ½ (D_xx − D_yy) G (α − β) cos(2Φ) ],  (A4a)
v_⊥ = −½ (u/∆) G (α − β) [ D_xx + D_yy + (D_xx − D_yy) sin(2Φ) ].  (A4b)

[1] E. Gradauskaite, P. Meisenheimer, M. Müller, J. Heron, and M. Trassin, Physical Sciences Reviews 6, 20190072 (2021).
[2] K. J. A. Franke, D. López González, S. J. Hämäläinen, and S. van Dijken, Phys. Rev. Lett. 112, 017201 (2014).
[3] D. Lebeugle, A. Mougin, M. Viret, D. Colson, and L. Ranno, Phys. Rev. Lett. 103, 257601 (2009).
[4] J. T. Heron, M. Trassin, K. Ashraf, M. Gajek, Q. He, S. Y. Yang, D. E. Nikonov, Y.-H. Chu, S. Salahuddin, and R. Ramesh, Phys. Rev. Lett. 107, 217202 (2011).
[5] T. H. E. Lahtinen, J. O. Tuomi, and S. van Dijken, IEEE Trans. Magnetics 47, 3768 (2011).
[6] T. H. E. Lahtinen, J. O. Tuomi, and S. van Dijken, Adv. Mat. 23, 3187 (2011).
[7] T. H. E. Lahtinen, Y. Shirahata, L. Yao, K. J. A. Franke, G. Venkataiah, T. Taniyama, and S. van Dijken, Appl. Phys. Lett. 101, 262405 (2012).
[8] R. V. Chopdekar, V. K. Malik, A. Fraile Rodríguez, L. Le Guyader, Y. Takamura, A. Scholl, D. Stender, C. W. Schneider, C. Bernhard, F. Nolting, and L. J. Heyderman, Phys. Rev. B 86, 014408 (2012).
[9] K. J. A. Franke, B. Van de Wiele, Y. Shirahata, S. J. Hämäläinen, T. Taniyama, and S. van Dijken, Phys. Rev. X 5, 011010 (2015).
[10] B. Van de Wiele, S. J. Hämäläinen, P. Baláž, F. Montoncello, and S. van Dijken, Sci. Rep. 6, 21330 (2016).
[11] P. Baláž, S. J. Hämäläinen, and S. van Dijken, Phys. Rev. B 98, 064417 (2018).
[12] S. J. Hämäläinen, F. Brandl, K. J. A. Franke, D. Grundler, and S. van Dijken, Phys. Rev. Applied 8, 014020 (2017).
[13] L. A. Shelukhin, N. A. Pertsev, A. V. Scherbakov, D. L. Kazenwadel, D. A. Kirilenko, S. J. Hämäläinen, S. van Dijken, and A. M. Kalashnikova, Phys. Rev. Applied 14, 034061 (2020).
[14] I. Dzyaloshinsky, Journal of Physics and Chemistry of Solids 4, 241 (1958).
[15] T. Moriya, Phys. Rev. 120, 91 (1960).
[16] A. Crépieux and C. Lacroix, J. Magn. Magn. Mater. 182, 341 (1998).
. A Hrabec, N A Porter, A Wells, M J Benitez, G Burnell, S Mcvitie, D Mcgrouther, T A Moore, C H Marrows, 10.1103/PhysRevB.90.020402Phys. Rev. B. 9020402A. Hrabec, N. A. Porter, A. Wells, M. J. Benitez, G. Bur- nell, S. McVitie, D. McGrouther, T. A. Moore, and C. H. Marrows, Phys. Rev. B 90, 020402(R) (2014).
. S.-G Je, D.-H Kim, S.-C Yoo, B.-C Min, K.-J Lee, S.-B Choe, 10.1103/PhysRevB.88.214401Phys. Rev. B. 88214401S.-G. Je, D.-H. Kim, S.-C. Yoo, B.-C. Min, K.-J. Lee, and S.-B. Choe, Phys. Rev. B 88, 214401 (2013).
. J Cho, N.-H Kim, S Lee, J.-S Kim, R Lavrijsen, A Solignac, Y Yin, D.-S Han, N J J Van Hoof, H J M Swagten, B Koopmans, C.-Y You, 10.1038/ncomms8635Nature Commun. 67635J. Cho, N.-H. Kim, S. Lee, J.-S. Kim, R. Lavrijsen, A. Solignac, Y. Yin, D.-S. Han, N. J. J. van Hoof, H. J. M. Swagten, B. Koopmans, and C.-Y. You, Nature Com- mun. 6, 7635 (2015).
. K.-W Kim, K.-W Moon, N Kerber, J Nothhelfer, K Everschor-Sitte, 10.1103/PhysRevB.97.224427Phys. Rev. B. 97224427K.-W. Kim, K.-W. Moon, N. Kerber, J. Nothhelfer, and K. Everschor-Sitte, Phys. Rev. B 97, 224427 (2018).
. A Thiaville, S Rohart, É Jué, V Cros, A Fert, 10.1209/0295-5075/100/57002Europhysics Letters). 10057002EPLA. Thiaville, S. Rohart,É. Jué, V. Cros, and A. Fert, EPL (Europhysics Letters) 100, 57002 (2012).
. J P Garcia, A Fassatoui, M Bonfim, J Vogel, A Thiaville, S Pizzini, 10.1103/PhysRevB.104.014405Phys. Rev. B. 10414405J. P. n. Garcia, A. Fassatoui, M. Bonfim, J. Vogel, A. Thi- aville, and S. Pizzini, Phys. Rev. B 104, 014405 (2021).
. Z.-D Li, F Liu, Q.-Y. Li, P B He, http:/arxiv.org/abs/https:/doi.org/10.1063/1.4919676Journal of Applied Physics. 117173906Z.-D. Li, F. Liu, Q.-Y. Li, and P. B. He, Journal of Applied Physics 117, 173906 (2015), https://doi.org/10.1063/1.4919676.
. K.-W Moon, J Yoon, C Kim, C Hwang, 10.1103/PhysRevApplied.12.064054Phys. Rev. Applied. 1264054K.-W. Moon, J. Yoon, C. Kim, and C. Hwang, Phys. Rev. Applied 12, 064054 (2019).
. X Zhang, M Ezawa, Y Zhou, 10.1038/srep09400Scientific Reports. 59400X. Zhang, M. Ezawa, and Y. Zhou, Scientific Reports 5, 9400 (2015).
. Y A Kharkov, O P Sushkov, M Mostovoy, 10.1103/PhysRevLett.119.207201Phys. Rev. Lett. 119207201Y. A. Kharkov, O. P. Sushkov, and M. Mostovoy, Phys. Rev. Lett. 119, 207201 (2017).
. B Göbel, A Mook, J Henk, I Mertig, O A Tretiakov, 10.1103/PhysRevB.99.060407Phys. Rev. B. 9960407B. Göbel, A. Mook, J. Henk, I. Mertig, and O. A. Tre- tiakov, Phys. Rev. B 99, 060407(R) (2019).
. X Li, L Shen, Y Bai, J Wang, X Zhang, J Xia, M Ezawa, O A Tretiakov, X Xu, M Mruczkiewicz, M Krawczyk, Y Xu, R F L Evans, R W Chantrell, Y Zhou, 10.1038/s41524-020-00435-yComputational Materials. 6169X. Li, L. Shen, Y. Bai, J. Wang, X. Zhang, J. Xia, M. Ezawa, O. A. Tretiakov, X. Xu, M. Mruczkiewicz, M. Krawczyk, Y. Xu, R. F. L. Evans, R. W. Chantrell, and Y. Zhou, npj Computational Materials 6, 169 (2020).
. W Jiang, X Zhang, G Yu, W Zhang, X Wang, M Benjamin Jungfleisch, J E Pearson, X Cheng, O Heinonen, K L Wang, Y Zhou, A Hoffmann, S G E Velthuis, 10.1038/nphys3883Nature Physics. 13162W. Jiang, X. Zhang, G. Yu, W. Zhang, X. Wang, M. Ben- jamin Jungfleisch, J. E. Pearson, X. Cheng, O. Heinonen, K. L. Wang, Y. Zhou, A. Hoffmann, and S. G. E. te Velthuis, Nature Physics 13, 162 (2017).
. K Litzius, I Lemesh, B Krüger, P Bassirian, L Caretta, K Richter, F Büttner, K Sato, O A Tretiakov, J Förster, R M Reeve, M Weigand, I Bykova, H Stoll, G Schütz, G S D Beach, M Kläui, 10.1038/nphys4000Nature Physics. 13170K. Litzius, I. Lemesh, B. Krüger, P. Bassirian, L. Caretta, K. Richter, F. Büttner, K. Sato, O. A. Tretiakov, J. Förster, R. M. Reeve, M. Weigand, I. Bykova, H. Stoll, G. Schütz, G. S. D. Beach, and M. Kläui, Nature Physics 13, 170 (2017).
. A Vansteenkiste, J Leliaert, M Dvornik, M Helsen, F Garcia-Sanchez, B V Waeyenberge, 10.1063/1.4899186AIP Advances. 4107133A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. V. Waeyenberge, AIP Ad- vances 4, 107133 (2014).
. H.-G Piao, H.-C Choi, J.-H Shim, D.-H Kim, C.-Y You, 10.1063/1.3658805Appl. Phys. Lett. 99192512H.-G. Piao, H.-C. Choi, J.-H. Shim, D.-H. Kim, and C.-Y. You, Appl. Phys. Lett. 99, 192512 (2011).
. G D Chaves-O'flynn, G Wolf, D Pinna, A D Kent, 10.1063/1.4907241J. Appl. Phys. 117G. D. Chaves-O'Flynn, G. Wolf, D. Pinna, and A. D. Kent, J. Appl. Phys. 117, 17D705 (2015).
. M Yamanouchi, A Jander, P Dhagat, S Ikeda, F Matsukura, H Ohno, 10.1109/LMAG.2011.2159484IEEE Magn. Lett. 23000304M. Yamanouchi, A. Jander, P. Dhagat, S. Ikeda, F. Mat- sukura, and H. Ohno, IEEE Magn. Lett. 2, 3000304 (2011).
. Y Liu, L Hao, J Cao, 10.1063/1.4947132AIP Advances. 645008Y. Liu, L. Hao, and J. Cao, AIP Advances 6, 045008 (2016).
. A N Bogdanov, U K Rößler, 10.1103/PhysRevLett.87.037203Phys. Rev. Lett. 8737203A. N. Bogdanov and U. K. Rößler, Phys. Rev. Lett. 87, 037203 (2001).
. H Yang, A Thiaville, S Rohart, A Fert, M Chshiev, 10.1103/PhysRevLett.115.267210Phys. Rev. Lett. 115267210H. Yang, A. Thiaville, S. Rohart, A. Fert, and M. Chshiev, Phys. Rev. Lett. 115, 267210 (2015).
. Z Li, S Zhang, 10.1103/PhysRevLett.92.207203Phys. Rev. Lett. 92207203Z. Li and S. Zhang, Phys. Rev. Lett. 92, 207203 (2004).
. Z Li, S Zhang, 10.1103/PhysRevB.70.024417Phys. Rev. B. 7024417Z. Li and S. Zhang, Phys. Rev. B 70, 024417 (2004).
. A A Thiele, 10.1103/PhysRevLett.30.230Phys. Rev. Lett. 30230A. A. Thiele, Phys. Rev. Lett. 30, 230 (1973).
. M Song, K.-W Moon, S Yang, C Hwang, K.-J Kim, 10.35848/1882-0786/ab8d0bApplied Physics Express. 1363002M. Song, K.-W. Moon, S. Yang, C. Hwang, and K.-J. Kim, Applied Physics Express 13, 063002 (2020).
. M Battiato, K Carva, P M Oppeneer, 10.1103/PhysRevLett.105.027203Phys. Rev. Lett. 10527203M. Battiato, K. Carva, and P. M. Oppeneer, Phys. Rev. Lett. 105, 027203 (2010).
. K Carva, P Balǎž, I Radu, ElsevierK. Carva, P. Balǎž, and I. Radu (Elsevier, 2017) pp. 291-463.
. H W Schumacher, C Chappert, R C Sousa, P P Freitas, J Miltat, J Ferré, 10.1063/1.1557376J. Appl. Phys. 937290H. W. Schumacher, C. Chappert, R. C. Sousa, P. P. Fre- itas, J. Miltat, and J. Ferré, J. Appl. Phys. 93, 7290 (2003).
. J Sampaio, V Cros, S Rohart, A Thiaville, A Fert, 10.1038/nnano.2013.210Nature Nanotech. 8839J. Sampaio, V. Cros, S. Rohart, A. Thiaville, and A. Fert, Nature Nanotech. 8, 839 (2013).
. S Woo, K Litzius, B Krüger, M.-Y Im, L Caretta, K Richter, M Mann, A Krone, R M Reeve, M Weigand, P Agrawal, I Lemesh, M.-A Mawass, P Fischer, M Kläui, G S D Beach, 10.1038/nmat4593Nature Materials. 15501S. Woo, K. Litzius, B. Krüger, M.-Y. Im, L. Caretta, K. Richter, M. Mann, A. Krone, R. M. Reeve, M. Weigand, P. Agrawal, I. Lemesh, M.-A. Mawass, P. Fischer, M. Kläui, and G. S. D. Beach, Nature Mate- rials 15, 501 (2016).
. M Finazzi, M Savoini, A R Khorsand, A Tsukamoto, A Itoh, L Duò, A Kirilyuk, T Rasing, M Ezawa, 10.1103/PhysRevLett.110.177205Phys. Rev. Lett. 110177205M. Finazzi, M. Savoini, A. R. Khorsand, A. Tsukamoto, A. Itoh, L. Duò, A. Kirilyuk, T. Rasing, and M. Ezawa, Phys. Rev. Lett. 110, 177205 (2013).
. K Gerlinger, B Pfau, F Büttner, M Schneider, L.-M Kern, J Fuchs, D Engel, C M Günther, M Huang, I Lemesh, L Caretta, A Churikova, P Hessing, C Klose, C Strüber, C V K Schmising, S Huang, A Wittmann, K Litzius, D Metternich, R Battistelli, K Bagschik, A Sadovnikov, G S D Beach, S Eisebitt, 10.1063/5.0046033Appl. Phys. Lett. 118192403K. Gerlinger, B. Pfau, F. Büttner, M. Schneider, L.-M. Kern, J. Fuchs, D. Engel, C. M. Günther, M. Huang, I. Lemesh, L. Caretta, A. Churikova, P. Hessing, C. Klose, C. Strüber, C. v. K. Schmising, S. Huang, A. Wittmann, K. Litzius, D. Metternich, R. Battis- telli, K. Bagschik, A. Sadovnikov, G. S. D. Beach, and S. Eisebitt, Appl. Phys. Lett. 118, 192403 (2021).
. J Iwasaki, M Mochizuki, N Nagaosa, 10.1038/ncomms2442Nature Comm. 41463J. Iwasaki, M. Mochizuki, and N. Nagaosa, Nature Comm. 4, 1463 (2013).
| [] |
[
"A Reference-Free Algorithm for Computational Normalization of Shotgun Sequencing Data"
] | [
"C Titus Brown \nComputer Science and Engineering\nMichigan State University\nEast LansingMIUSA\n\nMicrobiology and Molecular Genetics\nMichigan State University\nEast LansingMIUSA\n",
"Adina Howe \nMicrobiology and Molecular Genetics\nMichigan State University\nEast LansingMIUSA\n",
"Qingpeng Zhang \nComputer Science and Engineering\nMichigan State University\nEast LansingMIUSA\n",
"Alexis B Pyrkosz \nAvian Disease and Oncology Laboratory\nUSDA\nEast LansingMIUSA\n",
"Timothy H Brom \nComputer Science and Engineering\nMichigan State University\nEast LansingMIUSA\n"
] | [
"Computer Science and Engineering\nMichigan State University\nEast LansingMIUSA",
"Microbiology and Molecular Genetics\nMichigan State University\nEast LansingMIUSA",
"Microbiology and Molecular Genetics\nMichigan State University\nEast LansingMIUSA",
"Computer Science and Engineering\nMichigan State University\nEast LansingMIUSA",
"Avian Disease and Oncology Laboratory\nUSDA\nEast LansingMIUSA",
"Computer Science and Engineering\nMichigan State University\nEast LansingMIUSA"
] | [] | Deep shotgun sequencing and analysis of genomes, transcriptomes, amplified single-cell genomes, and metagenomes has enabled investigation of a wide range of organisms and ecosystems. However, sampling variation in short-read data sets and high sequencing error rates of modern sequencers present many new computational challenges in data interpretation. These challenges have led to the development of new classes of mapping tools and de novo assemblers. These algorithms are challenged by the continued improvement in sequencing throughput. We here describe digital normalization, a single-pass computational algorithm that systematizes coverage in shotgun sequencing data sets, thereby decreasing sampling variation, discarding redundant data, and removing the majority of errors. Digital normalization substantially reduces the size of shotgun data sets and decreases the memory and time requirements for de novo sequence assembly, all without significantly impacting content of the generated contigs. We apply digital normalization to the assembly of microbial genomic data, amplified single-cell genomic data, and transcriptomic data. Our implementation is freely available for use and modification. | null | [
"https://arxiv.org/pdf/1203.4802v2.pdf"
] | 1,487,397 | 1203.4802 | 02380df138be7871f4b6430bd5d1878b57ef39c5 |
A Reference-Free Algorithm for Computational Normalization of Shotgun Sequencing Data
C Titus Brown
Computer Science and Engineering
Michigan State University
East LansingMIUSA
Microbiology and Molecular Genetics
Michigan State University
East LansingMIUSA
Adina Howe
Microbiology and Molecular Genetics
Michigan State University
East LansingMIUSA
Qingpeng Zhang
Computer Science and Engineering
Michigan State University
East LansingMIUSA
Alexis B Pyrkosz
Avian Disease and Oncology Laboratory
USDA
East LansingMIUSA
Timothy H Brom
Computer Science and Engineering
Michigan State University
East LansingMIUSA
Deep shotgun sequencing and analysis of genomes, transcriptomes, amplified single-cell genomes, and metagenomes has enabled investigation of a wide range of organisms and ecosystems. However, sampling variation in short-read data sets and high sequencing error rates of modern sequencers present many new computational challenges in data interpretation. These challenges have led to the development of new classes of mapping tools and de novo assemblers. These algorithms are challenged by the continued improvement in sequencing throughput. We here describe digital normalization, a single-pass computational algorithm that systematizes coverage in shotgun sequencing data sets, thereby decreasing sampling variation, discarding redundant data, and removing the majority of errors. Digital normalization substantially reduces the size of shotgun data sets and decreases the memory and time requirements for de novo sequence assembly, all without significantly impacting content of the generated contigs. We apply digital normalization to the assembly of microbial genomic data, amplified single-cell genomic data, and transcriptomic data. Our implementation is freely available for use and modification.
Introduction
The ongoing improvements in DNA sequencing technologies have led to a new problem: how do we analyze the resulting large sequence data sets quickly and efficiently? These data sets contain millions to billions of short reads with high error rates and substantial sampling biases [1]. The vast quantities of deep sequencing data produced by these new sequencing technologies are driving computational biology to extend and adapt previous approaches to sequence analysis. In particular, the widespread use of deep shotgun sequencing on previously unsequenced genomes, transcriptomes, and metagenomes, has resulted in the development of several new approaches to de novo sequence assembly [2].
There are two basic challenges in analyzing short-read sequences from shotgun sequencing. First, deep sequencing is needed for complete sampling. This is because shotgun sequencing samples randomly from a population of molecules; this sampling is biased by sample content and sample preparation, requiring even deeper sequencing. A human genome may require 100x coverage or more for near-complete sampling, leading to shotgun data sets 300 GB or larger in size [3]. Since the lowest abundance molecule determines the depth of coverage required for complete sampling, transcriptomes and metagenomes containing rare population elements can also require similarly deep sequencing.
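The depth argument above can be made concrete with the idealized Lander-Waterman/Poisson coverage model (our illustration, not a calculation from the paper): at average coverage c, the probability that a given base is covered by zero reads is e^(-c), which is why the lowest-abundance molecules drive such deep sequencing.

```python
import math

def fraction_missed(coverage):
    """Idealized Poisson model: probability that a given base
    is covered by zero reads at the stated average coverage."""
    return math.exp(-coverage)

for c in (1, 5, 10, 30, 100):
    print(c, fraction_missed(c))
# at 1x coverage ~37% of bases go unsampled; at 10x, fewer than 1 in 20,000
```

For a transcript present at 1/100th the abundance of the most common molecule, the effective coverage is 100-fold lower, so the raw sequencing depth must rise accordingly.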
The second challenge to analyzing short-read shotgun sequencing is the high error rate. For example, the Illumina GAII sequencer has a 1-2% error rate, yielding an average of one base error in every 100 bp of data [1]. The total number of errors grows linearly with the amount of data generated, so these errors usually dominate novelty in large data sets [4]. Tracking this novelty and resolving errors is computationally expensive.
These large data sets and high error rates combine to provide a third challenge: it is now straightforward to generate data sets that cannot easily be analyzed [5]. While hardware approaches to scaling existing algorithms are emerging, sequencing capacity continues to grow faster than computational capacity [6]. Therefore, new algorithmic approaches to analysis are needed.
Many new algorithms and tools have been developed to tackle large and error-prone short-read shotgun data sets. A new class of alignment tools, most relying on the Burrows-Wheeler transform, has been created specifically to do ultra-fast short-read alignment to reference sequence [7]. In cases where a reference sequence does not exist and must be assembled de novo from the sequence data, a number of new assemblers have been written, including ABySS, Velvet, SOAPdenovo, ALLPATHS, SGA, and Cortex [3,8-12]. These assemblers rely on theoretical advances to store and assemble large amounts of data [13,14]. As short-read sequencing has been applied to single cell genomes, transcriptomes, and metagenomes, yet another generation of assemblers has emerged to handle reads from abundance-skewed populations of molecules; these tools, including Trinity, Oases, MetaVelvet, Meta-IDBA, and Velvet-SC, adopt local models of sequence coverage to help build assemblies [15-19]. In addition, several ad hoc strategies have also been applied to reduce variation in sequence content from whole-genome amplification [20,21]. Because these tools all rely on k-mer approaches and require exact matches to construct overlaps between sequences, their performance is very sensitive to the number of errors present in the underlying data. This sensitivity to errors has led to the development of a number of error removal and correction approaches that preprocess data prior to assembly or mapping [22-24].
Below, we introduce "digital normalization", a single-pass algorithm for elimination of redundant reads in data sets. Critically, no reference sequence is needed to apply digital normalization. Digital normalization is inspired by experimental normalization techniques developed for cDNA library preparation, in which hybridization kinetics are exploited to reduce the copy number of abundant transcripts prior to sequencing [25,26]. Digital normalization works after sequencing data has been generated, progressively removing high-coverage reads from shotgun data sets. This normalizes average coverage to a specified value, reducing sampling variation while removing reads, and also removing the many errors contained within those reads. This data and error reduction results in dramatically decreased computational requirements for de novo assembly. Moreover, unlike experimental normalization where abundance information is removed prior to sequencing, in digital normalization this information can be recovered from the unnormalized reads.
We present here a fixed-memory implementation of digital normalization that operates in time linear with the size of the input data. We then demonstrate its effectiveness for reducing compute requirements for de novo assembly on several real data sets. These data sets include E. coli genomic data, data from two single-cell MD-amplified microbial genomes, and yeast and mouse mRNAseq.
Results
Estimating sequencing depth without a reference assembly

Short-read assembly requires deep sequencing to systematically sample the source genome, because shotgun sequencing is subject to both random sampling variation and systematic sequencing biases. For example, 100x sampling of a human genome is required for recovery of 90% or more of the genome in contigs > 1kb [3]. In principle much of this high-coverage data is redundant and could be eliminated without consequence to the final assembly, but determining which reads to eliminate requires a per-read estimate of coverage. Traditional approaches estimate coverage by mapping reads to an assembly. This presents a chicken-and-egg problem: to determine which regions are oversampled, we must already have an assembly!

We may calculate a reference-free estimate of genome coverage by looking at the k-mer abundance distribution within individual reads. First, observe that k-mers, DNA words of a fixed length k, tend to have similar abundances within a read: this is a well-known property of k-mers that stems from each read originating from a single source molecule of DNA. The more times a region is sequenced, the higher the abundance of k-mers from that region would be. In the absence of errors, average k-mer abundance could be used as an estimate of the depth of coverage for a particular read (Figure 1, "no errors" line). However, when reads contain random substitution or indel errors from sequencing, the k-mers overlapping these errors will be of lower abundance; this feature is often used in k-mer based error correction approaches [24]. For example, a single substitution will introduce k low-abundance k-mers within a read (Figure 1, "single substitution error" line). However, for small k and reads of length L where L > 3k − 1, a single substitution error will not skew the median k-mer abundance.
Only when multiple substitution errors are found in a single read will the median k-mer abundance be affected (Figure 1, "multiple substitution errors").
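The median estimator and its robustness to a single error can be demonstrated in a few lines (an illustrative sketch with a toy 32-base locus, not the paper's implementation; the counts here come from an exact `Counter` rather than a fixed-memory sketch):

```python
from collections import Counter

def kmers(seq, k):
    """Yield all overlapping k-mers of seq."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def median_kmer_abundance(read, counts, k):
    """Estimate read coverage as the median abundance of its k-mers."""
    abundances = sorted(counts[km] for km in kmers(read, k))
    return abundances[len(abundances) // 2]

# Toy demonstration: a 32-base "genome" sampled 10 times without error,
# plus one read carrying a single substitution in the middle.
genome = "ACGTACGGTTCAGGACCTTGAATCGGATCCAA"
k = 5
counts = Counter()
for _ in range(10):                           # 10 error-free reads of the locus
    counts.update(kmers(genome, k))

perfect = genome
erroneous = genome[:16] + "T" + genome[17:]   # one C->T substitution
counts.update(kmers(erroneous, k))

print(median_kmer_abundance(perfect, counts, k))    # → 11
print(median_kmer_abundance(erroneous, counts, k))  # → 11
```

The single substitution drags only k of the read's L − k + 1 k-mers down to low abundance, so both reads report the same median; an average-based estimate would be skewed by the five count-1 k-mers in the erroneous read.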
Using a fixed-memory CountMin Sketch data structure to count k-mers (see Methods and [27]), we find that median k-mer abundance correlates well with mapping-based coverage for artificial and real genomic data sets. There is a strong correlation between median k-mer abundance and mapping-based coverage both for simulated 100-base reads generated with 1% error from a 400kb artificial genome sequence (r² = 0.79; also see Figure 2a), as well as for real short-read data from E. coli (r² = 0.80; also see Figure 2b). This correlation also holds for simulated and real mRNAseq data: for simulated transcriptome data, r² = 0.93 (Figure 3a), while for real mouse transcriptome data, r² = 0.90 (Figure 3b). Thus the median k-mer abundance of a read correlates well with mapping-based estimates of read coverage.
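The fixed-memory counter can be illustrated with a minimal CountMin Sketch (a simplified stand-in for the paper's implementation; the prime table sizes are arbitrary and Python's built-in `hash` substitutes for a proper k-mer hash function):

```python
class CountMinSketch:
    """Fixed-memory k-mer counter: several prime-sized count tables,
    each indexed by a different modulus of one hash; a k-mer's estimated
    count is the minimum across tables, so counts may be overestimated
    on collision but are never underestimated."""

    def __init__(self, table_sizes=(999983, 999979, 999961)):
        self.sizes = table_sizes
        self.tables = [bytearray(size) for size in table_sizes]  # 8-bit counts

    def add(self, kmer):
        h = hash(kmer)
        for table, size in zip(self.tables, self.sizes):
            i = h % size
            if table[i] < 255:          # saturating 8-bit counter
                table[i] += 1

    def get(self, kmer):
        h = hash(kmer)
        return min(t[h % s] for t, s in zip(self.tables, self.sizes))

cms = CountMinSketch()
for _ in range(12):
    cms.add("ACGTACGGTTCAGGACCTTG")
print(cms.get("ACGTACGGTTCAGGACCTTG"))   # → 12
print(cms.get("TTTTTTTTTTTTTTTTTTTT"))   # → 0 (barring a collision in every table)
```

Memory use is fixed by the table sizes regardless of how many distinct k-mers are counted, which is what makes single-pass processing of very large read sets feasible.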
Eliminating redundant reads reduces variation in sequencing depth
Deeply sequenced genomes contain many highly covered loci. For example, in a human genome sequenced to 100x average coverage, we would expect 50% or more of the reads to have a coverage greater than 100. In practice, we need many fewer of these reads to assemble the source locus.
Using the median k-mer abundance estimator discussed above, we can examine each read in the data set progressively to determine if it is high coverage. At the beginning of a shotgun data set, we would expect many reads to be entirely novel and have a low estimated coverage. As we proceed through the data set, however, average coverage will increase and many reads will be from loci that we have already sampled sufficiently.
Suppose we choose a coverage threshold C past which we no longer wish to collect reads. If we only keep reads whose estimated coverage is less than C, and discard the rest, we will reduce the average coverage of the data set to C. This procedure is algorithmically straightforward to execute: we examine each read's estimated coverage, and retain only those whose coverage is less than C. The following pseudocode provides one approach:
for read in dataset:
    if estimated_coverage(read) < C:
        accept(read)
    else:
        discard(read)
where accepted reads contribute to the estimated coverage function. Note that for any data set with an average coverage > 2C, this has the effect of discarding the majority of reads. Critically, low-coverage reads, especially reads from undersampled regions, will always be retained. The net effect of this procedure, which we call digital normalization, is to normalize the coverage distribution of data sets. In Figure 4a, we display the estimated coverage of an E. coli genomic data set, a S. aureus single-cell MD-amplified data set, and an MD-amplified data set from an uncultured Deltaproteobacteria, calculated by mapping reads to the known or assembled reference genomes (see [19] for the data source). The wide variation in coverage for the two MDA data sets is due to the amplification procedure [28]. After normalizing to a k-mer coverage of 20, the high coverage loci are systematically shifted to an average mapping coverage of 26, while lower-coverage loci remain at their previous coverage. This smooths out coverage of the overall data set.
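The selection loop can be made concrete as follows (an illustrative sketch, not the published implementation; an exact `Counter` stands in for the CountMin Sketch for brevity):

```python
from collections import Counter

def normalize_by_median(reads, k=20, cutoff=20):
    """Keep a read only if its median k-mer abundance, measured against
    the reads accepted so far, is below the coverage cutoff C."""
    counts = Counter()
    kept = []
    for read in reads:
        kms = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kms:
            continue
        med = sorted(counts[km] for km in kms)[len(kms) // 2]
        if med < cutoff:                 # estimated coverage still below C
            kept.append(read)
            counts.update(kms)           # accepted reads feed the estimator
    return kept

# 200 identical reads of one locus: only the first ~C copies are retained.
reads = ["ACGTACGGTTCAGGACCTTGAATCGGATCCAA"] * 200
kept = normalize_by_median(reads, k=20, cutoff=20)
print(len(kept))   # → 20
```

Note that a read from a never-before-seen locus always has median abundance 0 and is therefore always kept, which is why low-coverage regions survive normalization untouched.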
At what rate are sequences retained? For the E. coli data set, Figure 5 shows the fraction of sequences retained by digital normalization as a function of the total number of reads examined when normalizing to C=20 at k=20. There is a clear saturation effect showing that as more reads are examined, a smaller fraction of reads is retained; by 5m reads, approximately 50-100x coverage of E. coli, under 30% of new reads are kept. This demonstrates that as expected, only a small amount of novelty (in the form of either new information, or the systematic accumulation of errors) is being observed with increasing sequencing depth.
Digital normalization retains information while discarding both data and errors
The 1-2% per-base error rate of next-generation sequencers dramatically affects the total number of k-mers. For example, in the simulated genomic data at 200x coverage, a 1% error rate leads to approximately 20 new k-mers for each error, yielding 20-fold more k-mers in the reads than are truly present in the genome (Table 1, row 1). This in turn dramatically increases the memory requirements for tracking and correcting k-mers [4]. This is a well-known problem with de Bruijn graph approaches, in which erroneous nodes or edges quickly come to dominate deep sequencing data sets.
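The inflation of distinct k-mers by errors is easy to reproduce in simulation (illustrative only; the genome size, read length, and simple substitution model are our assumptions, not the paper's exact benchmark, and the replacement base may equal the original so the effective error rate is slightly below 1%):

```python
import random

random.seed(42)
GENOME_LEN, READ_LEN, COVERAGE, K, ERR = 5000, 100, 200, 20, 0.01

genome = "".join(random.choice("ACGT") for _ in range(GENOME_LEN))
true_kmers = {genome[i:i + K] for i in range(GENOME_LEN - K + 1)}

read_kmers = set()
n_reads = GENOME_LEN * COVERAGE // READ_LEN
for _ in range(n_reads):
    start = random.randrange(GENOME_LEN - READ_LEN + 1)
    read = "".join(c if random.random() > ERR else random.choice("ACGT")
                   for c in genome[start:start + READ_LEN])
    read_kmers.update(read[i:i + K] for i in range(READ_LEN - K + 1))

print(len(true_kmers), len(read_kmers))
# the reads contain many-fold more distinct k-mers than the genome itself:
# nearly all of the excess k-mers are erroneous
```

Because each substitution creates up to K novel k-mers and the 20-mer space is vast, erroneous k-mers rarely collide with real ones, so the distinct-k-mer count grows roughly linearly with sequencing depth.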
When we perform digital normalization on such a data set, we eliminate the vast majority of these k-mers (Table 1, row 1). This is because we are accepting or rejecting entire reads; in going from 200x random coverage to 20x systematic coverage, we discard 80% of the reads containing 62% of the errors (Table 1, row 1). For reads taken from a skewed abundance distribution, such as with MDA or mRNAseq, we similarly discard many reads, and hence many errors (Table 1, row 2). In fact, in most cases the process of sequencing fails to recover more true k-mers (Table 1, middle column, parentheses) than digital normalization discards (Table 1, fourth column, parentheses).
The net effect of digital normalization is to retain nearly all real k-mers, while discarding the majority of erroneous k-mers; in other words, digital normalization is discarding data but not information. This rather dramatic elimination of erroneous k-mers is a consequence of the high error rate present in reads: with a 1% per-base substitution error rate, each 100-bp read will have an average of one substitution error. Each of these substitution errors will introduce up to k erroneous k-mers. Thus, for each read we discard as redundant, we also eliminate an average of k erroneous k-mers.
We may further eliminate erroneous k-mers by removing k-mers that are rare across the data set; these rare k-mers tend to result from substitution or indel errors [24]. We do this by first counting all the k-mers in the accepted reads during digital normalization. We then execute a second pass across the accepted reads in which we eliminate the 3' ends of reads at low-abundance k-mers. Following this error reduction pass, we execute a second round of digital normalization (a third pass across the data set) that further eliminates redundant data. This three-pass protocol eliminates additional errors and results in a further decrease in data set size, at the cost of very few real k-mers in genomic data sets (Table 2).
Why use this three-pass protocol rather than simply normalizing to the lowest desired coverage in the first pass? We find that removing low-abundance k-mers after a single normalization pass to C ≈ 5 removes many more real k-mers, because there will be many regions in the genome that by chance have yielded 5 reads with errors in them. If these erroneous k-mers are removed in the abundance-trimming step, coverage of the corresponding regions is eliminated. By normalizing to a higher coverage of 20, removing errors, and only then reducing coverage to 5, digital normalization can retain accurate reads for most regions. Note that this three-pass protocol is not considerably more computationally expensive than the single-pass protocol: the first pass discards the majority of data and errors, so later passes are less time and memory intensive than the first pass.
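The three-pass protocol can be sketched as a small pipeline (again illustrative and self-contained, so the single-pass filter is re-implemented here; `trim_at_rare_kmers` is a simplified stand-in for the paper's 3'-end abundance trimming, and the abundance threshold of 2 is an assumption):

```python
from collections import Counter

def kmers(read, k):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def normalize(reads, k, cutoff):
    """Single-pass digital normalization; returns kept reads and the
    k-mer counts accumulated from those kept reads."""
    counts, kept = Counter(), []
    for read in reads:
        kms = kmers(read, k)
        if kms and sorted(counts[km] for km in kms)[len(kms) // 2] < cutoff:
            kept.append(read)
            counts.update(kms)
    return kept, counts

def trim_at_rare_kmers(read, counts, k, min_abund=2):
    """Truncate a read just before its first low-abundance k-mer,
    discarding the (likely erroneous) 3' end."""
    for i, km in enumerate(kmers(read, k)):
        if counts[km] < min_abund:
            return read[:i + k - 1]
    return read

def three_pass(reads, k=20):
    kept, counts = normalize(reads, k, cutoff=20)               # pass 1: C=20
    trimmed = [trim_at_rare_kmers(r, counts, k) for r in kept]  # pass 2: trim
    final, _ = normalize(trimmed, k, cutoff=5)                  # pass 3: C=5
    return [r for r in final if len(r) >= k]

good = "ACGTACGGTTCAGGACCTTGAATCGGATCCAA"
reads = [good[:31] + "T"] + [good] * 50     # one read with a 3'-end error
result = three_pass(reads, k=20)
print(len(result))   # → 5: redundancy removed, the erroneous tail trimmed away
```

Trimming only after the first pass matters: at C=20 most genuine k-mers have been seen several times, so a count below the threshold is strong evidence of an error rather than of undersampling.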
Interestingly, this three-pass protocol removed many more real k-mers from the simulated mRNAseq data than from the simulated genome: 351 of 48,100 (0.7%) real k-mers are lost from the mRNAseq, vs 4 of 399,981 (0.001%) lost from the genome (Table 2). While still only a tiny fraction of the total number of real k-mers, the difference is striking: the simulated mRNAseq sample loses k-mers at almost 1000-fold the rate of the simulated genomic sample. Upon further investigation, all but one of the lost k-mers were located within 20 bases of the ends of the source sequences; see Figure 6. This is because digital normalization cannot distinguish between erroneous k-mers and k-mers that are undersampled due to edge effects. In the case of the simulated genome, which was generated as one large chromosome, the effect is negligible, but the simulated transcriptome was generated as 100 transcripts of length 500. This added 99 end sequences over the genomic simulation, which in turn led to many more lost k-mers.
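The edge effect itself is simple to see in simulation (an illustrative sketch with arbitrary parameters): a k-mer near a sequence end can be spanned by only a few read start positions, so it looks undersampled even with error-free reads.

```python
import random

random.seed(1)
TX_LEN, READ_LEN, K, N_READS = 500, 100, 20, 2000

tx = "".join(random.choice("ACGT") for _ in range(TX_LEN))
cov = [0] * (TX_LEN - K + 1)            # per-k-mer coverage by position
for _ in range(N_READS):
    start = random.randrange(TX_LEN - READ_LEN + 1)
    for p in range(start, start + READ_LEN - K + 1):
        cov[p] += 1

interior = cov[READ_LEN:-READ_LEN]      # positions a full read-length from ends
print(cov[0], cov[-1], sum(interior) / len(interior))
# the first and last k-mers are covered far less often than interior k-mers
```

Only one read start position (the very end of the sequence) can cover the terminal k-mer, versus READ_LEN − K + 1 start positions for an interior k-mer, so terminal k-mers are depressed roughly by that factor; low-abundance trimming then mistakes them for errors.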
While the three-pass protocol is effective at removing erroneous k-mers, for some samples it may be too stringent. For example, the mouse mRNAseq data set contains only 100m reads, which may not be enough to thoroughly sample the rarest molecules; in this case the abundance trimming would remove real k-mers as well as erroneous k-mers. Therefore we used the single-pass digital normalization for the yeast and mouse transcriptomes. For these two samples we can also see that the first-pass digital normalization is extremely effective, eliminating essentially all of the erroneous k-mers (Table 1, rows 4 and 5).
Digital normalization scales assembly of microbial genomes
We applied the three-pass digital normalization and error trimming protocol to three real data sets from Chitsaz et al. (2011) [19]. The first pass of digital normalization was performed in 1 GB of memory and took about 1 min per million reads. For all three samples, the number of reads remaining after digital normalization was reduced by at least 30-fold, while the memory and time requirements were reduced 10-100x.
Despite this dramatic reduction in data set size and computational requirements for assembly, both the E. coli and S. aureus assemblies overlapped with the known reference sequence by more than 98%. This confirms that little or no information was lost during the process of digital normalization; moreover, it appears that digital normalization does not significantly affect the assembly results. (Note that we did not perform scaffolding, since the digital normalization algorithm does not take into account paired-end sequences, and could mislead scaffolding approaches. Therefore, these results cannot directly be compared to those in Chitsaz et al. (2011) [19].)
The Deltaproteobacteria sequence also assembled well, with 98.8% sequence overlap with the results from Chitsaz et al. Interestingly, only 30kb of the sequence assembled with Velvet-SC in Chitsaz et al. (2011) was missing, while an additional 360kb of sequence was assembled only in the normalized samples. Of the 30kb of missing sequence, only 10% matched via TBLASTX to a nearby Deltaproteobacteria assembly, while more than 40% of the additional 360kb matched to the same Deltaproteobacteria sample. Therefore these additional contigs likely represent real sequence, suggesting that digital normalization is competitive with Velvet-SC in terms of sensitivity.
Digital normalization scales assembly of transcriptomes
We next applied single-pass digital normalization to published yeast and mouse mRNAseq data sets, reducing them to 20x coverage at k=20 [15]. Digital normalization on these samples used 8gb of memory and took about 1 min per million reads. We then assembled both the original and normalized sequence reads with Oases and Trinity, two de novo transcriptome assemblers (Table 4) [15,16].
For both assemblers the computational resources necessary to complete an assembly were reduced (Table 4), but normalization had different effects on performance for the different samples. On the yeast data set, time and memory requirements were reduced significantly, as they were for Oases running on mouse. However, while Trinity's runtime decreased by a factor of three on the normalized mouse data set, its memory requirements did not decrease significantly. This may be because the mouse transcriptome is 5-6 times larger than the yeast transcriptome, so the mouse mRNAseq data are lower coverage overall; in that case we would expect digital normalization to remove fewer errors.
The resulting assemblies differed in summary statistics (Table 5). For both yeast and mouse, Oases lost 5-10% of total transcripts and total bases when assembling the normalized data. Trinity, in contrast, gained transcripts on both normalized data sets, gaining about 1% of total bases on yeast and losing about 1% of total bases on mouse. Using a local-alignment-based overlap analysis (see Methods) we found little difference in sequence content between the pre- and post-normalization assemblies: for example, the normalized Oases assembly had a 98.5% overlap with the unnormalized Oases assembly, while the normalized Trinity assembly had a 97% overlap with the unnormalized Trinity assembly.
To further investigate the differences between transcriptome assemblies caused by digital normalization, we looked at the sensitivity with which long transcripts were recovered post-normalization. When comparing the normalized assembly to the unnormalized assembly in yeast, Trinity lost only 3% of the sequence content in transcripts greater than 300 bases, but 10% of the sequence content in transcripts greater than 1000 bases. However, Oases lost less than 0.7% of sequence content at 300 and 1000 bases. In mouse, we see the same pattern. This suggests that the change in summary statistics for Trinity is caused by fragmentation of long transcripts into shorter transcripts, while the difference for Oases is caused by loss of splice variants. Indeed, this loss of splice variants should be expected, as there are many low-prevalence splice variants present in deep sequencing data [29]. Interestingly, in yeast we recover more transcripts after digital normalization; these transcripts appear to be additional splice variants.
The differences between the Oases and Trinity results show that Trinity is more sensitive to digital normalization than Oases: digital normalization seems to cause Trinity to fragment long transcripts. Why? One potential issue is that Trinity only permits k=26 for assembly, while normalization was performed at k=20; digital normalization may be removing 26-mers that are important for Trinity's path-finding algorithm. Alternatively, Trinity may be more sensitive than Oases to the change in coverage caused by digital normalization. Regardless, the strong performance of Oases on digitally normalized samples, as well as the high retention of k-mers (Table 1), suggests that the primary sequence content of the transcriptome remains present in the normalized reads, although it is recovered with different effectiveness by the two assemblers.
Discussion
Digital normalization dramatically scales de novo assembly
The results from applying digital normalization to read data sets prior to de novo assembly are extremely good: digital normalization reduces the computational requirements (time and memory) for assembly considerably, without substantially affecting the assembly results. It does this in two ways: first, by removing the majority of reads without significantly affecting the true k-mer content of the data set. Second, by eliminating these reads, digital normalization also eliminates sequencing errors contained within those reads, which otherwise would add significantly to memory usage in assembly [4].
Digital normalization also lowers computational requirements by eliminating most repetitive sequence in the data set. Compression-based approaches to graph storage have demonstrated that compressing repetitive sequence also effectively reduces memory and compute requirements [11,30]. Note however that eliminating many repeats may also have its negatives (discussed below).
Digital normalization should be an effective preprocessing approach for most assemblers. In particular, the de Bruijn graph approach used in many modern assemblers relies on k-mer content, which is almost entirely preserved by digital normalization (see Tables 1 and 2) [2].
A general strategy for normalizing coverage
Digital normalization is a general strategy for systematizing coverage in shotgun sequencing data sets by using per-locus downsampling, albeit without any prior knowledge of reference loci. This yields considerable theoretical and practical benefits in the area of de novo sequencing and assembly.
In theoretical terms, digital normalization offers a general strategy for changing the scaling behavior of sequence assembly. Assemblers tend to scale poorly with the number of reads: in particular, de Bruijn graph memory requirements scale linearly with the size of the data set due to the accumulation of errors, although others have similarly poor scaling behavior (e.g. quadratic time in the number of reads) [2]. By calculating per-locus coverage in a way that is insensitive to errors, digital normalization converts genome assembly into a problem that scales with the complexity of the underlying sample -i.e. the size of the genome, transcriptome, or metagenome.
Digital normalization also provides a general strategy for applying online or streaming approaches to analysis of de novo sequencing data. The basic algorithm presented here is explicitly a single-pass or streaming algorithm, in which the entire data set is never considered as a whole; rather, a partial "sketch" of the data set is retained and used for progressive filtering. Online algorithms and sketch data structures offer significant opportunities in situations where data sets are too large to be conveniently stored, transmitted, or analyzed [31]. This can enable increasingly efficient downstream analyses. Digital normalization can be applied in any situation where the abundance of particular sequence elements is either unimportant or can be recovered more efficiently after other processing, as in assembly.
The construction of a simple, reference-free measure of coverage on a per-read basis offers opportunities to analyze coverage and diversity with an assembly-free approach. Genome and transcriptome sequencing is increasingly being applied to non-model organisms and ecological communities for which there are no reference sequences, and hence no good way to estimate underlying sequence complexity. The referencefree counting technique presented here provides a method for determining community and transcriptome complexity; it can also be used to progressively estimate sequencing depth.
More pragmatically, digital normalization also scales existing assembly techniques dramatically. The reduction in data set size afforded by digital normalization may also enable the application of more computationally expensive algorithms such as overlap-layout-consensus assembly approaches to shortread data. Overall, the reduction in data set size, memory requirements, and time complexity for contig assembly afforded by digital normalization could lead to the application of more complex heuristics to the assembly problem.
Digital normalization drops terminal k-mers and removes isoforms
Our implementation of digital normalization does discard some real information, including terminal k-mers and low-abundance isoforms. Moreover, we predict a number of other failure modes: for example, because k-mer approaches demand strict sequence identity, data sets from highly polymorphic organisms or populations will perform more poorly than data sets from low-variability samples. Digital normalization also discriminates against highly repetitive sequences. We note that these problems have traditionally been challenges for assembly strategies: recovering low-abundance isoforms from mRNAseq, assembling genomes from highly polymorphic organisms, and assembling across repeats are all difficult tasks, and improvements in these areas continue to be active areas of research [32][33][34]. Using an alignment-based approach to estimating coverage, rather than a k-mer based approach, could provide an alternative implementation that would improve performance on errors, splice variants, and terminal k-mers. Our current approach also ignores quality scores; a "q-mer" counting approach as in Quake, in which k-mer counts are weighted by quality scores, could easily be adapted [24].
Another concern for normalizing deep sequencing data sets is that, with sufficiently deep sequencing, sequences with many errors will start to accrue. This underlies the continued accumulation of sequence data for E. coli observed in Figure 5. Assemblers may be unable to distinguish between this false sequence and the error-free sequences, for sufficiently deep data sets. This accumulation of erroneous sequences is again caused by the use of k-mers to detect similarity, and is one reason why exploring local alignment approaches (discussed below) may be a good future direction.
Applying assembly algorithms to digitally normalized data
The assembly problem is challenging for several reasons: many formulations are computationally complex (NP-hard), and practical issues of both genome content and sequencing, such as repetitive sequence, polymorphisms, short reads and high error rates, challenge assembly approaches [35]. This has driven the development of heuristic approaches to resolving complex regions in assemblies. Several of these heuristic approaches use the abundance information present in the reads to detect and resolve repeat regions; others use pairing information from paired-end and mate-pair sequences to resolve complex paths. Digital normalization aggressively removes abundance information, and we have not yet adapted it to paired-end sequencing data sets; this could affect the quality of assembly results. Moreover, it is not clear what effect different coverage (C) and k-mer (k) values have on assemblers. In practice, for at least one set of k-mer size k and normalized coverage C parameters, digital normalization seems to have little negative effect on the final assembled contigs. Further investigation of the effects of varying k and C relative to specific assemblers and assembler parameters will likely result in further improvements in assembly quality.
A more intriguing notion than merely using digital normalization as a pre-filter is to specifically adapt assembly algorithms and protocols to digitally normalized data. For example, the reduction in data set size afforded by digital normalization may make overlap-layout-consensus approaches computationally feasible for short-read data [2]. Alternatively, the quick and inexpensive generation of contigs from digitally normalized data could be used prior to a separate scaffolding step, such as those supported by SGA and Bambus2 [14,36]. Digital normalization offers many future directions for improving assembly.
Conclusions
Digital normalization is an effective demonstration that much of short-read shotgun sequencing is redundant. Here we have shown this by normalizing samples to 5-20x coverage while recovering complete or nearly complete contig assemblies. Normalization is substantially different from uniform downsampling: by doing downsampling in a locus-specific manner, we retain low coverage data. Previously described approaches to reducing sampling variation rely on ad hoc parameter measures and/or an initial round of assembly and have not been shown to be widely applicable [20,21].
We have implemented digital normalization as a prefilter for assembly, so that any assembler may be used on the normalized data. Here we have only benchmarked a limited set of assemblers -- Velvet, Oases, and Trinity -- but in theory digital normalization should apply to any assembler. De Bruijn and string graph assemblers such as Velvet, SGA, SOAPdenovo, Oases, and Trinity are especially likely to work well with digitally normalized data, due to the underlying reliance on k-mer overlaps in these assemblers.
Digital normalization is widely applicable and computationally convenient
Digital normalization can be applied de novo to any shotgun data set. The approach is extremely computationally convenient: the runtime complexity is linear with respect to the data size, and perhaps more importantly it is single-pass: the basic algorithm does not need to look at any read more than once. Moreover, because reads accumulate sub-linearly, errors do not accumulate quickly and overall memory requirements for digital normalization should grow slowly with data set size. Note also that while the algorithm presented here is not perfectly parallelizable, efficient distributed k-mer counting is straightforward and it should be possible to scale digital normalization across multiple machines [37].
The first pass of digital normalization is implemented as an online streaming algorithm in which reads are examined once. Streaming algorithms are useful for solving data analysis problems in which the data are too large to easily be transmitted, processed, or stored. Here, we implement the streaming algorithm using a fixed-memory "sketch" data structure, the CountMin Sketch. By combining a single-pass algorithm with a fixed-memory data structure, we provide a data reduction approach for sequence data analysis with both (linear) time and (constant) memory guarantees. Moreover, because the false positive rate of the CountMin Sketch data structure is well understood and easy to predict, we can provide data quality guarantees as well. These kinds of guarantees are immensely valuable from an algorithmic perspective, because they provide a robust foundation for further work [31].

The general concept of removing redundant data while retaining information underlies "lossy compression", an approach used widely in image processing and video compression. The concept of lossy compression has broad applicability in sequence analysis. For example, digital normalization could be applied prior to homology search on unassembled reads, potentially reducing the computational requirements for e.g. BLAST and HMMER without loss of sensitivity. Digital normalization could also help merge multiple different read data sets from different read technologies, by discarding entirely redundant sequences and retaining only sequences containing "new" information. These approaches remain to be explored in the future.
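A minimal CountMin Sketch can be written in a few lines. The hashing scheme and table sizes below are illustrative choices, not those used in khmer; the essential guarantee is that counts are never underestimated, and overestimates become less likely as the number of tables and the table sizes grow.

```python
import hashlib

class CountMinSketch:
    """Minimal CountMin Sketch: fixed-memory k-mer counting in which the
    true count is never underestimated; collisions can only inflate it."""

    def __init__(self, n_tables=4, table_size=10007):
        # distinct table sizes so each table behaves like a different hash
        self.sizes = [table_size + 2 * i for i in range(n_tables)]
        self.tables = [[0] * size for size in self.sizes]

    def _slot(self, item, i):
        # illustrative hash: salt with the table index, reduce mod table size
        digest = hashlib.md5(("%d:%s" % (i, item)).encode()).hexdigest()
        return int(digest, 16) % self.sizes[i]

    def add(self, item):
        for i, table in enumerate(self.tables):
            table[self._slot(item, i)] += 1

    def count(self, item):
        # the true count is <= every slot value, so take the minimum
        return min(table[self._slot(item, i)]
                   for i, table in enumerate(self.tables))
```

Because the structure never undercounts, a read's median k-mer count computed against it can only be over-estimated, so true low-coverage reads are never discarded by normalization due to sketch error.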
Methods
All links below are available electronically through ged.msu.edu/papers/2012-diginorm/, in addition to the archival locations provided.
Data sets
The E. coli, S. aureus, and Deltaproteobacteria data sets were taken from Chitsaz et al. [19], and downloaded from bix.ucsd.edu/projects/singlecell/. The mouse data set was published by Grabherr et al. [15] and downloaded from trinityrnaseq.sf.net/. All data sets were used without modification. The complete assemblies, both pre-and post-normalization, for the E. coli, S. aureus, the uncultured Deltaproteobacteria, mouse, and yeast data sets are available from ged.msu.edu/papers/2012-diginorm/.
The simulated genome and transcriptome were generated from a uniform AT/CG distribution. The genome consisted of a single chromosome 400,000 bases in length, while the transcriptome consisted of 100 transcripts of length 500. 100-base reads were generated uniformly from the genome to an estimated coverage of 200x, with a random 1% per-base error. For the transcriptome, 1 million reads of length 100 were generated from the transcriptome at relative expression levels of 10, 100, and 1000, with transcripts assigned randomly with equal probability to each expression group; these reads also had a 1% per-base error.
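The simulated genome reads described above can be generated along these lines. This is an illustrative sketch rather than the paper's actual script (which is in the linked repository); the defaults mirror the stated parameters: uniform base composition, 100-base reads, 200x estimated coverage, and a 1% per-base substitution error.

```python
import random

def simulate_genome_reads(genome_len=400000, read_len=100, coverage=200,
                          error=0.01, seed=42):
    """Generate a uniform-random genome and error-containing shotgun reads."""
    rng = random.Random(seed)
    bases = "ACGT"
    genome = "".join(rng.choice(bases) for _ in range(genome_len))
    n_reads = genome_len * coverage // read_len
    reads = []
    for _ in range(n_reads):
        # reads are drawn uniformly from the genome
        start = rng.randrange(genome_len - read_len + 1)
        read = list(genome[start:start + read_len])
        for i in range(read_len):
            if rng.random() < error:
                # substitute with one of the three other bases
                read[i] = rng.choice(bases.replace(read[i], ""))
        reads.append("".join(read))
    return genome, reads
```

The transcriptome simulation follows the same pattern, with reads drawn from 100 transcripts at relative expression levels of 10, 100, and 1000 instead of uniformly from a single chromosome.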
Scripts and software
All simulated data sets and all analysis summaries were generated by Python scripts, which are available at github.com/ged-lab/2012-paper-diginorm/. Digital normalization and k-mer analyses were performed with the khmer software package, written in C++ and Python, available at github.com/ged-lab/khmer/, tag '2012-paper-diginorm'. khmer also relies on the screed package for loading sequences, available at github.com/ged-lab/screed/, tag '2012-paper-diginorm'. khmer and screed are Copyright (c) 2010 Michigan State University, and are free software available for distribution, modification, and redistribution under the BSD license.
Mapping was performed with bowtie v0.12.7 [38]. Genome assembly was done with velvet 1.2.01 [9]. Transcriptome assembly was done with velvet 1.1.05/oases 0.1.22 and Trinity, head of branch on 2011.10.29. Graphs and correlation coefficients were generated using matplotlib v1.1.0, numpy v1.7, and ipython notebook v0.12 [39]. The ipython notebook file and data analysis scripts necessary to generate the figures are available at github.com/ged-lab/2012-paper-diginorm/.
Analysis parameters
The khmer software uses a CountMin Sketch data structure to count k-mers, which requires a fixed memory allocation [27]. In all cases the memory usage was fixed such that the calculated false positive rate was below 0.01. By default k was set to 20.
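The relationship between memory (table size), number of tables, and false positive rate can be approximated with a Bloom-filter-style occupancy estimate; the formula and the example numbers below are an illustrative approximation, not khmer's exact calculation.

```python
import math

def fp_rate(n_kmers, table_size, n_tables):
    """Approximate false-positive rate for presence queries: a novel k-mer
    looks present only if it collides with stored k-mers in every table."""
    # fraction of occupied slots in one table, assuming uniform hashing
    occupancy = 1.0 - math.exp(-float(n_kmers) / table_size)
    return occupancy ** n_tables

# e.g. ~4.5m unique k-mers counted in 4 tables of 62.5m slots each
estimated_fp = fp_rate(4.5e6, 62.5e6, 4)
```

Under this approximation, holding total memory fixed while splitting it into a handful of tables keeps the false positive rate comfortably below the 0.01 threshold used in the analyses.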
Genome and transcriptome coverage was calculated by mapping all reads to the reference with bowtie (-a --best --strata) and then computing the per-base coverage in the reference. Read coverage was computed by then averaging the per-base reference coverage for each position in the mapped read; where reads were mapped to multiple locations, a reference location was chosen randomly for computing coverage. Median k-mer counts were computed with khmer as described in the text. Artificially high counts resulting from long stretches of Ns were removed after the analysis. Correlations between median k-mer counts and mapping coverage were computed using numpy.corrcoef; see calc-r2.py script.
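As a sketch of the correlation computation with numpy.corrcoef, using hypothetical (made-up) per-read coverage vectors in place of the real mapping and median k-mer estimates:

```python
import numpy as np

# hypothetical per-read coverage: mapping-based vs. median k-mer count
mapping_cov = np.array([1.0, 5.2, 10.1, 20.3, 40.0, 80.5])
kmer_cov = np.array([1, 5, 11, 19, 42, 78])

# corrcoef returns the 2x2 correlation matrix; take the off-diagonal entry
r = np.corrcoef(mapping_cov, kmer_cov)[0, 1]
r_squared = r ** 2
```

For well-correlated measures such as these, r^2 approaches 1; the real data sets yielded r^2 values of roughly 0.8-0.9 (Figures 2 and 3).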
Normalization and assembly parameters
For Table 3, the assembly k parameter for Velvet was k=45 for E. coli; k=41 for S. aureus single cell; and k=39 for Deltaproteobacteria single cell. Digital normalization on the three bacterial samples was done with -N 4 -x 2.5e8 -k 20, consuming 1gb of memory. Post-normalization k parameters for Velvet assemblies were k=37 for E. coli, k=27 for S. aureus, and k=27 for Deltaproteobacteria. For Table 4, the assembly k parameter for Oases was k=21 for yeast and k=23 for mouse. Digital normalization on both mRNAseq samples was done with -N 4 -x 2e9 -k 20, consuming 8gb of memory. Assembly of the single-pass normalized mRNAseq was done with Oases at k=21 (yeast) and k=19 (mouse).
Assembly overlap and analysis
Assembly overlap was computed by first using NCBI BLASTN to build local alignments for two assemblies, then filtering for matches with bit scores > 200, and finally computing the fraction of bases in each assembly with at least one alignment. Total fractions were normalized to self-by-self BLASTN overlap identity to account for BLAST-specific sequence filtering. TBLASTX comparison of the Deltaproteobacteria SAR324 sequence was done against another assembled SAR324 sequence, acc AFIA01000002.1.
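The per-base overlap computation can be sketched as follows. This is an illustrative reimplementation, not the script used in the paper; it assumes BLASTN tabular output (-outfmt 6), in which query start/end are columns 7-8 and the bit score is column 12.

```python
def overlap_fraction(blast_lines, query_lengths, min_bitscore=200.0):
    """Fraction of assembly bases with at least one BLASTN alignment above
    the bit-score cutoff, from tabular (-outfmt 6) hit lines."""
    covered = {qid: set() for qid in query_lengths}
    for line in blast_lines:
        fields = line.rstrip("\n").split("\t")
        qid = fields[0]
        qstart, qend = int(fields[6]), int(fields[7])
        bitscore = float(fields[11])
        if bitscore > min_bitscore and qid in covered:
            lo, hi = min(qstart, qend), max(qstart, qend)
            covered[qid].update(range(lo, hi + 1))  # 1-based inclusive coords
    total = sum(query_lengths.values())
    return sum(len(s) for s in covered.values()) / float(total)
```

As described above, the resulting fraction would then be normalized by the self-by-self BLASTN overlap to account for BLAST-specific sequence filtering.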
Compute requirement estimation
Execution time was measured using real time from Linux bash 'time'. Peak memory usage was estimated either by the 'memusg' script from stackoverflow.com, peak-memory-usage-of-a-linux-unix-process, included in the khmer repository; or by the Torque queuing system monitor, for jobs run on MSU's HPC system. While several different machines were used for analyses, comparisons between unnormalized and normalized data sets were always done on the same machine.

Tables

Table 1. Digital normalization to C=20 removes many erroneous k-mers from sequencing data sets. Numbers in parentheses indicate number of true k-mers lost at each step, based on reference.

Data set | True 20-mers | 20-mers in reads | 20-mers at C=20 | % reads kept

Figure Legends
Figure 1. Representative rank-abundance distributions for 20-mers from 100-base reads with no errors, a read with a single substitution error, and a read with multiple substitution errors.

Figure 2. Mapping and k-mer coverage measures correlate for simulated genome data and a real E. coli data set (5m reads). Simulated data r^2 = 0.79; E. coli r^2 = 0.80.

Figure 3. Mapping and k-mer coverage measures correlate for simulated transcriptome data as well as real mouse transcriptome data. Simulated data r^2 = 0.93; mouse transcriptome r^2 = 0.90.

Figure 4. Coverage distribution of three microbial genome samples, calculated from mapped reads (a) before and (b) after digital normalization (k=20, C=20).

Figure 5. Fraction of reads kept when normalizing the E. coli dataset to C=20 at k=20.

Figure 6. K-mers at the ends of sequences are lost during digital normalization.
Table 2. Three-pass digital normalization removes most erroneous k-mers. Numbers in parentheses indicate number of true k-mers lost at each step, based on known reference.

Data set | True 20-mers | 20-mers in reads | 20-mers remaining | % reads kept
Simulated genome | 399,981 | 8,162,813 | 453,588 (-4) | 5%
Simulated mRNAseq | 48,100 | 2,466,638 (-88) | 182,855 (-351) | 1.2%
E. coli genome | 4,542,150 | 175,627,381 (-152) | 7,638,175 (-23) | 2.1%
Yeast mRNAseq | 10,631,882 | 224,847,659 (-683) | 10,532,451 (-99,436) | 2.1%
Mouse mRNAseq | 43,830,642 | 709,662,624 (-23,196) | 42,350,127 (-1,488,380) | 7.1%
Table 3. Three-pass digital normalization reduces computational requirements for contig assembly of genomic data.

Data set | N reads pre/post | Assembly time pre/post | Assembly memory pre/post
E. coli | 31m / 0.6m | 1040s / 63s (16.5x) | 11.2gb / 0.5gb (22.4x)
S. aureus single-cell | 58m / 0.3m | 5352s / 35s (153x) | 54.4gb / 0.4gb (136x)
Deltaproteobacteria single-cell | 67m / 0.4m | 4749s / 26s (182.7x) | 52.7gb / 0.4gb (131.8x)
Table 4. Single-pass digital normalization to C=20 reduces computational requirements for transcriptome assembly.
Table 5. Digital normalization has assembler-specific effects on transcriptome assembly.

Data set | Contigs > 300 | Total bp > 300 | Contigs > 1000 | Total bp > 1000
Yeast (Oases) | 12,654 / 9,547 | 33.2mb / 27.7mb | 9,156 / 7,345 | 31.2mb / 26.4mb
Yeast (Trinity) | 10,344 / 12,092 | 16.2mb / 16.5mb | 5,765 / 6,053 | 13.6mb / 13.1mb
Mouse (Oases) | 57,066 / 49,356 | 98.1mb / 84.9mb | 31,858 / 27,318 | 83.7mb / 72.4mb
Mouse (Trinity) | 50,801 / 61,242 | 79.6mb / 78.8mb | 23,760 / 24,994 | 65.7mb / 59.4mb
Acknowledgments

We thank Chris Hart, James M. Tiedje, Brian Haas, Jared Simpson, Scott Emrich, and Russell Neches for their insight and helpful technical commentary.
References

1. Metzker M (2010) Sequencing technologies - the next generation. Nat Rev Genet 11: 31-46.
2. Miller J, Koren S, Sutton G (2010) Assembly algorithms for next-generation sequencing data. Genomics 95: 315-27.
3. Gnerre S, Maccallum I, Przybylski D, Ribeiro F, Burton J, et al. (2011) High-quality draft assemblies of mammalian genomes from massively parallel sequence data. Proc Natl Acad Sci U S A 108: 1513-8.
4. Conway T, Bromage A (2011) Succinct data structures for assembling large genomes. Bioinformatics 27: 479-86.
5. Sboner A, Mu X, Greenbaum D, Auerbach R, Gerstein M (2011) The real cost of sequencing: higher than you think! Genome Biol 12: 125.
6. Stein L (2010) The case for cloud computing in genome informatics. Genome Biol 11: 207.
7. Trapnell C, Salzberg S (2009) How to map billions of short reads onto genomes. Nat Biotechnol 27: 455-7.
8. Simpson J, Wong K, Jackman S, Schein J, Jones S, et al. (2009) ABySS: a parallel assembler for short read sequence data. Genome Res 19: 1117-23.
9. Zerbino D, Birney E (2008) Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res 18: 821-9.
10. Li Y, Hu Y, Bolund L, Wang J (2010) State of the art de novo assembly of human genomes from massively parallel sequencing data. Hum Genomics 4: 271-7.
11. Simpson J, Durbin R (2012) Efficient de novo assembly of large genomes using compressed data structures. Genome Res 22: 549-56.
12. Iqbal Z, Caccamo M, Turner I, Flicek P, McVean G (2012) De novo assembly and genotyping of variants using colored de Bruijn graphs. Nat Genet 44: 226-32.
13. Compeau P, Pevzner P, Tesler G (2011) How to apply de Bruijn graphs to genome assembly. Nat Biotechnol 29: 987-91.
14. Simpson J, Durbin R (2010) Efficient construction of an assembly string graph using the FM-index. Bioinformatics 26: i367-73.
15. Grabherr M, Haas B, Yassour M, Levin J, Thompson D, et al. (2011) Full-length transcriptome assembly from RNA-seq data without a reference genome. Nat Biotechnol 29: 644-52.
16. Schulz M, Zerbino D, Vingron M, Birney E (2012) Oases: robust de novo RNA-seq assembly across the dynamic range of expression levels. Bioinformatics 28: 1086-92.
17. Namiki T, Hachiya T, Tanaka H, Sakakibara Y (2011) MetaVelvet: an extension of Velvet assembler to de novo metagenome assembly from short sequence reads. ACM Conference on Bioinformatics, Computational Biology and Biomedicine.
18. Peng Y, Leung H, Yiu S, Chin F (2011) Meta-IDBA: a de novo assembler for metagenomic data. Bioinformatics 27: i94-101.
19. Chitsaz H, Yee-Greenbaum J, Tesler G, Lombardo M, Dupont C, et al. (2011) Efficient de novo assembly of single-cell bacterial genomes from short-read data sets. Nat Biotechnol 29: 915-21.
20. Rodrigue S, Malmstrom R, Berlin A, Birren B, Henn M, et al. (2009) Whole genome amplification and de novo assembly of single bacterial cells. PLoS One 4: e6864.
21. Woyke T, Sczyrba A, Lee J, Rinke C, Tighe D, et al. (2011) Decontamination of MDA reagents for single cell whole genome amplification. PLoS One 6: e26161.
22. Medvedev P, Scott E, Kakaradov B, Pevzner P (2011) Error correction of high-throughput sequencing datasets with non-uniform coverage. Bioinformatics 27: i137-41.
23. Chaisson M, Pevzner P, Tang H (2004) Fragment assembly with short reads. Bioinformatics 20: 2067-74.
24. Kelley D, Schatz M, Salzberg S (2010) Quake: quality-aware detection and correction of sequencing errors. Genome Biol 11: R116.
25. Bonaldo M, Lennon G, Soares M (1996) Normalization and subtraction: two approaches to facilitate gene discovery. Genome Res 6: 791-806.
26. Soares M, Bonaldo M, Jelene P, Su L, Lawton L, et al. (1994) Construction and characterization of a normalized cDNA library. Proc Natl Acad Sci U S A 91: 9228-32.
27. Cormode G, Muthukrishnan S (2004) An improved data stream summary: the count-min sketch, and its applications. Journal of Algorithms.
28. Spits C, Caignec CL, Rycke MD, Haute LV, Steirteghem AV, et al. (2006) Whole-genome multiple displacement amplification from single cells. Nat Protoc 1: 1965-70.
29. Pickrell J, Pai A, Gilad Y, Pritchard J (2010) Noisy splicing drives mRNA isoform diversity in human cells. PLoS Genet 6: e1001236.
30. Pinho A, Pratas D, Garcia S (2012) GReEn: a tool for efficient compression of genome resequencing data. Nucleic Acids Res 40: e27.
31. Muthukrishnan S (2005) Data streams: algorithms and applications. Foundations and Trends in Theoretical Computer Science. Now Publishers. URL http://books.google.com/books?id=415loiMd_c0C.
32. Do H, Choi K, Preparata F, Sung W, Zhang L (2008) Spectrum-based de novo repeat detection in genomic sequences. J Comput Biol 15: 469-87.
33. Novak P, Neumann P, Macas J (2010) Graph-based clustering and characterization of repetitive sequences in next-generation sequencing data. BMC Bioinformatics 11: 378.
34. Gu W, Castoe T, Hedges D, Batzer M, Pollock D (2008) Identification of repeat structure in large genomes using repeat probability clouds. Anal Biochem 380: 77-83.
35. Nagarajan N, Pop M (2009) Parametric complexity of sequence assembly: theory and applications to next generation sequencing. J Comput Biol 16: 897-908.
36. Koren S, Treangen T, Pop M (2011) Bambus 2: scaffolding metagenomes. Bioinformatics 27: 2964-71.
37. Schatz M (2009) CloudBurst: highly sensitive read mapping with MapReduce. Bioinformatics 25: 1363-9.
38. Langmead B, Trapnell C, Pop M, Salzberg S (2009) Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol 10: R25.
IPython: a System for Interactive Scientific Computing. F Pérez, B E Granger, Comput Sci Eng. 9/ 145 min (6.1x) 31.8gb / 10.4gb (3.1x)Pérez F, Granger BE (2007) IPython: a System for Interactive Scientific Computing. Comput Sci Eng 9: 21-29. / 145 min (6.1x) 31.8gb / 10.4gb (3.1x)
Mouse (Oases) 100m / 26.4m 761 min/ 73 min (10.4x) 116. 0gb / 34.6gb (3.4x)Mouse (Oases) 100m / 26.4m 761 min/ 73 min (10.4x) 116.0gb / 34.6gb (3.4x)
Mouse (Trinity) 100m / 26.4m 2297 min / 634 min (3.6x) 42. 1gb / 36.4gb (1.2x)Mouse (Trinity) 100m / 26.4m 2297 min / 634 min (3.6x) 42.1gb / 36.4gb (1.2x)
| [] |
[
"Asymptotic Analysis of the Bayesian Likelihood Ratio for Testing Homogeneity in Normal Mixture Models",
"Asymptotic Analysis of the Bayesian Likelihood Ratio for Testing Homogeneity in Normal Mixture Models"
] | [
"Natsuki Kariya \nDepartment of Mathematical and Computing Science\nTokyo Institute of Technology\n\n",
"Sumio Watanabe \nDepartment of Mathematical and Computing Science\nTokyo Institute of Technology\n\n"
] | [
"Department of Mathematical and Computing Science\nTokyo Institute of Technology\n",
"Department of Mathematical and Computing Science\nTokyo Institute of Technology\n"
] | [] | When we use the normal mixture model, the optimal number of the components describing the data should be determined. Testing homogeneity is good for this purpose; however, to construct its theory is challenging, since the test statistic does not converge to the χ 2 distribution even asymptotically. The reason for such asymptotic behavior is that the parameter set describing the null hypothesis (N.H.) contains singularities in the space of the alternative hypothesis (A.H.). Recently, a Bayesian theory for singular models was developed, and it has elucidated various problems of statistical inference. However, its application to hypothesis tests for singular models has been limited.In this paper, we introduce a scaling technique that greatly simplifies the derivation and study testing of homogeneity for the first time the basis of Bayesian theory. We derive the asymptotic distributions of the marginal likelihood ratios in three cases:(1) only the mixture ratio is a variable in the A.H. ; (2) the mixture ratio and the mean of the mixed distribution are variables; And (3) the mixture ratio, the mean, and the variance of the mixed distribution are variables.; In all cases, the results are complex, but can be described as functions of random variables obeying normal distributions. A testing scheme based on them was constructed, and their validity was confirmed through numerical experiments. | null | [
"https://arxiv.org/pdf/1812.03510v2.pdf"
] | 88,517,560 | 1812.03510 | 3145bef5f0616c69b444b3cd99e48af6b82557c3 |
Asymptotic Analysis of the Bayesian Likelihood Ratio for Testing Homogeneity in Normal Mixture Models
22 Dec 2019 December 24, 2019
Natsuki Kariya
Department of Mathematical and Computing Science
Tokyo Institute of Technology
Sumio Watanabe
Department of Mathematical and Computing Science
Tokyo Institute of Technology
Asymptotic Analysis of the Bayesian Likelihood Ratio for Testing Homogeneity in Normal Mixture Models
22 Dec 2019 December 24, 2019
When we use the normal mixture model, the optimal number of the components describing the data should be determined. Testing homogeneity is good for this purpose; however, to construct its theory is challenging, since the test statistic does not converge to the χ 2 distribution even asymptotically. The reason for such asymptotic behavior is that the parameter set describing the null hypothesis (N.H.) contains singularities in the space of the alternative hypothesis (A.H.). Recently, a Bayesian theory for singular models was developed, and it has elucidated various problems of statistical inference. However, its application to hypothesis tests for singular models has been limited.In this paper, we introduce a scaling technique that greatly simplifies the derivation and study testing of homogeneity for the first time on the basis of Bayesian theory. We derive the asymptotic distributions of the marginal likelihood ratios in three cases: (1) only the mixture ratio is a variable in the A.H.; (2) the mixture ratio and the mean of the mixed distribution are variables; and (3) the mixture ratio, the mean, and the variance of the mixed distribution are variables. In all cases, the results are complex, but can be described as functions of random variables obeying normal distributions. A testing scheme based on them was constructed, and their validity was confirmed through numerical experiments.
Introduction
Normal mixtures have been widely used for analyzing various problems, such as pattern recognition, clustering analysis, and anomaly detection, since they were first applied to Pearson's biology research in the 19th century [1]. This remains one of the most important models in statistics, both in theory and in practice [2].
When a normal mixture model is employed, the optimal number of components for describing the data has to be determined. Testing homogeneity is a well-known approach for this purpose, which is a hypothesis test to determine whether the data are described by a single normal distribution or a mixture distribution. In a normal mixture, the correspondence between a parameter and a probability density function is not one-to-one, and the Fisher information matrix of the statistical model that represents alternative hypotheses becomes singular at the parameter of the null hypothesis. As a result, the log likelihood ratio of the test of homogeneity for the normal mixture model does not converge to a χ 2 distribution, unlike in regular models [3][4] [5].
Therefore, it is necessary to study the testing of homogeneity in normal mixture models, not only out of theoretical interests but also for practical applications. Various methods have been proposed; for example, the modified likelihood ratio test, a method that adds a regularizing term [6] [7], an EM algorithm for calculating the modified likelihood ratio [8] [9], and the D test [10]. (for a recent review on this topic, see for example [11]). However, little research exists on treating the problem from the Bayesian perspective.
On the other hand, a theoretical foundation for singular statistical models has been constructed within the framework of Bayesian statistics in recent years [12]. One of the achievements of this theoretical study is WAIC, a new information criterion that can be applied to singular models [13]. However, most of the results have been on the problem of statistical inference, whereas the problem of hypothesis testing remains insufficiently studied.
In this paper, we study the test of homogeneity of normal mixture models based on the framework of a Bayesian hypothesis test, for the first time. We derive the asymptotic distribution of the test statistic, i.e., the marginal likelihood ratio, in three cases: (1) only the mixture ratio is a variable; (2) the mixture ratio and the mean of the mixed distribution are variables; and (3) the mixture ratio, the mean, and the variance of the mixed distribution are variables.

The paper is organized as follows. In Section 2, we review the framework of the Bayesian hypothesis test and show that the marginal likelihood ratio gives the most powerful test.
Our main results are presented in Sections 3 to 5. We derive the asymptotic distributions of the marginal likelihood ratio analytically for three cases. The results of the numerical experiments for validation are also presented. In Section 6, we summarize our results and give a conclusion.
Framework of Bayesian Hypothesis Test
In this section, we briefly review the framework of the Bayesian hypothesis test.
Let {X n } = (X 1 , X 2 , ..., X n ) be a sample which is generated independently and identically from a probability distribution. We consider a statistical model of a normal mixture,
p(x|w) = (1 − a)N(0, 1²) + aN(b, 1/c),   (1)
where w = (a, b, c), 0 ≤ a ≤ 1, b ∈ R, and c > 0. Here N (b, σ 2 ) denotes a normal distribution with the average b and variance σ 2 .
In a Bayesian hypothesis test, the null and alternative hypotheses are set as
N.H. : w 0 ∼ ϕ 0 (w), X i ∼ p(x|w 0 ), A.H. : w 0 ∼ ϕ 1 (w), X i ∼ p(x|w 0 ),
where X ∼ p(x) means that a random variable X is generated from a probability density function p(x). In the case of the testing homogeneity, the null hypothesis is set as,
N.H. : ϕ 0 (w) = δ(a)δ(b)δ(c − 1).
For a given statistic T (X n ) and a real value t, a hypothesis test T is defined by a determining procedure,
T(X^n) ≤ t =⇒ N.H. is accepted,
T(X^n) > t =⇒ A.H. is accepted.

The most powerful test is obtained by taking T(X^n) to be the marginal likelihood ratio,

L(X^n) = [ ∫ ϕ1(w) ∏_{i=1}^n p(X_i|w) dw ] / [ ∫ ϕ0(w) ∏_{i=1}^n p(X_i|w) dw ],   (2)

where X^n = {X_i} is an i.i.d. sample. In this paper, we mathematically derive the asymptotic probability distributions of L(X^n) in the following three cases for the A.H.:

1. ϕ1(a, b, c) = U_a(0, 1) δ(b − β) δ(c − 1),
2. ϕ1(a, b, c) = U_a(0, 1) U_b(0, B) δ(c − 1),
3. ϕ1(a, b, c) = U_a(0, 1) U_bc(D),
where U a (0, 1) is the uniform distribution of a on the interval (0, 1), U b (0, B) is the uniform distribution of b on (0, B), and U bc (D) is the uniform distribution of a set D in (b, 1/c) space.
The proofs use the following notation. For a given sample X n , two random variables ξ n and η n are defined by
ξ_n = (1/√n) Σ_{i=1}^n X_i,   (3)
η_n = (1/√(2n)) Σ_{i=1}^n (X_i² − 1).   (4)
If X n is an i.i.d. sample generated from the N.H., then both ξ n and η n converge to N (0, 1)
in distribution and they are asymptotically independent.
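As a numerical illustration of these convergences, the following Python sketch (the helper name xi_eta is ours) draws samples from the N.H., computes ξ_n and η_n according to eqs. (3) and (4), and checks that their empirical means, variances, and covariance are consistent with two asymptotically independent standard normals.

```python
import math
import random

def xi_eta(sample):
    # xi_n and eta_n of eqs. (3)-(4) for a sample X_1, ..., X_n
    n = len(sample)
    xi = sum(sample) / math.sqrt(n)
    eta = sum(x * x - 1.0 for x in sample) / math.sqrt(2.0 * n)
    return xi, eta

# Monte Carlo check under the N.H. (X_i ~ N(0,1)).
rng = random.Random(0)
xis, etas = [], []
for _ in range(2000):
    xi, eta = xi_eta([rng.gauss(0.0, 1.0) for _ in range(200)])
    xis.append(xi)
    etas.append(eta)

def mean(v):
    return sum(v) / len(v)

m_xi, m_eta = mean(xis), mean(etas)
v_xi = mean([(x - m_xi) ** 2 for x in xis])
v_eta = mean([(e - m_eta) ** 2 for e in etas])
cov = mean([(x - m_xi) * (e - m_eta) for x, e in zip(xis, etas)])
```

Both empirical variances come out close to 1 and the covariance close to 0, as the asymptotic statement predicts.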
3 Case 1: the case only the mixture ratio is unknown
Asymptotic distribution of the test statistic
Let us consider the case 1, that is, the mean of the mixed distribution of the A.H. is fixed and only the mixture ratio is the variable. This case is quite simple and we can readily derive the asymptotic distribution of the marginal likelihood ratio analytically. However, even in such a case the marginal likelihood ratio shows non-conventional behavior (different from the ordinary χ 2 distribution), as the support of the prior of the A.H. approaches to the singularity in the parameter space.
Therefore, this case can be regarded as a minimal model with which to study the effect of the singularity on the behavior of the marginal likelihood ratio. Hence we will study it as a first step towards analyzing more practical situations in the following sections.
We consider the case that the A.H. is near the N.H. in terms of the Kullback-Leibler divergence. In this situation, it is not easy to discriminate the alternative hypothesis from the null one. This is a typical situation in which a hypothesis test is needed.
A similar situation occurs in the context of Bayesian inference, where the case in which the true distribution generating the sample deviates slightly, on the order of O(n −1/2 ), from the singularity of the model has been studied [14]. There, it was shown that the singularity greatly affects the behavior of the generalization error, even when the parameter set that represents the true model does not exactly coincide with the singularity.
Although our problem is not an inference but a hypothesis test, we expected that a similar structure exists. We will see that this is true, and that the n −1/2 scaling works as well. This is because the scaling is determined from the order of the Kullback-Leibler divergence between the A.H. and the singularity (N.H.).
Applying the scaling mentioned above, we can derive the asymptotic distribution of the marginal likelihood ratio as follows.
Theorem 1. Assume that

N.H. : ϕ0(w) = δ(a)δ(b)δ(c − 1),
A.H. : ϕ1(w) = U_a(0, 1) δ(b − β) δ(c − 1),

where β = β0 × n^{−1/2} and β0 is a nonzero constant. If {X_i} is independently and identically generated from the N.H., the convergence in probability

L(X^n) − L∞(ξ_n) → 0

holds for n → ∞, where

L∞(ξ_n) = ( √(2π) / (2β0) ) [ erf( (β0 − ξ_n)/√2 ) + erf( ξ_n/√2 ) ] exp( ξ_n²/2 ).   (5)
Here ξ n is a random variable defined in eq.(3) and erf(x) is the error function,
erf(x) ≡ (2/√π) ∫_0^x e^{−t²} dt.
Remark. Assume that ξ is a random variable whose probability distribution is N (0, 1).
By Theorem 1 and the convergence in distribution ξ_n → ξ, the convergence in distribution
L(X^n) → L∞(ξ) holds. Since L∞(ξ) can be rewritten as

L∞(ξ) = ∫_0^1 exp( −β0²a²/2 + β0 ξ a ) da,
it is an increasing function of ξ. Therefore, we can determine the rejection region by using ξ.
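The closed form (5) can be cross-checked against the integral representation in the remark. A minimal numerical sketch (function names are ours), assuming nothing beyond eq. (5) and a midpoint rule:

```python
import math

def L_closed(xi, beta0):
    # Closed form of eq. (5)
    s2 = math.sqrt(2.0)
    return (math.sqrt(2.0 * math.pi) / (2.0 * beta0)) \
        * (math.erf((beta0 - xi) / s2) + math.erf(xi / s2)) \
        * math.exp(xi * xi / 2.0)

def L_integral(xi, beta0, m=20000):
    # Midpoint rule for int_0^1 exp(-beta0^2 a^2 / 2 + beta0 * xi * a) da
    h = 1.0 / m
    return h * sum(
        math.exp(-0.5 * (beta0 * (i + 0.5) * h) ** 2 + beta0 * xi * (i + 0.5) * h)
        for i in range(m)
    )
```

The two agree to quadrature accuracy for any ξ and β0, and the closed form is indeed increasing in ξ, which justifies constructing the rejection region from ξ alone.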
Proof. : The integral with respect to (b, c) is easily performed and the prior of a in the A.H. is a uniform distribution on (0, 1); it follows that
L(X^n) = ∫_0^1 exp( H(a) ) da,   (6)
where H(a) is defined as,
H(a) ≡ Σ_{i=1}^n log [ p(X_i|a, β, 1) / p(X_i|0, 0, 1) ]   (7)
     = Σ_{i=1}^n log [ (1 − a) + a exp( βX_i − β²/2 ) ].   (8)
Under the N.H., from a well-known result in extreme statistics, the order of the maximum of X_i is X_M ≡ max{X_i} = O_p( √(2 log n) ). This results in

βX_i − β²/2 ≤ βX_M − β²/2 = O_p( √(log n / n) ).   (9)
Let α be a constant which satisfies 1 < α. Then

βX_i − β²/2 = o_p( √( (log n)^α / n ) ).

Hence

exp( βX_i − β²/2 ) = 1 + ( βX_i − β²/2 ) + (1/2!)( βX_i − β²/2 )² + (1/3!)( βX_i − β²/2 )³ e^{C_0},

where C_0 is a random variable that satisfies

0 ≤ C_0 ≤ βX_i − β²/2   (if βX_i − β²/2 ≥ 0),
βX_i − β²/2 ≤ C_0 ≤ 0   (otherwise).

Therefore,

(1/3!)( βX_i − β²/2 )³ e^{C_0} = o_p( (log n)^{3α/2} / n^{3/2} ).

It follows that

H(a) = Σ_{i=1}^n log [ 1 + a( βX_i − β²/2 + (1/2)( βX_i − β²/2 )² ) + o_p(1/n) ]
     = Σ_{i=1}^n log [ 1 + aβX_i − aβ²/2 + aβ²X_i²/2 + o_p(1/n) ].
Then, by applying a Taylor expansion log(1 + ε) = ε − ε²/2 + O(ε³) to this equation, we obtain

H(a) = Σ_{i=1}^n [ aβX_i − (1/2)aβ² + (1/2)aβ²X_i² − (1/2)a²β²X_i² ] + o_p(1).   (10)
Let us use the following notation:

γ ≡ [ Σ_i βX_i + (1/2) Σ_i (βX_i)² − (n/2)β² ] / [ (1/2) Σ_i (βX_i)² ],
δ ≡ (1/2) Σ_i (βX_i)².
Accordingly, H(a) can be written as
H(a) = −δa 2 + γδa = −δ(a − γ/2) 2 + δγ 2 /4.
It follows that
L(X^n) = ∫_0^1 da exp( −δ(a − γ/2)² ) × exp( γ²δ/4 )
       = ( √π / (2√δ) ) [ erf( γ√δ/2 ) + erf( √δ(1 − γ/2) ) ] × exp( γ²δ/4 ),
where erf(x) is the error function defined by
erf(x) = 2 √ π x 0 e −t 2 dt.
As n tends to infinity, δ converges in probability as

δ → β0²/2,   (11)

by which γ satisfies

γ = (2/β0) ξ_n + o_p(1),   (12)

which completes the theorem.
A remarkable feature of this theorem is that L(X^n) does not explicitly depend on the sample size n. The reason is that, in the current setting, the distance between the two centers of the clusters is O(n^{−1/2}), and as the sample size n increases, the posterior distribution becomes localized around the true parameter. But at the same time, the fluctuation around the true parameter induced by the randomness of the sample is of the same order as the rate at which the posterior distribution concentrates around the true parameter as the sample size increases. As a result, these two effects cancel and L(X^n) does not explicitly depend on the sample size n.
Numerical evaluation of the level
Here, we numerically derive the rejection region and the level based on the results above.
From the definition, the level of the test is given as the probability, under the N.H., that L(X^n) exceeds a given threshold value t.
To see its behavior, we numerically calculated the level by generating 1,000 random samples from a standard normal distribution N(0, 1²) and calculating L(X^n) from each sample. We then evaluated the level as the fraction of the values L(X^n) that exceeded the threshold. The level drops rapidly when the threshold exceeds a certain value, and this tendency is especially clear for small β. This can be understood as follows.
The situation we consider is a "delicate" one, in which it is not easy to discriminate between the N.H. and the A.H. If we choose a small threshold, the level is large, but as we choose larger and larger values, the level decreases sharply, because the N.H. and the A.H. are similar in this situation, and the probability that the value of the marginal likelihood ratio is large is expected to be very low. Figure 2 shows the results of our numerical calculation of the threshold that gives the 5% level as a function of β. From the asymptotic distribution obtained above, it can be seen that L → 1/2 when β is sufficiently large, and L → 1 in the limit of β → 0. To confirm the validity of this result, we numerically evaluated the value of L(X^n) for a finite number of samples. We used sets of samples generated from the N.H. distribution. Here, we set the sample size to n = 50 or n = 100, and generated 10000 sets of samples from the null hypothesis. In each case, the level was calculated as the ratio of the number of sets X^n for which L(X^n) falls into the critical region to the total number of sets.
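A simplified version of this Monte Carlo level estimation can be sketched as follows (function names are ours; under the N.H., ξ_n is computed directly from each generated sample and plugged into eq. (5)):

```python
import math
import random

def L_inf(xi, beta0):
    # Asymptotic marginal likelihood ratio of eq. (5)
    s2 = math.sqrt(2.0)
    return (math.sqrt(2.0 * math.pi) / (2.0 * beta0)) \
        * (math.erf((beta0 - xi) / s2) + math.erf(xi / s2)) \
        * math.exp(xi * xi / 2.0)

def empirical_level(threshold, beta0, n=100, sets=5000, seed=0):
    # Fraction of N.H. sample sets whose statistic L(X^n) exceeds the threshold
    rng = random.Random(seed)
    hits = 0
    for _ in range(sets):
        xi_n = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / math.sqrt(n)
        if L_inf(xi_n, beta0) > threshold:
            hits += 1
    return hits / sets
```

As the threshold grows, the estimated level drops, reproducing the qualitative behavior described above.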
The result is shown in Table 1, where we can see that the numerically calculated levels match those derived from the asymptote in both the n = 50 and n = 100 cases. Therefore, it can be concluded that the asymptotic distribution we derived above is valid.

4 Case 2: the case the mixture ratio and the mean are unknown

In this section, we proceed to a somewhat more general case. Here, the alternative hypothesis is a normal mixture whose mixture ratio and mean are both variables. We assume that we have no prior knowledge about the A.H., and that the prior is a uniform distribution.
Asymptotic distribution of the test statistic
We will prove the following theorem on the asymptotic distribution of the marginal likelihood ratio L.
Theorem 2. Assume that

N.H. : ϕ0(w) = δ(a)δ(b)δ(c − 1),
A.H. : ϕ1(w) = U_a(0, 1) U_b(0, B) δ(c − 1),

where B = B0 × n^{−1/2} and B0 is a positive constant. If {X_i} is independently and identically generated from the N.H., the convergence in probability L(X^n) − L∞(ξ_n) → 0 holds as n → ∞, where

L∞(ξ_n) = (1/B0) ∫_0^{B0²} ( 1/(2√t) ) log( B0²/t ) e^{−t/2} cosh( ξ_n√t ) dt,   (13)
and ξ n is a random variable defined in eq.(3).
Remark. Assume that ξ is a random variable whose probability distribution is N (0, 1).
By Theorem 2, the convergence in distribution
L(X n ) → L ∞ (ξ)
holds. Hence the asymptotic rejection region of the most powerful test can be found by using L ∞ (ξ).
Proof. : The marginal likelihood ratio can be written as

L(X^n) = ∫_0^1 da ∫_0^B ( db / B ) exp( H(a, b) ),

where

H(a, b) = Σ_i log [ (1 − a) + a exp( bX_i − b²/2 ) ].

From the condition b ∈ [0, B0/√n], H(a, b) can be approximated in the same way as in the proof of Theorem 1 as

H(a, b) = Σ_i [ abX_i − (1/2)ab² + (1/2)ab²X_i² − (1/2)a²b²X_i² ] + o_p(1).   (14)
Hence,
H(a, b) = −(n/2)a²b² + Σ_i [ abX_i + (1/2)ab²(X_i² − 1) − (1/2)a²b²(X_i² − 1) ] + o_p(1).   (15)
Under the N.H., by using the definitions, eqs. (3) and (4), we have

H(a, b) = −(n/2)a²b² + √n abξ_n + √(n/2) ab²η_n − √(n/2) a²b²η_n + o_p(1)
        = −(n/2)a²b² + √n abξ_n + o_p(1).
Using the notation L = L(X n ) for simplicity, it follows that
L = ∫_0^1 da ∫_0^{B0/√n} ( √n db / B0 ) exp( −(n/2)a²b² + √n abξ_n + o_p(1) )
  = ∫_0^1 da ∫_0^{B0} ( db / B0 ) exp( −(1/2)a²b² + abξ_n + o_p(1) ).
Hence, the convergence in probability L(X^n) − L∞(ξ_n) → 0 holds, where

L∞(ξ_n) = ∫_0^1 da ∫_0^{B0} ( db / B0 ) exp( −(1/2)a²b² + abξ_n ).
By using b = t/a, we have

L∞(ξ_n) = ∫_0^1 da ∫_0^{aB0} ( dt / (aB0) ) exp( −(1/2)t² + tξ_n )
        = ∫_0^{B0} ( dt / B0 ) ∫_{t/B0}^1 ( da / a ) exp( −(1/2)t² + tξ )
        = (1/B0) ∫_0^{B0} dt ( log B0 − log t ) exp( −(1/2)t² + tξ )
        = (1/(2B0)) ∫_0^{B0} dt ( log B0 − log t ) exp( −(1/2)t² ) cosh(tξ).
Then eq.(13) is obtained by replacing the integration of t by √ t.
Similarly to the previous example, we can see that the asymptotic behavior of the test statistic L does not explicitly depend on n.
We should also notice that the stochastic behavior of L is determined only by that of the random variable ξ. Clearly, L increases monotonically as the absolute value of ξ increases, and cosh(ξ√t) is an even function with respect to ξ; hence, we can determine the critical region in the same way as in a two-sided hypothesis test of ξ.
For example, under the null hypothesis, the random variable ξ obeys the standard normal distribution, and the 5 % critical region is given as |ξ| > 1.96.
As a result of this, the 5 % critical region of the test statistics L is given as follows,
L > (1/B0) ∫_0^{B0²} ( 1/(2√t) ) log( B0²/t ) e^{−t/2} cosh( 1.96√t ) dt.   (16)
For example, if we choose B 0 = 1, the 5% critical region of L is given as
L > 2.298.
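This critical value can be reproduced by direct quadrature of eq. (13). Substituting t = u² removes the 1/√t singularity and yields L∞(ξ) = (2/B0) ∫_0^{B0} log(B0/u) e^{−u²/2} cosh(ξu) du; a sketch (function names are ours):

```python
import math

def L_inf_case2(xi, B0=1.0, m=100000):
    # Eq. (13) after substituting t = u^2:
    # L(xi) = (2/B0) * int_0^{B0} log(B0/u) exp(-u^2/2) cosh(xi*u) du
    h = B0 / m
    total = 0.0
    for i in range(m):
        u = (i + 0.5) * h  # midpoint rule avoids the log singularity at u = 0
        total += math.log(B0 / u) * math.exp(-u * u / 2.0) * math.cosh(xi * u)
    return 2.0 / B0 * total * h
```

At the two-sided 5% point ξ = 1.96 with B0 = 1, this evaluates to approximately 2.298, matching the critical value quoted above; the function is even in ξ and increases with |ξ|.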
We numerically validated the effectiveness of the analytically derived distribution of L when the sample size is finite.
First, we prepared 10000 sets of n samples, where n denotes the sample size, and we set n = 50 or n = 100. We calculated L by substituting for ξ in the asymptote the value (1/√n) Σ_i X_i. Here, we fixed B0 = 1. In each case, the level was calculated as the ratio of the number of sets X^n for which L(X^n) falls into the critical region to the total number of sets. These levels were compared with those calculated from the asymptote of L. Table 2 shows the result. It shows that the asymptote derived in the previous section works well even in the finite-n cases.
Comments on the comparison with hypothesis test using
Bayes factor
Let us comment on the comparison of our results with those obtained by another well-known method of Bayesian hypothesis testing, i.e., using the Bayes factor.

Since we have derived the asymptote of the marginal likelihood ratio L, we can readily calculate the log marginal likelihood ratio F:
F = − log L
The log marginal likelihood ratio F, which is also called the logarithm of the Bayes factor, can be used as a tool for hypothesis testing. The procedure is very simple and effective, and it is used in various situations.
The procedure is as follows. When the value of F calculated from the data is negative, we choose the alternative hypothesis; otherwise, we choose the null hypothesis.
For the present problem, we can consider two ways of hypothesis testing with the result we derived. One is based on the stochastic behavior of L, and the other is based on F. Both use the same quantity L, but we will see below that the former may work more effectively in the "delicate" situation. Figure 3 shows the behavior of F as a function of ξ.
Interestingly, when B0 is small, the value of F is always negative, regardless of ξ, while in the large-B0 case, F becomes positive in the small-ξ region. This can be understood as follows. When the two centers of the mixture distribution are so close that the distance between them is O(n^{−1/2}), the overlap of the distribution of the null hypothesis and the distribution of the alternative hypothesis is large, and the sign of the Bayes factor can be negative for any ξ.
In other words, when the two hypotheses are difficult to distinguish, the hypothesis test using the Bayes factor may choose the alternative hypothesis for any data, and it does not work well. On the other hand, the likelihood ratio test based on the stochastic behavior of L is expected to work in such delicate cases.
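The sign behavior of F described above can be checked numerically from eq. (13) (the t = u² substitution and the function names are ours; the values B0 = 1 and B0 = 10 are illustrative choices for "small" and "large" supports):

```python
import math

def L_inf_case2(xi, B0, m=100000):
    # Eq. (13) with t = u^2 substituted (midpoint quadrature)
    h = B0 / m
    s = 0.0
    for i in range(m):
        u = (i + 0.5) * h
        s += math.log(B0 / u) * math.exp(-u * u / 2.0) * math.cosh(xi * u)
    return 2.0 / B0 * s * h

def log_bayes_factor(xi, B0):
    # F = -log L, the logarithm of the Bayes factor
    return -math.log(L_inf_case2(xi, B0))
```

For the small support B0 = 1, F stays negative for every ξ, so a pure sign test always prefers the A.H.; for the larger support B0 = 10, F is positive near ξ = 0 and becomes negative only for large |ξ|.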
5 Case 3: the case the mixture ratio, the mean of the mixed distribution, and the variance are unknown
Asymptotic distribution of test statistic
Here, we discuss a more practical case in which the variance of the A.H. is also a variable.
That is, we consider the following probabilistic model,

p(x|a, b, c) = (1 − a)N(0, 1) + aN(b, 1/c).   (17)

The prior of the A.H. is ϕ1(a, b, c) = U_a(0, 1) U_bc(D), where U_a(0, 1) is a uniform distribution on the interval (0, 1), and U_bc(D) is a uniform distribution on an ellipsoid D in the (b, c) plane,

D = { (b, c) ; b² + (c − 1)²/2 ≤ R0²/n },

where R0 is a constant. The area of D is √2 πR0²/n.

Theorem 3. If {X_i} is independently and identically generated from the N.H., the convergence in probability L(X^n) − L∞(Ξ_n) → 0 holds as n → ∞, where

L∞(Ξ_n) = ( 1/(2R0²) ) ∫_0^{R0²} ( R0/√t − 1 ) exp( −t/2 ) I_0( √t Ξ_n ) dt.
Here, I 0 (t) is the modified Bessel function,
I 0 (t) = 1 π π 0 cosh(t sin θ)dθ,
which is monotone increasing in t > 0 and Ξ n is a random variable defined by
Ξ_n = √( ξ_n² + η_n² ),
where ξ n and η n are defined in eq. (3) and (4), respectively.
Remark. Let Ξ be a random variable whose square is subject to a χ 2 distribution with 2 degrees of freedom. In accordance with this theorem, L(X^n) converges in distribution to L∞(Ξ).
Hence, the rejection region of the most powerful test can be asymptotically determined by
L ∞ (Ξ).
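Before turning to the proof, the expression for L∞(Ξ_n) can be evaluated numerically. Substituting t = u² gives L∞(Ξ) = (1/R0²) ∫_0^{R0} (R0 − u) e^{−u²/2} I_0(uΞ) du, which is free of singularities; we evaluate I_0 by its power series (function names are ours):

```python
import math

def bessel_I0(z, terms=30):
    # Modified Bessel function I_0(z) via the power series sum_k (z^2/4)^k / (k!)^2
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * k)
        s += term
    return s

def L_inf_case3(Xi, R0=1.0, m=10000):
    # Theorem statistic after t = u^2:
    # (1/R0^2) * int_0^{R0} (R0 - u) exp(-u^2/2) I0(u * Xi) du
    h = R0 / m
    total = 0.0
    for i in range(m):
        u = (i + 0.5) * h
        total += (R0 - u) * math.exp(-u * u / 2.0) * bessel_I0(u * Xi)
    return total * h / (R0 * R0)
```

Since I_0 is monotone increasing, L∞ is monotone increasing in Ξ, so rejection regions expressed in Ξ translate directly into rejection regions in L∞.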
Proof. The log density ratio function is given by
f(X_i, a, b, c) = log [ p(X_i|a, b, c) / p(X_i|0, 0, 1) ] = log [ 1 − a + a√c e^{g(X_i, b, c)} ],

where g(x, b, c) is a function defined by

g(X_i, b, c) = −( (c − 1)/2 ) X_i² + bcX_i − b²c/2.
Hence the marginal likelihood ratio L = L(X n ) is given by
L = ∫_0^1 da ∫∫_D db dc ϕ1(a, b, c) exp( Σ_{i=1}^n f(X_i, a, b, c) ),

where ϕ1(a, b, c) = n / ( √2 πR0² ).
Since the integration region of (b, c) in this integral is D, b = O_p(n^{−1/2}) and c − 1 = O_p(n^{−1/2}). It follows that

f(X_i, a, b, c) = f|_{(b,c)=(0,1)} + ∂f/∂b|_{(b,c)=(0,1)} b + ∂f/∂c|_{(b,c)=(0,1)} (c − 1)
  + (1/2) ∂²f/∂b²|_{(b,c)=(0,1)} b² + (1/2) ∂²f/∂c²|_{(b,c)=(0,1)} (c − 1)²
  + ∂²f/∂b∂c|_{(b,c)=(0,1)} b(c − 1) + o_p(1/n),

where

∂f/∂b|_{(b,c)=(0,1)} = aX_i,
∂f/∂c|_{(b,c)=(0,1)} = (a/2)( X_i² − 1 ),
∂²f/∂b²|_{(b,c)=(0,1)} = a( X_i² − 1 ) − a²X_i²,
∂²f/∂b∂c|_{(b,c)=(0,1)} = aX_i + (a/2)(1 − a) X_i ( 1 − X_i² ),
∂²f/∂c²|_{(b,c)=(0,1)} = −(a/4)( 1 + X_i² ) − (a/4)( X_i² − X_i⁴ ) − (a²/4)( 1 − X_i² )².

Hence

f(X_i, a, b, c) = abX_i + ( a(c − 1)/2 )( X_i² − 1 )
  + (1/2)[ a( X_i² − 1 ) − a²X_i² ] b²
  + [ aX_i + (a/2)(1 − a) X_i ( 1 − X_i² ) ] b(c − 1)
  + (1/2)[ −(a/4)( 1 + X_i² ) − (a/4)( X_i² − X_i⁴ ) − (a²/4)( 1 − X_i² )² ] (c − 1)²
  + o_p(1/n).   (18)
Note that the order of the quadratic forms of (b, c − 1) is 1/n and
(1/n) Σ_i X_i = o_p(1),  (1/n) Σ_i X_i² = 1 + o_p(1),  (1/n) Σ_i X_i³ = o_p(1),  (1/n) Σ_i X_i⁴ = 3 + o_p(1).
The log likelihood ratio function is given by
Σ_{i=1}^n f(X_i, a, b, c) = ab Σ_{i=1}^n X_i + ( a(c − 1)/2 ) Σ_{i=1}^n ( X_i² − 1 ) − (n/2)a²b² − (n/4)a²(c − 1)² + o_p(1).
Let us define (r, θ) by b = r cos θ,
c = 1 + √ 2r sin θ.
Then by using eq. (3) and (4),
Σ_{i=1}^n f(X_i, a, b, c) = −(n/2)a²r² + √n ar( ξ_n cos θ + η_n sin θ ) + o_p(1)
 = −(n/2)a²r² + √n ar √( ξ_n² + η_n² ) sin( θ + θ_0 ) + o_p(1),
where θ_0 is a random variable which satisfies tan θ_0 = ξ_n/η_n. By using the notation R = R0/√n, the marginal likelihood ratio can be written as

L = ∫_0^1 da ∫∫_D db dc ( n / (√2 πR0²) ) exp( Σ_i f(X_i, a, b, c) )
  = ∫_0^1 da ∫_0^R ( 2r dr / R² ) ∫_0^{2π} ( dθ / 2π ) exp( −(n/2)a²r² + √n ar √(ξ_n² + η_n²) sin(θ + θ_0) + o_p(1) )
  = ∫_0^1 da ∫_0^R ( 2r dr / R² ) ∫_0^{2π} ( dθ / 2π ) exp( −(n/2)a²r² + √n arΞ_n sin θ + o_p(1) ).
Then by replacing r = ℓ/ √ n with dr = dℓ/ √ n, it follows that
L = ∫_0^1 da ∫_0^{R0} ( 2ℓ dℓ / R0² ) ∫_0^{2π} ( dθ / 2π ) exp( −(1/2)a²ℓ² + aℓΞ_n sin θ + o_p(1) ).
We define L ∞ (Ξ n ) by
L∞(Ξ_n) = ∫_0^1 da ∫_0^{R0} ( 2ℓ dℓ / R0² ) ∫_0^{2π} ( dθ / 2π ) exp( −(1/2)a²ℓ² + aℓΞ_n sin θ ).
Then, the convergence in probability L(X^n) − L∞(Ξ_n) → 0 holds. L∞(Ξ_n) can be rewritten as

L∞(Ξ_n) = ∫_0^1 da ∫_0^{R0} ( 2ℓ dℓ / R0² ) ∫_0^π ( dθ / 2π ) exp( −(1/2)a²ℓ² ) cosh( aℓΞ_n sin θ ).
By using t = a²ℓ², the random variable L∞(Ξ_n) can also be rewritten as

L∞(Ξ_n) = ∫_0^1 da ∫_0^{a²R0²} ( dt / (a²R0²) ) ∫_0^π ( dθ / 2π ) exp( −t/2 ) cosh( √t Ξ_n sin θ )
        = ∫_0^{R0²} dt ∫_{√t/R0}^1 ( da / (a²R0²) ) ∫_0^π ( dθ / 2π ) exp( −t/2 ) cosh( √t Ξ_n sin θ )
        = ( 1/(2R0²) ) ∫_0^{R0²} dt ( R0/√t − 1 ) exp( −t/2 ) I_0( √t Ξ_n ),
which completes the theorem.
As with the results we obtained in the previous sections, the asymptote of L does not explicitly depend on the sample size n. The reason for this behavior is the same as in the previous cases, namely the critical scaling r ∝ n^{−1/2}.
Let us validate the asymptote we derived above. First, we will numerically calculate the behavior of Ξ, by using a sample generated from the standard normal distribution.
From a well-known result on the percentile points of the χ 2 distribution, we obtain the 10% point as Ξ ≃ 2.146, the 5% point as Ξ ≃ 2.448, and the 1% point as Ξ ≃ 3.035. Therefore, we can construct a hypothesis test using Ξ as a test statistic.
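These percentile points follow from the fact that Ξ² obeys a χ² distribution with 2 degrees of freedom, so P(Ξ > x) = exp(−x²/2) and the upper α-point is √(2 log(1/α)). A quick check together with a Monte Carlo confirmation (function names are ours):

```python
import math
import random

def xi_upper_point(alpha):
    # P(Xi > x) = exp(-x^2/2) when Xi^2 ~ chi^2 with 2 degrees of freedom
    return math.sqrt(2.0 * math.log(1.0 / alpha))

# Monte Carlo check: Xi = sqrt(xi^2 + eta^2) with xi, eta ~ N(0,1)
rng = random.Random(1)
trials = 20000
threshold = xi_upper_point(0.05)
exceed = sum(
    1 for _ in range(trials)
    if math.hypot(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) > threshold
)
rate = exceed / trials
```

The formula reproduces the three quoted points, and the empirical exceedance rate at the 5% point is close to 0.05.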
Next, we describe our numerical validation of the asymptotic distribution of L when the sample size is finite.
We first prepared 10000 sample sets of size n, and conducted the validation for three different values of n, i.e., n = 200, 400, 800. We calculated L by using Ξ calculated from each finite sample. Here, we set R0 = 1. Then, we estimated the level numerically in each case and compared the estimates with the theoretical values derived from the asymptote. Table 3 shows the result. Compared with the previous cases that used simpler models, in the present case the numerically calculated levels deviate slightly from the theoretical values derived from the asymptote. However, as n becomes larger, the numerically calculated levels approach the theoretical value, and we can conclude that they match well and that the asymptote derived in the previous section works well even for finite n.
Comments from the perspective of the singular learning theory
To conclude this section, let us mention the relation between the result obtained above and the general asymptotic form of the log marginal likelihood of the singular model, which is derived from the theory of algebraic geometry [15].
In Theorem 2, we derived the asymptotic form of L and saw that L did not depend on the sample size n as a result of the scaling law B ∝ n −1/2 that we applied.
We can consider another scaling B ∝ n −α , where α > 0 is a constant. As long as α ≤ 1 2 , we can calculate the asymptotic form of L in the same way as the derivation of Theorem 2. The result is as follows.
L = ( 1 / (B0 n^{1/2−α}) ) ∫_0^{B0² n^{1−2α}} ( 1/(2√t) ) log( B0² n^{1−2α} / t ) e^{−t/2} cosh( ξ√t ) dt   (19)
We can immediately obtain the log marginal likelihood ratio F = − log L.
F = ( 1/2 − α ) log n − (1 − 2α) log(log n) + o_p( log(log n) )   (20)
From the general theory, the asymptotic form of log marginal likelihood becomes
F = λ log n − (m − 1) log(log n) + o_p( log(log n) )   (21)
We can see that our result corresponds to λ = 1/2 − α and m = 2 − 2α. The dependence of the support of the prior on the sample size affects the real log canonical threshold λ and the multiplicity m. In this paper, we treated α = 1/2, a "critical" case in which λ and m − 1 effectively vanish. In such a case, the main term of F becomes stochastic. This is why it can be difficult to apply conventional Bayes-factor-based testing to such a case.
Let us comment further on the scaling n^{−1/2}. The Kullback-Leibler divergence K between the null hypothesis and the alternative hypothesis can be easily calculated:

K(a, b) = ∫ p(x|(0, 0)) log [ p(x|(0, 0)) / p(x|w) ] dx = (1/2) a²b² + (higher-order terms).   (22)

Here, −nK(a, b) is nothing other than the leading deterministic term of H(a, b).
In the proof of Theorem 2, we mainly considered b ∝ n^{−1/2} and a ∼ O(1). The meaning of this setup is clear: the center of the mixed distribution deviates from the origin by only O(n^{−1/2}), and the null and alternative hypotheses are hard to discriminate.
As a result of this scaling, both na²b² and √n abξ become O(1), and this results in the n-independent asymptote of L.
However, as can easily be seen, this "scaling" is not unique. As long as ab ∼ n^{−1/2} and bX_i − b²/2 is small enough that the Taylor expansion of the exponential is valid, a proof similar to the one above can be constructed. For example, a scaling such as a ∼ n^{−1/4} and b ∼ n^{−1/4} leads to the same results.
The important point here is that this can be understood as a Taylor expansion around the singularity ab = 0, and the deviation is described as a power of ab, not of b.
As we saw above, in this delicate situation a hypothesis test based on the stochastic behavior of L works well; to construct it, finding the singularity (in our setting, ab = 0) and an appropriate scaling (in our setting, ab ∼ n^(-1/2)) is essential.
Therefore, to construct a hypothesis test using singular models, we should keep in mind the effect of the singularity, and consider whether the case under consideration is "delicate" or not by computing the Kullback-Leibler divergence between the null hypothesis and the alternative hypothesis. The scaling is determined by the form of the Kullback-Leibler divergence, which is a polynomial in each parameter. From the perspective of singular learning theory, this is nothing other than the relation between the real log canonical threshold (RLCT) λ and the representation of the parameters in the model.
Conclusion
In this paper, we theoretically studied the test of homogeneity for normal mixtures in terms of the Bayesian framework, for the first time.
By applying the mathematical technique developed for the analysis of singular models and by appropriately scaling from the singularity, we derived the asymptotic behavior of the marginal likelihood ratio for several forms of the prior. These forms are clearly different from the conventional χ 2 distribution, as an effect of the singularity in the parameter space, but their stochastic behavior can be described as a function of random variables that obey the normal distributions. We constructed a hypothesis test based on these results and numerically validated their effectiveness.
The merits of our treatment, based on the Bayesian learning theory for singular models, are as follows.
First, the test statistic that we analyzed was the marginal likelihood ratio, and as a result, the hypothesis test using it is guaranteed to be the most powerful test.
Second, compared with other methods using the value of the (log) likelihood ratio, such as Bayes factor based ones, the hypothesis test based on the stochastic behavior of the marginal likelihood ratio is valid even when the null hypothesis and the alternative one are hard to discriminate, as we saw in Section 4. The stochastic behavior of the test statistics we derived can be described as a function of the probability variables obeying well-known probability distributions. From the practical perspective, this gives us a clear and easy-to-use formalism.
We should note that the construction of our hypothesis test is not possible until the stochastic behavior of the marginal likelihood ratio is theoretically derived. As far as we know, this is the first time a concrete form has been derived. These results rest on the mathematical theory that enables us to treat singular models properly.
To conclude our discussion, we should note that in Bayesian learning theory, the study of hypothesis tests is not sufficient and there is much that remains to be studied. We believe that our method is very general, and that it can be applied to various singular models. This direction of study could be of practical value. We also believe that it is also important to study the methods of approximating the log marginal likelihood ratios with high accuracy.
One candidate for this is variational Bayes, which is an efficient way to approximate the posterior distribution. However, the theory of hypothesis testing based on variational Bayes is still insufficient. Therefore, in the future, we should study how to apply it to Bayesian hypothesis tests.
[Figure 1: Level calculated from the asymptotic distribution of L as a function of the threshold, for each β.]
[Figure 2: Relation between the threshold that gives a 5% level and β.]
[Figure 3: The log marginal likelihood ratio F as a function of the random variable ξ, for several values of B_0.]
Table 1: Level calculated from the samples generated from the null hypothesis.

| β   | threshold of 5% | level (n = 50) | level (n = 100) |
|-----|-----------------|----------------|-----------------|
| 0.5 | 1.464           | 5.07%          | 4.76%           |
| 1   | 2.02            | 5.56%          | 5.15%           |
| 1.5 | 2.475           | 5.13%          | 4.92%           |
| 2   | 2.929           | 4.88%          | 5.03%           |
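Level columns of this kind are obtained by Monte Carlo: draw the limiting random variable ξ ~ N(0, 1) many times and record how often the limiting statistic exceeds the threshold. The sketch below uses a hypothetical stand-in statistic ξ²/2 (not the L_∞ of the theorems) purely to illustrate the procedure:

```python
import random

random.seed(0)

def estimate_level(statistic, threshold, reps=200_000):
    """Monte Carlo estimate of P( statistic(xi) > threshold ) for xi ~ N(0,1)."""
    hits = sum(1 for _ in range(reps)
               if statistic(random.gauss(0.0, 1.0)) > threshold)
    return hits / reps

stat = lambda xi: 0.5 * xi * xi      # hypothetical stand-in for the limiting statistic
threshold = 0.5 * 1.96 ** 2          # corresponds to |xi| > 1.96, i.e. a 5% level
level = estimate_level(stat, threshold)
```

With the true limiting law in hand, the same loop reproduces the tabulated levels; here the estimate lands close to 5% by construction.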
4 Case 2: both the mixture ratio and the mean of the mixed distribution are unknown
Table 2: The level calculated from the samples generated from the null hypothesis.

| level                                  | 10%       | 5%        | 1%        |
|----------------------------------------|-----------|-----------|-----------|
| rejection region L > r                 | r = 2.171 | r = 2.298 | r = 2.646 |
| numerically calculated level (n = 50)  | 9.75%     | 5.22%     | 1.04%     |
| numerically calculated level (n = 100) | 9.91%     | 4.73%     | 0.97%     |
| numerically calculated level (n = 200) | 9.74%     | 4.84%     | 0.99%     |
Table 3: Comparison of levels derived from the asymptote and numerically calculated levels.

| level                                  | 10%       | 5%        | 1%        |
|----------------------------------------|-----------|-----------|-----------|
| rejection region L > r                 | r = 0.550 | r = 0.581 | r = 0.659 |
| numerically calculated level (n = 100) | 9.53%     | 4.81%     | 1.21%     |
| numerically calculated level (n = 200) | 9.72%     | 4.64%     | 0.88%     |
| numerically calculated level (n = 400) | 10.04%    | 4.91%     | 0.79%     |
| numerically calculated level (n = 800) | 10.33%    | 5.09%     | 1.03%     |
| [] |
[
"POINT INTERACTIONS FOR 3D SUB-LAPLACIANS",
"POINT INTERACTIONS FOR 3D SUB-LAPLACIANS"
] | [
"Riccardo Adami ",
"Ugo Boscain ",
"Valentina Franceschi ",
"ANDDario Prandi "
] | [] | [] | In this paper we show that, for a sub-Laplacian ∆ on a 3-dimensional manifold M , no point interaction centered at a point q 0 ∈ M exists. When M is complete w.r.t. the associated sub-Riemannian structure, this means that ∆ acting on C ∞ 0 (M \ {q 0 }) is essentially self-adjoint. A particular example is the standard sub-Laplacian on the Heisenberg group. This is in stark contrast with what happens in a Riemannian manifold N , whose associated Laplace-Beltrami operator is never essentially self-adjoint onWe then apply this result to the Schrödinger evolution of a thin molecule, i.e., with a vanishing moment of inertia, rotating around its center of mass. | 10.1016/j.anihpc.2020.10.007 | [
"https://arxiv.org/pdf/1902.05475v1.pdf"
] | 119,641,042 | 1902.05475 | 2016bfc968b6b06ea752b35d68ce51a8a283aa1f |
POINT INTERACTIONS FOR 3D SUB-LAPLACIANS
Riccardo Adami
Ugo Boscain
Valentina Franceschi
and Dario Prandi
Keywords: Essential self-adjointness, Heisenberg group, sub-Laplacian, point interactions, sub-Riemannian geometry, rotation of molecules
In this paper we show that, for a sub-Laplacian ∆ on a 3-dimensional manifold M, no point interaction centered at a point q_0 ∈ M exists. When M is complete w.r.t. the associated sub-Riemannian structure, this means that ∆ acting on C_0^∞(M \ {q_0}) is essentially self-adjoint. A particular example is the standard sub-Laplacian on the Heisenberg group. This is in stark contrast with what happens in a Riemannian manifold N, whose associated Laplace-Beltrami operator acting on C_0^∞(N \ {q_0}) is never essentially self-adjoint when dim N ≤ 3. We then apply this result to the Schrödinger evolution of a thin molecule, i.e., with a vanishing moment of inertia, rotating around its center of mass.
Introduction
Let (M, g) be a Riemannian manifold endowed with a smooth volume ω (one can think, e.g., of the Riemannian volume). The associated Laplace operator is the operator on L²(M, ω) acting on C_0^∞(M) and defined by ∆_ω = div_ω ∘ ∇. Here, div_ω denotes the divergence w.r.t. the measure ω and ∇ is the Riemannian gradient. A fundamental issue is the essential self-adjointness of ∆_ω, i.e., whether it admits a unique self-adjoint extension in L²(M, ω). Indeed, the essential self-adjointness of ∆_ω implies the well-posedness in L²(M, ω) of the Cauchy problems for the heat and Schrödinger equations, which read, respectively,
$$\partial_t \phi = \Delta_\omega \phi, \qquad \phi|_{t=0} = \phi_0 \in L^2(M, \omega),$$
$$i\,\partial_t \psi = -\Delta_\omega \psi, \qquad \psi|_{t=0} = \psi_0 \in L^2(M, \omega).$$
Roughly speaking, when ∆_ω is not essentially self-adjoint, the above Cauchy problems are not well-defined without additional requirements, such as boundary conditions on ∂M.
The self-adjointness of ∆ ω is related with geometric properties of (M, g), as is evident from the following classical result. Theorem 1.1. Let (M, g) be a Riemannian manifold that is complete as metric space, and let ω be any smooth volume on M . Then, ∆ ω is essentially self-adjoint in L 2 (M, ω).
The above is due to Gaffney [16] when ω is the Riemannian volume. A simpler argument, which generalizes to arbitrary smooth measures, is given by Strichartz [28].
Date: February 15, 2019.
A simple way to obtain non-complete Riemannian manifolds from a given complete one (M, g) is to remove a point q_0 ∈ M. Considering ∆_ω on M \ {q_0} yields the pointed Laplace operator ∆̃_ω. We have the following. Theorem 1.2. Let (M, g) be a Riemannian manifold that is complete as a metric space, and let ω be any smooth volume on M. Let ∆̃_ω be the pointed Laplace operator at q_0 ∈ M. Then ∆̃_ω is essentially self-adjoint in L²(M, ω) if and only if n = dim M ≥ 4.
The above result for the Euclidean space endowed with the Lebesgue measure is a consequence of [25, Ex. 4, p. 160], while the case of Riemannian manifolds where ω is the Riemannian volume is treated in [12]. Similar arguments can be applied when ω is an arbitrary smooth volume. Theorem 1.2 is relevant in physics. Indeed, in non-relativistic quantum mechanics, self-adjoint extensions of the pointed Laplace operator can be used to construct potentials concentrated at a point, the so-called point interactions, as, e.g.,
$$i\,\partial_t \psi = (-\Delta_\omega + \alpha\, \delta_{q_0})\,\psi, \qquad \alpha \in \mathbb{R}, \qquad \psi(0, q) = \psi_0(q).$$
Here, δ_{q_0} is a Dirac-like potential representing a point interaction. Dirac δ and δ′ potentials are widely used in the modelling of quantum systems, from Fermi's paper [13] up to contemporary applications [6, 1, 5]. In this language, Theorem 1.2 can be interpreted as the fact that point interactions do not occur in dimension 4 and higher or, equivalently, that single points are seen by Laplace operators only in dimension less than or equal to 3.
In this paper we study the essential self-adjointness of sub-Laplacians, i.e., the generalization of the Riemannian Laplace operators to sub-Riemannian manifolds. Let us briefly introduce this setting. We refer to [2, 21] for a more detailed treatment. Let M be a smooth manifold and let X_1, …, X_m be smooth vector fields on M; set D^1 = span{X_1, …, X_m} ⊂ Vec(M) and, recursively, D^{s+1} = D^s + [D^1, D^s]. Letting D^s_q = {X(q) | X ∈ D^s}, s ≥ 1, the Hörmander condition then amounts to the requirement that for any q ∈ M there exists r = r(q) such that D^r_q = T_qM. A sub-Riemannian manifold is then defined as the pair (M, {X_1, …, X_m}). With abuse of notation, we will sometimes denote it by M.
On a sub-Riemannian manifold the distance between two points q 1 , q 2 ∈ M is defined by
$$d(q_1, q_2) = \inf\left\{ \int_0^1 \sqrt{\sum_{i=1}^m u_i(t)^2}\; dt \;\middle|\; \gamma : [0,1] \to M,\ \dot\gamma(t) = \sum_{i=1}^m u_i(t)\, X_i(\gamma(t)),\ \gamma(0) = q_1,\ \gamma(1) = q_2,\ u_i \in L^1([0,1], \mathbb{R}),\ i = 1, \dots, m \right\}.$$
Owing to the Rashevskii-Chow theorem [2], (M, d) is a metric space inducing on M its original topology. The set of vector fields {X 1 , . . . , X m } is called a generating frame and it is a generalization of Riemannian orthonormal frames. As for the latter, there are different choices of generating frames giving rise to the same metric space (M, d), which is the true intrinsic object. For an equivalent definition of sub-Riemannian manifold that does not employ generating frames, see, e.g., [2].
The above definition includes several geometric structures [2]. Indeed, letting k(q) = dim(D q ), it holds that:
• If k(·) ≡ n, one obtains a Riemannian structure.
• If k(·) ≡ k < n, one obtains a classical sub-Riemannian structure. In this case, we will identify D ⊂ Vec(M ) with the vector distribution q∈M D q ⊂ T M . • if k(·) is not constant, one obtains a so-called rank-varying sub-Riemannian structure. This includes what are usually called almost-Riemannian structures [4,2]. Motivated by the above observations, we say that a sub-Riemannian structure is genuine if k(q) < n for all q ∈ M . Remark 1.3. In the first two cases above, if k(·) ≡ m the family {X 1 , . . . , X m } is a global orthonormal frame for the (sub-)Riemannian structure. Observe that, due to topological restrictions, such a frame does not always exist. However, if k(·) is locally constant around q 0 ∈ M , there always exists a local orthonormal frame 1 around q 0 .
In this paper a particular role is played by 3-dimensional structures. Definition 1.4. Consider a genuine sub-Riemannian structure on a 3-dimensional manifold M . We say that q ∈ M is a contact point if D 2 q = T q M . If every point of M is contact, we say that the structure is a 3-dimensional contact structure.
In other words, in the 3-dimensional case, a contact point is a point in which the full tangent space is generated by the vector fields X 1 , . . . , X m and their first Lie brackets. Since M is 3-dimensional, contact points coincide with what in the literature are called regular points.
1.2. Sub-Laplacians. Let {X 1 , . . . , X m } be a generating frame for the sub-Riemannian structure on M . Given a smooth volume ω the associated sub-Laplacian is defined as ∆ ω = div ω •∇ where div ω is computed with respect to the volume ω and ∇ is the sub-Riemannian gradient, whose expression is
$$\nabla \phi = \sum_{i=1}^m X_i(\phi)\, X_i, \qquad \phi \in C^\infty(M).$$
Such an operator is intrinsic in the sense that it does not depend on the particular choice of generating frame. We have then,
$$\Delta_\omega = \sum_{i=1}^m \left( X_i^2 + (\operatorname{div}_\omega X_i)\, X_i \right). \tag{1.1}$$
Notice the presence of the "sum of squares" of the vector fields of the generating frame plus some first order terms guaranteeing the symmetry of ∆ ω w.r.t. the volume ω.
As a consequence of Hörmander condition, ∆ ω is hypoelliptic [18], and we have the following generalization of Theorem 1.1. Theorem 1.5 (Strichartz,[29]). Let M be a sub-Riemannian manifold that is complete as a metric space, and let ω be any smooth volume on M . Then, ∆ ω is essentially self-adjoint on L 2 (M, ω).
The main object of interest in this paper is the pointed sub-Laplacian∆ ω at a point q 0 ∈ M . Similarly to the Riemannian case, this is defined as the sub-Laplacian ∆ ω on M \ {q 0 }.
1.3.
Main results. One of the main features of sub-Riemannian manifolds is the existence of several natural notions of dimension. Although for Riemannian manifolds these all coincide, this is not the case for genuine sub-Riemannian manifolds. For instance, in the case of a classical sub-Riemannian manifold, some relevant dimensions are:
• the dimension of the space of admissible velocities k,
• the topological dimension n,
• the Hausdorff dimension Q of the metric space (M, d), where k < n < Q, see [20] 2 . It is then a natural question to understand which of these dimensions are relevant for essential self-adjointness of the pointed sub-Laplacian. In particular, since in a 3D contact sub-Riemannian manifold we have k = 2, n = 3, Q = 4, in view of Theorem 1.2, we focus on pointed sub-Laplacians at contact points of genuine 3D sub-Riemannian manifolds. For these structures we prove the following. Theorem 1.6. Let M be a genuine 3-dimensional sub-Riemannian manifold that is complete as metric space, and let ω be any smooth volume on M . Let q 0 ∈ M be a contact point, and∆ ω be the pointed sub-Laplacian at q 0 . Then∆ ω is essentially self-adjoint in L 2 (M, ω).
The above result follows from Theorem 5.1, and shows that, regarding the essential self-adjointness of pointed sub-Laplacians, 3D sub-Riemannian manifolds behave like Riemannian manifolds of dimension at least 4. This suggests that the relevant dimension for self-adjointness is not the topological one, and that a more suitable candidate seems to be the Hausdorff dimension.
A crucial step in establishing Theorem 1.6 is the following corresponding result for the celebrated Heisenberg group H 1 .
Theorem 1.7. The operator (∂_x − (y/2) ∂_z)² + (∂_y + (x/2) ∂_z)² on R³ \ {(0, 0, 0)} is essentially self-adjoint in L²(R³, dx dy dz).
When q_0 is not a contact point, or M is of dimension larger than 3, we conjecture that Theorem 1.6 still holds. However, our techniques are not easily extended to higher dimensions. In dimension 2, classical sub-Riemannian manifolds do not exist, while for rank-varying structures we have two cases. Either the point q_0 is Riemannian, and then we can conclude that the pointed Laplace operator is not essentially self-adjoint; or q_0 is not Riemannian, and in this case we conjecture that the pointed Laplace operator is not essentially self-adjoint as well. However, the techniques necessary to study this case are very different from those developed in this paper and we do not treat this case here.
(Footnote 2: Notice that Q = sup_{q∈M} Q(q), where Q(q) is the local Hausdorff dimension.)
1.4.
Rotations of a thin molecule. We now apply Theorem 1.6 to the Schrödinger evolution on SO(3) of a thin molecule rotating around its center of mass, described as follows. Consider a rod-shaped molecule of mass m > 0, radius r > 0, and length ℓ > 0, as in Figure 1. We denote by z the principal axis of the rod, and by x and y two orthogonal ones. Then, the moments of inertia of the molecule are
$$I_x = I_y = I := m\, \frac{3r^2 + \ell^2}{12}, \qquad I_z = m\, \frac{r^2}{2}.$$
Letting (ω x , ω y , ω z ) be the angular velocity of the molecule and L x = Iω x , L y = Iω y , L z = I z ω z be the corresponding angular momenta, the classical Hamiltonian is
$$H = \frac{1}{2}\left( I\omega_x^2 + I\omega_y^2 + I_z\omega_z^2 \right) = \frac{1}{2}\left( \frac{L_x^2}{I} + \frac{L_y^2}{I} + \frac{L_z^2}{I_z} \right).$$
Letting r → 0, with ℓ and m constant, we have that I_z → 0, and the classical Hamiltonian reads
$$H_{\mathrm{thin}} = \frac{1}{2I}\left( L_x^2 + L_y^2 \right).$$
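A tiny numeric illustration (not from the paper) that in the thin limit the axial moment of inertia vanishes while the perpendicular one tends to mℓ²/12:

```python
def moments_of_inertia(m, r, l):
    """Rod of mass m, radius r, length l: perpendicular and axial moments."""
    I = m * (3.0 * r ** 2 + l ** 2) / 12.0
    I_z = m * r ** 2 / 2.0
    return I, I_z

m, l = 1.0, 2.0
I_thin, Iz_thin = moments_of_inertia(m, 1e-6, l)  # r -> 0 with m, l fixed
I_limit = m * l ** 2 / 12.0                       # expected thin-rod limit
```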
The corresponding Schrödinger equation is
$$i\, \frac{d\psi}{dt} = \hat H_{\mathrm{thin}}\, \psi, \qquad \text{where} \qquad \hat H_{\mathrm{thin}} = \frac{1}{2I}\left( \hat L_x^2 + \hat L_y^2 \right). \tag{1.2}$$
Here, L̂_x, L̂_y (and L̂_z) are the three angular momentum operators, given by (in the following α, β, γ denote the Euler angles)
$$\hat L_x = i F_x, \qquad F_x = \cos\alpha \cot\beta\, \frac{\partial}{\partial\alpha} + \sin\alpha\, \frac{\partial}{\partial\beta} - \frac{\cos\alpha}{\sin\beta}\, \frac{\partial}{\partial\gamma},$$
$$\hat L_y = i F_y, \qquad F_y = \sin\alpha \cot\beta\, \frac{\partial}{\partial\alpha} - \cos\alpha\, \frac{\partial}{\partial\beta} - \frac{\sin\alpha}{\sin\beta}\, \frac{\partial}{\partial\gamma},$$
$$\hat L_z = i F_z, \qquad F_z = -\frac{\partial}{\partial\alpha}.$$
Since [F_x, F_y] = F_z, we have that (SO(3), {F_x, F_y})
is a contact sub-Riemannian manifold. Moreover, being SO(3) unimodular, we have that F_x, F_y, F_z are divergence-free with respect to the Haar measure dh (see [3]), and the corresponding sub-Laplacian is
$$\Delta_{dh} = F_x^2 + F_y^2.$$
It follows that
$$\hat H_{\mathrm{thin}} = -\frac{1}{2I}\, \Delta_{dh}.$$
When considering the Schrödinger equation (1.2) on functions of (α, β, γ), we are describing the evolution of a thin molecule in which the thin degree of freedom (i.e., the angle α of the rod w.r.t. the z axis) is part of the configuration space. The essential self-adjointness of the pointed sub-Laplacian ∆̃_dh on SO(3) \ {(α_0, β_0, γ_0)} given by Theorem 1.6 can be interpreted in the following way: a point interaction centered at (α_0, β_0, γ_0) does not affect the evolution of a thin molecule.
Notice that this would not be the case if the molecule were not thin. Indeed in this case the quantum Hamiltonian would have been proportional to a left-invariant Riemannian Laplacian on SO(3), and by Theorem 1.2 the elimination of a point from the manifold crashes its essential self-adjointness.
Moreover, if the evolution of the thin molecule is considered on the 2D sphere instead than on SO(3), meaning that we are totally forgetting the thin degree of freedom, then the elimination of a point would break the essential self-adjointness of the Laplacian as well.
1.5. Structure of the paper and strategy of proof. Sections 2 and 3 are devoted to preliminaries on the Heisenberg group and some of the functional analytic properties of sub-Riemannian manifolds, respectively. The remaining sections contain the proof of the main result of the paper, Theorem 5.1. This is obtained by first establishing Theorem 1.7 in Section 4, which is then extended to 3D genuine sub-Riemannian manifolds in Section 5.
More precisely, the proof of Theorem 1.7 consists in first reducing the problem of essential self-adjointness to the absence of L 2 solutions of the equation (∆ ω + i)θ = ϕ, where ϕ is a linear combination of derivatives of the Dirac delta mass at 0, see Lemma 4.1. This criterion is then verified in Section 4.2 by exploiting the non-commutative Fourier transform associated with the Heisenberg group structure. Then, in Theorem 4.11, we localize the above result, showing that the self-adjoint extensions of the pointed sub-Laplacian at 0 defined on a domain Ω ⊂ H 1 coincide with those of the (standard) sub-Laplacian on the same domain. The latter result is then generalized to any 3D genuine sub-Riemannian manifold via local normal forms, in Section 5.
Finally, in Appendix A, we show how a criterion for essential self-adjointness based on an Hardy inequality with constant strictly bigger than 1, exploited e.g. in [15,22,24], fails for the Heisenberg group. This, in particular, raises a crucial criticism against the results contained in [30], and forces us to consider the above strategy of proof for Theorem 5.1.
The Heisenberg group H 1
The Heisenberg group H 1 is the nilpotent Lie group on R 3 associated with the non-commutative group law
$$(x, y, z) * (x', y', z') = \left( x + x',\ y + y',\ z + z' + \tfrac{1}{2}\,(x y' - x' y) \right).$$
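The group law can be exercised directly in code; a minimal sketch checking associativity and non-commutativity at sample points:

```python
def heis_mul(p, q):
    """Heisenberg group law on R^3."""
    x, y, z = p
    X, Y, Z = q
    return (x + X, y + Y, z + Z + 0.5 * (x * Y - X * y))

p, q, r = (1.0, 2.0, 0.3), (-0.5, 0.7, 1.1), (0.2, -1.3, 0.4)
lhs = heis_mul(heis_mul(p, q), r)   # (p * q) * r
rhs = heis_mul(p, heis_mul(q, r))   # p * (q * r)
```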
The associated Haar measure, i.e., the only (up to multiplicative constant) leftinvariant measure on H 1 , is the standard Lebesgue measure L 3 of R 3 . One can check that H 1 is unimodular, that is, this measure is also right-invariant. A basis for the Lie algebra of left-invariant vector fields is given by
$$X_H = \partial_x - \frac{y}{2}\, \partial_z, \qquad Y_H = \partial_y + \frac{x}{2}\, \partial_z, \qquad Z_H = \partial_z.$$
These satisfy the commutator relations
[X H , Y H ] = Z H and [X H , Z H ] = [Y H , Z H ] = 0. The sub-Riemannian structure on H 1 is defined by {X H , Y H }.
This is a global orthonormal frame, and thanks to the above commutator relations, the sub-Riemannian
manifold (H 1 , {X H , Y H }) is contact.
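These commutator relations can also be verified numerically; the sketch below applies X_H and Y_H through central finite differences to a test function and checks [X_H, Y_H]f = ∂_z f (the step h = 1e-4 and the tolerance are assumptions chosen for double precision):

```python
import math

h = 1e-4

def d(g, i, p):
    """Central difference of g along coordinate i at the point p."""
    a, b = list(p), list(p)
    a[i] += h
    b[i] -= h
    return (g(tuple(a)) - g(tuple(b))) / (2.0 * h)

def X(g):  # X_H = d/dx - (y/2) d/dz
    return lambda p: d(g, 0, p) - 0.5 * p[1] * d(g, 2, p)

def Y(g):  # Y_H = d/dy + (x/2) d/dz
    return lambda p: d(g, 1, p) + 0.5 * p[0] * d(g, 2, p)

f = lambda p: math.sin(p[0]) * math.cos(p[1]) * p[2] ** 2 + p[0] * p[1] * p[2]
f_z = lambda p: 2.0 * math.sin(p[0]) * math.cos(p[1]) * p[2] + p[0] * p[1]  # exact df/dz

p0 = (0.4, -0.8, 1.3)
bracket = X(Y(f))(p0) - Y(X(f))(p0)   # should equal (Z_H f)(p0) = f_z(p0)
```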
We let ∇ H be the sub-Riemannian gradient of H 1 . Then, the Heisenberg sub-Laplacian is the associated sub-Laplacian w.r.t. the Lebesgue measure, that reads
$$\Delta_H = X_H^2 + Y_H^2 = \partial_x^2 + \partial_y^2 + \frac{x^2 + y^2}{4}\, \partial_z^2 + (x \partial_y - y \partial_x)\, \partial_z.$$
Remark 2.1 (Fundamental solution). By [14, Thm. 2], the fundamental solution Γ : H¹ \ {0} → R of the operator −∆_H is
$$\Gamma(p) = \frac{1}{8\pi\, N(p)^2}.$$
Here, N is the Koranyi norm (see [11, Section 2.2.1]), given by
$$N(x, y, z) = \left( (x^2 + y^2)^2 + 16 z^2 \right)^{1/4}. \tag{2.1}$$
A simple computation shows that Γ is not square-integrable on any compact set containing the origin nor on its complement. This is in contrast with what happens for the fundamental solution of the Euclidean Laplacian on R 3 , Γ R 3 (p) = (4π|p|) −1 , which is square-integrable near the origin.
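The harmonicity of Γ away from the origin can be checked numerically. The sketch below applies the expanded form of ∆_H by central differences to N(p)^(-2) (the constant 1/8π is irrelevant for harmonicity); the step h and the tolerance are assumptions:

```python
def u(x, y, z):
    """N(p)^(-2), proportional to the fundamental solution Gamma."""
    return ((x * x + y * y) ** 2 + 16.0 * z * z) ** (-0.5)

h = 1e-3

def heis_laplacian(x, y, z):
    """Central-difference evaluation of Delta_H u at (x, y, z)."""
    uxx = (u(x + h, y, z) - 2.0 * u(x, y, z) + u(x - h, y, z)) / h ** 2
    uyy = (u(x, y + h, z) - 2.0 * u(x, y, z) + u(x, y - h, z)) / h ** 2
    uzz = (u(x, y, z + h) - 2.0 * u(x, y, z) + u(x, y, z - h)) / h ** 2
    uxz = (u(x + h, y, z + h) - u(x + h, y, z - h)
           - u(x - h, y, z + h) + u(x - h, y, z - h)) / (4.0 * h ** 2)
    uyz = (u(x, y + h, z + h) - u(x, y + h, z - h)
           - u(x, y - h, z + h) + u(x, y - h, z - h)) / (4.0 * h ** 2)
    # expanded Delta_H from the display above
    return uxx + uyy + (x * x + y * y) / 4.0 * uzz + x * uyz - y * uxz

res1 = heis_laplacian(0.9, -0.4, 0.2)
res2 = heis_laplacian(-1.3, 0.5, -0.6)
```

At both sample points the result vanishes to within the finite-difference error, consistent with ∆_H Γ = 0 on H¹ \ {0}.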
Associated with the group structure of H¹ we have the family of anisotropic dilations δ_λ : H¹ → H¹, λ > 0, defined by
$$\delta_\lambda(x, y, z) := (\lambda x,\ \lambda y,\ \lambda^2 z). \tag{2.2}$$
One can check that the sub-Riemannian distance from the origin is 1-homogeneous w.r.t. these dilations. Moreover, we have
$$\mathcal{L}^3\!\left( \delta_\lambda(\Omega) \right) = \lambda^4\, \mathcal{L}^3(\Omega), \qquad \forall \lambda > 0.$$
As a consequence of these facts, the Hausdorff dimension of H 1 is 4 and H 4 = L 3 .
That is, one more than its topological dimension.
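Both scaling properties are easy to illustrate in code: the Koranyi norm from (2.1) is 1-homogeneous under the dilations (like the sub-Riemannian distance), and the dilated unit box has sides λ, λ, λ², hence the volume factor λ⁴:

```python
def koranyi(x, y, z):
    """Koranyi norm N from (2.1)."""
    return ((x * x + y * y) ** 2 + 16.0 * z * z) ** 0.25

def dilate(l, p):
    """Anisotropic dilation (x, y, z) -> (l x, l y, l^2 z)."""
    x, y, z = p
    return (l * x, l * y, l * l * z)

l, p = 2.5, (0.3, -1.2, 0.7)
hom_ratio = koranyi(*dilate(l, p)) / koranyi(*p)  # should equal l (1-homogeneity)
vol_ratio = l * l * (l * l)                        # volume scaling of the unit box
```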
Functional analytic preliminaries
Let A be a non-negative symmetric operator on some Hilbert space H, with dense domain Dom A. The closure of A is the operator Ā whose domain is the closure of Dom A w.r.t. the norm
$$\|u\|_A = \|u\|_{\mathcal H} + \|Au\|_{\mathcal H},$$
and whose action is defined by closing the graph of A.
Recall that A is self-adjoint if Dom A = Dom A*. If A is not self-adjoint, a self-adjoint extension T of A is a self-adjoint operator on H such that Dom A ⊂ Dom T and A*u = Tu for any u ∈ Dom T. We denote the set of self-adjoint extensions of A by A(A). If A admits exactly one self-adjoint extension, we say that A is essentially self-adjoint. By, e.g., [26, Theorem VIII.1] we have the following. 3.1. Sub-Riemannian Sobolev spaces. Let M be a sub-Riemannian manifold with local generating family {X_1, …, X_m}, endowed with a smooth and positive measure ω. We denote by L²(M, ω) (or L²(M)) the complex Hilbert space of (equivalence classes of) functions u : M → C with scalar product
$$(u, v) = \int_M u\, \bar v\; d\omega, \qquad u, v \in L^2(M, \omega),$$
where the bar denotes the complex conjugation. The corresponding norm is denoted by ‖u‖²_{L²(M)} = (u, u). Similarly, L²(TM, ω) (or L²(TM)) is the complex Hilbert space of sections of the complexified tangent bundle X : M → TM^C, with scalar product
$$(X, Y) = \int_M g_q\!\left( X(q), Y(q) \right) d\omega(q), \qquad X, Y \in L^2(TM, \omega).$$
Here, g q is the complexification of the scalar product on D q defined by polarization from the norm
$$|\xi|_q^2 = \min\left\{ \sum_{i=1}^m |u_i|^2 \;\middle|\; \xi = \sum_{i=1}^m u_i X_i(q) \right\}, \qquad \xi \in D_q. \tag{3.1}$$
The corresponding norm is ‖X‖²_{L²(M)} = (X, X). Observe that in the above the minimum can be removed if {X_1, …, X_m} is an orthonormal frame. In this case, we have
$$\|X\|_{L^2(M)}^2 = \int_M \sum_{i=1}^{k} |u_i|^2\; d\omega, \qquad \text{where } X = \sum_{i=1}^{k} u_i X_i.$$
Given an open set Ω ⊂ M , the space C ∞ 0 (Ω) is the space of smooth functions compactly supported in Ω. For Ω ⊂ M we then let H 2 0 (Ω, ω) (or H 2 0 (Ω)) to be the closure of
C_0^∞(Ω) w.r.t. the norm
$$\|u\|_{H^2(\Omega)}^2 = \|u\|_{L^2(\Omega)}^2 + \|\Delta_\omega u\|_{L^2(\Omega)}^2, \tag{3.2}$$
where ∆_ω is the sub-Laplacian (1.1).
The following result shows that this space is actually the horizontal second-order Sobolev space.
$$\|\nabla u\|_{L^2(\Omega)}^2 + \sum_{i,j=1}^m \|X_i X_j u\|_{L^2(V)}^2 \le C\, \|u\|_{H^2(\Omega)}^2, \qquad u \in C_0^\infty(\Omega).$$
Proof. Let u ∈ C_0^∞(Ω). We start by observing that, since (∆_ω u, u) = −‖∇u‖²_{L²(Ω)}, by Cauchy-Schwarz and Young's inequality we have
(3.3) ∇u 2 L 2 (Ω) ≤ ∆ ω u L 2 (Ω) u L 2 (Ω) ≤ 1 2 ∆ ω u 2 L 2 (Ω) + 1 2 u 2 L 2 (Ω) .
Moreover, for a suitable open set V ⊂ U ,
m i,j=1 X i X j u 2 L 2 (V ) ≤ C u 2 H 2 (U ) ≤ C u 2 H 2 (Ω) .
Here, the last inequality follows since supp u ⊂ Ω.
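The first step of the proof, the interpolation bound ∇u 2 ≤ 1/2 ∆ ω u 2 + 1/2 u 2 obtained from Cauchy-Schwarz and Young, can be sanity-checked numerically. The sketch below works in a one-dimensional Euclidean analogue (interval (−1, 1), flat Laplacian u ′′ , explicit bump function); all of these choices are illustrative and do not appear in the text:

```python
import math

# 1D Euclidean analogue of (3.3): for smooth u compactly supported in (-1, 1),
#   ||u'||^2 = -(u'', u) <= ||u''|| ||u|| <= 1/2 ||u''||^2 + 1/2 ||u||^2.
# The bump function and the grid are illustrative choices.

N = 20000
h = 2.0 / N
xs = [-1.0 + (i + 0.5) * h for i in range(N)]

def u(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def up(x):   # central finite difference for u'
    return (u(x + 1e-5) - u(x - 1e-5)) / 2e-5

def upp(x):  # central finite difference for u''
    return (u(x + 1e-4) - 2 * u(x) + u(x - 1e-4)) / 1e-8

grad2 = sum(up(x) ** 2 for x in xs) * h    # ||u'||^2
lap2  = sum(upp(x) ** 2 for x in xs) * h   # ||u''||^2
l2    = sum(u(x) ** 2 for x in xs) * h     # ||u||^2

assert grad2 <= math.sqrt(lap2 * l2) * (1 + 1e-4)  # Cauchy-Schwarz step
assert grad2 <= 0.5 * lap2 + 0.5 * l2              # Young's inequality step
```

The two assertions mirror the two inequalities in (3.3); the finite-difference errors are far below the slack in both.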
The relevance of this space is evident from the following consequence of Proposition 3.1. In the remainder of the section we derive some essential properties of H 2 0 (Ω). Proposition 3.4. Let Ω ⊂ Ω ′ ⊂ M be smooth, and u ∈ L 2 (Ω). Then, u ∈ H 2 0 (Ω) if and only if ū ∈ H 2 0 (Ω ′ ), where ū(p) := u(p) if p ∈ Ω, and ū(p) := 0 otherwise.
Proof. The result for the Euclidean Sobolev space of order one is well-known [10, Proposition 9.18]. The same arguments extend in a straightforward way to the case under consideration.
In view of the above, for any Ω ⊂ Ω ′ ⊂ H 1 we will always identify H 2 0 (Ω) with the set of the corresponding ū in H 2 0 (Ω ′ ). In particular, for any smooth open sets Ω 1 , Ω 2 ⊂ M it holds
(3.4) H 2 0 (Ω 1 ) ∩ H 2 0 (Ω 2 ) = H 2 0 (Ω 1 ∩ Ω 2 ).
In the sequel we will need the following simple fact, which we will apply with Ω 1 = Ω \B ε/2 (p) and Ω 2 = B ε (p), where Ω is a smooth open set, p ∈ Ω, and B r (p) stands for the open ball at p of radius r > 0.
Lemma 3.5. Let Ω 1 , Ω 2 ⊂ M be smooth open sets such that ∂Ω 1 ∩ ∂Ω 2 = ∅ and Ω 1 ∩ Ω 2 is relatively compact in M . Then,
H 2 0 (Ω 1 ) + H 2 0 (Ω 2 ) = H 2 0 (Ω 1 ∪ Ω 2 ). Proof. We start by proving the inclusion H 2 0 (Ω 1 ) + H 2 0 (Ω 2 ) ⊂ H 2 0 (Ω 1 ∪ Ω 2 ). Let u 1 ∈ H 2 0 (Ω 1 ) and u 2 ∈ H 2 0 (Ω 2 ). Then, there exists (u n i ) n ⊂ C ∞ 0 (Ω i ) such that u n i → u i w.r.t. the H 2 (Ω i ) norm, i = 1, 2. Then, the sequence u n 1 + u n 2 ∈ C ∞ 0 (Ω 1 ∪ Ω 2 ) satisfies u n 1 + u n 2 − (u 1 + u 2 ) H 2 0 (Ω 1 ∪Ω 2 ) ≤ u n 1 − u 1 H 2 0 (Ω 1 ∪Ω 2 ) + u n 2 − u 2 H 2 0 (Ω 1 ∪Ω 2 ) n→∞ −→ 0,
which proves the claim.
We now turn to the other inclusion. Let χ 1 , χ 2 ∈ C ∞ (Ω 1 ∪ Ω 2 ) be such that χ 1 + χ 2 = 1, 0 ≤ χ i ≤ 1 for i = 1, 2, χ 1 ≡ 1 on Ω 1 \ Ω 2 , and χ 1 ≡ 0 on Ω 2 \ Ω 1 . Such smooth functions exist thanks to the fact that ∂Ω 1 ∩ ∂Ω 2 = ∅. Moreover, since Ω 1 ∩ Ω 2 is relatively compact, there exists c > 0 such that |∇χ i | ≤ c and |∆ ω χ i | ≤ c, i = 1, 2.
Given
u ∈ H 2 0 (Ω 1 ∪ Ω 2 ), let (u n ) n ⊂ C ∞ 0 (Ω 1 ∪ Ω 2 ) such that u n → u in H 2 (Ω 1 ∪ Ω 2 ). Then, χ i u n → χ i u in L 2 (Ω i ), i = 1, 2, and ∇(χ i u n ) − ∇(χ i u) L 2 (Ω i ) ≤ χ i (∇u n − ∇u) L 2 (Ω i ) + |∇χ i |(u n − u) L 2 (Ω i ) ≤ ∇u n − ∇u L 2 (Ω 1 ∪Ω 2 ) + c u n − u L 2 (Ω 1 ∪Ω 2 ) . (3.5)
On the other hand, by Proposition 3.2 we have ∇(u n − u) L 2 (Ω 1 ∪Ω 2 ) ≤ C u n − u H 2 0 (Ω 1 ∪Ω 2 ) , and hence (3.5) shows that ∇(χ i u n ) → ∇(χ i u) in L 2 (Ω i ), i = 1, 2. Thanks to this fact, a similar computation on ∆ ω (χ i u n ) − ∆ ω (χ i u) yields χ i u n → χ i u in H 2 (Ω i ), i = 1, 2. This proves that u = χ 1 u + χ 2 u ∈ H 2 0 (Ω 1 ) + H 2 0 (Ω 2 ).
Essential self-adjointness of the Heisenberg pointed sub-laplacian
In this section we focus on the pointed sub-Laplacian in the Heisenberg group. We start by proving Theorem 1.7 via non-commutative harmonic analysis techniques. We then conclude the section by localizing this result in Theorem 4.11. That is, we show that the self-adjoint extensions of the pointed sub-Laplacian on a domain Ω ⊂ H 1 coincide with those of the (standard) sub-Laplacian on the same domain.
4.1. Pavlov-like lemma for essential self-adjointness ofH. In what follows, S(R 3 ) is the Schwartz space on R 3 and S (R 3 ) the space of tempered distributions on R 3 . We denote by T, u the action of T ∈ S (R 3 ) on u ∈ S(R 3 ). Observe that, given a symmetric operator A on L 2 (R 3 ), with S(R 3 ) ⊂ Dom A and A(S(R 3 )) ⊂ S(R 3 ), its action on T ∈ S (R 3 ) is defined as (4.1)
AT, u = T, Au ∀u ∈ S(R 3 ).
The Dirac delta centered at the origin is the distribution δ 0 defined as
δ 0 , u = u(0), u ∈ S(R 3 ).
For a multi-index α = (α X , α Y , α Z ), the symbol D α δ 0 denotes the derivative of δ 0 in the sense of distributions of order |α| = α X + α Y + α Z , where α X derivatives are computed with respect to X H , α Y with respect to Y H , and α Z with respect to Z H . The following lemma is the adaptation to our setting of [23, Lemma 1].
Lemma 4.1. Let A be an essentially self-adjoint operator on L 2 (R 3 ), with domain S(R 3 ), and A 0 be the restriction of A to C ∞ 0 (R 3 \ {0}). Assume, moreover, that rng A ⊂ S(R 3 ). Then, the deficiency space K − (A 0 ) = Ker(A * 0 + i) of A 0 is characterized as follows
K − (A 0 ) = θ ∈ L 2 (R 3 ) : ∃(c α ) α∈N 3 ⊂ R, (c α ) α∈N 3 ≢ 0, s.t. (A + i)θ = α∈N 3 c α D α δ 0 in the sense of distributions . (4.2) Proof. Observe that, since span{X H , Y H , Z H } = T H 1 = R 3 , it holds span{D α δ 0 | α ∈ N 3 } = span{∂ α 1 x ∂ α 2 y ∂ α 3 z δ 0 | α = (α 1 , α 2 , α 3 ) ∈ N 3 }. We then have Dom A 0 ⊂ u ∈ S(R 3 ) : α∈N 3 d α D α u(0) = 0, for all (d α ) α∈N 3 ⊂ R = u ∈ S(R 3 ) : α∈N 3 d α D α δ 0 , u = 0, for all (d α ) α∈N 3 ⊂ R . (4.3)
By density of Dom A 0 , we have that θ ∈ K − (A 0 ) ⊂ Dom A 0 * if and only if for every u ∈ Dom A 0 we have
0 = ((A 0 * + i)θ, u) (density of Dom A 0 ) = (θ, (A 0 − i)u) (definition of adjoint) = (θ, (A − i)u) (A 0 u = Au if u ∈ Dom A 0 ) = θ, (A − i)u (u, Au ∈ S(R 3 )) = (A + i)θ, u (A is symmetric, see (4.1)).
Hence, summing up the above and the relation in (4.3) we get that θ ∈ K − (A 0 ) if and only if for any (d α ) α∈N 3 ⊂ R it holds
(4.9) (A + i)θ + α∈N 3 d α D α δ 0 , u = 0, ∀u ∈ C ∞ 0 (R 3 \ {0}).
By definition of support of a distribution and the density of C ∞ 0 (R 3 ) in S(R 3 ), the latter is equivalent to the fact that the distribution on the left-hand side is supported in {0}. Since the only distributions supported at the origin are finite linear combinations of the Dirac delta and its derivatives, (4.9) is equivalent to the existence of (c α ) α∈N 3 ⊂ R such that
(A + i)θ = α∈N 3 c α D α δ 0 .
The statement follows by observing that the case (c α ) α∈N 3 ≡ 0 can be excluded since A is essentially self-adjoint.
4.2.
Essential self-adjointness of the pointed sub-Laplacian on H 1 . In this section we apply Lemma 4.1 to the sub-Laplacian on H 1 , via non-commutative harmonic analysis. This is done in Section 4.2.2 and requires some preliminary work that is presented in the next section.
4.2.1.
Non-commutative harmonic analysis on H 1 . For the following we heavily rely on [3,7,8]. (Observe that our normalization choice for the group law on H 1 agrees with [3].)
The dual space of H 1 (i.e., the space of equivalence classes of irreducible representations of H 1 ) is composed of the Schrödinger representations (X λ ) λ∈R , acting on L 2 (R) and given by 3
X λ (x,y,z) u(ξ) := e iλ(z−yξ+ xy 2 ) u(ξ − x), u ∈ L 2 (R).
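It can be instructive to verify numerically that each X λ is indeed a group homomorphism. The sketch below uses the group law (x, y, z) * (x ′ , y ′ , z ′ ) = (x + x ′ , y + y ′ , z + z ′ + (xy ′ − x ′ y)/2), which is recovered here from the phase above — this section does not state the law explicitly, so treat it as an assumption; the test function, the points and λ are arbitrary:

```python
import cmath

# Check X^lam_p X^lam_q = X^lam_{p*q} pointwise, where (assumed group law)
#   (x, y, z) * (x', y', z') = (x + x', y + y', z + z' + (x*y' - x'*y) / 2).

lam = 0.7  # arbitrary nonzero frequency

def rep(p, u):
    """(X^lam_p u)(xi) = exp(i lam (z - y xi + x y / 2)) u(xi - x)."""
    x, y, z = p
    return lambda xi: cmath.exp(1j * lam * (z - y * xi + x * y / 2)) * u(xi - x)

def mul(p, q):
    x, y, z = p
    a, b, c = q
    return (x + a, y + b, z + c + (x * b - a * y) / 2)

u0 = lambda xi: cmath.exp(-xi * xi) * (1 + 0.3j * xi)  # sample test function
p, q = (0.4, -1.1, 0.2), (-0.7, 0.5, 1.3)

lhs = rep(p, rep(q, u0))   # X_p (X_q u0)
rhs = rep(mul(p, q), u0)   # X_{p*q} u0
err = max(abs(lhs(xi) - rhs(xi)) for xi in (-2.0, -0.5, 0.0, 1.0, 2.5))
assert err < 1e-12
```

The phases match exactly, so the discrepancy is pure floating-point rounding.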
The non-commutative Fourier transform of a function f ∈ L 1 (H 1 ) is then the family of Hilbert-Schmidt operators F(f ) =f = (f λ ) λ∈R ⊂ HS(L 2 (R)) acting on L 2 (R) and defined byf
λ = H 1 f (p) X λ p −1 dp.
Similarly to the Euclidean case, the above Fourier transform can be extended to an isometry between L 2 (H 1 ) and L 2 (Ĥ 1 , HS(L 2 (R))), whereĤ 1 ≅ R \ {0} is endowed with the Plancherel measure dμ = (|λ|/4π 2 ) dλ. Remark 4.2. The choice of group law in [7,8] is (x, y, z) B (x ′ , y ′ , z ′ ) = (x + x ′ , y + y ′ , z + z ′ − 2(xy ′ − x ′ y)).
Then, (H 1 , * ) ≅ (H 1 , B ) via the group isomorphism Φ(x, y, z) = (x/2, −y/2, z).
Observe that under this isomorphism the sub-Laplacian ∆ H = X 2 H + Y 2 H coincides with 4∆ B , where ∆ B is the sub-Laplacian considered in [7,8]. Moreover, letting U λ p be the Schrödinger representations considered in [7], we have
X λ p −1 = U λ Φ•ι(p)
, where ι(x, y, z) = (−x, y, z). This identity, and the fact that the definition of Fourier transform (denoted by F B ) in [7,8] evaluates the representations at p and not at p −1 ,
implies that F(f ) = 4F B (f • Φ −1 • ι).
Given a differential operator P on functions over H 1 , we letP = F • P • F −1 . Then, we have the following relations:
X H = ⊕ RX λ H ,X λ H u(ξ) = −∂ ξ u(ξ), (4.10)Ŷ H = ⊕ RŶ λ H ,Ŷ λ H u(ξ) = −iλξu(ξ), ∆ H = ⊕ R∆ λ H ,∆ λ H u(ξ) = ∂ 2 ξ − λ 2 ξ 2 u(ξ).
Remark 4.3. Due to the difference of our definition of the Fourier transform with respect to the one defined in [7,8], while in our case ∆ H acts by left composition, i.e., ∆ H f = ∆ H • f , in [7,8] it acts by right composition. This accounts for some differences in the following. (Compare, e.g., formula (4.13) with [7, (1.15)].)
Following [7], we consider the family (H n ) n∈N of Hermite functions given by
H n (ξ) := (2 n n!) −1/2 (−∂ ξ + ξ) n H 0 (ξ), H 0 (ξ) := π −1/4 e −|ξ| 2 /2 .
These form an orthonormal basis of L 2 (R), which diagonalizes∆ 1 H . In particular, −∆ 1 H H n = (2n + 1)H n . For λ ≠ 0, we introduce the rescaled Hermite functions H n,λ (ξ) := |λ| 1/4 H n (|λ| 1/2 ξ). The family (H n,λ ) n∈N is still an orthonormal basis of L 2 (R), and (4.11) holds. Then, we letF : L 2 (H 1 ) → L 2 (H 1 ) be the functionF(f )(n, m, λ) =f (n, m, λ) := (f λ H m,λ |H n,λ ),
where (·|·) denotes the standard scalar product in L 2 (R),
(f |g) = R f (ξ) g(ξ) dξ, f, g ∈ L 2 (R).
Let us define the following "Wigner distributions":
(4.12) W((x, y),w) = R e iλyξ H n,λ ξ + x 2 H m,λ ξ − x 2 dξ,w = (n, m, λ).
Then, a simple change of variables yields
f (w) = H 1 f (x, y, z) e −iλz W((x, y),w) dx dy dz.
Proposition 4.5. Let P be a linear operator on L 2 (H 1 ) such thatP = ⊕ RP λ . Then, for any f ∈ L 2 (H 1 ) we haveF(P f )(·, ·, λ) =P (·, ·, λ) •f (·, ·, λ), whereP (n, m, λ) = (P λ H m,λ |H n,λ ), and the composition between two operators A, B on ℓ 2 (N) is given by (A • B) n,m = ℓ∈N A n,ℓ B ℓ,m . Let Id be the identity operator on ℓ 2 (N), that is, if δ n,m denotes the Kronecker delta, Id n,m = δ n,m . We will also denote by S k the shift operator on ℓ 2 (N) given by (S k v) n = v n−k , v ∈ ℓ 2 (N), and whose matrix is (S k ) n,m = δ n−k,m . Observe also that S k • S ℓ = S k+ℓ .
We will need the following, which follows by straightforward computations.
(S k • D • S h ) n,m = δ n−(h+k),m d n−k .
In particular, we have
[S −1 • D − D • S 1 , S −1 • D + D • S 1 ] n,m = 2δ n,m (d 2 n+1 − d 2 n )
. We then have the following. Remark 4.9. In a more explicit form, we have,
X H (n, m, λ) = −|λ| 1/2 ( √((n + 1)/2) δ n+1,m − √((m + 1)/2) δ n,m+1 ), Y H (n, m, λ) = −i sgn(λ)|λ| 1/2 ( √((n + 1)/2) δ n+1,m + √((m + 1)/2) δ n,m+1 ).
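Lemma 4.7 and the matrices above can be sanity-checked on finite truncations of ℓ 2 (N); the truncation spoils the last rows, so only interior entries are compared. With d n = √(n/2), the choice made in Proposition 4.8, the right-hand side 2δ n,m (d 2 n+1 − d 2 n ) reduces to δ n,m , consistently withZ H = iλ Id. The matrix size below is an arbitrary choice:

```python
import math

# Truncate l^2(N) to indices 0..M-1 and verify the commutator identity of
# Lemma 4.7 on interior entries, with d_n = sqrt(n/2), for which
#   2 delta_{n,m} (d_{n+1}^2 - d_n^2) = delta_{n,m}.
M = 12
d = [math.sqrt(n / 2) for n in range(M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(M)) for j in range(M)]
            for i in range(M)]

# (S_{-1})_{n,m} = delta_{n+1,m}, (S_1)_{n,m} = delta_{n-1,m}, D = diag(d_n)
S_m1 = [[float(n + 1 == m) for m in range(M)] for n in range(M)]
S_p1 = [[float(n - 1 == m) for m in range(M)] for n in range(M)]
D = [[d[n] * (n == m) for m in range(M)] for n in range(M)]

SD, DS = matmul(S_m1, D), matmul(D, S_p1)
A = [[SD[n][m] - DS[n][m] for m in range(M)] for n in range(M)]  # S_{-1}D - DS_1
B = [[SD[n][m] + DS[n][m] for m in range(M)] for n in range(M)]  # S_{-1}D + DS_1
AB, BA = matmul(A, B), matmul(B, A)

for n in range(M - 2):
    for m in range(M - 2):
        expected = 2 * (d[n + 1] ** 2 - d[n] ** 2) if n == m else 0.0
        assert abs(AB[n][m] - BA[n][m] - expected) < 1e-12
```

The diagonal of the commutator is exactly 1 in this normalization, which is the finite-dimensional shadow of (4.16).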
Proof. By (4.10) and the definition ofX (see Proposition 4.5), it suffices to compute
(X λ H H m,λ |H n,λ ) = R ∂ ξ H m,λ (ξ)H n,λ (ξ) dξ = |λ| R H ′ m (|λ| 1/2 ξ)H n (|λ| 1/2 ξ) dξ = |λ| 1/2 (H ′ m |H n ).
The recurrence relation for Hermite functions H ′ m = √(m/2) H m−1 − √((m + 1)/2) H m+1 proves (4.14). Indeed,
(S −1 • D − D • S 1 ) n,m = √((n + 1)/2) δ n+1,m − √((m + 1)/2) δ n,m+1 .
Thanks to the recurrence relation ξH m (ξ) = √(m/2) H m−1 (ξ) + √((m + 1)/2) H m+1 (ξ), the same arguments yield also (4.15). Finally, to complete the proof, it suffices to observe that
Z H =F([X H , Y H ]) = [X H ,Ỹ H ] = iλ[S −1 • D − D • S 1 , S −1 • D + D • S 1 ].
Indeed, (4.16) then follows by Lemma 4.7.
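The two recurrence relations invoked in the proof can be verified numerically. The sketch below realizes the Hermite functions through the physicists' Hermite polynomials h n (an implementation choice, not taken from the text), for which h n+1 = 2ξh n − 2n h n−1 and h ′ n = 2n h n−1 :

```python
import math

# Check, at sample points, that
#   H_m'(xi)   = sqrt(m/2) H_{m-1}(xi) - sqrt((m+1)/2) H_{m+1}(xi),
#   xi H_m(xi) = sqrt(m/2) H_{m-1}(xi) + sqrt((m+1)/2) H_{m+1}(xi),
# with H_n(xi) = (2^n n! sqrt(pi))^{-1/2} h_n(xi) exp(-xi^2 / 2).

def h(n, xi):  # physicists' Hermite polynomial via its three-term recurrence
    h0, h1 = 1.0, 2.0 * xi
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * xi * h1 - 2.0 * k * h0
    return h1

def c(n):
    return 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))

def H(n, xi):
    return c(n) * h(n, xi) * math.exp(-xi * xi / 2)

def dH(n, xi):  # exact derivative, via h_n' = 2n h_{n-1}
    hp = 2.0 * n * h(n - 1, xi) if n > 0 else 0.0
    return c(n) * (hp - xi * h(n, xi)) * math.exp(-xi * xi / 2)

for m in range(1, 6):
    for xi in (-1.3, 0.0, 0.4, 2.1):
        assert abs(dH(m, xi)
                   - (math.sqrt(m / 2) * H(m - 1, xi)
                      - math.sqrt((m + 1) / 2) * H(m + 1, xi))) < 1e-10
        assert abs(xi * H(m, xi)
                   - (math.sqrt(m / 2) * H(m - 1, xi)
                      + math.sqrt((m + 1) / 2) * H(m + 1, xi))) < 1e-10
```

These are the standard ladder-operator relations for the harmonic-oscillator eigenfunctions, which is exactly how (4.14)-(4.15) are obtained above.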
For the following observations we refer to [8]. Similarly to what happens for the standard Fourier transform, it can be shown thatF is a continuous isomorphism between the class S(H 1 ) = S(R 3 ) of Schwartz functions on H 1 , which are defined as in the Euclidean case, and a space of functions onH 1 , denoted by S(H 1 ). This allows us to extendF to tempered distributions on H 1 , i.e., elements of S (H 1 ), via the following relation
(4.17) F T, θ S (H 1 ) = T,F * θ S (H 1 ) .
Here ·, · denotes the duality, andF * (which can be computed on functions in S(H 1 )), is the operator (4.18)F * θ(x, y, z) = H1 e −izλ W((x, y),w) θ(w) dw
We then have the following.
Proposition 4.10. Let δ 0 be the Dirac distribution centered at the origin. Then, for any multi-index α = (α X , α Y , α Z ) ∈ N 3 , we have the following
δ 0 (·, ·, λ) = Id,F(X α X H Y α Y H Z α Z H δ 0 )(·, ·, λ) = Q(|λ| 1/2 )B,
where Q is a polynomial of degree α X + α Y + 2α Z and B is a non-zero operator on ℓ 2 (N).
Proof. By (4.17), for any θ ∈ S(H 1 ), we have F δ 0 , θ S (H 1 ) = F * θ(0). Then, by the orthonormality of the families (H n,λ ) n∈N , λ ≠ 0, lettingw = (n, m, λ), by (4.18) and (4.12) we have
F * θ(0) = H1 W((0, 0),w) θ(w) dw = H1 (H n,λ |H m,λ ) θ(w) dw = H1 δ n,m θ(w) dw.
This proves the first part of the statement.
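The orthonormality of the rescaled Hermite functions used in the last step — i.e. W((0, 0),w) = (H n,λ |H m,λ ) = δ n,m — can be checked by direct quadrature; λ, the grid and the truncation order below are arbitrary choices:

```python
import math

# Quadrature check of (H_{n,lam} | H_{m,lam}) = delta_{n,m}, where
# H_{n,lam}(xi) = |lam|^{1/4} H_n(|lam|^{1/2} xi).

def h(n, xi):  # physicists' Hermite polynomial
    h0, h1 = 1.0, 2.0 * xi
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * xi * h1 - 2.0 * k * h0
    return h1

def H(n, xi):  # normalized Hermite function
    c = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return c * h(n, xi) * math.exp(-xi * xi / 2)

def H_scaled(n, lam, xi):
    return abs(lam) ** 0.25 * H(n, math.sqrt(abs(lam)) * xi)

lam = -1.7
a, N = 12.0, 2000                    # midpoint rule on [-a, a]
step = 2 * a / N
xs = [-a + (i + 0.5) * step for i in range(N)]
for n in range(4):
    for m in range(4):
        inner = sum(H_scaled(n, lam, x) * H_scaled(m, lam, x) for x in xs) * step
        assert abs(inner - float(n == m)) < 1e-10
```

The midpoint rule is spectrally accurate on Schwartz functions, so the tolerance is easily met despite the crude-looking grid.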
To complete the proof it then suffices to apply toδ 0 the expression of the operators X H ,Ỹ H , andZ H , given in Proposition 4.8.
Proof of Theorem 1.7. Recall that the sub-Laplacian
H = −∆ H , Dom H = C ∞ 0 (R 3 )
is essentially self-adjoint. It is easy to check that S(R 3 ) ⊂ Dom(H). We then let A be the essentially self-adjoint operator obtained by restrictingH to S(R 3 ). Moreover, by smoothness of X H and Y H it holds rng A ⊂ S(R 3 ).
In view of the above, we apply Lemma 4.1 to A 0 = −∆ H , Dom(A 0 ) = C ∞ 0 (R 3 \{0}). In particular, we consider the equation (4.2) that elements of K − (A 0 ) have to satisfy and applyF on both sides. By (4.13) and Proposition 4.10, this yields θ(·, ·, λ) = Q(|λ| 1/2 )B / (|λ|(2n + 1) + i).
Choosing n, m such that B n,m = 0, by [7, Theorem 1.2] this yields
4π 2 θ 2 L 2 (H 1 ) = θ 2 L 2 (H 1 ) ≥ R [ Q(|λ| 1/2 ) 2 B 2 n,m / (|λ| 2 (2n + 1) 2 + 1) ] (|λ| / 4π 2 ) dλ ≳ +∞ 1 dλ/|λ| = +∞.
This implies that K − (A 0 ) = {0}. Since the operator A 0 is non-negative, this proves that A 0 is essentially self-adjoint. (See, e.g., [25, Thm. X.I and Corollary]) To conclude the proof of the statement, it suffices to observe that the essential self-adjointness of A 0 implies the essential self-adjointness ofH, since Dom A 0 ⊂ DomH.
4.3.
Heisenberg pointed sub-Laplacian on domains. In this section, we localize Theorem 1.7 by proving the following result. Thanks to the invariance under left translation of ∆ H we can assume p = 0. Moreover, for any ε > 0 we let U ε ⊂ R 3 be the Euclidean ball of radius ε centered at the origin. By Lemma 3.5, for any ε > 0 sufficiently small, we have that
H 2 0 (Ω) = H 2 0 (U ε ) + H 2 0 (Ω \ U ε/2 ) and H 2 0 (Ω \ {0}) = H 2 0 (U ε \ {0}) + H 2 0 (Ω \ U ε/2 ). Thus, we are reduced to show that H 2 0 (U ε ) = H 2 0 (U ε \ {0}).
Since the other inclusion is obvious, let us prove that
H 2 0 (U ε ) ⊂ H 2 0 (U ε \ {0}).
To this aim, we consider u ∈ H 2 0 (U ε ) and a sequence (u n ) n ⊂ C ∞ 0 (U ε ) such that u n → u in the H 2 norm. Observe that, by Theorem 1.7 and Lemma 3.5, we have
(4.19) H 2 0 (H 1 ) = H 2 0 (H 1 \ {0}) = H 2 0 (U ε \ {0}) + H 2 0 (H 1 \ U ε/2 ).
Hence, since u ∈ H 2 0 (U ε ) ⊂ H 2 0 (H 1 ), (4.19) implies that there exist two sequences (u (1) n ) n ⊂ C ∞ 0 (U ε \ {0}) and (u (2) n ) n ⊂ C ∞ 0 (H 1 \ U ε/2 ) which converge in the H 2 norm respectively to u (1) , u (2) where u (1) + u (2) = u. Thus, the sequence u n − u (1) n ⊂ C ∞ 0 (U ε ) converges to u (2) in the H 2 norm, and hence u (2) ∈ H 2 0 (U ε ) ∩ H 2 0 (H 1 \ U ε/2 ) = H 2 0 (U ε \ U ε/2 ). Here, the last equality follows by (3.4). As a consequence, we can assume (u (2) n ) n ⊂ C ∞ 0 (U ε \ U ε/2 ). Finally, we have shown that the sequence u (1) n + u (2) n is contained in C ∞ 0 (U ε \ {0}), and satisfies lim n (u (1) n + u (2) n ) = u. This completes the proof.
Essential self-adjointness of 3D pointed sub-Laplacians
Let M be a 3-dimensional genuine sub-Riemannian manifold, endowed with a smooth and positive measure ω. Let p ∈ M be a regular point, and {X 1 , X 2 } be a local orthonormal frame for the sub-Riemannian structure in U ⊂ M , p ∈ U . (See Remark 1.3.) By (1.1), we have
(5.1) ∆ ω = X 2 1 + X 2 2 + X 0 , where X 0 = div ω (X 1 )X 1 + div ω (X 2 )X 2 .
The purpose of this section is to prove the following.
Theorem 5.1. The set of self-adjoint extensions of −∆ ω with domain C ∞ 0 (M \ {p}) coincides with the one with domain C ∞ 0 (M ). We remark that, since the sub-Laplacian is essentially self-adjoint when M is complete, the above implies Theorem 1.6.
The idea of the proof is to show that, sufficiently near p, the Sobolev space H 2 0 associated with the sub-Laplacian (5.1) is equivalent to the one associated with the Heisenberg sub-Laplacian. This will then allow us to exploit the results obtained in Theorem 4.11 locally around p. Finally, a localization argument completes the proof.
In order to go on with the above plan, we fix the following set of coordinates around p, for which we refer to [31,Sec. 8.2]. We stress that the regularity of p is essential for the existence of these coordinates.
Proposition 5.2. There exists a local set of coordinates in a neighborhood V around p such that, denoting by X H and Y H the Heisenberg vector fields, there exists C = (c ij ) ∈ C ∞ (V, GL 2 (R)) such that C −1 ∈ C ∞ (V, GL 2 (R)) and
X 1 = c 11 X H + c 12 Y H , X 2 = c 21 X H + c 22 Y H .
Since, without loss of generality, we can assume that these coordinates cover the whole U , we will henceforth identify points in U with their coordinate representation and p with the origin. In the following, for ε > 0 we let U ε be the Euclidean ball centered at 0 of radius ε, and H 2 0 (U ε ) be the Sobolev space (3.2) with respect to the sub-Laplacian ∆ ω in M .
Proposition 5.3. It holds that H 2 0 (U ε ) = H 2 0 (U ε \ {p}).
(5.2) 1/ℓ ≤ ω(q) ≤ ℓ, ∀q ∈ U ε .
In particular, this implies that the L 2 norms on U ε w.r.t. ω and the Lebesgue measure are equivalent. Moreover, by Proposition 5.2, there exist smooth functions α ij and β i , i, j = 1, 2, such that
∆ H = 2 i,j=1 α ij X i X j + 2 i=1 β i X i .
Let c > 0 be such that |α ij |, |β i | ≤ c on U ε for i, j = 1, 2. By (5.2) and Proposition 3.2 with Ω = U = U ε , we then have that there exists a constant C > 0 such that, for any u ∈ C ∞ 0 (U ε ), it holds
∆ H u 2 L 2 (Uε,H 1 ) ≤ ℓ Uε |∆ H u| 2 dω ≤ c ( 2 i,j=1 X i X j u 2 L 2 (Uε) + ∇u 2 L 2 (Uε) ) ≤ C u 2 H 2 (Uε) .
The same argument can be used to show that ∆ ω u L 2 (Uε) ≲ u H 2 (Uε,H 1 ) , completing the proof of the statement.
Thanks to the above we are now in a position to complete the proof of the main theorem.
Proof of Theorem 5.1. By Proposition 3.3, we need to show that H 2 0 (M ) = H 2 0 (M \ {p}). Let ε > 0 sufficiently small, so that Proposition 5.3 implies that H 2 0 (U ε ) = H 2 0 (U ε \ {p}). By Lemma 3.5, we then have
H 2 0 (M ) = H 2 0 (U ε ) + H 2 0 (M \ U ε/2 ) = H 2 0 (U ε \ {p}) + H 2 0 (M \ U ε/2 ) = H 2 0 (M \ {p}).
Appendix A. Hardy constant in the Heisenberg group
In the Riemannian setting, the essential self-adjointness for n ≥ 4 follows from the validity of local Hardy-type inequalities, with constant bigger than or equal to 1. Indeed, via normal coordinates, one can show that for every u ∈ C ∞ 0 (M \ {p}) it holds
M |∇ R u| 2 dω ≥ n − 2 2 2 {δ R <η} 1 δ 2 R − k δ R u 2 dω + c u 2 L 2 (M ) ,
for some constants η > 0, k ≤ 1/η, c ∈ R, see [24]. Here, δ R (q) = d R (q, p) is the Riemannian distance from p ∈ M . By Agmon type estimates, this yields at once the essential self-adjointness for n ≥ 4 as presented in [22,24,15].
In this appendix, we show that Hardy-type inequalities as the above with constant bigger than or equal to 1 do not hold for 3-dimensional sub-Riemannian manifolds. In particular, the Hardy constant in the 3-dimensional Heisenberg group H 1 is strictly less than 1.
In the following, we denote the distance from the origin in H 1 as δ(p) := d(p, 0). One can check that δ is smooth on R 3 \ {x = y = 0} and satisfies (A.1) |∇ H δ| = 1 a.e. on H 1 .
Here, with abuse of notation, |·| denotes the sub-Riemannian norm of horizontal vector fields as defined in (3.1), where g = g H is defined in Section 2. Observe that, by [19], there exists C > 0 such that
(A.2) H 1 |∇ H u| 2 dp ≥ C H 1 |u| 2 δ 2 dp, ∀u ∈ C ∞ 0 (H 1 \ {0}).
In the sequel we prove the following fact on the sharp constant in the above, contradicting the result claimed in [30].
Theorem A.1. We have
C H = inf u∈C ∞ 0 (R 3 \{0}) H 1 |∇ H u| 2 dp / H 1 (|u| 2 /δ 2 ) dp < 1.
Remark A.2. Numerical computations on the explicit function used in the proof of Theorem A.1 yield C H ≤ 0.798.
Remark A.3. In [17], an inequality similar to (A.2) is investigated, where the distance from the origin is replaced by the Koranyi norm N , defined in (2.1). In particular, they obtain
H 1 |∇ H u| 2 dp ≥ H 1 |u| 2 |∇ H N | 2 N 2 dp, ∀u ∈ C ∞ 0 (H 1 \ {0}).
Here, the constant is equal to 1 and is sharp. Unfortunately, since the level sets of |∇ H N |/N are not neighborhoods of the origin, this inequality cannot be paired with the Agmon-type estimates techniques of [15,24] in order to yield the essential self-adjointness result.
A.1. A set of coordinates in H 1 . We define the diffeomorphism Φ : R + × S 1 × (−2π, 2π) → H 1 \ {x = y = 0}. Observe that Φ(t, θ, r) = t • Φ(1, θ, r), where t • denotes the anisotropic dilation introduced in (2.2). Moreover, direct computations show that Φ * L 3 = t 3 µ(r) dt dθ dr, where µ(r) = (2 − 2 cos r − r sin r)/r 4 .
Let θ 0 ∈ S 1 , h 0 ∈ R and t ∈ [0, 2π/|λ z |]. Then, the curve t → Φ(t, θ 0 , th 0 ) is an arc-parametrized length-minimizing curve issuing from 0. (See, [2,Section 4.4.3].) In particular, this implies that δ(Φ(t, θ, r)) = t, for any t > 0, θ ∈ S 1 and r ∈ [−2π, 2π].
One can check that ∇ H δ = cos(θ + r)X H + sin(θ + r)Y H . Then, we let (∇ H δ) ⊥ = − sin(θ + r)X H + cos(θ + r)Y H be a choice of horizontal unit vector orthogonal to ∇ H δ. By (A.1), ∇ H δ and (∇ H δ) ⊥ form an orthonormal basis of horizontal vector fields. Straightforward computations then show that, in the coordinates given by Φ, we have the following:
∇ H δ = ∂ t + (r/t) ∂ r ,
(∇ H δ) ⊥ = (r/t) (r − sin(r))/(r sin(r) + 2 cos(r) − 2) ∂ θ + (r/t) w(r) ∂ r .
Here, we let w(r) = r 2 − r cot r 2 , r ∈ (−2π, 2π).
A.2. Preliminary computations on the Koranyi norm. Recall that the Koranyi norm (2.1) is
N (x, y, z) = ((x 2 + y 2 ) 2 + 16z 2 ) 1/4 .
In particular, we have N • Φ(t, θ, r) = ( √ 2 t / r) (r 2 − 2r sin(r) − 2 cos(r) + 2) 1/4 .
With a little abuse of notation we still denote by N the Koranyi norm in the coordinates Φ. Since δ(Φ(t, ·, ·)) = t, t > 0, for any α ∈ R this yields (A.3) |N α/2 | 2 /δ 2 = 2 α/2 t α−2 (r 2 − 2r sin(r) − 2 cos(r) + 2) α/4 /r α = t α−2 γ α (r).
Here, γ α is defined by the last equality. It is simple to check that γ α (r)µ(r) is a nonnegative and bounded continuous function on [−2π, 2π], whose maximum is 1/12 at r = 0 and whose minimum is 0 at r = ±2π. In particular, the above is integrable in t 3 µ(r) dt dr for t → +∞ only if α < −2.
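The stated properties of γ α (r)µ(r) can be checked numerically for α = −2, the value relevant in the proof of Theorem A.1 below. The explicit expression γ −2 (r)µ(r) = (2 − 2 cos r − r sin r)/(2r 2 √D(r)), with D(r) = r 2 − 2r sin r − 2 cos r + 2, is assembled here from (A.3) and the formula for µ, so it should be read as a derived, not quoted, formula:

```python
import math

# gamma_{-2}(r) mu(r) on [-2 pi, 2 pi]: nonnegative, bounded, maximum 1/12
# at r = 0 (a removable singularity), vanishing at r = +- 2 pi.

def D(r):
    return r * r - 2 * r * math.sin(r) - 2 * math.cos(r) + 2

def g(r):  # gamma_{-2}(r) * mu(r)
    if abs(r) < 1e-4:
        return 1.0 / 12.0  # Taylor value at the removable singularity r = 0
    return (2 - 2 * math.cos(r) - r * math.sin(r)) / (2 * r * r * math.sqrt(D(r)))

samples = [(-2 + 4 * k / 2000) * math.pi for k in range(2001)]
vals = [g(r) for r in samples]

assert all(v >= -1e-12 for v in vals)       # nonnegative
assert max(vals) <= 1.0 / 12.0 + 1e-9       # bounded, with maximum 1/12 ...
assert abs(g(0.0) - 1.0 / 12.0) < 1e-12     # ... attained at r = 0
assert abs(g(2 * math.pi)) < 1e-12          # minimum 0 at r = 2 pi
assert abs(g(-2 * math.pi)) < 1e-12         # and at r = -2 pi
```

Near r = 0 one finds g(r) ≈ (1/12)(1 − 7r 2 /180), which is consistent with the sampled maximum.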
Using the expression of ∇ H δ and (∇ H δ) ⊥ , and the fact that they form an orthonormal frame outside t = 0, we then get |∇ H N (t, θ, r)| 2 = 1 − cos(r) r 2 − 2r sin(r) − 2 cos(r) + 2
In particular, for any α ∈ R we have |∇ H (N α/2 )| 2 = α 2 4 r 2 (1 − cos r) 2 (r 2 − 2r sin(r) − 2 cos(r) + 2) |N α/2 | 2 δ 2 = α 2 4 t α−2 γ α (r)η(r).
(A.4)
Here, η is defined by the last equality, and is independent of α. Observe that, also in this case, the integrability at infinity is true only if α < −2.
A.3. Proof of Theorem A.1. Let us fix a smooth function χ : R + → [0, 1] such that χ| [0,1/2] ≡ 0 and χ| [1,+∞) ≡ 1. Then, for α < −2, we let u α (t, θ, r) = χ(t)N α/2 (1, θ, r), if t ≤ 1, N α/2 (t, θ, r), otherwise.
By definition of χ, (A.3), and (A.4), for any α < −2 there exists (v n ) n ⊂ C ∞ 0 (H 1 ) such that lim n→+∞ H 1 |v n | 2 /δ 2 dp = H 1 |u α | 2 /δ 2 dp, lim n→+∞ H 1 |∇ H v n | 2 dp = H 1 |∇ H u α | 2 dp.
In particular, C H ≤ H 1 |∇ H u α | 2 dp / H 1 (|u α | 2 /δ 2 ) dp, and
(A.5) H 1 |u α | 2 /δ 2 dp ≥ δ≥1 |u α | 2 /δ 2 dp = 2π +∞ 1 t α+1 dt 2π −2π γ α (r)µ(r) dr.
Observe that the integral in t on the r.h.s. goes to +∞ as α → −2. Moreover, as direct computations show, N α/2 | t=1 and ∂ r (N α/2 )| t=1 are uniformly bounded from above for α ∈ [−3, −2]. As a consequence, there exists a constant C > 0 such that |∇ H u α | 2 ≤ C on {0 ≤ δ ≤ 1}. In particular, by (A.4),
(A.6) H 1 |∇ H u α | 2 dp ≤ C L 3 ({0 ≤ δ ≤ 1}) + (α 2 /4) 2π +∞ 1 t α+1 dt 2π −2π γ α (r)η(r)µ(r) dr.
Taking the quotient of (A.6) and (A.5), and passing to the limit as α → −2, we get
(A.7) C H ≤ 2π −2π γ −2 (r)η(r)µ(r) dr / 2π −2π γ −2 (r)µ(r) dr.
Here, we passed to the limit under the integral signs thanks to monotone convergence. Simple computations show that η(0) = 1, η(±2π) = 0, and that η is monotone decreasing in |r|. Hence, for any a > 0, it holds |r|>a γ −2 (r)η(r)µ(r) dr < η(a) |r|>a γ −2 (r)µ(r) dr, |r|≤a γ −2 (r)η(r)µ(r) dr ≤ |r|≤a γ −2 (r)µ(r) dr.
Summing up, since η(a) < 1 and |r|>a γ −2 (r)µ(r) dr > 0, by (A.7) we obtain that C H < 1.
1.1. Sub-Riemannian manifolds. A sub-Riemannian structure on a smooth manifold M is given by a family of smooth vector fields {X 1 , . . . , X m } ⊂ Vec(M ) satisfying the Hörmander condition. Namely, let D = span{X 1 , . . . , X m }, pose D 1 = D and recursively define D s = D s−1 + [D, D s−1 ], s ∈ N, s ≥ 2. This defines the flag D 1 ⊂ . . . ⊂ D s ⊂ . . . ⊂ Vec(M ).
puted via the flag D 1 q ⊂ . . . ⊂ D k(q) q = T q M . In particular, Q is possibly infinite.
Figure 1. The thin molecule is obtained by considering the above rod and letting r → 0. The thin degree of freedom is α.
Proposition 3.1. An operator T is a self-adjoint extension of A if and only if it is a self-adjoint extension ofĀ. In particular, if A and B are two operators such thatĀ =B, then A(A) = A(B).
Proposition 3.2. Let Ω ⊂ M be open and fix q ∈ Ω. Then, for any open neighborhood U ⊂ Ω of q there exist an open neighborhood V ⊂ U of q and C > 0 such that we have,
The result follows by [27, Theorem 18.d] (see also [9, Remark 53]). In fact, the latter and (3.3) yield the existence of an open set V ⊂ U such that m i,j=1 X i X j u 2 L 2 (V ) ≤ C u 2 H 2 (U ) ≤ C u 2 H 2 (Ω) .
Proposition 3.3. Let Ω ⊂ M . Then, H 2 0 (Ω) is the domain of the closure of ∆ ω with domain C ∞ 0 (Ω). In particular, if p ∈ Ω is such that H 2 0 (Ω) = H 2 0 (Ω \ {p}), the self-adjoint extensions of the sub-Laplacian with domain C ∞ 0 (Ω) and C ∞ 0 (Ω \ {p}) coincide.
(4.11) −∆ λ H H n,λ = (2n + 1)|λ| H n,λ . Definition 4.4. LetH 1 = N 2 × (R \ {0}), endowed with the measure dw defined by H1 θ(w) dw = n,m∈N R θ(n, m, λ) (|λ|/4π 2 ) dλ,w = (n, m, λ).
Proof. By definition, for any m ∈ N we havef λ H m,λ = ℓ∈Nf (ℓ, m, λ)H ℓ,λ . Then,F(P f )(n, m, λ) = (P λf λ H m,λ |H n,λ ) = ℓ∈N (P λ H ℓ,λ |H n,λ )f (ℓ, m, λ) = ℓ∈NP (n, ℓ, λ)f (ℓ, m, λ).
Remark 4.6. By the above proposition and (4.11) we have
(4.13)F(∆ H f )(n, m, λ) = −|λ|(2n + 1)f (n, m, λ).
Lemma 4.7. Let D be a diagonal operator on ℓ 2 (N) defined by D n,m = d n δ n,m . Then,
Proposition 4.8. Let D be the diagonal operator on ℓ 2 (N) defined by D n,m = √(n/2) δ n,m . Then, in the notations of Proposition 4.5, it holds
(4.14)X H (·, ·, λ) = −|λ| 1/2 (S −1 • D − D • S 1 ),
(4.15)Ỹ H (·, ·, λ) = −i sgn(λ)|λ| 1/2 (S −1 • D + D • S 1 ),
(4.16)Z H (·, ·, λ) = iλ Id .
Theorem 4.11. Let Ω ⊂ H 1 be a smooth open set. Then, for any p ∈ Ω, the set of self-adjoint extensions ofH = −∆ H with domain Dom(H) = C ∞ 0 (Ω \ {p}) coincides with the one of H = −∆ H with domain Dom(H) = C ∞ 0 (Ω). Proof. By Proposition 3.3, it suffices to prove that H 2 0 (Ω) = H 2 0 (Ω \ {p}).
Proof. Let us consider the coordinates given by Proposition 5.2. We denote by H 2 0 (U ε , H 1 ) and H 2 0 (U ε \ {p}, H 1 ) the Sobolev spaces associated with the Heisenberg sub-Laplacian ∆ H = X 2 H + Y 2 H . In particular, by Theorem 4.11, H 2 0 (U ε , H 1 ) = H 2 0 (U ε \ {p}, H 1 ). Thus, in order to prove the statement, it suffices to show that the H 2 norm w.r.t. the sub-Laplacian ∆ ω and the measure ω is equivalent to the one w.r.t. ∆ H and the Lebesgue measure. By smoothness of ω there exists ℓ > 0 such that, letting dω = ω(q) dq, we have (5.2).
That is, one can find an open neighborhood U of q 0 and a family of linearly independent vector fields {Y 1 , . . . , Y k(q0) } ⊂ Vec(M ), such that D q = span{Y 1 (q), . . . , Y k(q0) (q)} for any q ∈ U , and that the sub-Riemannian distances defined by {Y 1 , . . . , Y k(q0) } and {X 1 , . . . , X m } coincide on U .
Observe that X 0 is simply the left-invariant representation of R × {0} × {0} < H 1 (i.e., the translation), as it is standard by Mackey machinery and the fact that H 1 = R 2 R 1 .
FMJH & IMO, Université Paris Sud, 91405 Orsay, France E-mail address: [email protected] 4 CNRS, L2S, Centrale Supelec, France E-mail address: [email protected]
Acknowledgments. The authors are grateful to Alessandro Teta for suggesting reference [23], which led to the strategy of proof for the essential self-adjointness of the pointed Laplacian on the Heisenberg group. The authors acknowledge that the present research is partially supported by: MIUR Grant Dipartimenti di Eccellenza
R. Adami and A. Teta. A class of nonlinear Schrödinger equations with concentrated nonlinearity. J. Funct. Anal., 180(1):148-175, 2001.
A. Agrachev, D. Barilari, and U. Boscain. A Comprehensive Introduction to sub-Riemannian Geometry. Cambridge University Press, 2019.
A. Agrachev, U. Boscain, J.-P. Gauthier, and F. Rossi. The intrinsic hypoelliptic Laplacian and its heat kernel on unimodular Lie groups. Journal of Functional Analysis, 256(8):2621-2655, 2009.
A. Agrachev, U. Boscain, and M. Sigalotti. A Gauss-Bonnet-like formula on two-dimensional almost-Riemannian manifolds. Discrete Contin. Dyn. Syst., 20(4):801-822, 2008.
S. Albeverio, F. Gesztesy, R. Høegh-Krohn, and H. Holden. Solvable models in quantum mechanics. AMS Chelsea Publishing, Providence, RI, second edition, 2005. With an appendix by Pavel Exner.
S. Albeverio and P. Kurasov. Singular perturbations of differential operators, volume 271 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2000. Solvable Schrödinger type operators.
H. Bahouri, J.-Y. Chemin, and R. Danchin. A Frequency Space for the Heisenberg Group. arXiv e-prints, arXiv:1609.03850, September 2016.
H. Bahouri, J.-Y. Chemin, and R. Danchin. Tempered distributions and Fourier transform on the Heisenberg group. arXiv e-prints, arXiv:1705.02195, May 2017.
M. Bramanti. An Invitation to Hypoelliptic Operators and Hörmander's Vector Fields. SpringerBriefs in Mathematics, 2014.
H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011.
L. Capogna, D. Danielli, S. D. Pauls, and J. T. Tyson. An introduction to the Heisenberg group and the sub-Riemannian isoperimetric problem, volume 259 of Progress in Mathematics. Birkhäuser Verlag, Basel, 2007.
Y. Colin de Verdière. Pseudo-laplaciens. I. Ann. Institut Fourier, 32(3):275-286, 1982.
E. Fermi. Sul moto dei neutroni nelle sostanze idrogenate. Ricerca scientifica, 7(2):13-52, 1936.
G. B. Folland. A fundamental solution for a subelliptic operator. Bulletin of the American Mathematical Society, 79(2):373-376, 1973.
V. Franceschi, D. Prandi, and L. Rizzi. On the essential self-adjointness of sub-Laplacians. ArXiv e-prints, August 2017.
M. P. Gaffney. A special Stokes' theorem for complete Riemannian manifolds. Annals of Mathematics, 60(1):140-145, 1954.
N. Garofalo and E. Lanconelli. Frequency functions on the Heisenberg group, the uncertainty principle and unique continuation. Annales de l'institut Fourier, 40(2):313-356, 1990.
L. Hörmander. Hypoelliptic second order differential equations. Acta Mathematica, 119(1):147-171, 1967.
J. Lehrbäck. Hardy inequalities and Assouad dimensions. Journal d'Analyse Mathematique, 131(1):367-398, 2017.
J. Mitchell. On Carnot-Carathéodory Metrics. J. Differential Geom., 21:35-45, 1985.
R. Montgomery. A tour of subriemannian geometries, their geodesics and applications, volume 91 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2002.
G. Nenciu and I. Nenciu. On confining potentials and essential self-adjointness for Schrödinger operators on bounded domains in R n . Ann. Henri Poincaré, 10(2):377-394, 2009.
B. S. Pavlov. Boundary conditions on thin manifolds and the semiboundedness of the three-particle Schrödinger operator with pointwise potential. Math. USSR Sbornik, 64(1):161-175, 1989.
Quantum confinement on non-complete Riemannian manifolds. D Prandi, L Rizzi, M Seri, J. Spectr. Theory. 84D. Prandi, L. Rizzi, and M. Seri. Quantum confinement on non-complete Riemannian manifolds. J. Spectr. Theory, 8(4):1221-1280, 2018.
Methods of modern mathematical physics. II. Fourier analysis, selfadjointness. M Reed, B Simon, Academic PressNew York-LondonHarcourt Brace Jovanovich, PublishersM. Reed and B. Simon. Methods of modern mathematical physics. II. Fourier analysis, self- adjointness. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975.
Methods of modern mathematical physics. I. M Reed, B Simon, Academic Press, IncNew YorkHarcourt Brace Jovanovich, Publishers. second edition. Functional analysisM. Reed and B. Simon. Methods of modern mathematical physics. I. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, second edition, 1980. Functional analysis.
Hypoelliptic differential operators and nilpotent groups. L P Rothschild, E M Stein, Acta Mathematica. 137IL.P. Rothschild and E.M. Stein. Hypoelliptic differential operators and nilpotent groups. Acta Mathematica, 137(I):247-320, 1976.
Analysis of the Laplacian on the complete Riemannian manifold. R S Strichartz, Journal of Functional Analysis. 521R. S. Strichartz. Analysis of the Laplacian on the complete Riemannian manifold. Journal of Functional Analysis, 52(1):48-79, 1983.
Sub-Riemannian geometry. R S Strichartz, Journal of Differential Geometry. 242R.S. Strichartz. Sub-Riemannian geometry. Journal of Differential Geometry, 24(2):221-263, 1986.
Hardy type inequalities related to Carnot-Carathéodory distance on the Heisenberg group. Q.-H Yang, Proceedings of the. theAmerican Mathematical Society141Q.-H. Yang. Hardy type inequalities related to Carnot-Carathéodory distance on the Heisenberg group. Proceedings of the American Mathematical Society, 141(1):351-362, 2013.
Singularities and normal forms of smooth distributions. M Zhitomirskii, Banach Center Publ32M Zhitomirskii. Singularities and normal forms of smooth distributions. Banach Center Publ, 32:395-409, 1995.
Dipartimento Politecnico Di Torino, " G L Di Scienze Matematiche, Lagrange, address: [email protected] 2 CNRS, LJLL, Sorbonne Université, 4 Place Jussieu, 75005. Torino, Italy E-mail; Paris, France & Equipe24INRIA Paris CAGE E-mail address: [email protected] di Torino, Dipartimento di Scienze Matematiche "G.L. Lagrange", Corso Duca degli Abruzzi, 24, 10129, Torino, Italy E-mail address: [email protected] 2 CNRS, LJLL, Sorbonne Université, 4 Place Jussieu, 75005 Paris, France & Equipe INRIA Paris CAGE E-mail address: [email protected]
| [] |
A Comment on the Path Integral Approach to Cosmological Perturbation Theory

Oliver J. Rosten
Dublin Institute for Advanced Studies, 10 Burlington Road, Dublin 4, Ireland

1 Feb 2008 · arXiv:0711.0867 · doi:10.1088/1475-7516/2008/01/029
It is pointed out that the exact renormalization group approach to cosmological perturbation theory, proposed in Matarrese and Pietroni, JCAP 0706 (2007) 026, astro-ph/0703563 and astro-ph/0702653, constitutes a misnomer. Rather, having instructively cast this classical problem into path integral form, the evolution equation then derived comes about as a special case of considering how the generating functional responds to variations of the primordial power spectrum.
Cosmological perturbation theory received a new lease of life when Crocce and Scoccimarro demonstrated, in a diagrammatic tour-de-force, how the perturbation series could be reorganized [1,2]. In the context of the evolution of cold dark matter, the central object in their approach is the full non-linear propagator, which measures the ensemble-averaged response of the final density and velocity perturbations to variations in the initial conditions. [16] Not only does the reorganization make the encapsulation of the physics by the perturbation series more intuitive, but it also results in a much better behaved expansion. Crucially, by making some well motivated approximations, Crocce and Scoccimarro were able to resum the full propagator in the short distance limit and then interpolate between this regime and the long distance one to obtain an approximate form for the full propagator, at all scales.
Inspired by this, Matarrese and Pietroni [3,4] cast the (classical) problem into path integral form. The end point of their instructive demonstration is a generating functional which encodes the complete physics of the underlying equations, which themselves follow from applying the single stream approximation to the Vlasov equation. This approximation results in two coupled equations (the continuity and Euler equations) for the density perturbation, δ, and peculiar velocity divergence, θ. Following Crocce and Scoccimarro, but using the notation of Matarrese and Pietroni, this pair of equations can be combined into a single equation which, when written in Fourier space (with repeated momentum arguments integrated over), takes the form:
\[
(\delta_{ab}\,\partial_\eta + \Omega_{ab})\,\varphi_b(\mathbf{k},\eta) = e^{\eta}\,\gamma_{abc}(\mathbf{k},-\mathbf{p},-\mathbf{q})\,\varphi_b(\mathbf{p},\eta)\,\varphi_c(\mathbf{q},\eta). \tag{1}
\]
For the fine details, the reader is referred to [3]; for our purposes we note the following. The doublet ϕ_a is proportional to δ when a = 1 and θ when a = 2. γ_abc is a vertex function, which couples together different modes.
* Electronic address: [email protected]

In an Einstein-de Sitter cosmology, η ∼ ln a, where a is the cosmological scale factor and Ω_ab is a constant matrix. (For generalizations to other cosmologies, see [2].) Now, assuming Gaussian initial conditions, Matarrese and Pietroni demonstrated that the physics of (1) is encoded by the following generating functional (all momentum dependence and some of the η dependence is suppressed, for brevity):
\[
Z[J_a, K_b; P_0] = \int \mathcal{D}\varphi_a\,\mathcal{D}\chi_b\, \exp\left\{ \int d\eta\, d\eta' \left[ -\tfrac{1}{2}\,\chi_a P^0_{ab}\,\delta(\eta)\,\delta(\eta')\,\chi_b + i\,\chi_a g^{-1}_{ab}\,\varphi_b \right] - i \int d\eta \left[ e^{\eta}\gamma_{abc}\,\chi_a\varphi_b\varphi_c - J_a\varphi_a - K_b\chi_b \right] \right\}. \tag{2}
\]
χ_a is an auxiliary doublet field, which carries information about the initial conditions, which are specified through the primordial power spectrum, P_0. The linear propagator, g_ab [which can be obtained by solving the linearized version of (1)], couples the initial conditions to the final state, as expected [1]. J_a and K_b are sources. The strategy which Matarrese and Pietroni claimed to follow was to use (2) to derive an Exact Renormalization Group (ERG) equation. Whilst there is nothing mathematically incorrect about what was done, it nevertheless does not amount to an implementation of the ERG; rather, what they did was a special case of noticing that an evolution equation can be derived from (2) by examining variations with respect to the primordial power spectrum. Despite the initial motivation for doing this being, perhaps, somewhat suspect, it should be noted that the approach as a whole is certainly useful; in particular, it allows the machinery of functional techniques to be applied to the problem at hand. In so doing, Matarrese and Pietroni were able to reproduce Crocce and Scoccimarro's large-k resummation of the propagator in a much simpler way and also derive their own expression for the full propagator at all scales, using transparent approximations.
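As a concrete aside, the linearized version of (1), ∂_η ϕ_a = −Ω_ab ϕ_b, can be integrated directly. The sketch below is an illustration, not code from the papers; it assumes the Einstein-de Sitter form Ω = [[1, −1], [−3/2, 3/2]] used in [1], and all variable names are ours. It builds g(η) = exp(−Ω η) and checks its two modes: the growing mode, which is constant in these variables since a factor e^η has been absorbed into the fields, and the decaying mode ∝ e^{−5η/2}.

```python
import numpy as np

# EdS form of the constant matrix Omega_ab (taken from [1]; an assumption here).
Omega = np.array([[1.0, -1.0],
                  [-1.5, 1.5]])

def g(eta):
    """Linear propagator g_ab(eta) = [exp(-Omega * eta)]_ab for eta >= 0,
    i.e. the solution of (delta_ab d_eta + Omega_ab) g_bc = 0 with g(0) = 1."""
    w, V = np.linalg.eig(-Omega * eta)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

eta = 3.0
# Growing mode: Omega annihilates (1, 1), so it propagates unchanged.
print(g(eta) @ [1.0, 1.0])
# Decaying mode: (2, -3) is an eigenvector of Omega with eigenvalue 5/2,
# so it is suppressed by exp(-5 eta / 2).
print(g(eta) @ [2.0, -3.0] / np.exp(-2.5 * eta))
```

Diagonalizing Ω (eigenvalues 0 and 5/2) shows that g(η) decomposes as a constant projector plus a projector multiplied by e^{−5η/2}, which is the structure exploited in the resummations discussed above.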
In Quantum Field Theory (QFT), the central idea of the ERG is to integrate out degrees of freedom in such a way that the partition function, and hence the physics derived from it, remains invariant. As a first step in this procedure, an overall momentum cutoff is applied to the theory (in Euclidean space), with the action at this scale being the bare action. Next, one considers integrating out (coarse-graining) degrees of freedom between the bare scale and a lower, 'effective' scale, Λ. The action at the effective scale is called the Wilsonian effective action, S_Λ (which we here take to denote just the effective interactions, and not the regularized kinetic term as well). By considering how the Wilsonian effective action must evolve as Λ changes, if the partition function is to stay the same, one can derive the ERG equation, which describes how the effective action changes under changes of the effective scale [5,6,7,8,9]. Working with the scalar field, ϕ, we partition the standard propagator, ∆(q) = 1/q², into an ultraviolet (UV) regularized part, ∆_UV, and an infrared (IR) regularized part, ∆_IR, such that ∆ = ∆_UV + ∆_IR. [17] Polchinski's form of the ERG reads [7]:
\[
\frac{\partial S_\Lambda[\varphi]}{\partial\Lambda} = \frac{1}{2}\,\frac{\delta S_\Lambda}{\delta\varphi}\cdot\frac{\partial\Delta_{\mathrm{UV}}}{\partial\Lambda}\cdot\frac{\delta S_\Lambda}{\delta\varphi} - \frac{1}{2}\,\frac{\delta}{\delta\varphi}\cdot\frac{\partial\Delta_{\mathrm{UV}}}{\partial\Lambda}\cdot\frac{\delta S_\Lambda}{\delta\varphi}. \tag{3}
\]
By performing a Legendre transform of the Wilsonian effective action, the ERG equation can be transformed into a flow equation for the (IR regulated) generator of 1PI diagrams, Γ_Λ, the 'effective average action' [8,9,10]:
\[
\frac{\partial \Gamma_{\Lambda,\mathrm{int}}[\varphi_c]}{\partial\Lambda} = \frac{1}{2}\,\mathrm{tr}\left[ \frac{\partial \Delta^{-1}_{\mathrm{IR}}}{\partial\Lambda}\cdot\left( \Delta^{-1}_{\mathrm{IR}} + \Gamma^{(2)}_{\Lambda,\mathrm{int}} \right)^{-1} \right], \tag{4}
\]
where ϕ_c is the classical field, the trace indicates a momentum integral, the superscript '(2)' indicates a double functional derivative with respect to ϕ_c, and 'int' denotes the interaction part of Γ_Λ. Let us contrast the ERG approach to that of Matarrese and Pietroni. Their first step is to introduce a high frequency cutoff in the primordial power spectrum, at a scale λ. We emphasise that this is a restriction on the boundary conditions and not on the final perturbations: the non-linear interactions can generate power at the 'missing' scales. The next step simply amounts to observing that, since the generating functional (2) depends on the primordial power spectrum, it now depends on λ. By differentiating with respect to λ, Matarrese and Pietroni obtain the following evolution equation (equation (52) of [3]):
\[
\partial_\lambda \Gamma_{\lambda,\mathrm{int}} = \frac{i}{2}\,\mathrm{tr}\left[ \partial_\lambda \Gamma^{(2)}_{\lambda,\mathrm{free}} \cdot \left( \Gamma^{(2)}_{\lambda,\mathrm{free}} + \Gamma^{(2)}_{\lambda,\mathrm{int}} \right)^{-1} \right], \tag{5}
\]
where 'free' denotes the free part of the appropriate object. The double derivatives are with respect to combinations of χ_a and ϕ_b and so form a matrix, and the trace now also includes an integral over η and summations over the doublet indices. Equation (5) is clearly of a very similar form to (4). However, it should be emphasised that there has been no coarse-graining of modes; indeed, it is not intuitively clear what such a procedure would amount to in this scenario. Furthermore, the non-locality of the three-point vertex anyway indicates that the ERG is the wrong language to be using, since a fundamental requirement of the ERG is that the Kadanoff blocking (coarse-graining) transformation only affects variables in a localized patch [5,11]. Finally, we note that, contrary to the ERG approach, where the introduction of Λ is central, the introduction of λ is actually not necessary: the results of [3,4] can be derived simply by considering general variations of P_0. [18] Consequently, none of the intuition behind the ERG nor, for example, the powerful derivative expansion which is so fruitfully applied within this framework (see [13] for a review) are appropriate to cosmological perturbation theory. [19] On the other hand, the generating functional (2) seems the perfect device with which to efficiently understand the resummations of Crocce and Scoccimarro and, one might hope, provides a starting point for future study. For interesting possible directions, see [14]. For a bona fide application of the ERG in a cosmological context, see [15].
Acknowledgments

It is a pleasure to thank Martín Crocce for extremely helpful discussions and for comments on the manuscript, and Daniel Litim for some vigorous debating.
[1] M. Crocce and R. Scoccimarro, Phys. Rev. D 73 (2006) 063519, astro-ph/0509418.
[2] M. Crocce and R. Scoccimarro, Phys. Rev. D 73 (2006) 063520, astro-ph/0509419.
[3] S. Matarrese and M. Pietroni, JCAP 0706 (2007) 026, astro-ph/0703563.
[4] S. Matarrese and M. Pietroni, astro-ph/0702653.
[5] K. Wilson and J. Kogut, Phys. Rept. 12 (1974) 75.
[6] F. J. Wegner and A. Houghton, Phys. Rev. A 8 (1973) 401.
[7] J. Polchinski, Nucl. Phys. B 231 (1984) 269.
[8] T. R. Morris, Int. J. Mod. Phys. A 9 (1994) 2411, hep-ph/9308265.
[9] T. R. Morris, Prog. Theor. Phys. Suppl. 131 (1998) 395, hep-th/9802039.
[10] C. Wetterich, Phys. Lett. B 301 (1993) 90.
[11] T. R. Morris, Nucl. Phys. B 573 (2000) 97, hep-th/9910058.
[12] L. Y. Chen, N. Goldenfeld and Y. Oono, hep-th/9506161.
[13] C. Bagnuls and C. Bervillier, Phys. Rept. 348 (2001) 91, hep-th/0002034.
[14] S. G. Rajeev, arXiv:0705.2139 [math-ph]; T. Gasenzer and J. M. Pawlowski, arXiv:0710.4627 [cond-mat.other].
[15] J. Gaite, Int. J. Mod. Phys. A 16 (2001) 2041, cond-mat/0101219; J. Gaite and A. Dominguez, J. Phys. A 40 (2007) 6849, astro-ph/0610886.
[16] It is the process of ensemble averaging which produces loop diagrams, characteristic of quantum field theory, in this classical problem.
[17] Given a rapidly decaying cutoff function, C_UV(q²/Λ²), we would write ∆_UV(q, Λ) = C_UV(q²/Λ²)/q².
[18] In terms of classifying the approach of Matarrese and Pietroni, 'RG-inspired' would seem to be the appropriate terminology. Note that the RG has proven itself a powerful tool in classical problems, see for example [12].
[19] The derivative expansion relies on Taylor expanding the flow equation in external momenta, which is automatically spoilt by the nonlocal three-point vertex.
A glassy contribution to the heat capacity of hcp 4He solids

Jung-Jung Su, Matthias J. Graf, Alexander V. Balatsky

Journal of Low Temperature Physics manuscript · January 27, 2010 · arXiv:0912.4647 · doi:10.1007/s10909-010-0163-x

Keywords: solid 4He · glass · supersolid · quantum phase transition
We model the low-temperature specific heat of solid 4He in the hexagonal close-packed (hcp) structure by invoking two-level tunneling states in addition to the usual phonon contribution of a Debye crystal for temperatures far below the Debye temperature, T < Θ_D/50. By introducing a cutoff energy in the two-level tunneling density of states, we can describe the excess specific heat observed in solid hcp 4He, as well as the low-temperature linear term in the specific heat. Agreement is found with recent measurements of the temperature behavior of both specific heat and pressure. These results suggest the presence of a very small fraction, at the parts-per-million (ppm) level, of two-level tunneling systems in solid 4He, irrespective of the existence of supersolidity.
1 Introduction
The anomalous frequency and dissipation behavior seen in torsion oscillators (TO) [1] at low temperatures has stimulated numerous investigations, since it was suggested to be the signature of supersolidity [2,3,4,5,6]. Recent successive TO experiments [7,8,9,10,11,12] confirmed the finding of the anomalous behavior. In addition, hysteresis behavior and long equilibration times have been observed [9,12,13], which depend strongly on growth history and annealing [7]. In the same temperature range, transport experiments including shear modulus [14,15], ultrasonic [16,17] and heat propagation [16] measurements have shown various anomalous behaviors. However, no clear sign has emerged in mass flow [18,19,14,15,20,21,22,23] or structural measurements [24,25] that would demonstrate the occurrence of a phase transition.
It has been anticipated that thermodynamic measurements will resolve the existing controversy, since any true phase transition should be accompanied by a thermodynamic signature. The search for such signatures has proved challenging in the experiments conducted so far, including measurements of the specific heat [26,27,28,29,30,31], the pressure dependence of the melting curve [32,33], and pressure-temperature measurements of the solid [34,35]. The main difficulties lie in measuring small signals at low temperatures in the presence of large backgrounds. With improving experiments at low temperatures, measurements down to 20 mK were conducted. While there is still no clear evidence of a transition in the melting curve experiments, recent pressure measurements and specific heat measurements have both shown deviations from the expected pure Debye lattice behavior. Balatsky and coworkers [36,37] argued that the deviations occurring at low temperatures might be related to a glass phase, where the role of two-level systems could be taken by tunneling dislocation loop segments. This description is also accompanied by an argument that the excess entropy associated with the deviation is too small for supersolidity to lead to detectable mechanical effects, if it is due to a supersolid fraction alone.
In this paper, we expand the above glass model to describe the glass freezing transition occurring in the thermodynamic experiments on ultrapure solid 4He. We model the subsystem of tunneling dislocation segments with a compact distribution of two-level excitation spacings, see Fig. 1. We can then describe the behavior of a transition as well as the zero-temperature extrapolation of the glassy behavior seen in thermodynamic measurements. Our results show that the low-temperature deviations in the measured specific heat can be explained by contributions from a glassy fraction of the solid. These results add further support to our previous interpretation of torsion oscillator experiments in terms of a backaction due to a glassy subsystem, which is possibly present in solid 4He [38,37,39].
2 Glass model for the specific heat
We propose a thermodynamic model to describe the measured low-temperature specific heat. We postulate the existence of a distribution of two-level tunneling (TLS) systems in solid hcp 4He. These TLS may be created through complex configurations of dislocation loops embedded in the crystal or through strain during growth. The distributions of length and number of dislocation segments depend on 3He concentration, quenching and other growth processes. In this paper, we compare the effect of different growth processes on ultrapure 4He containing at most (nominally) 1 ppb of 3He impurities. At such low levels of impurities, we expect to see the intrinsic properties of solid 4He.
We start with the expansion of the standard TLS model (Fig. 1). In the standard glass model [40,41,36] for solids, the density of the TLS states is assumed to be constant, D(E) = const., to account for the linear temperature coefficient in the specific heat at low temperatures.

Fig. 1 Density of states (DOS) of the two-level tunneling system. The black-dashed line represents the DOS D(E) of the standard glass model [40,41,36], while the blue-solid line is the truncated DOS D_g(E) used in this work, describing a two-level system with a cutoff energy E_c.

However, it is well-known in the context of conventional glasses that a more careful analysis of specific heat data gives rise to a power law deviating slightly from linearity. The deviation may be attributed to phonon relaxation processes [42] or to a density of states (DOS) of the TLS that is not constant over a characteristic energy E_c of level spacings [43,44]. Here, we neglect the time dependence in the specific heat due to phonons scattering off the TLS and for simplicity concentrate on the cutoff dependence of the DOS at high energies. For example, such a cutoff could be due to the finite barrier height of double-well potentials giving rise to the TLS, because in real materials the tunneling barrier has an upper bound set by lattice and dislocation configurations [45]. This is also the reason why glassy behavior is usually only seen at sufficiently low temperatures. At high temperatures the thermal energy can easily overcome the barrier and the TLS effectively become a system of noninteracting single oscillators. At T < Θ_D/50, the specific heat of solid 4He is well described by
\[
C(T) = C_L(T) + C_g(T), \tag{1}
\]
where the phonon contribution to the molar specific heat is given by C_L(T) = B_L T^3, with coefficient B_L = 12π^4 R/(5Θ_D^3), where R = 8314 mJ/(mol K) is the gas constant and Θ_D is the Debye temperature.
The second term describes the glass contribution due to the TLS subsystem and is given by
\[
C_g(T) = k_B R \,\frac{d}{dT} \int_0^\infty dE\; E\, D_g(E)\, f(E), \tag{2}
\]
with k_B the Boltzmann constant and f(E) the Fermi function. The DOS of the TLS may be modeled by the box distribution function
\[
D_g(E) = \tfrac{1}{2}\, D_0 \left[ 1 - \tanh\big((E - E_c)/W\big) \right]. \tag{3}
\]
Here D_0 is the zero-energy DOS per energy, E_c is a characteristic cutoff energy, and W is the width of the truncated density of states. For E_c → ∞, one obtains the standard hallmark result of glasses at low temperatures:
\[
C_g(T) = B_g T, \tag{4}
\]
where B_g = k_B R D_0. As we will elaborate in the next section, the glass coefficient B_g has an intrinsic finite value at low temperature even for the purest 4He samples, independent of 3He concentration.
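As a numerical sanity check of Eqs. (2)-(4), the sketch below (reduced units, k_B = R = 1 and E measured in kelvin; the parameter values are illustrative, not fits to the data) evaluates the integral of Eq. (2) with the truncated DOS of Eq. (3) by numerical differentiation, and confirms that C_g is linear in T far below the cutoff and collapses once the two-level systems saturate far above it.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids depending on a particular NumPy version)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def C_g(T, D0=1.0, Ec=50.0, W=1.0, kB=1.0, R=1.0):
    """Glass specific heat of Eq. (2) with the box DOS of Eq. (3).
    Reduced units: kB = R = 1 and E in units of k_B (i.e. kelvin)."""
    E = np.linspace(1e-8, 20.0 * Ec, 200_001)
    Dg = 0.5 * D0 * (1.0 - np.tanh((E - Ec) / W))
    def U(t):
        # internal-energy-like integral: int dE E D_g(E) f(E)
        f = 1.0 / (np.exp(np.minimum(E / t, 700.0)) + 1.0)
        return trapz(E * Dg * f, E)
    dT = 1e-3 * T
    return kB * R * (U(T + dT) - U(T - dT)) / (2.0 * dT)

# Low-T regime (T << Ec): C_g is linear in T ...
print(C_g(0.2) / C_g(0.1))
# ... while far above the cutoff the two-level systems saturate and C_g dies off.
print(C_g(2000.0) < C_g(1.0))  # True
```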
3 Results and discussion
Specific Heat
We compare our model calculations with experimental data by the Penn State group [30,29] for four different growth processes in ultrapure 4He with at most 1 ppb of 3He impurities. We re-analyzed their experiments and obtained quite different assignments for the state of two of their samples: we find two solid-liquid coexistence samples with solid ratios 0.34 (SL34) and 0.31 (SL31) [46], while we agree with the assignments for the samples grown with the blocked capillary method (BC). One was grown over 20 hours (BC20), the other over 4 hours (BC04). Notice that sample SL34 corresponds to the 75% solid-liquid coexistence sample reported by the Penn State group and that the SL31 sample corresponds to their constant pressure sample (CP). The experimental data are described with three parameters: D_0, E_c and the Debye temperature Θ_D. We first determine Θ_D, or the lattice contribution, from the high-temperature data (see Fig. 2). The phonon contribution is then subtracted from C to obtain the difference δC = C − C_L. We fit δC/T with our specific heat formula in Eq. (2) for a glass. We show the difference in specific heat over temperature, δC/T, for four different growth processes in Fig. 3. The model describes well the low-temperature part for all four cases, and the high-temperature part is within the scatter of the experimental data. In these plots we fixed the width of the cutoff to W = 1 µeV. With W ≪ E_c, we have verified that there is no qualitative difference when varying W within reasonable ranges. Notice that the shape of δC(T) depends strongly on the subtraction of the high-temperature phonon contribution. Although the Debye temperature used for the BC samples is reasonable, it is not unique in determining the shape of δC(T), since a usual 1% uncertainty in the data of C(T) at high temperatures leads to quite large uncertainties in δC(T)/T.
The physical and model parameters of the four samples grown under different conditions are summarized in Table 1. The Debye temperature Θ_D of 4He increases linearly with decreasing molar volume V_m in the range between 21 and 16 cm³/mol [47,48]. This is also the pressure range in which the experimental data were taken. We find that Θ_D for both BC grown samples agrees reasonably well with literature values of 28-29 K at molar volume V_m = 20.46 cm³/mol and pressure P = 33 bar. The same consistency is found for the coexistence samples, after correcting for the solid-liquid ratio.

Fig. 2 The specific heat C of solid 4He with nominally 1 ppb 3He for four different growth processes, taken from Lin et al. [30]. The experimental data (squares) are fit by B_L T³ at high temperatures T > 0.16 K.

A caveat is warranted regarding sample SL31, since it has a slightly larger concentration of TLS than SL34, although it should be equally pure. Hence, one needs to consider that it was supposedly grown under CP conditions at P = 38 bar. However, it is not clear if the final pressure of the cell was actually 25 bar or higher, nor how strained the crystal was when the initial pressure of 38 bar was lost. All these experimental unknowns may change our assignment for the solid-liquid coexistence ratio for SL31. Thus its solid-to-liquid ratio of x = 0.31 should be considered a lower bound [49]. The glassy behavior is mainly characterized by the zero-energy DOS and cutoff energy of the TLS, which are both noticeably larger in BC04 than in the others. This may be explained by a rapid growth process creating a strained crystal, which gives rise to both a larger TLS concentration and a smaller cutoff energy, i.e., a smaller maximum tunneling barrier height. On the other hand, the comparison of sample SL31 with BC20 leads us to believe that 3He concentration does not appear to play an important role at the 1 ppb concentration level and below.
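The extraction of Θ_D and B_g from Eq. (1) can be illustrated on synthetic data (a sketch only; the numbers below are made up, with Θ_D = 29 K chosen merely to be of the order quoted above): on a plot of C/T versus T², the model is a straight line whose slope gives B_L, and hence Θ_D, and whose intercept gives B_g.

```python
import numpy as np

R = 8314.0  # gas constant in mJ / (mol K), as in the text

def theta_D(B_L):
    """Invert B_L = 12 pi^4 R / (5 Theta_D^3) for the Debye temperature."""
    return (12.0 * np.pi**4 * R / (5.0 * B_L)) ** (1.0 / 3.0)

# Synthetic specific heat C = B_L T^3 + B_g T with illustrative parameters
# (Theta_D in K; B_g is a made-up value in mJ / (mol K^2), not a fit result).
theta_in, B_g_in = 29.0, 0.03
B_L_in = 12.0 * np.pi**4 * R / (5.0 * theta_in**3)
T = np.linspace(0.03, 0.3, 50)          # K, the regime T << Theta_D
C = B_L_in * T**3 + B_g_in * T

# Linear fit of C/T vs T^2: slope = B_L, intercept = B_g.
B_L_fit, B_g_fit = np.polyfit(T**2, C / T, 1)
print(theta_D(B_L_fit), B_g_fit)
```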
The coexistence sample SL31 is supposed to be purer than BC20 (3He atoms are more soluble in the liquid than in solid 4He). Our analysis does not show significant differences between the parameters for the two cases. This is supported by our finding that the TLS concentration of these samples ranges from 3.7 to 21.5 ppm, which is at least 1000 times larger than the nominal 3He concentration. We argue that this small concentration of 3He impurities, compared to the intrinsic concentration of TLS in even the best crystals grown by the Penn State group, explains the experimental finding that the sample grown with the BC method for over 4 hours and with 0.3 ppm of 3He impurities exhibits nearly identical behavior to sample BC04 with only 1 ppb of 3He. We therefore conclude that the intrinsic property of solid 4He was measured in the samples with 1 ppb 3He impurities, given the much larger TLS concentration levels found.
Entropy Analysis
We also calculated the excess entropy,
\[
\Delta S(T) = \int_0^T dT' \, \frac{\delta C(T')}{T'}, \tag{5}
\]
associated with the excess specific heat due to the low-temperature transition or crossover into a glass phase. The advantage of an entropy analysis over that of the specific heat lies in the robustness and simplicity of counting states in an equilibrium phase, compared to detailed model calculations for the specific heat. We find consistently for specific heat experiments [28,29,30] that the obtained values are 5 to 6 orders of magnitude smaller than the theoretical prediction for a supersolid if the entire sample underwent Bose-Einstein condensation (BEC). In the limit of a non-interacting BEC one finds ∆S_BEC = (15/4)(ζ(5/2)/ζ(3/2)) R (T/T_c)^{3/2} ∼ (5/4) R ∼ 10.4 J/(K mol). This means that if ∆S is indeed due to supersolidity, then the supersolid volume fraction is at most 11 ppm or 0.0011% in the most disordered or quenched sample of the four ultrapure samples studied in this work, i.e., sample BC04. Such a supersolid fraction in the specific heat of bulk 4He is more than 100 to 1000 times smaller than is usually reported for the nonclassical rotational inertia fraction (NCRIF) in torsion oscillator experiments. This enormous discrepancy between supersolid fractions in specific heat and torsion oscillator experiments was already noticed in Refs. [36,37]. To date, this discrepancy remains a major puzzle that is hard to reconcile within a purely supersolid scenario.
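A useful internal check of Eq. (5) applied to the glass model is the two-level sum rule: each TLS contributes an entropy k_B ln 2 once T ≫ E_c, so ∆S must saturate at R ln 2 ∫dE D_g(E) ≈ R ln 2 D_0 E_c. The sketch below (reduced units, k_B = R = 1 and E in kelvin; illustrative parameters, not fits to the data) evaluates ∆S(T) through the per-TLS entropy, which is what integrating C_g(T′)/T′ from 0 to T yields, and verifies the saturation.

```python
import numpy as np

def delta_S(T, D0=1.0, Ec=50.0, W=1.0, R=1.0):
    """Excess entropy of Eq. (5) for the glass model, via the per-two-level
    entropy s2(x) = ln(1 + e^-x) + x / (e^x + 1) with x = E / T (reduced
    units, kB = 1), integrated over the truncated DOS of Eq. (3)."""
    E = np.linspace(1e-8, 20.0 * Ec, 200_001)
    Dg = 0.5 * D0 * (1.0 - np.tanh((E - Ec) / W))
    x = np.minimum(E / T, 700.0)
    s2 = np.log1p(np.exp(-x)) + x / (np.exp(x) + 1.0)
    y = Dg * s2
    return R * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))

# Saturation: for T >> Ec every TLS contributes ln 2, so
# Delta S -> R ln 2 * D0 * Ec in these units.
print(delta_S(1e4) / (np.log(2.0) * 50.0))  # ~ 1
```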
Comparison with Pressure Measurement
Next we compare the specific heat with the pressure measurements and their deviations from a perfect Debye lattice behavior. We use thermodynamic relations between the specific heat and the pressure. The quantities characterizing the pressure measurement in the combined lattice and glass model are a_L and a_g, defined by
P(T) ≡ P_0 + P_L(T) + P_g(T) = P_0 + a_L T^4 + a_g T^2, (6)
where P(T) is the pressure at temperature T, and P_0, P_L, P_g are the pressure contributions of the ions at zero temperature, the lattice vibrations, and the two-level excitations of the glass, respectively. The Mie-Grüneisen theory gives the thermodynamic relation between pressure and specific heat
(∂P/∂T)_V = (γ_g/V_m) C_{g,V} + (γ_L/V_m) C_{L,V}, (7)
where the γ_i are the Grüneisen coefficients of the glass excitations (g) and of the lattice vibrations (L). Literature values for the Grüneisen coefficient of phonons in solid hcp 4 He range between 2.6 < γ_L < 3.0 [35,50], while nothing is known about γ_g of glassy 4 He. For simplicity we assume, in our calculation of P(T) based on our model of the specific heat, that γ_g ∼ γ_L = 2.6. In Fig. 4(a) we show the temperature dependence of the pressure data, (P − P_0)/T^2, reported by Grigor'ev et al. [34,35], as well as the curves predicted for samples BC04 and BC20 from the specific heat measurements using a glass description. The glass contribution utilized in modeling the pressure data is given by
P_g(T)/T^2 = (γ_g R/(V_m k_B)) ∫_0^∞ dE E D_g(E) f(E), (8)
with the TLS density of states D(E) defined in Eq. (3). All curves show finite intercepts with the ordinate, which we attribute to the glassy contribution on top of the T^2-term due to lattice vibrations. The glassy contribution is shown most clearly by plotting (P − P_0 − P_L)/T^2 vs. T (see Fig. 4(b)). The glass contribution is larger in Grigor'ev's sample and survives to higher temperatures compared to the predictions for samples BC04 and BC20. The parameters D_0 and E_c are roughly seven times larger in Grigor'ev's pressure data than predicted from the specific heat data by Lin et al. This may be expected for a much more rapidly grown solid. It is worth mentioning that the Grüneisen coefficient γ_g affects the result in the same way as D_0. The coefficient γ_g can actually vary significantly from one sample to another [52].
Here, for simplicity, it is taken to be the same in both the pressure and the specific heat modeling. To summarize, we have shown that the pressure measurement as well as the specific heat measurement are consistently described in the framework of the modified glass model. The higher TLS concentration extracted from Grigor'ev's data compared to Lin's data is likely due to a much faster cooling rate, a larger Grüneisen coefficient, or both.
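The decomposition in Eq. (6) can be illustrated by a least-squares fit: plotting (P − P_0)/T^2 against T^2 should give a straight line whose slope is the lattice coefficient a_L and whose intercept is the glassy coefficient a_g. A minimal sketch with synthetic data (the numerical values of a_L, a_g, and the noise level below are made up for illustration, not taken from the experiments):

```python
# Separate lattice (T^4) and glass (T^2) contributions to P(T) - P0 by a linear
# fit of y = (P - P0)/T^2 versus x = T^2, following Eq. (6).
import numpy as np

rng = np.random.default_rng(0)
a_L, a_g = 3.0, 0.05            # hypothetical coefficients (arbitrary units)
T = np.linspace(0.05, 0.5, 40)  # temperature grid, K
P_minus_P0 = a_L * T**4 + a_g * T**2
P_minus_P0 += rng.normal(scale=1e-5, size=T.size)  # small measurement noise

x = T**2
y = P_minus_P0 / T**2
slope, intercept = np.polyfit(x, y, 1)  # slope -> a_L, intercept -> a_g
print(f"a_L ~ {slope:.3f}, a_g ~ {intercept:.4f}")
```

The recovered slope and intercept reproduce the input coefficients within the noise, which is how the finite ordinate intercepts in Fig. 4 identify the glassy term.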
Conclusions
We have shown that a simple TLS model explains quantitatively the linear temperature coefficient in the specific heat and the low-temperature deviation of the specific heat, δC, from that of a perfect Debye crystal, without postulating the existence of supersolidity. By truncating the TLS DOS above a characteristic cutoff energy E_c, the bump-like feature in δC can be captured fairly well within the scatter and uncertainty of the data. Furthermore, our thermodynamic analysis results in Debye temperatures Θ_D and TLS DOS D_0 that are in qualitative agreement with known P(T) measurements in the solid [34,35]. This suggests the possibility of a glassy subsystem at the ppm level in hcp 4 He crystals. The presence of a glassy subsystem is consistent with recent reports of long relaxation times in torsion oscillators [12], P vs. T measurements [34,35], and transport measurements [16,17]. In order to uniquely determine the ground state of solid 4 He, i.e., whether it exhibits a supersolid or a glass transition, more accurate measurements of the specific heat and thermal conductivity at lower temperatures and up to 0.5 K are needed. Finally, relaxation-time measurements of heat pulses could provide useful and much needed information about the dynamics of a possible glass or supersolid subsystem in solid hcp 4 He.
[49], with the reported literature value of Θ_D ∼ 25 K at P = 25 bar [47,48].

Fig. 3 δC/T for experiments (squares) and the modified glass model with a cutoff energy in the TLS DOS (blue line) for the same four samples shown in Fig. 2, with δC = C − C_L. The error bars are obtained from Ref. [30].
Fig. 4 (a) (P − P_0)/T^2 vs. T^2 for P ∼ 33 bar. The squares represent the data reported by Grigor'ev et al. [34,35]. The black dotted line is the model calculation with D_0 = 4.95 × 10^-3 (1/meV), E_c = 0.2 meV, Θ_D = 28.6 K and the same W as that in the specific heat calculation. The blue-thick and black-thin lines are predictions for BC04 and BC20, respectively. The predicted curves for BC04 and BC20 use the same parameters as those shown in Fig. 2. (b) (P − P_0 − P_L)/T^2 vs. T for the same data collection as in (a). This plot clearly shows that the Grigor'ev et al. data are characterized by a much larger D_0 and E_c (roughly seven times larger) than those in the Lin et al. data.
K (blue line). The black-dash line shows the total calculation with both lattice and glass contributions.

Table 1 Physical and model parameters: Debye temperature Θ_D, zero-energy TLS DOS D_0, cutoff energy E_c, concentration of TLS n_TLS, and excess entropy ∆S.

Sample | P (bar) | V_m (cm^3/mol) | Θ_D (K) | D_0 × 10^4 (1/meV) | E_c × 10^2 (meV) | n_TLS (ppm) | ∆S (µJ/(mol K))
SL34   | 25      | 21.25          | 24.5    | 2.2                | 1.7              | 3.7         | 21.3
SL31   | 25      | 21.25          | 24.8    | 2.9                | 2.2              | 6.4         | 36.9
BC20   | 33      | 20.46          | 29.7    | 3.0                | 2.3              | 6.9         | 39.5
BC04   | 33      | 20.46          | 28.9    | 6.5                | 3.3              | 21.5        | 115.0
References

[1] E. Kim and M. H. W. Chan, Nature (London) 427, 225 (2004); Science 305, 1941 (2005); Phys. Rev. Lett. 97, 115302 (2006).
[2] A. F. Andreev and I. M. Lifshitz, Sov. Phys. JETP 29, 1107 (1969).
[3] G. V. Chester and L. Reatto, Phys. Rev. 155, 88 (1967).
[4] L. Reatto, Phys. Rev. 183, 334 (1969).
[5] A. J. Leggett, Phys. Rev. Lett. 25, 2543 (1970).
[6] P. W. Anderson, Basic Notions of Condensed Matter Physics (Benjamin, Menlo Park, CA), Ch. 4, 143 (1984).
[7] A. S. C. Rittner and J. D. Reppy, Phys. Rev. Lett. 97, 165301 (2006); ibid. 98, 175302 (2007).
[8] M. Kondo, S. Takada, Y. Shibayama, and K. Shirahama, J. Low Temp. Phys. 148, 695 (2007).
[9] Y. Aoki, J. C. Graves, and H. Kojima, Phys. Rev. Lett. 99, 015301 (2007); J. Low Temp. Phys. 150, 252 (2008).
[10] A. C. Clark, J. T. West, and M. H. W. Chan, Phys. Rev. Lett. 99, 135302 (2007).
[11] A. Penzev, Y. Yasuta, and M. Kubota, J. Low Temp. Phys. 148, 677 (2007); Phys. Rev. Lett. 101, 065301 (2008).
[12] B. Hunt, E. Pratt, V. Gadagkar, M. Yamashita, A. V. Balatsky, and J. C. Davis, Science 324, 632 (2009).
[13] D. Y. Kim, S. Kwon, H. Choi, H. C. Kim, and E. Kim, arXiv:0909.1948 (2009).
[14] J. Day, T. Herman, and J. Beamish, Phys. Rev. Lett. 95, 035301 (2005).
[15] J. Day and J. Beamish, Phys. Rev. Lett. 96, 105304 (2006).
[16] J. M. Goodkind, Phys. Rev. Lett. 89, 095301 (2002).
[17] C. A. Burns and J. M. Goodkind, J. Low Temp. Phys. 93, 15 (1993).
[18] D. S. Greywall, Phys. Rev. B 16, 1291 (1977).
[19] M. A. Paalanen, D. J. Bishop, and H. W. Dail, Phys. Rev. Lett. 46, 664 (1981).
[20] S. Sasaki, R. Ishiguro, F. Caupin, H. J. Maris, and S. Balibar, Science 313, 1098 (2006).
[21] M. W. Ray and R. B. Hallock, Phys. Rev. Lett. 100, 235301 (2008); ibid. 101, 189602 (2008).
[22] G. Bonfait, H. Godfrin, and S. Balibar, J. Phys. (Paris) 50, 1997 (1989).
[23] S. Balibar and F. Caupin, Phys. Rev. Lett. 100, 235301 (2008).
[24] C. A. Burns, N. Mulders, L. Lurio, M. H. W. Chan, A. Said, C. N. Kodituwakku, and P. M. Platzman, Phys. Rev. B 78, 224305 (2008).
[25] E. Blackburn, J. M. Goodkind, S. K. Sinha, J. Hudis, C. Broholm, J. van Duijn, C. D. Frost, O. Kirichek, and R. B. E. Down, Phys. Rev. B 76, 024523 (2007).
[26] E. C. Helmets and C. A. Swenson, Phys. Rev. 128, 1512 (1962).
[27] J. P. Frank, Phys. Lett. 11, 208 (1964).
[28] A. C. Clark and M. H. W. Chan, J. Low Temp. Phys. 138, 853 (2005).
[29] X. Lin, A. C. Clark, and M. H. W. Chan, Nature 449, 1025 (2007).
[30] X. Lin, A. C. Clark, Z. G. Cheng, and M. H. W. Chan, Phys. Rev. Lett. 102, 125302 (2009).
[31] J. T. West, X. Lin, Z. G. Cheng, and M. H. W. Chan, Phys. Rev. Lett. 102, 185302 (2009).
[32] I. A. Todoshchenko, H. Alles, J. Bueno, H. J. Junes, A. Ya. Parshin, and V. Tsepelin, Phys. Rev. Lett. 97, 165302 (2006).
[33] I. A. Todoshchenko, H. Alles, H. J. Junes, A. Ya. Parshin, and V. Tsepelin, JETP Lett. 85, 454 (2007).
[34] V. N. Grigor'ev, V. A. Maidanov, V. Yu. Rubanskii, S. P. Rubets, E. Ya. Rudavskii, A. S. Rybalko, Ye. V. Syrnikov, and V. A. Tikhii, Phys. Rev. B 76, 224524 (2007).
[35] V. N. Grigor'ev and Ye. O. Vekhov, J. Low Temp. Phys. 149, 41 (2007).
[36] A. V. Balatsky, M. J. Graf, Z. Nussinov, and S. A. Trugman, Phys. Rev. B 75, 094201 (2007).
[37] M. J. Graf, A. V. Balatsky, Z. Nussinov, I. Grigorenko, and S. A. Trugman, J. Phys.: Conf. Ser. 150, 032025 (2009).
[38] M. J. Graf, Z. Nussinov, and A. V. Balatsky, J. Low Temp. Phys., in press (doi: 10.1007/s10909-009-9958-z).
[39] Z. Nussinov, M. J. Graf, A. V. Balatsky, and S. A. Trugman, Phys. Rev. B 76, 014530 (2007).
[40] P. W. Anderson, B. I. Halperin, and C. M. Varma, Philos. Mag. 25, 1 (1972).
[41] W. A. Phillips, J. Low Temp. Phys. 11, 757 (1972).
[42] J. Zimmermann and G. Weber, Phys. Lett. 86A, 32 (1981).
[43] J. C. Lasjaunias, A. Raver, M. Vandorpe, and S. Hunklinger, Solid State Commun. 17, 1045 (1975).
[44] J. C. Lasjaunias, R. Maynard, and M. Vandorpe, J. de Phys. (Paris) 39, C6-973 (1978).
[45] J. Jäckle, Z. Physik 257, 212 (1972).
[46] We calculate the molar specific heat from the specific heat of sample SL34 by obtaining the number of moles of solid 4 He in the solid-liquid coexistence sample of Ref. [30]. First, we extract from the data the slopes of C/T^3 of the solid-liquid phase and the pure liquid at high temperatures, B_SL = 2.31 mJ/K and B_Liq = 0.72 mJ/K. Next, we get from the literature the slope for the solid at the melting line, where solid and liquid coexist, B_Sol = 5.45 mJ/K at P = 25 bar and V_m = 21.25 cm^3/mol [51,47]. Finally, we calculate the molar ratio of the solid from the two-phase mixing relation B_SL = x B_Sol + (1 − x) B_Liq with x = 0.34, and the mole number of the solid (0.93 cm^3)/(21.25 cm^3 mol^-1) = 0.0149 mol.
[47] E. O. Edwards and R. C. Pandorf, Phys. Rev. 140, A816 (1965).
[48] D. S. Greywall, Phys. Rev. A 3, 2106 (1971).
[49] M. H. W. Chan (private communications).
[50] A. Driessen, E. van der Poll, and I. F. Silvera, Phys. Rev. B 33, 3269 (1986).
[51] D. S. Greywall, Phys. Rev. B 18, 2127 (1978).
[52] D. A. Ackerman, A. C. Anderson, E. J. Cotts, J. N. Dobbs, W. M. MacDonald, and F. J. Walker, Phys. Rev. B 29, 966 (1984).
Worldsheet dilatation operator for the AdS superstring

Israel Ramírez (Departamento de Física, Universidad Técnica Federico Santa María, Casilla 110-V, Valparaíso, Chile; Institut für Mathematik und Institut für Physik, Humboldt-Universität zu Berlin, IRIS Haus, Zum Großen Windkanal 6, 12489 Berlin, Germany) and Brenno Carlini Vallilo (Departamento de Ciencias Físicas, Universidad Andres Bello, Republica 220, Santiago, Chile)

30 May 2016. arXiv:1509.00769, doi:10.1007/JHEP05(2016)129.

Abstract: In this work we propose a systematic way to compute the logarithmic divergences of composite operators in the pure spinor description of the AdS 5 × S 5 superstring. The computations of these divergences can be summarized in terms of a dilatation operator acting on the local operators. We check our results with some important composite operators of the formalism.

During the last decade there was a great improvement in the understanding of N = 4 super Yang-Mills theory due to integrability techniques, culminating in a proposal where the anomalous dimension of any operator can be computed at any coupling [1]. The crucial point of this advance was the realization that the computation of anomalous dimensions could be done systematically by studying the dilatation operator of the theory [2,3]. For a general review and an extensive list of references, we recommend [4]. An alternative to the TBA approach not covered in [4], the Quantum Spectral Curve, was developed in [5,6]. For some of its applications, including high-loop computations, see [7,8,9,10,11,12].

On the string theory side it is known that the worldsheet sigma-model is classically integrable [13,14]. However, it is not yet known how to fully quantize the theory, identifying all physical vertex operators and their correlation functions. In the case of the pure spinor string it is known that the model is conformally invariant at all orders of perturbation theory and that the non-local
charges found in [14] exist in the quantum theory [15]. In a very interesting paper, [16] showed how to obtain the Y-system equations from the holonomy operator.
Another direction in which the pure spinor formalism was used with success was the quantization around classical configurations. In [17] it was shown that the semi-classical quantization of a large class of classical backgrounds agrees with the Green-Schwarz formalism. This was later generalized in [18,19]. Previously, Mazzucato and one of the authors [20] attempted to use canonical quantization around a massive string solution to calculate the anomalous dimension of a member of the Konishi multiplet at strong coupling. Although the result agrees with both the prediction from integrability and the Green-Schwarz formalism, this approach has several issues that make the results unreliable [21]. An alternative and more desirable approach is to use CFT techniques to study vertex operators and correlation functions, since scattering amplitudes are more easily calculated in this framework. A first step is to identify physical vertex operators. Since the pure spinor formalism is based on BRST quantization, physical vertex operators should be in the cohomology of the BRST charge. For massless states, progress has been made in [22,23,24,25]. For massive states the computation of the cohomology in a covariant way is a daunting task even in flat space [26].
A simpler requirement for physical vertices is that they should be primary operators of dimension zero for the unintegrated vertices and primaries of dimension (1,1) in the integrated case. Massless unintegrated vertex operators in the pure spinor formalism are local operators with ghost number (1,1) constructed in terms of zero classical conformal dimension fields [27]. So for them to remain primary when quantum corrections are taken into account, their anomalous dimension should vanish. Massless integrated vertices have zero ghost number and classical conformal dimension (1,1). Therefore they will also be primaries when their anomalous dimension vanishes. Operators of higher mass level are constructed using fields with higher classical conformal dimension. For general mass level n (where n = 1 corresponds to the massless states) the unintegrated vertex operators have classical conformal dimension (n−1, n−1). If such vertex has anomalous dimension γ, the condition for it to be primary is 2n − 2 + γ = 0. The case for integrated vertex operators is similar. For strings in flat space γ is always α ′ k 2 2 , which is the anomalous dimension of the plane wave e ik·X . This reproduces the usual mass level formula.
This task of computing γ can be made algorithmic in the same spirit as the four-dimensional SYM case [2,3]. However, here we are interested in finding the subset of operators satisfying the requirements described above. The value of the energy of the corresponding string state should come as the solution to an algebraic equation obtained from this requirement. We do not, however, expect the energy to be simply one of the parameters in the vertex operator. The proper way to identify the energy is to compute the conserved charge related to it and apply it to the vertex operator.
In this paper we intend to systematize the computation of anomalous dimensions on the worldsheet by computing all one-loop logarithmic short-distance singularities in the product of operators with at most two derivatives. To find the answer for operators with more derivatives one simply has to compute the higher-order expansion in the momentum of our basic propagator. We used the method applied by Wegner in [28] for the O(n) model, but modified for the background field method. This was already used with success in [29,30] for some Z_2 super-coset sigma models. The pure spinor string is a Z_4 coset and it has an interacting ghost system. This makes it more difficult to organize the dilatation operator in a concise expression and to find a solution to

D · O = 0. (1.1)
We can select a set of "letters" {φ P } among the basic fields of the sigma model, e.g. the AdS coordinates, ghosts and derivatives of these fields. Unlike the case of N = 4 SYM, the worldsheet derivative is not one of the elements of the set, so fields with a different number of derivatives correspond to different letters. Then D is of the form
D = (1/2) D^{PQ} ∂²/(∂φ^P ∂φ^Q). (1.2)
Local worldsheet operators are of the form
O = V_{PQRST···} φ^P φ^Q φ^R φ^S φ^T · · · , (1.3)
the problem is to find V_{PQRST···} such that O satisfies (1.1). Another important difference with the usual case is that the order of the letters does not matter, so O is not a spin chain.
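The setup in (1.2)-(1.3) can be made concrete with a small computational sketch: a second-order operator D = ½ D^{PQ} ∂²/∂φ^P ∂φ^Q acts on a monomial in the letters by removing one pair of letters at a time and multiplying by the pairing coefficient D^{PQ}. The two-letter pairing matrix below is a made-up example to show the combinatorics, not the actual AdS 5 × S 5 pairing:

```python
# Toy dilatation operator D = (1/2) D^{PQ} d^2/(dphi^P dphi^Q) acting on
# polynomials in commuting "letters". A polynomial is a dict mapping a sorted
# tuple of letter indices (a monomial) to its coefficient.
from collections import defaultdict

def second_derivative(mono, P, Q):
    """Return (multiplicity, reduced monomial) for d^2/(dphi^P dphi^Q) on mono."""
    letters = list(mono)
    if P not in letters:
        return 0, ()
    nP = mono.count(P)
    letters.remove(P)           # first derivative: d/dphi^P
    if Q not in letters:
        return 0, ()
    nQ = letters.count(Q)       # second derivative acts on what is left
    letters.remove(Q)
    return nP * nQ, tuple(letters)

def dilatation(poly, D):
    """Apply D = (1/2) sum_{P,Q} D[P][Q] d^2/(dphi^P dphi^Q) to poly."""
    out = defaultdict(float)
    for mono, coeff in poly.items():
        for P in D:
            for Q in D[P]:
                mult, reduced = second_derivative(mono, P, Q)
                if mult:
                    out[reduced] += 0.5 * D[P][Q] * mult * coeff
    return dict(out)

# Hypothetical pairing matrix for two letters 0 and 1 (NOT the AdS5 x S5 one)
D = {0: {0: 0.0, 1: 1.0}, 1: {0: 1.0, 1: 0.0}}
poly = {(0, 1): 1.0}            # the operator O = phi^0 phi^1
print(dilatation(poly, D))      # the divergence lands on the shorter monomial ()
```

Searching for operators with vanishing anomalous dimension then amounts to finding coefficient tensors annihilated by this kind of operator.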
The problem of finding physical vertices satisfying this condition will be postponed to a future publication. Here we will compute D and apply it to some local operators in the sigma model which should have vanishing anomalous dimension. The search for vertex operators in AdS using this approach was already discussed in [31] but without the contribution from the superspace variables. The author used the same "pairing" rules computed in [28]. This paper is organized as follows. In section 2 we describe the method used by Wegner in [28] for the simple case of the principal chiral field. This method consists of solving a Schwinger-Dyson equation in the background field expansion. In section 3 we explain how to apply these aforementioned method to the pure spinor AdS string case. The main derivation and results are presented in the Appendix B. Section 4 contains applications, where we use our results to compute the anomalous dimension of several conserved currents. Conclusions and further applications are in section 5.
2 Renormalization of operators in the principal chiral model
The purpose of this section is to review the computation of logarithmic divergences of operators in principal chiral models using the background field method. Although this is standard knowledge, the approach taken here is somewhat unorthodox so we include it for the sake of completeness. Also, the derivation of the full propagators in the case of AdS 5 × S 5 is analogous to what is done in this section, so we omit their derivations.
Consider a principal chiral model on some group G, with corresponding Lie algebra g, in two dimensions. The action is given by

S = −(1/(2πα²)) ∫ d²z Tr(∂g⁻¹ ∂̄g), (2.1)
where α is the coupling constant and g ∈ G. Using the left-invariant current J = g⁻¹∂g and defining √λ = 1/α² we can also write

S = (√λ/2π) ∫ d²z Tr(J J̄). (2.2)
The full one-loop propagator is derived from the Schwinger-Dyson equation

⟨δ_z S O(y)⟩ = ⟨δ_z O(y)⟩, (2.3)
where δ_z is an arbitrary local variation of the fundamental fields and O(y) is a local operator. This equation comes from the functional integral definition of ⟨ · · · ⟩. In order to be more explicit, let us consider a parametrization of g in terms of quantum fluctuations around a classical background, g = g_0 e^X, where g_0 is the classical background, X = X^a T_a, and the T_a ∈ g are the generators of the algebra. Then a variation of g is given by δg = g δX, with δX = δX^a T_a, where the δX^a are the variations of the independent fields X^a. Also, the variation of some general operator O is δO = (δO/δX^a) δX^a. Then we can write the Schwinger-Dyson equation as

⟨(δS/δX^a(z)) O(y)⟩ = ⟨δO(y)/δX^a(z)⟩, (2.4)
and now it is clear that this is a consequence of the identity

∫ [DX] (δ/δX^a(z)) [e^{−S} O(y)] = 0. (2.5)
In the case O(y) = X^b(y) we get the Schwinger-Dyson equation for the propagator

⟨(δS/δX^a(z)) X^b(y)⟩ = δ^b_a δ²(y − z). (2.6)
This is a textbook way to get the equation for the propagator in free field theories, and our goal here is to solve this equation for the interacting case at one-loop order. The perturbative expansion of the action is done using the background field method. A fixed background g_0 is chosen and the quantum fluctuation is defined by g = g_0 e^X. The expansion of the current is given by

J = e^{−X} J e^X + e^{−X} ∂e^X, (2.7)
where J = g_0^{−1} ∂g_0 is the background current. At one-loop order only quadratic terms in the quantum field expansion contribute and, as usual, linear terms cancel by use of the background equations of motion. This means that we can separate the relevant terms of the action into two pieces, S = S^{(0)} + S^{(2)}. Furthermore, S^{(2)} contains the kinetic term plus interactions with the background. So we have

S = S^{(0)} + S_kin + S_int. (2.8)
If we insert this into (2.6) the terms that depend purely on the background cancel and we are left with

⟨(δS_kin/δX^a(z)) X^b(y)⟩ + ⟨(δS_int/δX^a(z)) X^b(y)⟩ = δ^b_a δ²(y − z). (2.9)

Since δS_kin/δX^a(z) = −(√λ/2π) ∂∂̄X^a(z) and δS_int/δX^a(z) is linear in quantum fields we can write

−(√λ/π) ∂_z∂̄_z ⟨X^a(z) X^b(y)⟩ + ∫ d²w (δ²S_int/(δX^c(w) δX^d(z))) ⟨X^c(w) X^b(y)⟩ η^{ad} = η^{ab} δ²(y − z). (2.10)
Finally, this is the equation that we have to solve. It is an integral equation for ⟨X^a(z) X^b(y)⟩ = G^{ab}(z, y), which is the one-loop corrected propagator. The interacting part of the action is

S_int = (√λ/2π) ∫ d²z [ −(1/2) Tr([∂X, X] J̄) − (1/2) Tr([∂̄X, X] J) ], (2.11)
where the boldface fields stand for the background fields. Now we calculate
δ²S_int/(δX^c(w) δX^a(z)) = (√λ/2π) [∂_w δ²(w − z) Tr([T_c, T_a] J̄) + ∂̄_w δ²(w − z) Tr([T_c, T_a] J)], (2.12)
which is symmetric under exchange of (a, z) and (c, w), as expected. We define f^{ab}_c = f^b_{cd} η^{da}. So we get the following equation for the propagator
∂_z∂̄_z G^{ab}(z, y) = −(π/√λ) η^{ab} δ²(y − z) + (f^a_{ce}/2) [∂_z G^{cb}(z, y) J̄^e + ∂̄_z G^{cb}(z, y) J^e]. (2.13)
Performing the Fourier transform

G^{ac}(z, k) = ∫ d²y e^{−ik·(z−y)} G^{ac}(z, y), (2.14)
we finally get
G^{ab}(z, k) = (π/√λ) η^{ab}/|k|² + (1/|k|²) ∂_z∂̄_z G^{ab}(z, k) + (i/k) ∂_z G^{ab}(z, k) + (i/k̄) ∂̄_z G^{ab}(z, k) − f^a_{ce} J̄^e ((ik + ∂_z)/(2|k|²)) G^{cb}(z, k) − f^a_{ce} J^e ((ik̄ + ∂̄_z)/(2|k|²)) G^{cb}(z, k). (2.15)
The dependence on one of the coordinates remains because the presence of background fields breaks translation invariance on the worldsheet. We can solve the equation above iteratively in inverse powers of k. The first few contributions are given by
G^{ab}(z, k) = (π/√λ) η^{ab}/|k|² − (iπ/(2√λ|k|²)) f^a_{ce} η^{cb} (J̄^e/k̄ + J^e/k) − (π/(4√λ)) f^a_{ce} f^c_{df} η^{db} (1/|k|²) [ (1/|k|²)(J^e J̄^f + J̄^e J^f) + (1/k̄²) J̄^e J̄^f + (1/k²) J^e J^f ] + (π/(2√λ)) η^{db} f^a_{df} (1/|k|²) [ (1/k̄²) ∂̄J̄^f + (1/k²) ∂J^f ] + . . . (2.16)
With this solution we can finally do the inverse Fourier transform,
G^{ac}(z, y) = ∫ (d²k/4π²) e^{ik·(z−y)} G^{ac}(z, k), (2.17)
to calculate G^{ac}(z, y). If we are only interested in the divergent part of the propagator we can already set z = y. Furthermore, selecting only the divergent terms in the momentum integrals, in d = 2 + ǫ dimensions and using standard dimensional regularization [28], we get

⟨X^a(z) X^c(z)⟩ = (Iπ/√λ) η^{ac}, (2.18)
⟨X^a(z) ∂X^c(z)⟩ = −(Iπ/(2√λ)) η^{dc} f^a_{de} J^e, (2.19)
⟨X^a(z) ∂̄X^c(z)⟩ = −(Iπ/(2√λ)) η^{dc} f^a_{de} J̄^e, (2.20)
⟨X^a(z) ∂∂̄X^b(z)⟩ = (Iπ/(4√λ)) η^{db} f^c_{df} f^a_{ce} (J^e J̄^f + J̄^e J^f), (2.21)
⟨X^a(z) ∂∂X^b(z)⟩ = −(Iπ/(2√λ)) η^{cb} f^a_{ce} ∂J^e + (Iπ/(4√λ)) η^{db} f^a_{ce} f^c_{df} J^e J^f. (2.22)

Since ∂⟨X^a ∂X^c⟩ = ⟨∂X^a ∂X^c⟩ + ⟨X^a ∂∂X^c⟩ (and similarly with ∂̄) we can further compute

⟨∂X^a(z) ∂X^b(z)⟩ = −(Iπ/(4√λ)) f^a_{ce} f^c_{df} η^{db} J^e J^f, (2.25)
⟨∂̄X^a(z) ∂X^b(z)⟩ = −(Iπ/(2√λ)) η^{cb} f^a_{ce} ∂̄J^e − (Iπ/(4√λ)) η^{db} f^c_{df} f^a_{ce} (J^e J̄^f + J̄^f J^e). (2.26)
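In the pairings above, I is the logarithmically divergent factor common to all terms; comparing (2.18) with the free propagator, our reading is I = ∫ d²k/(4π² |k|²), which with momentum cutoffs µ < |k| < Λ equals (1/2π) ln(Λ/µ). A direct polar-coordinate quadrature confirms this value:

```python
# Check that the integral of 1/(4 pi^2 |k|^2) over an annulus mu < |k| < Lambda
# equals (1/2 pi) ln(Lambda/mu). The angular integral gives 2*pi; the radial
# integral of 1/r is done by a midpoint rule on a geometric grid.
import math

def radial_integral(mu, lam, n=4000):
    """Midpoint rule on a geometric grid for the integral of dr/r."""
    ratio = (lam / mu) ** (1.0 / n)
    total, r = 0.0, mu
    for _ in range(n):
        r_next = r * ratio
        total += (r_next - r) / math.sqrt(r * r_next)  # f(geometric midpoint) * dr
        r = r_next
    return total

mu, lam = 1e-3, 1e3
numeric = 2 * math.pi * radial_integral(mu, lam) / (4 * math.pi ** 2)
exact = math.log(lam / mu) / (2 * math.pi)
print(numeric, exact)
```

The logarithmic dependence on the cutoffs is what makes every pairing carry the same universal divergence I.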
From now on ⟨ · ⟩ will mean only the logarithmically divergent part of the expectation value. A simple way to extract this information is by defining
⟨O⟩ = (1/2) ∫ d²z d²y (δ²O/(δX^a(z) δX^b(y))) ⟨X^a(z) X^b(y)⟩, (2.27)
for any local operator O. Furthermore, we define
⟨O, O′⟩ = ∫ d²z d²y (δO/δX^a(z)) (δO′/δX^b(y)) ⟨X^a(z) X^b(y)⟩. (2.28)
Following [31] we will call these the "pairing" rules. For local operators these two definitions always produce two delta functions, effectively setting all fields at the same point. So the computation of ⟨ · ⟩ can be summarized as
⟨O⟩ = (1/2) ⟨X^a X^b⟩ (∂²/(∂X^a ∂X^b)) O = D O, (2.29)

where

D = (1/2) ⟨X^a X^b⟩ ∂²/(∂X^a ∂X^b) (2.30)
is the dilatation operator. We can also define ⟨ · , · ⟩ as
⟨O, O′⟩ = ⟨X^a X^b⟩ (∂O/∂X^a) (∂O′/∂X^b). (2.31)
With the above definitions, the divergent part of any product of local operators at the same point can be computed using

⟨O O′⟩ = ⟨O⟩ O′ + O ⟨O′⟩ + ⟨O, O′⟩. (2.32)
Several known results can be derived using this simple set of rules. Following this procedure in the case of the symmetric space SO(N + 1)/SO(N) gives the same results obtained by Wegner [28] using a different method.
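The product rule (2.32) is a purely combinatorial property of second-order operators, so it can be verified on toy polynomials: for D = ½ G^{ab} ∂_a ∂_b acting on a product, D(OO′) equals (DO)O′ + O(DO′) plus the cross pairing G^{ab}(∂_a O)(∂_b O′). A sketch in two commuting variables, with an arbitrary symmetric pairing matrix chosen only for the check:

```python
# Verify <O O'> = <O> O' + O <O'> + <O, O'> (Eq. 2.32) for the second-order
# operator D = (1/2) G^{ab} d^2/(dx_a dx_b) acting on polynomials in x0, x1.
# Polynomials are dicts mapping exponent tuples (e0, e1) to coefficients.
from itertools import product

G = [[0.3, 1.2], [1.2, -0.7]]  # arbitrary symmetric "pairing" matrix

def mul(p, q):
    out = {}
    for (e, c), (f, d) in product(p.items(), q.items()):
        key = (e[0] + f[0], e[1] + f[1])
        out[key] = out.get(key, 0.0) + c * d
    return out

def diff(p, a):
    out = {}
    for e, c in p.items():
        if e[a] > 0:
            f = list(e); f[a] -= 1
            out[tuple(f)] = out.get(tuple(f), 0.0) + c * e[a]
    return out

def D(p):  # <O>
    out = {}
    for a in range(2):
        for b in range(2):
            for e, c in diff(diff(p, a), b).items():
                out[e] = out.get(e, 0.0) + 0.5 * G[a][b] * c
    return out

def cross(p, q):  # <O, O'>
    out = {}
    for a in range(2):
        for b in range(2):
            for e, c in mul(diff(p, a), diff(q, b)).items():
                out[e] = out.get(e, 0.0) + G[a][b] * c
    return out

def add(*ps):
    out = {}
    for p in ps:
        for e, c in p.items():
            out[e] = out.get(e, 0.0) + c
    return out

O1 = {(2, 1): 1.0, (0, 1): -3.0}   # x0^2 x1 - 3 x1
O2 = {(1, 2): 2.0}                 # 2 x0 x1^2
lhs = D(mul(O1, O2))
rhs = add(mul(D(O1), O2), mul(O1, D(O2)), cross(O1, O2))
same = all(abs(lhs.get(e, 0) - rhs.get(e, 0)) < 1e-9 for e in set(lhs) | set(rhs))
print(same)
```

The check passes for any symmetric G, which is exactly why (2.32) holds for the worldsheet dilatation operator built from the symmetric pairings (2.18)-(2.26).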
3 Dilatation operator for the AdS 5 × S 5 superstring
In this section we will apply the same technique to the case of the pure spinor AdS string. We begin with a review of the pure spinor description, pointing out the differences between this model and the principal chiral model, and then describe the main steps of the computation.
Pure spinor AdS string
The pure spinor string [26,14,15] in AdS has the same starting point as the Metsaev-Tseytlin superstring [32]. The maximally supersymmetric type IIB background AdS 5 × S 5 is described by the supercoset
G H = P SU(2, 2|4) SO(1, 4) × SO(5)
.
(3.1)
The pure spinor action is given by
S P S = R 2 2π d 2 zSTr 1 2 J 2J2 + 1 4 J 1J3 + 3 4J 1 J 3 + ω∇λ +ω∇λ − NN ,(3.2)
where ∇̄· = ∂̄· + [J 0 , ·], N = {ω, λ}, N̄ = {ω̄, λ̄}. There are several differences between the principal chiral model action and (3.2). First, the model is coupled to ghosts. The pure spinor action also contains a Wess-Zumino term, and the global invariant current J belongs to the psu(2, 2|4) algebra, which is a Z 4 -graded algebra. Thus we split the current as J = A + J 1 + J 2 + J 3 , where A = J 0 belongs to the algebra of the quotient group H = SO(1, 4) × SO(5). The notation that we use for currents of different grade is
J 0 =J i 0 T i ; J 1 = J α 1 T α ; J 2 = J m 2 T m ; J 3 = Jα 3 Tα. (3.4)
The ghosts fields are defined as
λ = λ A T A ; ω = −ω A η AÂ TÂ ; λ̄ = λ̄Â TÂ ; ω̄ = ω̄B̂ η B̂B T B . (3.5)
Note that A and A ′ indices on the ghosts mean α andα, but we will use a different letter in order to make it easier to distinguish which terms come from ghosts and which come from the algebra. The pure spinor condition can be written as
{λ, λ} = {λ,λ} = 0. (3.6)
Following the principal chiral model example, we expand g around a classical background g 0 using the g = g 0 e X parametrization. It is worth noting that X = x 0 + x 1 + x 2 + x 3 belongs to the psu(2, 2|4) algebra, but we can use the coset property to fix x 0 = 0. With this information the quantum expansion of the left invariant current is
A =A + 3 i=1 [J i , x 4−i ] + 1 2 [∇x i , x 4−i ] + 3 i,j=1 [[J i , x j ] , x 8−i−j ] , J l =J l + ∇x l + 3 i=1 [J i , x 4+l−i ] + 1 2 [∇x i , x 4+l−i ] + 3 i,j=1 [[J i , x j ] , x 8+l−i−j ] , λ =λ + δλ, ω =ω + δω, λ =λ + δλ, ω =ω + δω. (3.7)
where we take x 0 = 0 as mentioned before, and we used g 0 −1 ∂g 0 = J = A + J 1 + J 2 + J 3 .
The boldface terms stand for the background term, both for the currents and for the ghost fields.
Using all this information inside the action we get
S P S = R 2 2π d 2 z 1 2 ∇x m 2∇ x n 2 η mn − ∇x α 1∇ xα 3 η αα + δω A∂ δλ A + δωÂ∂δλ + S int . (3.8)
The full expansion can be found in Appendix C. In order to compute the logarithmic divergences, we need to generalize the method explained in Section 2 to a coset model with ghosts. The following subsection is devoted to explaining this generalization.
General coset model coupled to ghosts
In this subsection we generalize the method of Section 2 to the case of a general coset G/H and then specialize for the pure spinor string case. We will denote the corresponding algebras g and h, where h should be a subalgebra of g. The generators of g − h will be denoted by T a where a = 1 to dim g − dim h and the generators of h will be denoted by T i where i = 1 to dim h . We also include a pair of first order systems (λ A , ω B ) and
(λ A ′ ,ω B ′ ) transforming in two representations (Γ iB A , Γ iB ′ A ′ ) of h.
We will assume that the algebra g has the following commutation relations
[T a , T b ] = f c ab T c + f i ab T i , [T a , T i ] = f b ai T b , [T i , T j ] = f k ij T k , (3.9)
where f c ab ≠ 0 for a general coset and f c ab = 0 if there is a Z 2 symmetry, i.e., G/H is a symmetric space. As in the usual sigma model g ∈ G/H and the currents J = g −1 ∂g are invariant under left global transformations in G. We can decompose J = J a T a + A i T i where J a T a ∈ g − h and A i T i ∈ h. With this decomposition the coset part K = J a T a transforms in the adjoint representation of h and A transforms as a connection. We will also allow a quartic interaction in the first order sector.
Defining N i = λ A Γ iB A ω B and N̄ i = λ̄ A ′ Γ̄ iB ′ A ′ ω̄ B ′ , the interaction will be βN i N̄ i , where β is a new coupling constant that in principle is not related to the sigma model coupling.
The total action is given by
S = d 2 z Tr (J − A)(J −Ā) + ω A∇ λ A +ω A ′ ∇λ A ′ + βN iN i , (3.10) where (∇,∇) = (∂ − A i Γ i A ′ B ′ ,∂ −Ā i Γ i A B
) are the covariant derivatives for the first order system ensuring gauge invariance.
The background field expansion is different if we are in a general coset or a symmetric space. Since we want to generalize the results to the case of AdS 5 ×S 5 , we will use a notation that keeps both types of interactions. Again, the quantum coset element is written as g = g 0 e X where g 0 is the classical background and X = X a T a are the quantum fluctuations. Up to quadratic terms in the quantum fluctuation the expansion of the action is
S =S 0 + d 2 z η ab ∇X a∇ X b − Z abc J a X b∇ X c −Z abcJ a X b ∇X c + R abcd J aJ b X c X d +δω A∂ δλ A + δω A ′ ∂δλ A ′ +Ā i N i + A iN i + β ({δλ, ω} + {λ, δω}) i {δλ,ω} + {λ, δω} i , (3.11) where the covariant derivatives on X are (∂ − [A, · ],∂ − [Ā, · ])
. The tensors (Z abc , Z̄ abc , R abcd ) appearing above are model dependent. In the case of a symmetric space Z = Z̄ = 0 and R abcd = f i ab f icd . In the general coset case Z abc = Z̄ abc = 1/2 f abc . If there is a Wess-Zumino term, the values of Z abc and Z̄ abc can differ. Since we want to treat the general case, we will not substitute the values of these tensors until the end of the computations. In the action above the quantum connections have the following expansion
A i = A i + f i ab J a X b + 1 2 f i ab ∇X a X b + W i abc J a X b X c + · · · (3.12) A i =Ā i + f i abJ a X b + 1 2 f i ab∇ X a X b + W i abcJ a X b X c + · · · (3.13)
where W i abc = 1/2 f d ab f i dc for a general coset and vanishes for a symmetric space. To proceed, we have to compute the second order variation of the action with respect to the quantum fields. The difference this time is that there are many more couplings, so we expect a system of coupled Schwinger-Dyson equations, one for each possible corrected propagator. For example, in the free theory approximation there is no propagator between the sigma model fluctuations and the first order system, but due to the interactions there may be corrected propagators between them.
Since a propagator is not a gauge invariant quantity, it can depend on gauge dependent combinations of the background gauge fields (A i , Ā i ). Furthermore, since we have chiral fields transforming in two different representations of h, it is possible that the quantum theory has anomalies. In the case of the AdS 5 × S 5 string sigma-model it was argued by Berkovits [15] that there is no anomaly to all loops. An explicit one loop computation was done in [33]. Therefore it is safe to assume that the background gauge fields only appear in physical quantities in a gauge invariant combination. The simplest combination of this type is Tr[∇, ∇̄] 2 . Since the classical conformal dimension of this combination is four, and so far we are interested in operators of classical conformal dimension 0 and 2, we can safely ignore all interactions with (A i , Ā i ).
We will assume a linear quantum variation of the first order system, e.g., λ A → λ A + δλ A . Instead of introducing more notation and a cumbersome interaction Lagrangian, we will simply compute the variations of these fields in the action and set to their background values the remaining fields.
With all these simplifications and constraints in mind, let us start constructing the Schwinger-Dyson equations. First we compute all possible non-vanishing second variations of the action
δ 2 S int δX a δX b = δ 2 (z − w) J cJ d (R cdab + R cdba ) + N iJ c (W i cab + W i cba ) +N i J c (W i cab + W i cba ) − ∂ w δ 2 (z − w)[Z cabJ c + f i abN i ] −∂ w δ 2 (z − w)[Z cab J c + f i ab N i ], (3.14a) δ 2 S int δλ A δω B = δ 2 (z − w)βΓ iB AN i , (3.14b) δ 2 S int δλ A δX a = δ 2 (z − w)(Γ i ω) A f i baJ b , (3.14c) δ 2 S int δλ A δλ B ′ = δ 2 (z − w)β(Γ i ω) A (Γ iω ) B ′ , (3.14d) δ 2 S int δλ A δω B ′ = δ 2 (z − w)β(Γ i ω) A (λΓ i ) B ′ , (3.14e) δ 2 S int δλ A ′ δω B ′ = δ 2 (z − w)βN iΓ i B ′ A ′ , (3.14f) δ 2 S int δλ A ′ δX a = δ 2 (z − w)(Γ iω ) A ′ f i ba J b , (3.14g) δ 2 S int δλ A ′ δω B = δ 2 (z − w)β(λΓ i ) B (Γ iω ) A ′ , (3.14h) δ 2 S int δX a δω B = δ 2 (z − w)(λΓ i ) B f i baJ b , (3.14i) δ 2 S int δX a δω B ′ = δ 2 (z − w)(λΓ i ) B ′ f i ba J b , (3.14j) δ 2 S int δω A δω B ′ = δ 2 (z − w)β(λΓ i ) A (λΓ i ) B ′ . (3.14k)
We are going to denote these second order derivatives generically as I ΣΛ (z, w) where Σ and Λ can be any of the indices ( a , A , B , A ′ , B ′ ). Also, the quantum fields will be denoted by Φ Σ (z).
With this notation the Schwinger-Dyson equations are
δS kin δΦ Λ (z) Φ Σ (y) + d 2 w δ 2 S int δΦ Υ (w)δΦ Λ (z) Φ Υ (w)Φ Σ (y) = δ Σ Λ δ(z − y). (3.15)
Note that the only non-vanishing components of δ Σ Λ are η ab , δ A B and δ A ′ B ′ . Since the type and the position of the indices completely identify the field, the propagators are going to be denoted by G ΣΛ (z, y) = ⟨Φ Σ (z)Φ Λ (y)⟩. Since we have five different types of fields, we have fifteen coupled Schwinger-Dyson equations to solve. Again we have to make a simplification. Interpreting (λ A , λ̄ A ′ ) as left and right moving ghosts and knowing that in the pure spinor superstring unintegrated vertex operators have ghost number (1, 1) with respect to (G, Ĝ), we will concentrate on only four corrected propagators: ⟨X a (z)X b (y)⟩, ⟨X a (z)λ A (y)⟩, ⟨X a (z)λ̄ A ′ (y)⟩ and ⟨λ A (z)λ̄ A ′ (y)⟩.
As in the principal chiral model case, we are going to solve the Schwinger-Dyson equations first in momentum space. It is useful to note that since we will solve these equations in inverse powers of k, the first contributions to the corrected propagators will have the form
X c X a ≈ η ca |k| 2 , ω A λ B ≈ δ B Ā k , ω A ′λ B ′ ≈ δ B ′ A ′ k . (3.16)
Regarding (A, A ′ ) as one type of index we can arrange the whole Schwinger-Dyson equation into a matrix notation with three main blocks. Doing the same Fourier transform as before we get a matrix equation that can be solved iteratively
G Υ Σ = I Υ Σ + (F ΣΓ + ∆ ΣΓ )G ΓΥ ,(3.17)
where
I Υ Σ = δ a b |k| 2 0 0 0 0 −i δ A B k 0 i δ B A k 0 , (3.18) F ΣΓ = δ 2 S int δΦ Σ δΦ Γ , ∆ ΣΓ = ∂∂ |k| 2 + i ∂ k + i∂ k 0 0 0 −i ∂ k 0 0 0 i ∂ k . (3.19)
All elements of the interaction matrix F ΣΓ are shown in Appendix C. As in section 2, the solution to equation (3.17) is computed iteratively
G (0)Υ Σ = I Υ Σ , G (1)Υ Σ = F Γ Σ I Υ Γ ,(3.20)
and so on for higher inverse powers of k.
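The iteration in (3.20) is just a Neumann-series solution of the matrix equation G = I + F G, with each power of F suppressed by an extra inverse power of k. As a purely numerical toy illustration (generic small matrices, not the actual sigma-model interaction matrix), the partial sums converge to the exact resolvent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the free propagator I and the interaction matrix F.
n = 4
I0 = np.eye(n)
F = 0.1 * rng.standard_normal((n, n))  # small: plays the role of 1/k suppression

# Iterative solution of G = I0 + F @ G, mirroring G(0) = I, G(1) = F I, ...
G = np.zeros((n, n))
term = I0.copy()
for _ in range(50):
    G += term
    term = F @ term  # next order: one more power of F

exact = np.linalg.solve(np.eye(n) - F, I0)  # exact (I - F)^{-1} I0
assert np.allclose(G, exact)
```

In the text only the first few orders are kept, since operators of bounded classical dimension only probe a fixed inverse power of k.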
Pairing rules
As discussed in the introduction and Section 2, the computation of the divergent part of any local operator can be summarized by the pairing rules of a set of letters {φ P }. The complete set of these pairing rules can be found in the Appendix C. If we choose a set of letters such that φ P = 0, then the divergent part of the product of two letters is simply
φ P φ Q = φ P , φ Q . (3.21)
We computed the momentum space Green's function up to quartic inverse power of the momentum, so we must restrict our set of letters to fundamental fields up to classical dimension one. The convenient set of letters we will use is
{φ P } = {x a 2 , x α 1 , xα 3 , J a 2 , J α 1 , Jα 3 , J i 0 ,J a 2 ,J α 1 ,Jα 3 ,J i 0 , λ A , ω A ,λÂ,ωÂ, N i ,N i }. (3.22)
If we extend the computation to take into account operators with more than two derivatives, the set of letters has to be extended to include them. The matrix elements of the dilatation operator D P Q = ⟨φ P , φ Q ⟩ are the full set of pairings described in Appendix C.4. To avoid cumbersome notation, the pairing rules are written contracted with the corresponding psu(2, 2|4) generator. The computations done in the next section are a straightforward application of the differential operator
D = 1/2 D P Q ∂ 2 /(∂φ P ∂φ Q ) (3.23)
on a local operator of the form O = V P Q R S T ··· φ P φ Q φ R φ S φ T · · · .
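As a sketch of what (3.23) does in practice (toy letters and a generic pairing matrix, not the actual psu(2, 2|4) data): acting on a quadratic vertex O = V_{PQ} φ^P φ^Q, the operator D simply contracts the (symmetrized) vertex with the pairing matrix.

```python
import sympy as sp

n = 3
phi = sp.symbols(f'phi0:{n}')
# Generic vertex V_{PQ} and a symmetric pairing matrix D^{PQ}
V = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'V{i}{j}'))
Dm = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'D{min(i, j)}{max(i, j)}'))

O = sum(V[i, j] * phi[i] * phi[j] for i in range(n) for j in range(n))

# D O = (1/2) D^{PQ} d^2 O / dphi^P dphi^Q, eq. (3.23)
DO = sp.Rational(1, 2) * sum(Dm[i, j] * sp.diff(O, phi[i], phi[j])
                             for i in range(n) for j in range(n))

# For a quadratic vertex this is the full contraction of D with the
# symmetric part of V
expected = sum(sp.Rational(1, 2) * (V[i, j] + V[j, i]) * Dm[i, j]
               for i in range(n) for j in range(n))
assert sp.expand(DO - expected) == 0
```

For higher-order vertices D instead produces a new local operator with two fewer letters, multiplied by the corresponding pairing; this is exactly the structure used in the non-renormalization checks of the next section.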
Applications
In this section we use our results to prove that certain important operators in the pure spinor sigma model are not renormalized. The operators we choose are the stress-energy tensor, the conserved currents related to the global P SU(2, 2|4) symmetry, and the composite b-ghost. All these operators are a fundamental part of the formalism and it is a consistency check that they are indeed not renormalized. All the computations below are an application of the differential operator (3.23). We use the notation ⟨O⟩ = D · O.
Stress-energy tensor
The holomorphic and anti-holomorphic stress-energy tensor for (3.2) are given by
T =STr 1 2 J 2 J 2 + J 1 J 3 + ω∇λ , (4.1) T =STr 1 2J 2J2 +J 1J3 +ω∇λ . (4.2)
For the holomorphic one
T =STr 1 2 J 2 , J 2 + J 1 , J 3 − N J 0 =STr 1 2 [N , T m ][N , T n ]η mn − [N , T α ][N , Tα]η αα + 1 2 N {[N , Tα], T α }η αα − {[N , T α ], Tα}η αα + [[N , T m ], T n ]η mn
Conserved currents
The string sigma model is invariant under global left-multiplications by an element of psu(2, 2|4), δg = Λg. We can calculate the conserved currents related to this symmetry using standard Noether method. The currents are given by
j =g J 2 + 3 2 J 3 + 1 2 J 1 − 2N g −1 = gAg −1 , (4.4) j =g J 2 + 1 2J 3 + 3 2J 1 − 2N g −1 = gĀg −1 . (4.5)
They should be free of divergences. To see that this is the case, it is easier to compute the divergence piece by piece:
j = g Ag −1 0 + g, A g −1 0 + gA, g −1 + g 0 A, g −1 + g 0 A g −1 + g 0 A g −1 0 . (4.6)
We have defined AB, C as usual, but taking B as a classical field, thus AB, C = A, BC . From (C.101) we get A = 0, and using (C.100) we obtain
g Ag −1 0 + gA, g −1 + g 0 A g −1 = 1 2 g 0 [[A, X] , X] g −1 0 = 1 2 g 0 [[A, T m ] , T n ]η mn + {[A, Tα], T α }η αα −{[A, T α ], Tα}η αα g −1 0 . (4.7)
For the currents, using the results (C.106-C.111),
g −1 0 ⟨g, J 1 ⟩ + ⟨J 1 , g −1 ⟩ g 0 = − {[J 2 , Tα], T α }η αα − {[J 3 , Tα], T α }η αα + {[N , Tα], T α }η αα , (4.8)
g −1 0 ⟨g, J 2 ⟩ + ⟨J 2 , g −1 ⟩ g 0 = − [[J 3 , T m ], T n ]η mn + [[N , T m ], T n ]η mn , (4.9)
g −1 0 ⟨j⟩ g 0 = − 1/2 ({[N , Tα], T α } + {[N , T α ], Tα}) η αα + 1/2 {[J 2 , T m ], T n } η mn − {[J 2 , T α ], Tα} η αα . (4.12)
By lowering all the indices in the structure coefficients, we can see that the first term is just (f iαβ f jαβ − f iαβ f jαβ )η αα η ββ , and the second term is proportional to the dual Coxeter number, see (B.5, B.6), which is 0. Thus, summing everything, we get ⟨j⟩ = 0.
(4.13)
For the antiholomorphic current we just obtain, using the same results as before,
g −1 0 g,J 1 + J 1 , g −1 g 0 ={[N , Tα], T α }η αα ,(4.
14)
g −1 0 g,J 2 + J 2 , g −1 g 0 = − [[J 1 , T m ], T n ]η mn + [[N , T m ], T n ]η mn , (4.15) g −1 0 g,J 3 + J 3 , g −1 g 0 ={[J 1 , T α ], Tα}η αα + {[J 2 , T α ], Tα}η αα − {[N , T α ], Tα}η αα ,(4.16)
and using {[J 1,3 , T a ] , T b } g ab = 0 we see that, proceeding exactly as for ⟨j⟩, we arrive at ⟨j̄⟩ = 0.
b ghost
The pure spinor formalism does not have fundamental conformal ghosts. However, in a consistent string theory, the stress-energy tensor must be BRST exact T = {Q, b}. So there must exist a composite operator of ghost number −1 and conformal weight 2. The flat space b-ghost was first computed in [34] and a simplified expression for it in the AdS 5 × S 5 background was derived in [35]. The λλ term is easy,
(λλ) = − λ Aλ f B Ai fB Aj g ij η BB = 0; (4.20)
where we have used (B.7). The ωJ 1 term is also 0. The other terms are
STr λ [J 2 , J 3 ] = − STr [λ, T i ]([[J 2 , T j ], J 3 ] + [J 2 , [J 3 , T j ]])g ij = − STr [λ, T i ][T j , [J 2 , J 3 ]]g ij = −STr [ λ , T i , T j ][J 2 , J 3 ]g ij = 0, (4.21)
we used f iαβ f jαβ g ij η αα = 0, see (B.7). The next term is
STr {ω,λ}[λ, J 1 ] = − STr {[ω, T i ], [λ, T j ]}[λ, J 1 ] + {ω, [λ, T i ]}[[λ, T j ] , J 1 ] +{ω, [λ, T i ]}[λ, [J 1 , T j ]] g ij = − STr {ω, [λ, T i ]}([T j , [λ, J 1 ]] + [[λ, T j ], J 1 ] + [λ, [J 1 , T j ]]) g ij =0,(4.22)
which comes from the Jacobi identity, see Appendix B. The remaining terms are computed using
λ A [λ̄Â, T i ]η AÂ = −[λ A , T i ]λ̄Âη AÂ = ⟨{λ, λ̄}⟩ j g ij , (4.23)
which is true due to the pure spinor condition.
For ⟨b̄⟩ one needs to use the same relations as above.
Conclusions and further directions
In this paper we outlined a general method to compute the logarithmic divergences of local operators of the pure spinor string in an AdS 5 × S 5 background. In the text we derived in detail the case of operators up to classical dimension two, but the method extends to any classical dimension. Although the worldsheet anomalous dimension is not related to a physical observable, as it is in the case of N=4 SYM, physical vertex operators should not have quantum corrections to their classical dimension. The main application of our work is to systematize the search for physical vertex operators. We presented some consistency checks verifying that certain conserved local operators are not renormalized.
The basic example is the radius operator discussed in [35]. It has ghost number (1, 1) and zero classical dimension. In our notation it can be written as V = Str(λλ), (5.1)
If we apply the pairing rules to compute V we obtain
V = −Ig ij Str([λ, T i ][λ, T j ]) = 0, (5.2)
where in the last equality we replaced the structure constants and used one of the identities in the Appendix A. This can be generalized to other massless and massive vertex operators. We plan to return to this problem in the future.
A more interesting direction is to try to organize the dilatation operator including the higher derivative contributions. As we commented in the introduction, the difficulty here is that the pure spinor action is not a usual coset action as in [29,30]. However, it might still be possible to obtain the complete one loop dilatation operator restricted to some subsector of the psu(2, 2|4) algebra, in a way similar to what was done for the super Yang-Mills dilatation operator [2].
A Notation and conventions
Here we collect the conventions and notation used in this paper. We work with a Euclidean worldsheet with coordinates (z, z̄).
We split the current as J = A+K. We define K = J 1 +J 2 +J 3 ∈ psu(2, 2|4) and A = J 0 belongs to the stability group algebra. 5 The notation that we use for the different graded generators is given by
J 0 =J i 0 T i ; J 1 = J α 1 T α ; J 2 = J m 2 T m ; J 3 = Jα 3 Tα. (A.1)
The ghosts fields are defined as
λ = λ A T A ; ω = −ω A η AÂ TÂ ; λ̄ = λ̄Â TÂ ; ω̄ = ω̄B̂ η B̂B T B . (A.2)
The only non-zero Str of generators are
g ij =STrT i T j , (A.3) η mn =STrT m T n , (A.4) η αα =STrT α Tα. (A.5)
For the raising and lowering of fermionic indices in the structure constants we use
f mαβ = η αα fα βm and f mαβ = −η αα f α βm , (A.6)
and for the f ααi the rule is the same. For the bosonic case we use the standard raising/lowering procedure. The Jacobi identity yields f mαβ f nαβ η mn η αα = 0 and f iαβ f jαβ g ij η αα = 0. This implies that
[[J 1,3 , T i ], T j ]g ij = [[J 1,3 , T n ], T m ]η mn = {[J 1,3 , T α ], Tα}η αα = {[J 1,3 , Tα], T α }η αα =0, (B.7) [[ω + λ +ω +λ, T i ], T j ]g ij = [[ω + λ +ω +λ, T n ], T m ]η mn =0, (B.8) [{ω + λ +ω +λ, T α }, Tα]η αα = [{ω + λ +ω +λ, Tα}, T α ]η αα =0. (B.9)
Another useful property of this theory is the pure spinor condition, Eq. (3.6). Using it, it is easy to prove that λ , λ , A
In this Appendix we apply the method explained in Section 2, and generalized in Section 3, to the AdS 5 × S 5 superstring.
Step by step, the procedure is as follows:
1. Using an expansion around a classical background, g = g 0 e X , compute all the currents up to second order in X,
2. Expand the action (3.2) up to second order in X,
3. Write down the Schwinger-Dyson equation for the model and compute the interaction matrix,
4. Compute the Green functions in powers of 1/k,
5. Compute ⟨φ i , φ j ⟩.
The expansion of the currents was already done in (3.7). Each of the remaining subsections is devoted to one of the steps listed above.
We will drop the use of the boldface notation for the background fields in this section. All the quantum corrections come from either an x-term or a (δω, δλ, δω̄, δλ̄)-term. Thus, every field in S int , in the F -terms, in the Green's functions and in the RHS of the pairing rules should be treated as classical.
C.1 Action
In (3.8) we showed the kinetic part of the expansion of (3.2) and promised to show the interaction part later; here we fulfil that promise. Up to second order in X the interaction part is
S int = R 2 2π d 2 z 1 2∂ x α 1 x β 1 J m 2 f mαβ + 1 2 x α 1 x β 1 Jα 3Jβ 3 f iαα f jββ g ij + 1 8 3x α 1∂ x m 2 − 5∂x α 1 x m 2 J β 1 f mαβ + 1 8 x α 1 x m 2 −∂J β 1 f mαβ + 3J n 2Jα 3 + 5J n 2 Jα 3 f iαα f jmn g ij + 3 J n 2Jα 3 −J n 2 Jα 3 f nαβ f mβα η ββ − 1 4 x α 1 xα 3 J β 1 Jβ 3 − J β 1Jβ 3 f mαβ f nαβ η mn + J β 1Jβ 3 + 3J β 1 Jβ 3 f iαβ f jαβ g ij +J m 2J n 2 f mαβ f nαβ − f nαβ f mαβ η ββ + 1 2 ∂xα 3 xβ 3J m 2 f mαβ + 1 2 xα 3 xβ 3 J α 1J β 1 f iαα f jββ g ij − 1 2 x m 2 x n 2 J α 1Jα 3 −J α 1 Jα 3 f mαβ f nαβ η ββ + J p 2J q 2 f ipm f jqn g ij + 1 8 3∂x m 2 xα 3 −5x m 2 ∂xα 3 Jβ 3 f mαβ + 1 8 x m 2 xα 3 −∂Jβ 3 f mαβ +3 J α 1 J n 2 − J α 1J n 2 f mαβ f nαβ η ββ + 3J α 1J n 2 + 5J α 1 J n 2 f iαα f jmn g ij − δ 2 (N iN j )g ij − x α 1 δN iJα 3 + δN i Jα 3 f iαα + x m 2 δN iJ n 2 + δN i J n 2 f imn − xα 3 δN iJ α 1 + δN i J α 1 f iαα − 1 2 x α 1 x β 1 N iJ m 2 +N i J m 2 f mαµ f iβμ η µμ − 1 2 x α 1 x m 2 N iJ β 1 +N i J β 1 f ipm f qαβ η pq + f iαμ f mβµ η µμ + 1 2 ∂x α 1 xα 3 − x α 1 ∂xα 3 N i f iαα + 1 2 x m 2 ∂ x n 2 N i + ∂x n 2N i f imn − 1 2 x m 2 xα 3 N iJβ 3 +N i Jβ 3 f ipm f qαβ η pq − f iαµ f mβμ η µμ + 1 2 xα 3 xβ 3 N iJ m 2 +N i J m 2 f mαμ f iµβ η µμ + 1 2 ∂ x α 1 xα 3 − x α 1∂ xα 3 N i f iαα , (C.1) with N i = − ω A λ B η AB f i BB , (C.2) N i =ωÂλBη A f i BB , (C.3) δN i =(δω A λ B + ω A δλ B )η AB f i BB , (C.4) δN i =(δωÂλB +ωÂδλB)η A f i BB , (C.5) δ 2 (N iN j ) =δN i δN j − δω A δλ B η AB f i BBN j + N i δωÂδλBη B f j BB . (C.6)
The absence of covariant derivatives is, as explained previously, due to the fact that the pure spinor sigma model is anomaly free. This means that physical quantities only appear in gauge invariant expressions, so the interchange ∂ ↔ ∇ can be done at any point in our computation. A more detailed explanation can be found in Subsection 3.2.
C.2 Schwinger-Dyson equation and the Interaction Matrix
The Schwinger-Dyson equation in momentum space for (3.2) reads
G αΛ = 2π R 2 η αΛ |k| 2 + 1 |k| 2 (ik∂ + ik∂ + )G αΛ − η αΩ |k| 2 F ΣΩ G ΣΛ , (C.7) G mΛ = 2π R 2 η mΛ |k| 2 + 1 |k| 2 (ik∂ + ik∂ + )G αΛ − η mΩ |k| 2 F ΣΩ G ΣΛ , (C.8) Gα Λ = − 2π R 2 ηα Λ |k| 2 + 1 |k| 2 (ik∂ + ik∂ + )G αΛ + η Ωα |k| 2 F ΣΩ G ΣΛ , (C.9) G Λ A = 2π R 2 ī k δ Λ A + ī k∂ G Λ A − ī k F ΣA G ΣΛ , (C.10) G BΛ = − 2π R 2 ī k δ BΛ + ī k∂ G BΛ + ī k F B Σ G ΣΛ , (C.11) G Λ A = 2π R 2 i k δ Λ A + i k ∂G Λ A − i k F ΣÂ G ΣΛ , (C.12) GB Λ = − 2π R 2 i k δB Λ + i k ∂GB Λ + i k FB Σ G ΣΛ , (C.13) where Λ = {α, m,α, A , A ,Â,Â}.
The interaction matrix is given by
F ΣΩ (x, y) = ← − δ ← − δ Φ Σ (y) δS int δΦ Ω (x)
.
(C.14)
The directional derivative means that we compute the functional derivative of S int with respect to Φ Σ acting from right to left. Because we are working in momentum space, it is useful to also write F in momentum space; for that reason the equation we work with is
F ΛΩ (x, k)f (x) = d 2 y ← − δ ← − δ Φ Σ (y) δS int δΦ Ω (x) exp(iky)f (y). (C.15)
Note that f (y) stands for the previous Green's function and the exponential comes from the Fourier transform. The directional derivative has the same meaning as above.
We organize the interaction matrix by the Z 4 charge of its indices, and at the end we add the ghost contributions.
First we compute the F αΛ terms of the matrix:
F αβ = − J m 2 (ik +∂)f mαβ − 1 2∂ J m 2 f mαβ − 1 2 Jα 3Jβ 3 f iαα f jββ − f iβα f jαβ g ij + 1 2 N iJ m 2 +N i J m 2 (f mαµ f iβμ − f mβµ f iαμ ) η µμ , (C.16) F αm =J β 1 ik +∂ f mαβ + 1 8 ∂J β 1 + 3∂J β 1 f mαβ − 1 8 3J n 2Jα 3 + 5J n 2 Jα 3 f iαα f jmn g ij (C.17) − 3 8 J n 2Jα 3 −J n 2 Jα 3 f nαβ f mβα η ββ + 1 2 N iJ β 1 −N i J β 1 f ipm f qαβ η pq + f iαμ f mβµ η µμ F αα = − N i f iαα ik +∂ −N i f iαα (ik + ∂) + 1 4 J β 1 Jβ 3 − J β 1Jβ 3 f mαβ f nαβ η mn + 1 4 J β 1Jβ 3 + 3J β 1 Jβ 3 f iαβ f jαβ g ij + 1 4 J m 2J n 2 f mαβ f nαβ − f nαβ f mαβ η ββ , (C.18) F αB = − ω AJα 3 A A B αα = −F Bα , (C.19) F A α = − λ BJα 3 A A B αα = −F A α , (C.20) F αB =ωÂJα 3 A A B αα = −FB α , (C.21) F α =λBJα 3 A A B αα = −F α . (C.22)
The terms of the F mΛ kind are
F mα =J β 1 ik +∂ f mαβ + 1 2 N iJ β 1 +N i J β 1 f ipm f qαβ η pq + f iαμ f mβµ η µμ + 1 8 3J n 2Jα 3 + 5J n 2 Jα 3 f iαα f jmn g ij + 3 8 J n 2Jα 3 −J n 2 Jα 3 f nαβ f mβα η ββ + 1 8 5∂J β 1 − ∂J β 1 f mαβ , (C.23) F mn =N i f imn ik +∂ +N i f imn (ik + ∂) − 1 2 J α 1Jα 3 −J α 1 Jα 3 (f mαβ f nαβ + f nαβ f mαβ )η ββ − 1 2 J p 2J q 2 (f ipm f jqn + f ipn f jqm )g ij , (C.24) F mα =Jβ 3 f mαβ (ik + ∂) + 1 8 5∂Jβ 3 −∂Jβ 3 f mαβ + 3 8 J α 1 J n 2 − J α 1J n 2 f mαβ f nαβ η ββ (C.25) + 1 8 3J α 1J n 2 + 5J α 1 J n 2 f iαα f jmn g ij + 1 2 N iJβ 3 +N i Jβ 3 f ipm f qαβ η pq − f iαµ f mβμ η µμ , F mB = − ω AJ n 2 A A B mn = F Bm , (C.26) F A m = − λ BJ n 2 A A B mn = F B m , (C.27) F mB =ωÂJ n 2 A A B mn = FB m , (C.28) F m =λBJ n 2 A A B mn = F m . (C.29)
The last contribution from the non-ghost terms is given by the Fα Λ elements:
Fα α = − N i f iαα ik +∂ −N i f iαα (ik + ∂) − 1 4 J β 1 Jβ 3 − J β 1Jβ 3 f mαβ f nαβ η mn − 1 4 J β 1Jβ 3 + 3J β 1 Jβ 3 f iαβ f jαβ g ij − 1 4 J m 2J n 2 f mαβ f nαβ − f nαβ f mαβ η ββ , (C.30) Fα m =Jβ 3 f mαβ (ik + ∂) + 1 8 3∂Jβ 3 +∂Jβ 3 f mαβ − 3 8 J α 1 J n 2 − J α 1J n 2 f mαβ f nαβ η ββ (C.31) − 1 8 3J α 1J n 2 + 5J α 1 J n 2 f iαα f jmn g ij − 1 2 N iJβ 3 +N i Jβ 3 f ipm f qαβ η pq − f iαµ f mβμ η µμ , Fαβ = −J m 2 (ik + ∂) f mαβ − 1 2 ∂J m 2 f mαβ − 1 2 J α 1J β 1 f iαα f jββ − f iβα f jαβ g ij − 1 2 N iJ m 2 +N i J m 2 f mαμ f iµβ − f mβμ f iµα η µμ , (C.32) Fα B = − ω AJ α 1 A A B αα = −F Bα , (C.33) F  α = − λ BJ α 1 A A B αα = −F  α , (C.34) FαB =ωÂJ α 1 A A B αα = −FBα, (C.35) Fα B =λÂJ α 1 A A B αα = −FBα. (C.36)
Finally we compute the pure ghost terms, and we save some trees by not adding the symmetric terms already listed:
F A B =N B A = F A B , (C.37) F B =ω AλB A A BB = F B , (C.38) F BB =ω Aω A A BB = F B , (C.39) F AB =λ Bω A A BB = F  B , (C.40) F A =λ BλB A A BB = F A , (C.41) F B =N B = FÂB, (C.42)
where we have defined
A A BB =η AĈ η C f i BĈ f ĵ BC g ij , (C.43) N B A =ωÂλBA A BB , (C.44) NB A =ω A λ B A A BB . (C.45)
C.3 Green functions
With all the previous results, we begin the computation of the Green's functions as a power series in 1/k, following the prescription given in (3.17). The Green functions are presented order by order, which makes for easier reading.
The only contributions of order 1/k come from the ghosts propagators
G B 1A = 2π R 2 ī k δ B A = −G B 1 A , (C.46) GB 1Â = 2π R 2 i k δB A = −GB 1Â . (C.47)
For the 1/k 2 terms, we have a contribution from the non-ghosts propagators
G αα 2 = 2π R 2 1 |k| 2 η αα = −Gα α 2 , (C.48) G mn 2 = 2π R 2 1 |k| 2 η mn , (C.49)
and another from the ghost interactions
G B 2A = − ī k F C A G B 1C = 2π R 2 1 k 2N B A = G B 2 A , (C.50) G 2A = − ī k F AĈ GĈ 1 = − 2π R 2 1 |k| 2 ω BωB A BB A = G 2ÂA , (C.51) GB 2A = − ī k FĈ A GB 1Ĉ = 2π R 2 1 |k| 2 ω Bλ A BB A = GB 2 A , (C.52) G B 2 = ī k F BĈ GĈ 1 = 2π R 2 1 |k| 2 λ AωB A BB A = G B 2 , (C.53) G BB 2 = ī k F BĈ GB 1Ĉ = − 2π R 2 1 |k| 2 λ Aλ A BB A = GB B 2 , (C.54) GB 2 = − i k FĈ A GB 1Ĉ = 2π R 2 1 k 2 NB A = GB 2 . (C.55)
At order 1/k 3 we have interactions between the non-ghost fields. We organize these terms in the same order as in the previous section; when G ΛΩ = cG ΩΛ with c = ±1, we only list the first term.
Using the given prescription, we find that the G αΛ 3 terms are
G αβ 3 = − η αα |k| 2 FβαGβ β 2 = − 2π R 2 i |k| 2J m 2 k f mαβ η αα η ββ , (C.56) G αm 3 = − η αα |k| 2 (F nα G nm 2 ) = − 2π R 2 i |k| 2Jβ 3 k f nαβ η αα η mn = −G mα 3 , (C.57) G αα 3 = − η αβ |k| 2 F ββ G βα 2 = 2π R 2 i |k| 2 N i k +N ī k f iββ η αβ η βα = Gα α 3 , (C.58) G α 3 A = − η αα |k| 2 F Bα G B 1 A = 2π R 2 i |k| 2J β 1 k ω B A B A βα η αα = −G α 3A , (C.59) G αB 3 = − η αα |k| 2 F  α G B 1A = − 2π R 2 i |k| 2J β 1 k λ A A B A βα η αα = −G Bα 3 , (C.60) G α 3 = − η αα |k| 2 FBαGB 1 = − 2π R 2 i |k| 2 J β 1 kωB AB A βα η αα = −G α 3 , (C.61) G αB 3 = − η αα |k| 2 FÂαGB 1 = 2π R 2 i |k| 2 J β 1 kλ AB A βα η αα = −GB α 3 . (C.62)
For the G mΛ 3 terms we find
G mn 3 = − η mp |k| 2 (F qp G qn 2 ) = − 2π R 2 i |k| 2 N i k +N ī k f ipq η mp η nq , (C.63) G mα 3 = − η mn |k| 2 F αn G αα 2 = − 2π R 2 i |k| 2 J β 1 k f nαβ η αα η mn = −Gα m 3 (C.64) G m 3 A = − η mn |k| 2 F Bn G B 1 A = − 2π R 2 i |k| 2J p 2 k ω B A B A np η mn = −G m 3A , (C.65) G mB 3 = − η mn |k| 2 F  α G B 1A = 2π R 2 i |k| 2J p 2 k λ A A B A np η mn = G Bm 3 , (C.66) G m 3 = − η mn |k| 2 FBαGB 1 = 2π R 2 i |k| 2 J p 2 kωB AB A np η mn = −G m 3 , (C.67) G mB 3 = − η mn |k| 2 FÂαGB 1 = − 2π R 2 i |k| 2 J p 2 kλ AB A np η mn = −GB m 3 . (C.68)
The Gα Λ 3 terms computed are
Gαβ 3 = − 2π R 2 i |k| 2 J m 2 k f mαβ η αα η ββ (C.69) Gα 3 A = η αα |k| 2 F Bα G B 1 A = − 2π R 2 i |k| 2Jβ 3 k ω B A B Aβα η αα = −Gα 3A , (C.70) Gα B 3 = η αα |k| 2 F  α G B 1A = 2π R 2 i |k| 2Jβ 3 k λ A A B Aβα η αα = −G Bα 3 , (C.71) Gα 3 = η αα |k| 2 FBαGB 1 = 2π R 2 i |k| 2 Jβ 3 kωB AB Aβα η αα = −Gα 3 , (C.72) GαB 3 = η αα |k| 2 FÂαGB 1 = − 2π R 2 i |k| 2 Jβ 3 kλ AB Aβα η αα = −GBα 3 , (C.73) (C.74)
The G 3 with only ghost indices are
G 3AC = − 2π R 2 i |k| 2 1 k ω B ω DλÂωB A BĈ A A DB CĈ − A BB AĈ A DĈ C , (C.75) G 3A B = 2π R 2 ī k 3 δ D A∂ −N D A N B D + 2π R 2 i |k| 2 1 k ω D λ CλÂωB A DB AĈ A BĈ C − A DĈ A A BB CĈ , (C.76) G 3A = − 2π R 2 i |k| 2 1 k δ D A∂ −N D A ω BωB A BB D − 2π R 2 i |k| 2 1 k ω BωB ND A A BB AD , (C.77) GB 3A = 2π R 2 i |k| 2 1 k δ D A∂ −N D A ω Bλ A BB D − 2π R 2 i |k| 2 1 k ω Bλ NB D A BD A , (C.78) G B 3 A = 2π R 2 ī k 3 δ B D∂ +N B D N D A + 2π R 2 i |k| 2 1 k ω D λ CλÂωB A DĈ A A BB CĈ − A DB AĈ A BĈ C , (C.79) G BD 3 = 2π R 2 i |k| 2 1 k λ A λ CλÂωB A BĈ A A DB CĈ − A BB AĈ A DĈ C , (C.80) G B 3 = 2π R 2 ī k 1 |k| 2 δ B D∂ +N B D λ AωB A BB A + 2π R 2 i |k| 2 1 k λ AωB ND A A BB AD , (C.81) G BB 3 = − 2π R 2 ī k 1 |k| 2 δ B D∂ +N B D λ Aλ A BB A + 2π R 2 i |k| 2 1 k λ Aλ NB D A BD A , (C.82) G 3ÂA = 2π R 2 i |k| 2 1 k −δD A ∂ + ND A ω BωB A BB A − 2π R 2 i |k| 2 1 k ω BωBN D A A BB D , (C.83) G 3 B = − 2π R 2 i |k| 2 1 k −δD A ∂ + ND A λ AωB A BB A − 2π R 2 i |k| 2 1 k λ AωBN B C A CB A , (C.84) G 3ÂĈ = 2π R 2 i |k| 2 1 kωBωD λ A ω B A CB A A BD CĈ − A BB C A CD AĈ , (C.85) G 3ÂB = − 2π R 2 i k 3 −δD A ∂ + ND A NB D + 2π R 2 i |k| 2 1 kωDλĈ λ A ω B A BD C A CB AĈ − A CD A A BB CĈ , (C.86) GB 3 A = 2π R 2 i |k| 2 1 k δB D ∂ + NB D ω Bλ A BD A + 2π R 2 i |k| 2 1 k ω BλÂN C A A BB C , (C.87) GB B 3 = − 2π R 2 i |k| 2 1 k δB D ∂ + NB D λ Aλ A BD A + 2π R 2 i |k| 2 1 k λ AλÂN B A A CB A , (C.88) GB 3 = 2π R 2 i k 3 δB D ∂ + NB D ND A − 2π R 2 i |k| 2 1 kωDλĈ λ A ω B A CB AĈ A BD C − A BB CĈ A CB A , (C.89) GBD 3 = 2π R 2 i |k| 2 1 kλÂλĈ λ A ω B A CB A A BD CĈ − A BB C A CD AĈ . (C.90)
The 1/k 4 terms are needed when we compute terms with two derivatives. Since we are not computing anything with two derivatives and at least one ghost field, we do not list those Green's functions. The G αΛ 4 terms are:
G αβ 4 = 2π R 2 1 |k| 2k2 ∂J m 2 f mαβ +J m 2N i f iµα f mμβ − f iµβ f mμα η µμ +Jμ 3Jν 3 f mαμ f nβν η mn η αα η ββ + 2π R 2 1 |k| 4 1 2 ∂J m 2 f mαβ + 1 2 J µ 1J ν 1 g ij f iµα f jνβ − f iνα f jµβ + 1 2 −J m 2 N i + J m 2N i f mαμ f iµβ − f mβμ f iµα η µμ η αα η ββ , (C.91) G αm 4 = 2π R 2 1 |k| 2k2 ∂Jβ 3 f nαβ +Jβ 3N i f pαβ f inq η pq + f nμβ f iµα η µμ η mn η αα + 2π R 2 1 |k| 4 1 8 3∂Jβ 3 +∂Jβ 3 f nαβ − 1 2 (3N iJβ 3 +N i Jβ 3 ) f ipn f qαβ η pq − f iαµ f nβμ η µμ − 1 8 5J β 1 J p 2 + 3J β 1J p 2 f iβα f jnp g ij − 1 8 3J β 1 J p 2 + 5J β 1J p 2 f nβµ f pαμ η µμ η αα η mn , (C.92) G αα 4 = 2π R 2 1 |k| 2 k 2 −∂N i f iββ − N i N j f iµβ f jβμ η µμ η αβ η βα + 2π R 2 1 |k| 2k2 −∂N i f iββ −N iN j f iµβ f jβμ η µμ η αβ η βα + 2π R 2 1 |k| 4 − N iN j + N jN i f iµβ f jμβ η µμ + 1 4 J m 2J n 2 3f mµβ f nβμ + f nµβ f mβμ η µμ + 1 4 J µ 1Jμ 3 5f mµβ f nμβ η mn − f iµβ f jμβ g ij − 1 4J µ 1 Jμ 3 f mµβ f nμβ η mn + 3f iµβ f jμβ g ij η αβ η βα . (C.93) The G mΛ 4
Green's functions are
G mn 4 = 2π R 2 1 |k| 2k2 ∂N i f ipq −N iN j f irp f jsq η rs η nq η mp + 2π R 2 1 |k| 2 k 2 ∂N i f ipq − N i N j f irp f jsq η rs η nq η mp + 2π R 2 1 |k| 4 η mp η nq − N iN j + N jN i f irp f jsq η rs + 1 2 J r 2J s 2 (f irp f jsq + f irq f jsp ) g ij − 1 2 J α 1Jα 3 +J α 1 Jα 3 f qαβ f pαβ + f pαβ f qαβ η ββ , (C.94) Gα m 4 = 2π R 2 1 |k| 2 k 2 −∂J β 1 f nαβ + J β 1 N i f ipn f qαβ η pq + f nµβ f iμα η µμ η mn η αα + 2π R 2 1 |k| 4 − 1 8 3∂J β 1 + ∂J β 1 f nαβ + 1 2 N iJ β 1 + 3N i J β 1 f ipn f qαβ η pq + f iαμ f nβµ η µμ + 1 8 3J p 2Jβ 3 + 5J p 2 Jβ 3 f iαβ f jnp g ij − 1 8 5J p 2Jβ 3 + 3J p 2 Jβ 3 f pαµ f nβμ η µμ η αα η nm . (C.95)
Finally, we list the Gαβ 4 term
Gαβ 4 = 2π R 2 1 |k| 2 k 2 ∂J m 2 f mαβ + J m 2 N i (f mαµ f iβμ − f mβµ f iαμ ) η µμ + J µ 1 J ν 1 f mαµ f nβν η mn η αα η ββ + 2π R 2 1 |k| 4 1 2∂ J m 2 f mαβ + 1 2 Jμ 3Jν 3 f iμα f jνβ − f iμβ f jνα g ij + 1 2 N iJ m 2 −N i J m 2 (f mβµ f iαμ − f mαµ f iβμ ) η µμ η αα η ββ . (C.96)
The reason we don't compute terms such as Gα m 4 is that we can deduce their contribution from the relation ⟨∂X∂X⟩ = ∂⟨X∂X⟩ − ⟨X∂∂X⟩, as explained in section 2.
C.4 Pairing rules
We split the current in its gauge part J 0 and the vielbein K:
J =J 0 + K, (C.97) K =J 1 + J 2 + J 3 . (C.98)
We also join the quantum fluctuations into a single term
X = x 1 + x 2 + x 3 . (C.99)
The following is the list of all divergent parts up to two derivatives. The order of the results is: first the terms with no derivatives, then the currents, then one X with one current, and finally two currents. Last, we list the pairing rules involving ghost fields. The definition of I in this appendix is I = −1/(2R 2 ε).
The non-vanishing terms with no derivatives are the ones given by the first term in the Schwinger-Dyson equation: For one X with one current, we find that the simplest current is J 0
X, J 0 = − I[K, T j ]T k g jk , (C.104) X,J 0 = − I[K, T j ]T k g jk , (C.105)
for the other currents we find We present the J 3 , · terms before the J 2 , · due to their similarity with the J 1 , · terms:
x 1 , J 1 = − I[J 2 , Tα]T α η αα , x 2 , J 1 = I[J 3 , Tα]T α η αα , x 3 , J 1 =I[N, Tα]T α η αα , (C.106) x 1 ,J 1 =0, x 2 ,J 1 = 0, x 3 ,J 1 =I[N ,
the results in (C.102, C.127) and the identity (B.5). A similar computation applies to the antiholomorphic T , where now we use the results in (C.103, C.128) and the identity (B.6).
J 3 + J 3 , g −1 g 0 = − {[N , T α ], Tα}η αα , (4.10) g −1 0 g, N + N, g −1 g 0 =0,(4.11) but we already know that {[J 1,3 , T a ] , T b } g ab = 0, for a = {i, m, α,α}, see (B.7). Thus,
In our notation, the left and right moving b-ghosts can be written as b =(λλ) −1 STr λ [J 2 , J 3 ] + {ω,λ}[λ, J 1 ] − STr (ωJ 1 ) , (4.17) b =(λλ) −1 STr λ[J 2 ,J 1 ] + {ω, λ}[λ,J 3 ] − STr ωJ 3 , (4.18) where (λλ) = λ Aλ η A . Let us first compute the divergent part of the left moving ghost; we will need the results from (C.143) to (C.153): b =(λλ) −1 STr λ [J 2 , J 3 ] + {ω,λ}[λ, J 1 ] − (λλ) −2 λλ STr λ [J 2 , J 3 ] + {ω,λ}[λ, J 1 ] − (λλ) −2 (λλ), STr λ [J 2 , J 3 ] + {ω,λ}[λ, J 1 ] − STr ωJ 1 , (4.19)
B Some identities for psu(2, 2|4)

Let A, B and C be bosons and X, Y and Z fermions; then the generalized Jacobi identities are

[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0, (B.1)
[A, [B, X]] + [B, [X, A]] + [X, [A, B]] = 0, (B.2)
{X, [Y, A]} + {Y, [X, A]} + [A, {X, Y }] = 0, (B.3)
[X, {Y, Z}] + [Y, {Z, X}] + [Z, {X, Y }] = 0. (B.4)

In this theory the dual Coxeter number is 0; this implies

[[A, T i ], T j ]g ij − {[A, T α ], Tα}η αα + [[A, T m ], T n ]η mn + {[A, Tα], T α }η αα = 0, (B.5)
[[X, T i ], T j ]g ij − [{X, T α }, Tα]η αα + [[X, T m ], T n ]η mn + [{X, Tα}, T α ]η αα = 0. (B.6)
λ, [λ, A] ± ∓ = 0. (B.10)

C Complete solution of the SD equation for the AdS 5 × S 5 pure spinor string
x 1 , x 3 = −T α Tαη αα and x 2 , x 2 = T m T n η mn . (C.100) Now we show the divergent part of the currents: Tα], T α }η αα − {[N, T α ], Tα}η αα + [[N, T m ], T n ]η mn , Tα], T α }η αα − {[N , T α ], Tα}η αα + [[N , T m ], T n ]η mn . (C.103)
J 2 ,J 2 ,
22Tα]T α η αα , (C.107) x 1 , J 2 = − I[J 3 , T m ]T n η mn , x 2 , J 2 = I[N, T m ]T n η mn , x 3 , J 2 =0, (C.108) x 1 ,J 2 =0, x 2 ,J 2 = I[N , T m ]T n η mn , x 3 ,J 2 =I[J 1 , T m ]T n η mn , (C.109)x 1 , J 3 = − I[N, T α ]Tαη αα , x 2 , J 3 = 0, x 3 , J 3 =0, (C.110) x 1 ,J 3 = − I[N , T α ]Tαη αα , x 2 ,J 3 = I[J 1 , T α ]Tαη αα , x 3 ,J 3 =I[J 2 , T α ]Tαη αα . (C.111)Now we show the divergent part of two currents. The first group are the J 0 , · terms:J 0 , J 0 =I[J 1 , Tα][J 3 , T α ]η αα − I[J 3 , T α ][J 1 , Tα]η αα + I[J 2 , T m ][J 2 , T n ]η mn , (C.112) J 0 , J 1 = − I[J 1 , Tα][N, T α ]η αα − I[J 3 , T α ][J 2 , Tα]η αα + I[J 2 , T m ][J 3 , T n ]η mn , (C.113) J 0 ,J 1 = − I[J 1 , Tα][N, T α ]η αα , (C.114) J 0 , J 2 = − I[J 3 , T α ][J 3 , Tα]η αα − I[J 2 , T m ][N, T n ]η mn , (C.115) J 0 ,J 2 =I[J 1 , Tα][J 1 , T α ]η αα − I[J 2 , T m ][N , T n ]η mn , Tα][N, T α ] − [J 2 , Tα][N , T α ] + 3[N, T α ][J 2 , Tα] − [N , T α ][J 2 , Tα] η αα , J 1 , J 1 = − I 2 [∂J 2 , Tα]T α η αα + I 2 [J 1 , T i ][J 1 , T j ] + [J 1 , T i ][J 1 , T j ] g ij (Tα][N, T α ] − [J 2 , Tα][N , T α ] + 3[N , T α ][J 2 , T α ] − [N, T α ][J 2 , Tα] η αα , J 1 , J 2 =I[N, T α ][J 2 , Tα]η αα + [J 3 , T m ][J 3 , T n ]η mn , 3 −∂J 3 , T m ]T n η mn + I 8 11[J 2 , Tα][J 1 , T α ] + 5[J 2 , Tα][J 1 , T α ] η αα + I 8 5[J 1 , T i ][J 2 , T j ] + 3[J 1 , T i ][J 2 , T j ] g ij − I 2 [N, T a ][J 3 , Tα] + [N , T α ][J 3 , Tα] η αα + I 2 3[J 3 , T m ][N, T n ] − [J 3 , T m ][N , T n ] η mn , (C.125) J 1 , J 2 = − I 8 [3∂J 3 +∂J 3 , T m ]T n η mn + 3I 8 [J 2 , Tα][J 1 , T α ] − [J 2 , Tα][J 1 , T α ] η αα + I 8 5[J 1 , T i ][J 2 , T j ] + 3[J 1 , T i ][J 2 , T j ] g ij − I 2 3[N, T a ][J 3 , Tα] − [N , T α ][J 3 , Tα] η αα + I 2 [J 3 , T m ][N, T n ] + [J 3 , T m ][N, T n ] ηmn , (C.126) J 1 , J 3 = − I[N, T α ][N, Tα]η αα , (C.127) J 1 ,J 3 = − I[N , T α ][N, Tα]η αα , (C.128) J 1 ,J 3 = − I [N, T α ][N , Tα] + [N, T α ][N, Tα] η αα + I 4 3[J 2 , Tα][J 2 , T a ] + 5[J 2 , Tα][J 
2 , T α ] η αα + I 4 5[J 3 , T m ][J 1 , T n ] + 3[J 3 , T m ][J 1 , T n ] η mn + I 4 [J 1 , T i ][J 3 , T j ] + 3[J 1 , T i ][J 3 , T j ] g ij , (C.129) J 1 , J 3 = − I [N, T α ][N , Tα] + [N, T α ][N, Tα] η αα − I 4 [J 2 , Tα][J 2 , T a ] − [J 2 , Tα][J 2 , T α ] η αα + I 4 [J 3 , T m ][J 1 , T n ] − [J 3 , T m ][J 1 , T n ] η mn + I 4 [J 1 , T i ][J 3 , T j ] + 3[J 1 , T i ][J 3 , T j ] g ij . (C.130)
J 3 , J 2
32=0, (C.131) J 3 ,J 2 = − I[N, Tα][J 1 , T α ]η αα − I[J 1 , T m ][N , T n ]η mn , (C.132) J 3 , J 2 = I 8 [5∂J 1 − ∂J 1 , T m ]T nη mn − I 2 [J 1 , T m ][N, T n ] − 3[J 1 , T m ][N, T n ] η mn 1 + ∂J 1 , T m ]T n η mn +I 2 [J 1 , T m ][N, T n ] + [J 1 , T m ][N , T n ] Tα][J 1 , T α ] − [N, Tα][J 1 , T α ] η αα + I 8 3[J 3 , T i ][J 2 , T j ] + 5[J 3 , T i ][J 2 , T j ] g ij + 3I 8 [J 2 , T α ][J 3 , Tα] − [J 2 , T α ][J 3 , Tα] η αα , (C.134) J 3 , J 3 =0, (C.135) J 3 ,J 3 =I [J 2 , T α ][N , Tα] − [N , Tα][J 2 , T α ] η αα + I[J 1 , T m ][J 1 , T n ]η mn , (C.136) J 3 , J 3 = − I 2 [∂J 2 , T α ]Tαη αα + I 2 [J 3 , T i ][J 3 , T j ] + [J 3 , T i ][J 3 , T j ] g ij + I 2 −[N, Tα][J 2 , T α ] − [N, Tα][J 2 , T α ] + 3[J 2 , T α ][N, Tα] − [J 2 , T α ][N, Tα] η αα , 2 , T α ]Tαη αα + I 2 [J 3 , T i ][J 3 , T j ] + [J 3 , T i ][J 3 , T j ] g ij + I 2 −3[N, Tα][J 2 , T α ] + [N, Tα][J 2 , T α ] + [J 2 , T α ][N, Tα] + [J 2 , T α ][N , Tα] η αα . (C.138)Finally, the remaining J 2 , · terms:J 2 , J 2 =I[N, T m ][N, T n ]η mn , (C.139) J 2 ,J 2 =I[N , T m ][N, T n ]η mn , (C.140) J 2 , J 2 = − I [N, T m ][N , T n ] + [N , T m ][N, T n ] η mn + I 2 [J 2 , T i ][J 2 , T j ] + [J 2 , T i ][J 2 , T j ] g ij − I 2 [J 1 , T α ][J 3 , Tα] − 3[J 3 , Tα][J 1 , T α ] + 3[J 1 , T α ][J 3 , Tα] − [J 3 , Tα][J 1 , T α ] η αα , (C.141) J 2 ,J 2 = − I [N, T m ][N , T n ] + [N , T m ][N, T n ] η mn + I 2 [J 2 , T i ][J 2 , T j ] + [J 2 , T i ][J 2 , T j ] g ij + I 2 [J 3 , Tα][J 1 , T α ] − 3[J 1 , T α ][J 3 , Tα] + 3[J 1 , T α ][J 3 , Tα] − [J 1 , T α ][J 3 , Tα] η αα .(C.142) The terms involving ghost fields that have vanishing anomalous dimension are X, N = X,N = ω, λ = ω,λ =0, (C.143) ω, J = λ, J = ω,J = λ ,J =0, (C.144) involving two ghosts and no derivatives areω,λ = − I[ω, T i ][λ, T j ]g ij , λ,ω = −I[λ, T i ][ω, T j ]g ij , (C.148) ω,ω = − I[ω, T i ][ω, T j ]g ij , λ,λ = −I[λ, T i ][λ, T j ]g ij . 
(C.149)For one ghost and one current, including the ghost currents,ω,K = − I[ω, T i ][K, T j ]g ij , ω, K = −I[ω, T i ][K, T j ]g ij , (C.150) λ,K = − I[λ, T i ][K, T j ]g ij , λ , K = −I[λ, T i ][K, T j ]g ij , (C.151) ω,N = − I[ω, T i ][N , T j ]g ij , ω, N = −I[ω, T i ][N, T j ]g ij , (C.152) λ,N = − I[λ, T i ][N, T j ]g ij , λ , N = −I[λ, T i ][N, T j ]g ij . (C.153)Finally, the terms with two currents, with at least one ghost current:K , N = − I[K, T i ][N, T j ]g ij , (C.154) K,N = − I[K, T i ][N , T j ]g ij , (C.155) N,N = − I[N, T i ][N , T j ]g ij . (C.156)
The authors would like to thank Martin Heinze for discussions on the subject.
Using the equation of motion for the background ∂J +∂J = 0.
Although we did not use the K term in the main text, it will be useful from now on to use this term in order to pack several results.
Acknowledgements

We would like to thank Nathan Berkovits, Osvaldo Chandía and William Linch for useful discussions. We are especially grateful to Luca Mazzucato for collaboration during initial stages of this work and to an anonymous Referee for several suggestions and corrections. The work of IR is supported by CONICYT project No. 21120105 and the USM-DGIP PIIC grant. The work of BCV is partially supported by FONDECYT grant number 1151409.
arXiv:1701.03852, doi:10.1007/978-1-4939-7486-3_13
The multidegree of the multi-image variety
Laura Escobar
Allen Knutson
The multi-image variety is a subvariety of Gr(1, P 3 ) n that models taking pictures with n rational cameras. We compute its cohomology class in the cohomology of Gr(1, P 3 ) n , and from there its multidegree as a subvariety of (P 5 ) n under the Plücker embedding.
Introduction
Multi-view geometry studies the constraints imposed on a three-dimensional scene from various two-dimensional images of the scene. Each image is produced by a camera. Algebraic vision is a recent field of mathematics in which the techniques of algebraic geometry and optimization are used to formulate and solve problems in computer vision. One of the main objects studied by this field is a multi-view variety. Roughly speaking, a multi-view variety parametrizes all the possible images that can be taken by a fixed collection of cameras. See [1,11,14] for more details on multi-view varieties. See [13] for a survey on various camera models.
The work [11] presents a new point of view for studying multi-view varieties. A photographic camera maps a point in the scene to a point in the image. [11] defines a geometric camera, which maps a point in the scene not to a point, but to a viewing ray. More precisely, a photographic camera is a map P 3 ⇢ P 2 or P 3 ⇢ P 1 × P 1 , whereas a geometric camera is a map P 3 ⇢ Gr(1, P 3 ) such that the image of each point is a line containing the point. The viewing ray corresponding to a point p ∈ P 3 is the line in Gr(1, P 3 ) this point gets mapped to. An important assumption is that light travels along rays, which means that the image of a geometric camera is two-dimensional.
We now illustrate these definitions using the example of pinhole cameras or camerae obscurae, see Figure 1. As a geometric camera, a pinhole camera maps a point p ∈ P 3 to the line φ (p) connecting p to the focal point. This map is rational since it is undefined at the focal point. As a photographic camera, a pinhole camera maps a point p ∈ P 3 to the intersection of φ (p) and the plane at the back of the camera. Notice that, as a photographic camera, there are many pinhole cameras corresponding to a given focal point. All of these cameras are equivalent up to a projective transformation. The essential part of a pinhole camera is the mapping of the scene points to viewing rays, i.e. its modeling as a geometric camera. A multi-image variety is the Zariski closure of the image of a map

φ : P 3 ⇢ Gr(1, P 3 ) n , p → (φ 1 (p), . . . , φ n (p)),
where each φ i is a geometric camera. The multi-view variety is a multi-image variety such that each φ i is a pinhole camera. Let C i be the closure of the image of φ i inside the i-th Gr(1, P 3 ). Then under some assumptions, [11,Theorem 5.1] shows that
(C 1 × · · · ×C n ) ∩V n = φ (P 3 ),
where V n is the concurrent lines variety consisting of ordered n-tuples of lines in P 3 that meet in a point x. The concurrent lines and multi-image varieties are embedded into (P 5 ) n using the Plücker embedding.
The multidegree of a variety embedded into a product of projective spaces is the polynomial whose coefficients give the numbers (when finite) of intersection points in the variety intersected with a product of general linear subspaces. The present paper verifies the conjectured formula [11,Equation (11)] for the multidegree of V n and computes the multidegree of the multi-image variety. To do so, we describe V n using a projection of a partial flag variety and use Schubert calculus to compute the cohomology classes of V n and (C 1 × · · · × C n ) ∩ V n in the cohomology ring of Gr(1, P 3 ) n . We then push forward these formulae into (P 5 ) n to obtain the multidegrees.
We now describe the organization of this paper. In §2 we define the main objects of study: the multi-image variety and the concurrent lines variety. We present the main theorem which computes the multidegrees of these objects. The main tool to prove the main theorem is Schubert calculus. In §3 we give a brief introduction to Schubert calculus for Gr(1, P 3 ). In §4 we compute the cohomology class of the multi-image variety and the concurrent lines variety in terms of the Schubert cycles in Gr(1, P 3 ). We prove the main theorem by taking the pushforward of these equations to the cohomology ring of P 5 . In §5 we refine these results to a computation of the K-class for the concurrent lines variety.
The multi-image variety
The Grassmannian Gr(k, P d ) consists of k-dimensional planes inside P d . A congruence is a two-dimensional family of lines in P 3 , i.e. a surface C in Gr(1, P 3 ). The bidegree (α, β ) of a congruence C is a pair of nonnegative integers such that the cohomology class of C in Gr(1, P 3 ) has the form

[C] = α [X (2) ] + β [X (1,1) ], (1)

where X (2) is the Schubert variety of lines through a fixed point and X (1,1) is the Schubert variety of lines contained in a fixed plane (see the section on Schubert varieties).
The first integer α is called the order and counts the number of lines in C that pass through a general point of P 3 , and β is called the class and counts the number of lines in C that lie in a general plane of P 3 . The focal locus of C consists of the points in P 3 that do not belong to α distinct lines of C.
Example 2.1. A congruence with bidegree (1, 0) consists of all lines in Gr(1, P 3 ) that contain a fixed point. A geometric camera for such a congruence represents a pinhole camera where the fixed point is the focal point.
Example 2.2. A two-slit camera assigns to p ∈ P 3 the unique line passing through p and intersecting two fixed lines L 1 , L 2 ∈ Gr(1, P 3 ), see Figure 2. Its focal locus is {L 1 , L 2 }. These cameras correspond to the congruences with bidegrees (1, 1).
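To make Example 2.2 concrete, here is a small self-contained Python sketch (ours, not from the paper) that computes the two-slit viewing ray for a sample point: the ray through p is the intersection of the plane spanned by p and L 1 with the plane spanned by p and L 2 . The choice of slit lines, the sample point, and all function names are our own.

```python
from typing import List

def det3(m: List[List[int]]) -> int:
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def plane_through(u, v, w):
    # dual coordinates of the plane in P^3 spanned by three points,
    # computed by cofactor expansion (a 4-dimensional "cross product")
    h, sign = [], 1
    for i in range(4):
        cols = [j for j in range(4) if j != i]
        h.append(sign * det3([[u[j] for j in cols],
                              [v[j] for j in cols],
                              [w[j] for j in cols]]))
        sign = -sign
    return h

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# the two slits: L1 = span{e0, e1} and L2 = span{e2, e3}, a pair of skew lines
e = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
p = [1, 2, 3, 4]                      # a general scene point
h1 = plane_through(p, e[0], e[1])     # plane spanned by p and L1
h2 = plane_through(p, e[2], e[3])     # plane spanned by p and L2

# the viewing ray phi(p) is the intersection of these two planes;
# it contains p and hits each slit in exactly one point
q1 = [p[0], p[1], 0, 0]               # where the ray meets L1
q2 = [0, 0, p[2], p[3]]               # where the ray meets L2
for q in (p, q1, q2):
    assert dot(h1, q) == 0 and dot(h2, q) == 0
```

Since the two planes meet in a single line containing p, q 1 and q 2 , this illustrates that the congruence of lines meeting both slits has order 1.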
The study of congruences started with [7] which classified those of order one. They were studied by many mathematicians during the second half of the 19th century; see the book [5] for some of these results. Remark 2.3. Congruences are studied in the article [6]. In particular, [6, §2] discusses congruences and their bidegrees. The bidegrees of curves in P 1 × P 1 are discussed in the article [10]. Consider a rational map
φ : P 3 ⇢ C 1 × · · · × C n ⊂ Gr(1, P 3 ) n , x → (φ 1 (x), . . . , φ n (x)),
where the closure of φ i (P 3 ) equals C i , each C i is a congruence and x ∈ φ i (x) for all i. Each map φ i is defined everywhere except on the focal locus of C i . In the language of algebraic vision such a map means taking pictures with n rational cameras, where each C i is the i-th image plane. An important assumption is that light travels along rays, which means that each C i is 2-dimensional. Let C 1 , . . . , C n be congruences of bidegree (1, β i ). Note that φ i (x) is the unique line in C i passing through x. The multi-image variety of (C 1 , . . . , C n ) is the Zariski closure of φ (P 3 ). The concurrent lines variety V n consists of ordered n-tuples of lines in P 3 that meet in a point x. By [11, Theorem 5.1], if the focal loci of the congruences are pairwise disjoint then the multi-image variety equals the intersection (C 1 × · · · × C n ) ∩ V n ⊂ Gr(1, P 3 ) n .
Most of the cameras studied in computer vision are associated with congruences of order 1. However, cameras of higher order also appear in computer vision, see [13] and [11, §7]. As an example of a camera associated to a congruence of bidegree (2, 2) let us discuss a non-central panoramic camera, see Figure 3. Consider a circle X obtained by rotating a point about a vertical axis L. There are two lines in Gr(1, P 3 ) passing through a general point p ∈ P 3 and intersecting both L and X. The congruence C consisting of all lines intersecting both X and L has bidegree (2, 2). A physical realization of a non-central panoramic camera consists of a sensor on the circle taking measurements pointing outwards. This orientation of the sensor yields a map φ : P 3 ⇢ Gr(1, P 3 ), i.e. it assigns only one line φ (p) to a point p ∈ P 3 .

The concurrent lines and multi-image varieties are embedded into (P 5 ) n using the Plücker embedding Gr(1, P 3 ) → P 5 .
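To make the concurrency condition behind V n explicit in Plücker coordinates, here is a short Python sketch (ours, not from the paper). The bilinear pairing below is the polarization of the Plücker quadric; it vanishes exactly when two lines in P 3 intersect. The sample lines and all names are our own choices.

```python
from itertools import combinations

def plucker(a, b):
    # Plücker coordinates p_ij (0 <= i < j <= 3) of the line in P^3 through a and b
    return {(i, j): a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(4), 2)}

def meet_pairing(p, q):
    # polarization of the Plücker quadric p01 p23 - p02 p13 + p03 p12;
    # it vanishes if and only if the two lines meet
    return (p[(0, 1)] * q[(2, 3)] - p[(0, 2)] * q[(1, 3)] + p[(0, 3)] * q[(1, 2)]
          + p[(2, 3)] * q[(0, 1)] - p[(1, 3)] * q[(0, 2)] + p[(1, 2)] * q[(0, 3)])

# three lines through the common point x = (1:1:1:1): a point of V_3
x = (1, 1, 1, 1)
L = [plucker(x, b) for b in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]]
assert all(meet_pairing(L[i], L[j]) == 0 for i in range(3) for j in range(3))

# two skew lines: the pairing is nonzero, so this pair is not in V_2
L1 = plucker((1, 0, 0, 0), (0, 1, 0, 0))
L2 = plucker((0, 0, 1, 0), (0, 0, 0, 1))
assert meet_pairing(L1, L2) != 0
```

Note that meet_pairing(p, p) = 0 for the coordinates of any actual line, which is just the Plücker quadric relation defining Gr(1, P 3 ) inside P 5 .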
The multidegree of a variety X embedded into a product of projective spaces P a 1 × · · · × P a n is a homogeneous polynomial whose term q z r 1 1 · · · z r n n indicates that there are q intersection points when X is intersected with the product H 1 × · · · × H n ⊂ P a 1 × · · · × P a n of general linear subspaces, where dim(H i ) = r i . The degree of this polynomial equals the codimension of X in P a 1 × · · · × P a n . An equivalent definition of the multidegree of X is its cohomology class in the cohomology ring of P a 1 ×· · ·× P a n . The built-in command multidegree in the software Macaulay2 [4] computes the multidegree of X from its defining ideal. We refer to the book [9, §8.5] for more details on multidegrees.
The main theorem of this paper computes the multidegree of the multi-image variety:
Theorem 2.4. The multidegree of the concurrent lines variety V n in (P 5 ) n equals
(z 1 z 2 · · · z n ) 3 4 ∑ (i, j), i ≠ j z i −2 z j −1 + 8 ∑ {i, j, k} z i −1 z j −1 z k −1 . (2)
Let (α i , β i ) be the bidegree of C i for i = 1, . . . , n. The multidegree of (C 1 × · · · ×C n ) ∩ V n in (P 5 ) n equals
(α 1 α 2 · · · α n )(z 1 z 2 · · · z n ) 5 ∑ (i, j), i ≠ j (α i + β i )(α j + β j ) / (α i α j ) z i −2 z j −1 + ∑ {i, j, k} (α i + β i )(α j + β j )(α k + β k ) / (α i α j α k ) z i −1 z j −1 z k −1 ,
where we distribute accordingly whenever α i = 0.
In particular, the multidegree of the multi-image variety of (C 1 , . . . ,C n ) where the bidegree of C i is (1, β i ) equals
(z 1 z 2 · · · z n ) 5 ∑ (i, j), i ≠ j (1 + β i )(1 + β j ) z i −2 z j −1 + ∑ {i, j, k} (1 + β i )(1 + β j )(1 + β k ) z i −1 z j −1 z k −1 .
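As a sanity check on Formula (2) (our own check, not part of the paper), the following Python snippet expands the multidegree of V 3 into monomials, stored as exponent tuples. Every monomial has total degree 6, matching the codimension 15 − 9 = 6 of the 9-dimensional V 3 inside (P 5 ) 3 .

```python
from itertools import permutations, combinations
from collections import Counter

def multidegree_Vn(n):
    # expand Formula (2); a monomial z_1^{e_1} ... z_n^{e_n} is the key (e_1, ..., e_n)
    poly = Counter()
    for i, j in permutations(range(n), 2):      # ordered pairs with i != j
        e = [3] * n
        e[i] -= 2
        e[j] -= 1
        poly[tuple(e)] += 4
    for triple in combinations(range(n), 3):    # unordered triples {i, j, k}
        e = [3] * n
        for i in triple:
            e[i] -= 1
        poly[tuple(e)] += 8
    return poly

p3 = multidegree_Vn(3)
# every monomial has total degree 6 = codim of V_3 inside (P^5)^3
assert all(sum(e) == 6 for e in p3)
# the balanced term z1^2 z2^2 z3^2 comes from the single triple {1, 2, 3}
assert p3[(2, 2, 2)] == 8
# the six ordered pairs give coefficient-4 terms with exponent multiset {1, 2, 3}
assert sum(c for e, c in p3.items() if sorted(e) == [1, 2, 3]) == 24
```

Comparing with the last displayed formula of Theorem 2.4, one also sees that for n two-slit cameras (all β i = 1) the coefficients (1 + β i )(1 + β j ) = 4 and (1 + β i )(1 + β j )(1 + β k ) = 8 reproduce exactly the coefficients of Formula (2).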
Remark 2.5. Ponce-Sturmfels-Trager [11, equation (11)] had conjectured Formula (2) for the multidegree of V n , based on experimental evidence from Macaulay2. To prove this theorem, we first use Schubert calculus to obtain the cohomology classes of V n and the multi-image variety in the cohomology ring of Gr(1, P 3 ) n ; see Theorems 4.2 and 4.5. Using this formula we then describe the multidegrees of these varieties and prove Theorem 2.4. In the next section we introduce our notation for Schubert varieties and review the part of Schubert calculus that we need.
Schubert varieties
In this section we review Schubert varieties in Gr(k, P d ), keeping Gr(1, P 3 ) as the main example. For further details we recommend the book [3]. Fix a coordinate system for P d . Let P (i,i+1,..., j) denote the coordinate subspace of P d spanned by the coordinates (i, i + 1, . . . , j) and consider the standard flag
E • := P (0) ⊂ P (0,1) ⊂ · · · ⊂ P (0,1,...,d) = P d . The Schubert variety in Gr(k, P d ) corresponding to the subset { j 1 , . . . , j k+1 } ⊂ {1, 2, . . . , d + 1} is

X { j 1 ,..., j k+1 } := { p ∈ Gr(k, P d ) : dim(p ∩ P (0,1,..., j l −1) ) ≥ l − 1 for l = 1, . . . , k + 1 }. (3)

Example 3.1. The (d+1 choose k+1) = (4 choose 2) = 6 Schubert varieties in Gr(1, P 3 ) are X {1,2} = {P (0,1) }, X {1,3} = {L ∈ Gr(1, P 3 ) : P (0) ⊂ L ⊂ P (0,1,2) }, X {1,4} = {L ∈ Gr(1, P 3 ) : P (0) ⊂ L}, X {2,3} = {L ∈ Gr(1, P 3 ) : L ⊂ P (0,1,2) }, X {2,4} = {L ∈ Gr(1, P 3 ) : dim(L ∩ P (0,1) ) ≥ 1}, and X {3,4} = Gr(1, P 3 ).
Consider the Schubert cells defined by replacing ≥ by = in Equation (3). Since Gr(1, P 3 ) is a disjoint union of the Schubert cells, and they are each contractible, the classes [X J ] for J ⊂ {1, 2, . . . , d + 1} form a basis for the cohomology ring of Gr(k, P d ). The ring operation is the cup product
[X I ] [X J ] := [X I (E op • ) ∩ X J ], where X I (E op • ) is defined by using P (d− j l +1,d− j l +2,...,d) instead of P (0,1,..., j l −1) in Equation (3). (Unlike X I , this variety intersects X J transversely, while having the same cohomology class as X I .) One also gets a basis for the cohomology ring of products of Grassmannians via the Künneth isomorphism.
Example 3.2. Note that [X J ] [X {1,2} ] = 0 for any J ≠ {3, 4}, since P (2,3) ∉ X J . Any line L containing P (0) is not contained in P (1,2,3) , and therefore [X {1,4} ] [X {2,3} ] = 0. We have that [X {1,4} ] [X {1,4} ] = [X {1,2} ] since there is a unique line containing the points P (0) and P (3) . On general Grassmannians, computing [X I ] [X J ] can be done using the "Pieri rule" for special classes, see Equation (4), and the "Littlewood-Richardson rule" [8] in general.
Schubert varieties stratify Gr(k, P d ). The poset of their inclusions is most easily described when Schubert varieties are indexed using partitions. A partition of n into k + 1 parts is a list λ = (λ 1 ≥ λ 2 ≥ · · · ≥ λ k+1 > 0) such that n = ∑ i λ i . There is a bijection between (k + 1)-subsets of [d + 1] and partitions λ with at most k + 1 parts such that λ 1 ≤ d − k which is given by
j = { j 1 , . . . , j k+1 } ↔ λ = (d − k + 1 − j 1 , d − k + 2 − j 2 , . . . , d − j k , d + 1 − j k+1 ).
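As a quick sanity check (our own illustration, not part of the original text; the function name is hypothetical), the bijection can be tabulated for Gr(1, P 3 ) in a few lines:

```python
from itertools import combinations

def subset_to_partition(j, k=1, d=3):
    """The bijection {j_1 < ... < j_{k+1}} <-> lambda stated above:
    lambda_l = d - k + l - j_l for l = 1, ..., k+1."""
    return tuple(d - k + (l + 1) - jl for l, jl in enumerate(sorted(j)))

# All six Schubert indices of Gr(1, P^3), matching Example 3.1:
table = {j: subset_to_partition(j) for j in combinations(range(1, 5), 2)}
```

For instance the point X {1,2} = {P (0,1) } corresponds to λ = (2, 2), and X {3,4} = Gr(1, P 3 ) to the empty partition.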
Partitions can be visualized in the following way. Given λ 1 ≥ λ 2 ≥ . . . ≥ λ k+1 , we draw a figure made up of squares sharing edges that has λ 1 squares in the first row, λ 2 squares in the second row starting out right below the beginning of the first row, and so on. This figure is called a Young diagram. See Figure 4 for an example. We say that λ ⊂ λ if λ i ≤ λ i for all i, i.e. if the diagram of λ lies inside the one for λ . From now on we will index Schubert cells and varieties by partitions. With this indexing set we have the following facts:
• dim(X λ ) = (k + 1)(d − k) − ∑ i λ i ,
• X λ ⊃ X µ iff µ ⊃ λ , and
• The Pieri rule: suppose that λ = (λ 1 , 0, . . . , 0) and µ = (µ 1 , µ 2 , . . . , µ k+1 ) is any partition.
Then
[X λ ] [X µ ] = ∑ [X ν ],(4)
where the sum is over all partitions ν = (ν 1 , ν 2 , . . . , ν k+1 ) such that
ν 1 ≤ d − k, µ i ≤ ν i ≤ µ i−1 , and ∑ ν i = ∑ (λ i + µ i ).

[Figure 5, displayed below, shows the containment poset; its vertices are X 2,1 = {L : P (0) ⊂ L ⊂ P (0,1,2) }, X 1,1 = {L : L ⊂ P (0,1,2) }, X 2 = {L : P (0) ⊂ L}, X 1 = {L : L ∩ P (0,1) ≠ ∅}, X ∅ = Gr(1, P 3 ), and X 2,2 = {P (0,1) }.]
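The Pieri rule as stated is easy to implement directly; the following sketch (our own, with a hypothetical helper name) enumerates the allowed partitions ν for Gr(k, P d ) and reproduces the products computed in Example 3.2:

```python
from itertools import product

def pieri(p, mu, k=1, d=3):
    """[X_(p,0,...,0)] . [X_mu] = sum of [X_nu] over nu allowed by the
    Pieri rule: nu_1 <= d-k, mu_i <= nu_i <= mu_{i-1}, sum nu = p + sum mu."""
    mu = tuple(mu) + (0,) * (k + 1 - len(mu))
    ranges = [range(mu[i], (d - k if i == 0 else mu[i - 1]) + 1)
              for i in range(k + 1)]
    return sorted(nu for nu in product(*ranges) if sum(nu) == p + sum(mu))
```

For example pieri(1, (2, 0)) returns [(2, 1)], matching [X 1 ] [X 2 ] = [X 2,1 ], and pieri(2, (1, 1)) returns the empty list, matching [X 2 ] [X 1,1 ] = 0.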
Computing the multidegrees
Let M ⊂ Gr(0, P 3 ) × Gr(1, P 3 ) be the partial flag manifold consisting of pairs (P, L) where P is a point in P 3 and L is a line through P. Such a pair is called a (partial) flag in P 3 . Define M n ∆ to be the subvariety of M n consisting of lists of n flags such that the point P is the same in all of them. Consider the diagonal (P 3 ) n ∆ := {(P 1 , . . . , P n ) : P 1 = · · · = P n };
then M n ∆ is the preimage of (P 3 ) n ∆ under the projection M n → (P 3 ) n induced by
M −→ P 3 , (P, L) −→ P.
For example, M 2 ∆ ⊆ M 2 consists of pairs of flags of the form ((P, L 1 ), (P, L 2 )). The following is straightforward:
The intersection
(M^{i−1} × M^2_∆ × M^{n−i−1}) ∩ (M^i × M^2_∆ × M^{n−i−2})
consists of the lists of flags ((P 1 , L 1 ), . . . , (P n , L n )) such that P i = P i+1 = P i+2 , and this intersection is transverse. We can write M n ∆ as the transverse intersection
M^n_∆ = ⋂_{i=1}^{n−1} M^{i−1} × M^2_∆ × M^{n−i−1} . (6)
From this description we deduce the following theorems, which give the cohomology classes of V n and of the multi-image variety.
[V n ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} ⊗_{i=0}^{n−1} [{L : L ∩ P (v i ,v i +1,...,v i+1 ) ≠ ∅}]. (7)
Moreover, [V n ] can be written as
[V n ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} ⊗_{i=1}^{n} { [X 2 ] if v_i − v_{i−1} = 0 ; [X 1 ] if v_i − v_{i−1} = 1 ; [Gr(1, P 3 )] if v_i − v_{i−1} = 2 }. (8)
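The index set of the sum in Equation (8) is small enough to enumerate by brute force. The snippet below (our own illustration; the names are ours) lists the step sequences (v_i − v_{i−1}) and the corresponding tensor factors:

```python
from itertools import product

def step_sequences(n):
    """All (v_1 - v_0, ..., v_n - v_{n-1}) with v_0 = 0, v_n = 3 and every
    step strictly less than 3 -- the index set of Equation (8)."""
    return [s for s in product((0, 1, 2), repeat=n) if sum(s) == 3]

FACTOR = {0: "[X_2]", 1: "[X_1]", 2: "[Gr(1,P^3)]"}
terms = [" (x) ".join(FACTOR[d] for d in s) for s in step_sequences(3)]
```

For n = 3 this produces the seven terms displayed in the remark that follows.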
Remark 4.3. Equation (7) is more natural than Equation (8), in that (7) is correct in T -equivariant cohomology, whereas (8) is only correct in ordinary cohomology. In ordinary cohomology, for n = 3, we have that

[V 3 ] = [X 2 ] ⊗ [X 1 ] ⊗ [Gr(1, P 3 )] + [X 2 ] ⊗ [Gr(1, P 3 )] ⊗ [X 1 ] + [X 1 ] ⊗ [X 2 ] ⊗ [Gr(1, P 3 )] + [X 1 ] ⊗ [X 1 ] ⊗ [X 1 ] + [X 1 ] ⊗ [Gr(1, P 3 )] ⊗ [X 2 ] + [Gr(1, P 3 )] ⊗ [X 2 ] ⊗ [X 1 ] + [Gr(1, P 3 )] ⊗ [X 1 ] ⊗ [X 2 ].
Proof. From equation (6) we have that
[M n ∆ ] = ∏_{i=1}^{n−1} ( 1 ⊗ · · · ⊗ 1 ⊗ [M 2 ∆ ] ⊗ 1 ⊗ · · · ⊗ 1 ), with the factor [M 2 ∆ ] in positions i and i + 1,
in the cohomology of (Gr(0, P 3 ) × Gr(1, P 3 )) n . Identifying H * ((P 3 ) 2 ) ∼ = H * (P 3 ) ⊗ H * (P 3 ) using the Künneth isomorphism, we have

[(P 3 ) 2 ∆ ] = ∑_{0 ≤ v ≤ 3} [P (0,1,...,v) ] ⊗ [P (v,v+1,...,3) ]
and hence

[(P 3 ) n ∆ ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3} [P (v 0 ,...,v 1 ) ] ⊗ · · · ⊗ [P (v n−1 ,...,v n ) ].
Pulling this back to M n , we get essentially the same formula, with factors of the form [{(P, L) : P ∈ P (v i−1 ,...,v i ) }]. The terms {(P, L) : P ∈ P (i,i+1,..., j) } with j − i < 3 push down to {L : L ∩ P (i,i+1,..., j) ≠ ∅}.
It follows that
[V n ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} [{L : L ∩ P (v 0 ,...,v 1 ) ≠ ∅}] ⊗ · · · ⊗ [{L : L ∩ P (v n−1 ,...,v n ) ≠ ∅}].
Note that for any L ∈ Gr(1, P 3 ) we have that L ∩ P (0,1,2) ≠ ∅ and L ∩ P (1,2,3) ≠ ∅, since every line in P 3 meets every plane. Therefore the terms corresponding to P (0,1,2) and P (1,2,3) push down to [Gr(1, P 3 )].
Theorem 4.5. For i = 1, . . . , n, let C i ⊂ Gr(1, P 3 ) be a general congruence with class
α i [X 2 ] + β i [X 1,1 ] ∈ H 4 (Gr(1, P 3 )).
Then

[(C 1 × · · · × C n ) ∩ V n ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} ⊗_{i=1}^{n} { α_i [X 2,2 ] if v_i = v_{i−1} ; (α_i + β_i )[X 2,1 ] if v_i = v_{i−1} + 1 ; α_i [X 2 ] + β_i [X 1,1 ] if v_i = v_{i−1} + 2 }.

Example 4.6. Let C 1 , C 2 , C 3 be three congruences with bidegrees (α_i , β_i ), i = 1, 2, 3.
Then

[(C 1 × C 2 × C 3 ) ∩ V 3 ] = α 1 [X 2,2 ] ⊗ (α 2 + β 2 )[X 2,1 ] ⊗ (α 3 [X 2 ] + β 3 [X 1,1 ])
+ α 1 [X 2,2 ] ⊗ (α 2 [X 2 ] + β 2 [X 1,1 ]) ⊗ (α 3 + β 3 )[X 2,1 ]
+ (α 1 + β 1 )[X 2,1 ] ⊗ α 2 [X 2,2 ] ⊗ (α 3 [X 2 ] + β 3 [X 1,1 ])
+ (4 other similar terms).
Proof. Let C i ⊂ Gr(1, P 3 ) be a surface of class α i [X 2 ] + β i [X 1,1 ] ∈ H 4 (Gr(1, P 3 )).
By Theorem 4.2,
[(C 1 × · · · × C n ) ∩ V n ] = [V n ] ([C 1 ] ⊗ · · · ⊗ [C n ]) = ∑_{0=v_0 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} ⊗_{i=1}^{n} [{L : L ∩ P (v i−1 ,...,v i ) ≠ ∅}] (α_i [X 2 ] + β_i [X 1,1 ]).
Using the computations from Examples 3.2 and 3.3 we have that:
• When v i = v i−1 , then [{L : L ⊃ P (v i ) }] = [X 2 ], and the factor is [X 2 ] (α i [X 2 ] + β i [X 1,1 ]) = α i [X 2,2 ].
• When v i = v i−1 + 1, then [{L : L ∩ P (v i−1 ,v i ) ≠ ∅}] = [X 1 ], and the factor is [X 1 ] (α i [X 2 ] + β i [X 1,1 ]) = (α i + β i )[X 2,1 ].
• When v i = v i−1 + 2, then [{L : L ∩ P (v i−1 ,v i−1 +1,v i ) ≠ ∅}] = [Gr(1, P 3 )] = 1, giving 1 (α i [X 2 ] + β i [X 1,1 ]) = α i [X 2 ] + β i [X 1,1 ].
The result is
[(C 1 × · · · × C n ) ∩ V n ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} ⊗_{i=1}^{n} { α_i [X 2,2 ] if v_i = v_{i−1} ; (α_i + β_i )[X 2,1 ] if v_i = v_{i−1} + 1 ; α_i [X 2 ] + β_i [X 1,1 ] if v_i = v_{i−1} + 2 }.
Remark 4.7. When α i = 0 we can drop the terms with v i = v i−1 in the equation for [(C 1 × · · · × C n ) ∩ V n ].
Let us now describe the classes ι * [X λ ] in H * (P 5 ) under the Plücker embedding ι : Gr(1, P 3 ) → P 5 . To do so we describe their equations inside P 5 . The degrees of general Schubert varieties were computed by Schubert in [12].
A line L ∈ Gr(1, P 3 ) passing through the points (x 0 : x 1 : x 2 : x 3 ), (y 0 : y 1 : y 2 : y 3 ) ∈ P 3 is uniquely determined by the 2 × 2 minors of the 2 × 4 matrix with rows (x 0 : x 1 : x 2 : x 3 ) and (y 0 : y 1 : y 2 : y 3 ). Let p i, j denote the minor of the columns i and j. The Plücker embedding associates the vector (p 1,2 : p 1,3 : . . . : p 3,4 ) ∈ P 5 of the 2 × 2 minors to each line in Gr(1, P 3 ).
• Gr(1, P 3 ) is defined by the Plücker relation p 1,2 p 3,4 − p 1,3 p 2,4 + p 2,3 p 1,4 = 0.
Therefore its degree is 2 and its codimension 1, so ι * [Gr(1, P 3 )] = 2z.
• The condition L ∩ P (0,1) ≠ ∅ is equivalent to p 3,4 = 0. So X 1 is the complete intersection {p 3,4 = 0} ∩ {p 1,2 p 3,4 − p 1,3 p 2,4 + p 2,3 p 1,4 = 0}. Therefore it has degree 1 · 2 = 2 and codimension 2, and so ι * [X 1 ] = 2z^2.
• Similarly, the condition L ⊂ P (0,1,2) is equivalent to p 1,4 = p 2,4 = p 3,4 = 0.
Therefore X 1,1 is the intersection of the Plücker quadric with {p 1,4 = p 2,4 = p 3,4 = 0}; this and the analogous computations for X 2 , X 2,1 , and X 2,2 are displayed below. We summarize these computations in Table 1.

Table 1: The classes ι * [X λ ] in H * (P 5 ).
X λ       | Gr(1, P 3 ) | X 1  | X 1,1 | X 2 | X 2,1 | X 2,2
ι * [X λ ] | 2z          | 2z^2 | z^3   | z^3 | z^4   | z^5
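These coordinate computations are easy to verify numerically. The following snippet (our own check, not from the original; the function names are ours) computes the Plücker coordinates of a line and confirms the Plücker relation, as well as the fact that p 3,4 = 0 exactly encodes a line meeting P (0,1):

```python
from itertools import combinations

def pluecker(x, y):
    """The 2x2 minors p_{i,j} of the 2x4 matrix with rows x and y."""
    return {(i + 1, j + 1): x[i] * y[j] - x[j] * y[i]
            for i, j in combinations(range(4), 2)}

def pluecker_relation(p):
    """Left-hand side of p_{1,2}p_{3,4} - p_{1,3}p_{2,4} + p_{2,3}p_{1,4} = 0."""
    return p[(1, 2)] * p[(3, 4)] - p[(1, 3)] * p[(2, 4)] + p[(2, 3)] * p[(1, 4)]

p = pluecker((1, 2, 3, 4), (5, 6, 7, 9))      # a generic-looking line
q = pluecker((1, 0, 0, 0), (0, 0, 1, 1))      # a line through P^(0), so meeting P^(0,1)
```

The relation vanishes identically for minors of any 2 × 4 matrix, and q has p 3,4 = 0 as predicted.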
Remark 4.8. The small case Gr(1, P 3 ) relevant for this paper is convenient but misleading: already in Gr(1, P 4 ) one meets Schubert varieties that are not complete intersections in the Plücker embedding, making it less straightforward to compute their degrees.
Proof (of Theorem 2.4). We compute the multidegrees of V n and (C 1 × · · · ×C n ) ∩V n by using Table 1 to specialize
[X 2,2 ] → z_i^5 , [X 2,1 ] → z_i^4 , [X 1,1 ], [X 2 ] → z_i^3 , [X 1 ] → 2z_i^2 , [Gr(1, P 3 )] → 2z_i
in the i-th component of Equation (8). For V n we obtain In the first case, we have a term of the form 8(z 1 z 2 · · · z n ) 3 z −1 i z −1 j z −1 k . In the second case we have a term of the form 4(z 1 z 2 · · · z n ) 3 z −2 j z −1 k . Similarly, for (C 1 × · · ·C n ) ∩V n we obtain
[V n ] → ∑ 0=v 0 ≤v 1 ≤...≤vn=3 v j+1 −v j <3 n ∏ i=1 z 3 i if v i − v i−1 = 0 2z 2 i if v i − v i−1 = 1 2z i if v i − v i−1 = 2 (9) Note that given 0 = v 0 ≤ v 1 ≤ . . . ≤ v n−1 ≤ v n = 3 such that v i − v i−1 ∈ {0, 1,[(C 1 × · · · ×C n ) ∩V n ] = ∑ 0=v 0 ≤v 1 ≤...≤vn=3 v j+1 −v j <3 n ∏ i=1 z 5−(v i −v i−1 ) i α i if v i = v i−1 α i + β i if v i > v i−1 .(10)
The formula in the theorem is deduced similarly as in the [V n ] case.
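For small n, Equations (9) and (10) can be expanded by brute force. The script below is our own sketch (hypothetical function names); its assertions mirror the coefficients 8 and 4 derived in the proof, the pinhole case of bidegree (1, 0), and the vanishing described in Remark 4.7:

```python
from itertools import product

def multidegree_Vn(n):
    """Expand Equation (9) as {exponent tuple of (z_1,...,z_n): coefficient}."""
    CASES = {0: (1, 3), 1: (2, 2), 2: (2, 1)}   # step -> (coefficient, z-power)
    poly = {}
    for s in product((0, 1, 2), repeat=n):
        if sum(s) != 3:
            continue
        coeff, expo = 1, []
        for d in s:
            c, e = CASES[d]
            coeff *= c
            expo.append(e)
        key = tuple(expo)
        poly[key] = poly.get(key, 0) + coeff
    return poly

def multidegree_cap(n, alpha, beta):
    """Expand Equation (10) for congruences with bidegrees (alpha_i, beta_i)."""
    poly = {}
    for s in product((0, 1, 2), repeat=n):
        if sum(s) != 3:
            continue
        coeff = 1
        for a, b, d in zip(alpha, beta, s):
            coeff *= a if d == 0 else a + b
        if coeff:
            key = tuple(5 - d for d in s)
            poly[key] = poly.get(key, 0) + coeff
    return poly
```

Every monomial of multidegree_Vn(3) has total degree 6, as the codimension count predicts.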
K-theory
We conclude this paper by computing the K-class of the concurrent lines variety V n . Our reference for K-polynomials is [9, §8.5], but we include some words of motivation here.
A K-class example, and the definition
Consider the diagonal (P 1 ) ∆ ⊂ (P 1 ) 2 , the space of pairs of equal points; its class is computed by the degeneration displayed below. The K-classes we're about to define do not simply add over a union: rather, they obey a sort of inclusion-exclusion formula
[O A∪B ] = [O A ] + [O B ] − [O A∩B ]
(the Os to be defined below). Put another way, K-theory cares about the overcounting of the intersection, even though its dimension is smaller than that of the components. In this sense, homology just sees the top-dimensional part of a K-class. Very precisely, there is a filtration on K(M) and an isomorphism gr K(M) ⊗ Q ∼ = H * (M; Q).
We only need to define the K-theory of (P a ) b , which we do now. Begin with the abelian group generated by (isomorphism classes of) finitely generated Z b -graded modules over the polynomial ring in (a + 1)b variables x i, j , where x i, j acts homogeneously with weight e j . We impose two types of relations on this group, the first coming from exact sequences of multigraded modules, and the second for each j, coming from modules annihilated by x 0, j , . . . , x a, j . The resulting set of equivalence classes, "K-classes", we call K 0 ((P a ) b ). Actually Grothendieck derived the name "K-theory" from the German word "klasse", so K-class is redundant.
The multigraded modules just described define sheaves on (P a ) b , i.e. a place for local functions on (P a ) b to act. (By Liouville's theorem, the only global functions are constant.) In particular, for X ⊂ M a closed subscheme (defined by some multigraded ideal), we know how to multiply local functions on X by local functions on M, giving us a sheaf we call O X with corresponding K-class [O X ]. In the (P 1 ) ∆ example above the two K-equivalent modules are just the quotients by the ideals x 0,1 x 1,2 − x 0,2 x 1,1 , x 0,2 x 1,1 .
Defined as above, when M is smooth the abelian group K 0 (M) has a somewhat non-obvious product, but the only part of it we will need is
[O X ][O Y ] = [O X∩Y ]
if the intersection is transverse, for example if X, Y are smooth and codim(X ∩ Y ) = codim X + codim Y . In P a , any two hyperplanes P a−1 define the same class H, hence H^{a+1} = 0, and it turns out that K 0 (P a ) ∼ = Z[H]/⟨H^{a+1}⟩. While it will be nice that our methods serve to compute this finer invariant of V n , our motivation for introducing K-classes is more concrete. For X ⊆ P a a subscheme, the data of [O X ] is exactly the same information as the Hilbert polynomial of X, which one can compute from a Gröbner basis for X's ideal. In particular, a basis (g i ) for X's ideal is Gröbner if and only if the scheme defined by {the leading monomials of the g i } has the same Hilbert function as X, whose stable behavior is captured by the K-class [X].
Let us return to the example of (P 1 × {0}) ∪ (∞,0) ({∞} × P 1 ), whose K-class we can now write as H ⊗ 1 + 1 ⊗ H − H ⊗ H from the inclusion-exclusion formula. We will prefer to write H ⊗ (1 − H) + 1 ⊗ H. We again have, and are using here, a Künneth isomorphism K 0 ((P a ) b ) ∼ = b Z[H]/ H a+1 .
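The identity H ⊗ 1 + 1 ⊗ H − H ⊗ H = H ⊗ (1 − H) + 1 ⊗ H can be checked with bare-hands polynomial arithmetic in (Z[H]/⟨H^2⟩) ⊗ (Z[H]/⟨H^2⟩); the representation below is our own illustration:

```python
# Elements of K((P^1)^2) are recorded as 2x2 integer matrices c[i][j],
# the coefficient of H^i (x) H^j; H^2 = 0 in each tensor factor.
def add(a, b): return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]
def sub(a, b): return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def mul(a, b):
    out = [[0, 0], [0, 0]]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    if i + k < 2 and j + l < 2:   # truncate H^2 -> 0
                        out[i + k][j + l] += a[i][j] * b[k][l]
    return out

ONE = [[1, 0], [0, 0]]
H_LEFT = [[0, 0], [1, 0]]    # H (x) 1
H_RIGHT = [[0, 1], [0, 0]]   # 1 (x) H

inclusion_exclusion = sub(add(H_LEFT, H_RIGHT), mul(H_LEFT, H_RIGHT))
regrouped = add(mul(H_LEFT, sub(ONE, H_RIGHT)), H_RIGHT)
```

The two expressions agree coefficient by coefficient, as the Künneth description predicts.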
The K-class of V n
Using a degeneration much like the one we had for (P 1 ) ∆ , we find the K-class [(P 3 ) ∆ ] K ∈ K((P 3 ) 2 ) to be

∑_{0 ≤ v ≤ 3} [O_{P (0,··· ,v) }] K ⊗ ( [O_{P (v,...,3) }] K − [O_{P (v+1,...,3) }] K ).

From this formula we can follow arguments similar to those of §4 to compute the K-class of V n in K((P 5 ) n ). Letting H denote [O_{P 4 }] K ∈ K(P 5 ), the resulting formula is
[V n ] K = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3, v_{j+1}−v_j < 3} ⊗_{i=1}^{n} { −H^3 ⊗ (2H − H^2 ) if v_i − v_{i−1} = 0 ; (2H^2 − H^3 ) ⊗ (2H − 3H^2 + H^3 ) if v_i − v_{i−1} = 1 ; (2H − H^2 ) ⊗ (2H^2 − 2H^3 ) if v_i − v_{i−1} = 2 }.
This is an analogue of the Hilbert polynomial calculation of [1, Theorem 3.6]. There are two important differences: theirs concerns a rational map P 3 ⇢ (P 2 ) n , whereas ours is about a rational map P 3 ⇢ Gr(1, P 3 ) n → (P 5 ) n . Also, their class in H * ((P 2 ) n ) is multiplicity-free in the sense of [2], which is what shows that every degeneration of their variety will be reduced (and Cohen-Macaulay), i.e. that they can have a universal Gröbner basis with squarefree initial terms [1, §2]. Our class in H * ((P 5 ) n ) is not multiplicity-free (thanks to those coefficients 2 from Theorem 4.2).
However, Equation (8) shows that V n is multiplicity-free in the sense of [2] when considered as a subvariety of Gr(1, P 3 ) n , and as such every degeneration of it inside that ambient space (not the larger space (P 5 ) n ) is reduced and Cohen-Macaulay.
Fig. 1. A pinhole camera.
[C] = α [L : L contains a fixed point] + β [L : L lies in a fixed plane].
Fig. 2. A two-slit camera.
Fig. 3. A non-central panoramic camera picks one of the two lines intersecting the circle X and the line L.
Remark 2.6. Aholt-Sturmfels-Thomas [1, Corollary 3.5] gives an equation for the multidegree of the multi-image variety for the congruences with bidegree (1, 0). This equation coincides with the equation of Theorem 2.4.
Fig. 4. The Young diagram of the partition λ = (5, 3, 3, 2).
Example 3.3. The Schubert varieties in Gr(1, P 3 ) are ordered by containment in the poset in Figure 5. This poset is ranked by the dimensions of the Schubert varieties.
X 2,2 = {P (0,1) }.
Fig. 5. The containment poset of the Schubert varieties in Gr(1, P 3 ).
Proposition 4.1. The concurrent lines variety V n is the image of M n ∆ under the projection p : M n → Gr(1, P 3 ) n induced by M ⊆ Gr(0, P 3 ) × Gr(1, P 3 ) → Gr(1, P 3 ).
Theorem 4.2. The class [V n ] of the concurrent lines variety in the cohomology ring of Gr(1, P 3 ) n is
Example 4.4. Consider n = 3. In T -equivariant cohomology we have that

[V 3 ] = [{L : L ∩ P (0) ≠ ∅}] ⊗ [{L : L ∩ P (0,1) ≠ ∅}] ⊗ [{L : L ∩ P (1,2,3) ≠ ∅}]
+ [{L : L ∩ P (0) ≠ ∅}] ⊗ [{L : L ∩ P (0,1,2) ≠ ∅}] ⊗ [{L : L ∩ P (2,3) ≠ ∅}]
+ [{L : L ∩ P (0,1) ≠ ∅}] ⊗ [{L : L ∩ P (1) ≠ ∅}] ⊗ [{L : L ∩ P (1,2,3) ≠ ∅}]
+ (4 other terms).
[M n ∆ ] = ∑_{0=v_0 ≤ v_1 ≤ ... ≤ v_n = 3} [{(P, L) : P ∈ P (v 0 ,...,v 1 ) }] ⊗ · · · ⊗ [{(P, L) : P ∈ P (v n−1 ,...,v n ) }].

Pushing forward this class under the projection map p : M n → Gr(1, P 3 ) n , we get p * [M n ∆ ] = [V n ]. Not all terms of the [M n ∆ ] formula above survive when we project M n ∆ to Gr(1, P 3 ) n by forgetting P:
• The image of {(P, L) : P ∈ P (0,1,2,3) } under p equals Gr(1, P 3 ). Since for v i − v i−1 = 3 the dimension of {(P, L) : P ∈ P (0,1,2,3) } drops under p, any factor [{(P, L) : P ∈ P (0,1,2,3) }] pushes down to 0.
• X 1,1 is the intersection {p 1,2 p 3,4 − p 1,3 p 2,4 + p 2,3 p 1,4 = 0} ∩ {p 1,4 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}, which becomes the complete intersection {p 1,4 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}. Therefore it has degree 1 · 1 · 1 = 1 and codimension 3, and so ι * [X 1,1 ] = z^3.
• X 2 is the intersection {p 1,2 p 3,4 − p 1,3 p 2,4 + p 2,3 p 1,4 = 0} ∩ {p 2,3 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}, which becomes the complete intersection {p 2,3 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}. As just above, ι * [X 2 ] = z^3.
• X 2,1 is the intersection {p 1,2 p 3,4 − p 1,3 p 2,4 + p 2,3 p 1,4 = 0} ∩ {p 1,4 = 0} ∩ {p 2,3 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}, so the complete intersection {p 1,4 = 0} ∩ {p 2,3 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}. Therefore it has degree 1 and codimension 4, and so ι * [X 2,1 ] = z^4.
• X 2,2 is the complete intersection {p 1,3 = 0} ∩ {p 1,4 = 0} ∩ {p 2,3 = 0} ∩ {p 2,4 = 0} ∩ {p 3,4 = 0}. Therefore it has degree 1, codimension 5, and ι * [X 2,2 ] = z^5.
Given 0 = v 0 ≤ v 1 ≤ . . . ≤ v n = 3 with v i − v i−1 ∈ {0, 1, 2}, there are exactly two different possibilities: either (1) v i − v i−1 = 0 for all but three indices at which v i − v i−1 = 1, or (2) v i − v i−1 = 0 for all but two indices j, k at which v j − v j−1 = 2 and v k − v k−1 = 1.
{([x 0,1 : x 1,1 ], [x 0,2 : x 1,2 ]) : x 0,1 /x 1,1 = x 0,2 /x 1,2 , or really x 0,1 x 1,2 = x 0,2 x 1,1 }. This equation degenerates to 0 = x 0,2 x 1,1 (preserving the homology class), which vanishes on the union (P 1 × {0}) ∪ (∞,0) ({∞} × P 1 ). Therefore the homology class [(P 1 ) ∆ ] is the sum of the classes of the two components. (We used the analogous (P 3 ) 2 calculation in the proof of Theorem 4.2.) If A, B are subvarieties (of some manifold) of the same dimension, then the equation [A ∪ B] = [A] + [B] we just used in homology doesn't hold for the "K-classes"
[(P 3 ) ∆ ] K = ∑_{0 ≤ v ≤ 3} [O_{P (0,··· ,v) }] K ⊗ ( [O_{P (v,...,3) }] K − [O_{P (v+1,...,3) }] K ).
By the Pieri rule, we have that [X 1 ] [X 2 ] = [X 2,1 ] and [X 1 ] [X 1,1 ] = [X 2,1 ].

Remark 3.4. We can rewrite Equation (1) for the cohomology class of a congruence as

[C] = α[X 2 ] + β [X 1,1 ].
Remark 3.5. In the article [6, §6], Schubert varieties and the intersection theory of Gr(1, P 3 ) are also discussed.
References

1. Chris Aholt, Bernd Sturmfels, and Rekha Thomas. A Hilbert scheme in computer vision. Canad. J. Math., 65(5):961-988, 2013.
2. Michel Brion. Multiplicity-free subvarieties of flag varieties. Contemporary Math. 331, 13-23, Amer. Math. Soc., Providence, 2003.
3. William Fulton. Young Tableaux, volume 35 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 1997.
4. Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/.
5. Charles M. Jessop. A Treatise on the Line Complex. Cambridge University Press, 1903 (reprinted by the American Mathematical Society, 2001).
6. Kathlén Kohn, Bernt Ivar Utstøl Nødland, and Paolo Tripoli. Secants, bitangents, and their congruences, op. cit.
7. Ernst Kummer. Über die algebraischen Strahlensysteme, insbesondere über die der ersten und zweiten Ordnung. Abh. K. Preuss. Akad. Wiss. Berlin (1866), 1-120.
8. Dudley E. Littlewood and Archibald R. Richardson. Group characters and algebra. Philos. Trans. Royal Soc. London, 233:99-141, 1934.
9. Ezra Miller and Bernd Sturmfels. Combinatorial Commutative Algebra, volume 227 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2005.
10. Evan D. Nash, Ata Firat Pir, Frank Sottile, and Li Ying. The convex hull of two circles in R^3, op. cit.
11. Jean Ponce, Bernd Sturmfels, and Matthew Trager. Congruences and concurrent lines in multi-view geometry, arXiv:1608.05924v1.
12. Hermann Schubert. Anzahl-Bestimmungen für lineare Räume beliebiger Dimension. Acta Math., 8(1):97-118, 1886.
13. Peter Sturm, Srikumar Ramalingam, Jean-Philippe Tardif, Simone Gasparini, and João Barreto. Camera models and fundamental concepts used in geometric computer vision. Foundations and Trends in Computer Graphics and Vision 6 (2011), 1-183.
14. Matthew Trager, Martial Hebert, and Jean Ponce. The joint image handbook. Proceedings of the IEEE International Conference on Computer Vision, 2015.
arXiv:1303.3197v2 (doi:10.1103/physrevd.88.083513)
Minimal parameterizations for modified gravity
3 Sep 2013
Ali Narimani
Department of Physics & Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada

Douglas Scott
Department of Physics & Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada
The increasing precision of cosmological data provides us with an opportunity to test general relativity (GR) on the largest accessible scales. Parameterizing modified gravity models facilitates the systematic testing of the predictions of GR, and gives a framework for detecting possible deviations from it. Several different parameterizations have already been suggested, some linked to classifications of theories, and others more empirically motivated. Here we describe a particular new approach which casts modifications to gravity through two free functions of time and scale, which are directly linked to the field equations, but also easy to confront with observational data. We compare our approach with other existing methods of parameterizing modified gravity, specifically the parameterized post-Friedmann approach and the older method using the parameter set {µ, γ}. We explain the connection between our parameters and the physics that is most important for generating cosmic microwave background anisotropies. Some qualitative features of this new parameterization, and therefore modifications to the gravitational equations of motion, are illustrated in a toy model, where the two functions are simply assumed to be constant parameters.
INTRODUCTION
General Relativity (GR) has been confronted with many theoretical and experimental tests since its birth in 1915. From the gravitational lensing experiments in 1919 [1] up to the extensive studies and tests in the 1960s and 1970s [2][3][4][5][6], the theory has been confirmed observationally and theoretically bolstered in many different respects. Weak field gravity on experimentally accessible scales has been so well tested that there are only two remaining directions in which we might find modifications to GR: strong field gravity, which may be probed by studying black holes; and gravity at large scales and early times, which is the cosmological arena.
Cosmology has challenged GR with two, yet to be fully understood discoveries: dark matter and dark energy [7,8]. Along with these two phenomena, the lack of renormalizability in GR [9] and the apparently exponential expansion in the very early Universe [10] are usually taken as signs for the incompleteness of the theory at high energies. Due to these shortcomings in GR the study of modified gravity has become a broad field. Scalar-tensor theories [11,12], f (R) modifications [13,14], Horava-Lifshitz theory [15], multidimensional theories of gravity [16,17], and many other suggestions have been made in the hope of finding, or at least deriving, some hints for, a fully consistent theory that can successfully explain the observations and satisfy the theoretical expectations. (Ref. [9] has an extensive review).
The new data coming from various experiments, such as the WMAP and Planck satellite measurements of the cosmic microwave background (CMB) anisotropies [18], and the WiggleZ [19] or Baryon Oscillation Spectroscopic Survey [20] measurements of the matter power spectrum, provide us with an opportunity to test specific modified theories of gravity. However, since there are many different modified theories, all with their own sets of parameters, there has recently been some effort to come up with a way to describe generic modified theories using only a few parameters, and to try to constrain those parameters with general theoretical arguments and by direct comparison with cosmological data.
Parametrizing modified theories of gravity with a small number of parameters has the benefit of tracking the effects of modified gravity on a number of different observables consistently and systematically, rather than considering the consistency of each single observable with GR predictions. This can, at least in principle, lead to constraints on the theory space of modified gravity models.
The parametrized post-Friedmann (PPF) approach, as described in Ref. [21], is an effective way to parameterize many of the modified theories of gravity. However, it is not really feasible to constrain its dozen or so additional free functions, even with the power of Markov chain codes such as CosmoMC [22]; there are just too many degrees of freedom to provide useful constraints in the general case. In this paper we will describe a somewhat different way to parametrize modified theories of gravity, in which we try to retain only a small number of parameters, which we then constrain using WMAP 9-year [23] and SPT data [24].
In the next section we will describe the formulation of this new parametrization, and will show its connection with PPF and other approaches in Sec. 3. In Sec. 4 we will discuss the results of a numerical analysis using CAMB [25] and CosmoMC, and we will conclude the paper in Sec. 5 with a brief discussion.
MODIFIED GRAVITY FORMULATION
There are two common strategies for modifying gravity. One can start from the point of view of the Lagrangian or from the equations of motion. The Lagrangian seems like the more obvious path for writing down specific new theories, where one imagines retaining some desired symmetries while breaking some others. However, the equations of motion provide an easier way in practice to parametrize a general theory of modified gravity, especially in the case of first order perturbations in a cosmological context.
The evolution of the cosmological background has been well tested at different redshift slices, specifically at Big Bang nucleosynthesis and at recombination through the CMB anisotropies. It therefore seems reasonable to assume that the background evolution is not affected by the gravity modification, with the only background level effect being a possible explanation for a fluid behaving like dark energy.
The linearized and modified equations of motion for gravity can be written in the following form in a covariant theory:
δG_{µν} = 8πG δT_{µν} + δU_{µν} . (1)
Here, δG_{µν} is the perturbed Einstein tensor around a background metric, δT_{µν} is the first order perturbation in the energy-momentum tensor, and δU_{µν} is the modification tensor, sourced by any term that is not already embedded in GR. Since we will be using CAMB for numerical calculations, we will choose the synchronous gauge from now on, and focus only on the spin-0 (scalar) perturbations. This will make it much more straightforward to adapt the relevant perturbed Boltzmann equations. The metric in the synchronous gauge is written as
ds^2 = a^2(τ) [ −dτ^2 + (δ_{ij} + h_{ij}) dx^i dx^j ],
h_{ij} = ∫ d^3k e^{i k·x} { k̂_i k̂_j h(k, τ) + (k̂_i k̂_j − (1/3) δ_{ij}) 6η(k, τ) } ,  k = k k̂ ,  (2)
where k is the wave vector. Putting this metric into Eq. 1 results in the following four equations [26]:
k^2 η − (1/2)(ȧ/a) ḣ = −4πG a^2 δρ + k^2 A(k, τ) ;  (3)
k η̇ = 4πG a^2 (ρ̄ + p̄) V + k^2 B(k, τ) ;  (4)
ḧ + 2(ȧ/a) ḣ − 2k^2 η = −24πG a^2 δP + k^2 C(k, τ) ;  (5)
ḧ + 6η̈ + 2(ȧ/a)(ḣ + 6η̇) − 2k^2 η = −24πG a^2 (ρ̄ + p̄) Σ + k^2 D(k, τ) .  (6)
Here we have used the following definitions:
δT^0_0 = −δρ ,   a^2 δU^0_0 = k^2 A(k, τ) ,
δT^0_i = (ρ̄ + p̄) V_i ,   a^2 δU^0_i = k^2 B(k, τ) ,
δT^i_i = 3 δP ,   a^2 δU^i_i = k^2 C(k, τ) ,
D_{ij} δT^{ij} = (ρ̄ + p̄) Σ ,   a^2 D_{ij} δU^{ij} = k^2 D(k, τ) ,  (7)
with ρ̄ and p̄ being the background energy density and pressure, respectively, and a dot representing a derivative with respect to τ. The factors of k are chosen to make the modifying functions, {A, B, C, D}, dimensionless. The quantity D_{ij} is defined as k̂_i k̂_j − (1/3) δ_{ij}. The parameterization described here has a very close connection in practice with the PPF method explained in Ref. [21]. The most important differences are that we have grouped a number of separate parameters into a single parameter, and have used the synchronous gauge in Eqs. 3 to 6.
In general, Einstein's equations provide six independent equations. For the case of first order perturbations in cosmology, two of these six equations are used for the two spin-2 (tensor) degrees of freedom, two of the equations are used for the spin-1 (vector) variables and only two independent equations are left for the spin-0 (scalar) degrees of freedom. This means that Eqs. 3 to 6 are not independent and one has to impose two consistency relations on this set of four equations. These consistency relations of course come from the energy-momentum conservation equation, ∇ µ (T µ ν + U µ ν ) = 0. Assuming that energy conservation holds independently for the conventional fluids, ∇ µ T µ ν = 0, (see Ref. [21] for the strengths and weaknesses of such an assumption) one then obtains the following two consistency equations:
\[
\frac{2\dot{A}}{\mathcal{H}} + 2A - \frac{2kB}{\mathcal{H}} + C = 0; \tag{8}
\]
\[
6\dot{B} + kC + 12\mathcal{H}B - kD = 0. \tag{9}
\]
Here we have defined $\mathcal{H} \equiv \dot{a}/a$ and dropped the arguments of the functions $A$ to $D$.
Eqs. 3 to 6, together with Eqs. 8 and 9, show that two general functions of space and time would be enough to parametrize a wide range of modified theories of gravity. This approach of course does not provide a test for any specific modified theory. However, given the current prejudice that GR is the true theory of gravity at low energies (e.g. see Ref. [27] for a discussion), the main question is whether or not cosmological data can distinguish between GR and any other generic theory of modified gravity. Clearly, if we found evidence for deviations from GR, then we would have a parametric way of constraining the space of allowed models, and hence hone in on the correct theory.
CONNECTION WITH OTHER METHODS OF PARAMETRIZATION
In this section we show the connection between the conventional {µ, γ} parameterization of modified gravity, the PPF parameters and the parameterization introduced in Sec. 2.
The parameters defined in Sec. 2 are related to the PPF parameters according to
\[
A(k,\tau) = A_0\hat{\Phi} + F_0\hat{\Gamma} + \alpha_0\hat{\chi} + \alpha_1\frac{\dot{\hat{\chi}}}{k}, \tag{10}
\]
\[
B(k,\tau) = B_0\hat{\Phi} + I_0\hat{\Gamma} + \beta_0\hat{\chi} + \beta_1\frac{\dot{\hat{\chi}}}{k}, \tag{11}
\]
\[
C(k,\tau) = C_0\hat{\Phi} + C_1\frac{\dot{\hat{\Phi}}}{k} + J_0\hat{\Gamma} + J_1\frac{\dot{\hat{\Gamma}}}{k} + \gamma_0\hat{\chi} + \gamma_1\frac{\dot{\hat{\chi}}}{k} + \gamma_2\frac{\ddot{\hat{\chi}}}{k^2}, \tag{12}
\]
\[
D(k,\tau) = D_0\hat{\Phi} + D_1\frac{\dot{\hat{\Phi}}}{k} + K_0\hat{\Gamma} + K_1\frac{\dot{\hat{\Gamma}}}{k} + \epsilon_0\hat{\chi} + \epsilon_1\frac{\dot{\hat{\chi}}}{k} + \epsilon_2\frac{\ddot{\hat{\chi}}}{k^2}. \tag{13}
\]
While the authors of Ref. [21] have insisted on the modifications being gauge invariant, it is good to keep in mind that there is nothing special about the use of gauge invariant parameters, as is shown in Ref. [28]. The important issue is to track the degrees of freedom in the equations. There are originally four free functions for the spin-0 degrees of freedom in the metric, but the gauge freedom can be used to set two of them to zero. Using only two gauge invariant functions instead of four means that the gauge freedom has been implicitly used somewhere to omit the redundant variables. All 22 of the parameters on the right-hand side of Eqs. 10 to 13 are in fact two-dimensional functions of the wave number, $k$, and time. A hat on a function means that it is a gauge invariant quantity. The symbol $\hat{\chi}$ is the gauge invariant form of any extra degree of freedom that can appear, for example, in a scalar-tensor theory, or in an $f(R)$ theory as a result of a number of conformal transformations (see section D.2 of Ref. [21] for further explanation). $\hat{\Phi}$ and $\hat{\Gamma}$ are related to the synchronous gauge metric perturbations through:
\[
\hat{\Phi} = \eta - \frac{\mathcal{H}}{2k^2}\left(\dot{h} + 6\dot{\eta}\right); \tag{14}
\]
\[
\hat{\Psi} = \frac{1}{2k^2}\left(\ddot{h} + 6\ddot{\eta} + \mathcal{H}\left(\dot{h} + 6\dot{\eta}\right)\right); \tag{15}
\]
\[
\hat{\Gamma} = \frac{1}{k}\left(\dot{\hat{\Phi}} + \mathcal{H}\hat{\Psi}\right). \tag{16}
\]
One needs to add more parameters to the right hand side of Eqs. 10 to 13 if there is more than one extra degree of freedom, or if the equations of motion of the theory are higher than second order and the theory cannot be conformally transformed into a second order theory. The reason this many parameters were introduced in Ref. [21] is that there is a direct connection between these parameters and the Lagrangians of a number of specific theories, like the Horava-Lifshitz, scalar-tensor or Einstein Aether theories. Therefore, in principle, constraining these parameters is equivalent to constraining the theory space of those Lagrangians. However, there are a number of issues that may encourage one to consider alternatives to the PPF approach for parameterizing modifications to gravity. First of all, it is practically impossible to run a Markov chain code for 22 two-dimensional functions. One can reduce the number of free functions to perhaps 15 using Eqs. 8 and 9, but there is still a huge amount of freedom in the problem. The second reason is that the whole power of the PPF method lies in distinguishing among a number of classically modified theories of gravity that are mostly proven to be either theoretically inconsistent, like the Horava-Lifshitz theory [9], or already ruled out observationally, like TeVeS (at least for explaining away dark matter) [29]. While it is certainly important and useful to check the GR predictions with the new coming data sets, it does not appear reasonable at this stage to stick with the motivation of any specific theory. For the moment it therefore seems prudent to consider an even simpler approach, as we describe here.
There is another popular parametrization in the literature, described fully in Refs. [30][31][32]. This second parametrization is best described in the conformal Newtonian gauge, via the following metric:
\[
ds^2 = a^2(\tau)\left[-(1 + 2\psi)d\tau^2 + (1 - 2\phi)\delta_{ij}\,dx^i dx^j\right]. \tag{17}
\]
The modifying parameters, $\{\mu, \gamma\}$, are defined through the following:
\[
k^2\psi = -\mu(k,a)\,4\pi G a^2\left\{\bar{\rho}\Delta + 3(\bar{\rho}+\bar{p})\Sigma\right\}; \tag{18}
\]
\[
k^2\left[\phi - \gamma(k,a)\,\psi\right] = \mu(k,a)\,12\pi G a^2(\bar{\rho}+\bar{p})\Sigma. \tag{19}
\]
Here $\Delta = \delta\rho/\bar{\rho} + 3\frac{\mathcal{H}}{k}(1+\bar{p}/\bar{\rho})V$, and all of the matter perturbation quantities are in the Newtonian gauge.
In order to see the connection between this method of parametrization and the one described in the previous section through Eqs. 3 to 6, one needs to use the modified equations of motion in the Newtonian gauge:
\[
k^2\phi + 3\mathcal{H}\left(\dot{\phi} + \mathcal{H}\psi\right) = -4\pi G a^2\,\delta\rho + k^2 A_N; \tag{20}
\]
\[
k^2\left(\dot{\phi} + \mathcal{H}\psi\right) = 4\pi G a^2(\bar{\rho}+\bar{p})\,kV + k^3 B_N; \tag{21}
\]
\[
k^2(\phi - \psi) = 12\pi G a^2(\bar{\rho}+\bar{p})\Sigma + k^2 D_N. \tag{22}
\]
The parameters {A N , B N , D N } are the modifying functions in the Newtonian gauge. These parameters are related to γ and µ via
\[
\alpha \equiv 1 - \mu, \tag{23}
\]
\[
\beta \equiv \gamma - 1, \tag{24}
\]
\[
4\pi G a^2\,\alpha\left\{\bar{\rho}\Delta + 3(\bar{\rho}+\bar{p})\Sigma\right\} = k^2\left(A_N - 3\frac{\mathcal{H}}{k}B_N - D_N\right), \tag{25}
\]
\[
k^2\beta\psi = 12\pi G a^2\,\alpha(\bar{\rho}+\bar{p})\Sigma + k^2 D_N, \tag{26}
\]
where one can choose between using the functions $\{A_N, B_N, C_N, D_N\}$, along with two constraint equations similar to Eqs. 8 and 9, or using the two parameters $\gamma$ and $\mu$ and trying to remain consistent in the equations of motion. It is argued in Ref. [33] that the $\{\gamma, \mu\}$ choice is not capable of parameterizing second order theories in the case of an unmodified background and no extra fields. To show this the authors use the fact that, in the absence of extra fields, all of the Greek coefficients in Eqs. 10 to 13, i.e. $\{\alpha_0, \ldots, \epsilon_2\}$, have to be zero. Furthermore, they argue that in the case of second order theories, $F_0$ and $I_0$ have to be zero, and therefore the constraints of Eqs. 8 and 9 show that $J_0$ and $K_0$ are zero as well. After all of this, one can see that Eq. 22 can be written as follows in this special case:
\[
k^2(\phi - \psi) = 12\pi G a^2(\bar{\rho}+\bar{p})\Sigma + k^2\left(D_0\phi + D_1\frac{\dot{\phi}}{k}\right). \tag{27}
\]
Ref. [33] then shows that the absence of a term proportional to the metric derivative will lead to an inconsistency. However, this conclusion is valid only if one assumes that β in Eq. 26 is a function of background quantities, which usually is not the case. Otherwise, one can use Eq. 26 to define β:
\[
\beta \equiv \frac{12\pi G a^2\,\alpha(\bar{\rho}+\bar{p})\Sigma + k^2\left(D_0\phi + D_1\dot{\phi}/k\right)}{k^2\,\psi}, \tag{28}
\]
leaving no ambiguity or inconsistency.¹ It is also claimed in Ref. [34] that the {γ, µ} parametrization becomes ambiguous on large scales, while none of these shortcomings apply to the PPF method. However, these criticisms do not seem legitimate, since, as was shown in this section, there is a direct connection between {γ, µ} and the PPF parameters.² For any given set of functions for the PPF method, one can find a corresponding set of functions {γ, µ}, using Eqs. 10 to 13 and 26, that will produce the exact same result for any observable quantity. One only needs to ensure the use of consistent equations while modifying gravity through codes such as CAMB.

Although we believe that there is no ambiguity in the {γ, µ} parameterization, we also believe that our {A, B, C, D} parameterization can be implemented much more easily in Boltzmann codes. Furthermore, there is a potential problem for the {γ, µ} parametrization on the small scales that enter the horizon during the radiation domination era. The metric perturbation ψ will oscillate around zero a couple of times for these scales, and that makes the γ function blind to any modification at those instants of time. This behaviour also has the potential to lead to numerical instabilities.

¹ Note that this might be troublesome if ψ goes to zero at some moments of time. This can happen for the scales that enter the horizon during radiation domination.
² In particular there is nothing wrong with the {γ, µ} parametrization on large scales, since ψ is certainly always non-zero.
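As a concrete cross-check of the relations above (our own script, not part of any released pipeline), the following verifies Eq. 25 numerically with arbitrary placeholder values, reading its right-hand side with the grouping $A_N - (3\mathcal{H}B_N/k + D_N)$: the potentials follow from Eqs. 20 to 22, µ follows from Eq. 18, and α = 1 − µ then reproduces the stated combination.

```python
# Consistency check (ours) of Eq. 25. All numerical values are arbitrary.
import math

k, H, a, G = 0.9, 1.1, 0.8, 1.0
rho, p, delta, V, Sigma = 2.0, 0.3, 0.04, 0.01, 0.02   # background + matter
AN, BN, DN = 0.013, -0.006, 0.009                      # modifying functions

pref = 4 * math.pi * G * a**2
Delta = delta + 3 * (H / k) * (1 + p / rho) * V        # comoving overdensity

# Eqs. 20-21 combined give k^2 phi; Eq. 22 then gives k^2 psi:
k2phi = -pref * rho * Delta + k**2 * (AN - 3 * (H / k) * BN)
k2psi = k2phi - 3 * pref * (rho + p) * Sigma - k**2 * DN

# Eq. 18 defines mu; Eq. 23 defines alpha = 1 - mu:
S = pref * (rho * Delta + 3 * (rho + p) * Sigma)
mu = -k2psi / S
alpha = 1 - mu

lhs = alpha * S
rhs = k**2 * (AN - 3 * (H / k) * BN - DN)
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```

The check is pure algebra, so it passes to machine precision for any non-degenerate choice of inputs.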
NUMERICAL CALCULATION
In this section we will constrain the parameterization described in Sec. 2 using the CMB anisotropy power spectra. We will describe the effects of the modifying parameters on the CMB and show the results of numerical calculations from CAMB and CosmoMC.
Effects of A and B on the power spectra
Before showing numerical results, we first describe some of the physical effects of having non-zero values of A or B. So far we have not placed any constraints on these quantities, which are in general functions of both space and time. There are some effects that can be explicitly seen from the equations of motion and energy conservation. For example, a positive A enhances the pressure perturbation and anisotropic stress, while reducing the density perturbation. On the other hand, a positive B will enhance the momentum perturbation and reduce the pressure perturbation and anisotropic stress.
There are also some other effects that need a little more algebra to see, and we now discuss three examples.
Neutrino moments:
The neutrinos' zeroth and second moments, {F ν0 , F ν2 }, are coupled to the modifying gravity terms according to the Boltzmann equations [26] and Eqs. 3 to 6:
\[
\dot{F}_{\nu 0} = -k F_{\nu 1} - \frac{2}{3}\dot{h}; \tag{29}
\]
\[
\dot{F}_{\nu 2} = \frac{2}{5}k F_{\nu 1} - \frac{3}{5}k F_{\nu 3} + \frac{4}{15}\left(\dot{h} + 6\dot{\eta}\right). \tag{30}
\]
Here $\dot{h}$ is modified according to Eq. 3, and the term $\dot{h} + 6\dot{\eta}$ is coupled to $A$ and $B$ through Eqs. 3 and 4:
\[
\dot{h} + 6\dot{\eta} = \frac{2k^2\eta}{\mathcal{H}} + \frac{8\pi G a^2\,\delta\rho}{\mathcal{H}} + 24\pi G a^2(\bar{\rho}+\bar{p})\frac{V}{k} - \frac{2k^2 A}{\mathcal{H}} + 6kB. \tag{31}
\]
Therefore, modified gravity can have a significant effect on the neutrino second moment.

Photon moments:

While the same thing is valid for the photons' second moment after decoupling, the situation is different during the tight coupling regime. The Thomson scattering rate is so high in the tight coupling era that it makes the second moment insensitive to gravity. In other words, the electromagnetic force is so strong that it does not let the photons feel gravity.
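The algebra connecting Eqs. 3, 4 and 31 can be checked mechanically. The snippet below (our own sketch, not part of CAMB) solves Eqs. 3 and 4 for $\dot h$ and $\dot\eta$ at a single point, using arbitrary stand-in numbers for every quantity, and confirms that their combination matches the right-hand side of Eq. 31.

```python
# Numerical sanity check (ours) that Eq. 31 follows from Eqs. 3 and 4.
# All symbols (k, H, eta, drho, pv = (rho+p)V, A, B, G, a) are arbitrary values.
import math

k, H, a, G = 0.7, 1.3, 0.9, 1.0
eta, drho, pv, A, B = 0.21, 0.05, 0.02, 0.011, -0.004

# Eq. (3): k^2 eta - (1/2) H hdot = -4 pi G a^2 drho + k^2 A
hdot = 2.0 * (k**2 * eta + 4 * math.pi * G * a**2 * drho - k**2 * A) / H

# Eq. (4): k etadot = 4 pi G a^2 (rho+p)V + k^2 B
etadot = (4 * math.pi * G * a**2 * pv + k**2 * B) / k

lhs = hdot + 6 * etadot

# Right-hand side of Eq. (31):
rhs = (2 * k**2 * eta / H
       + 8 * math.pi * G * a**2 * drho / H
       + 24 * math.pi * G * a**2 * pv / k
       - 2 * k**2 * A / H
       + 6 * k * B)

assert math.isclose(lhs, rhs, rel_tol=1e-12)
```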
ISW effect:
The integrated Sachs-Wolfe (ISW) [35] effect is proportional to $\dot{\phi} + \dot{\psi}$ in the Newtonian gauge. In the synchronous gauge this is
\[
\dot{\phi} + \dot{\psi} = \frac{\ddot{h} + 6\ddot{\eta}}{2k^2} + \dot{\eta}. \tag{32}
\]
Here $\dot{\eta}$ is modified according to Eq. 4 and, therefore, a subtle change in the function $B(k,\tau)$ can have a considerable influence on the ISW effect. Fig. 1 shows the effects of a constant non-zero $A$ and $B$ on the ISW effect.
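Eq. 32 is an exact identity once the Newtonian potentials are expressed through Eqs. 14 and 15. Writing $\alpha \equiv (\dot h + 6\dot\eta)/2k^2$ (the standard synchronous-gauge variable of Ref. [26]; our notation), one has $\phi = \eta - \mathcal{H}\alpha$ and $\psi = \dot\alpha + \mathcal{H}\alpha$, so $\dot\phi + \dot\psi = \dot\eta + \ddot\alpha$. The toy check below (ours) confirms this with arbitrary smooth test functions and central finite differences.

```python
# Sanity check of Eq. 32 (ours): phi = eta - H*alpha, psi = alphadot + H*alpha,
# hence phidot + psidot = etadot + alpha double-dot, with
# alpha standing in for (hdot + 6*etadot)/(2 k^2).
import math

def Hc(t):     return 2.0 / t            # conformal Hubble rate (test choice)
def eta(t):    return math.sin(t)
def alpha(t):  return math.cos(2.0 * t)  # arbitrary smooth stand-in

def d(f, t, h=1e-4):                     # central first derivative
    return (f(t + h) - f(t - h)) / (2 * h)

def phi(t): return eta(t) - Hc(t) * alpha(t)
def psi(t): return d(alpha, t) + Hc(t) * alpha(t)

t0 = 1.7
lhs = d(phi, t0) + d(psi, t0)
rhs = d(eta, t0) + d(lambda t: d(alpha, t), t0)   # etadot + alpha double-dot

assert abs(lhs - rhs) < 1e-5
```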
For the case of a constant non-zero $B$, the ISW effect is always present, since the time derivative of the potential is constantly sourced by this function. This will result in more power on all scales, including the tail of the CMB power spectrum (see Fig. 2). Fig. 2 shows that if a non-zero $B$ is favoured by CMB data, it will be mostly due to the large $\ell$s ($\ell > 1500$), and comes from its anti-damping behaviour. Fig. 3 shows the CMB power spectra for the case of a constant but non-zero $A$ or $B$. Note how a constant non-zero $B$ raises the tail of the spectrum up. One might also point to the degeneracy of $A$ and the initial amplitude (usually called $A_s$), mostly by looking at the height of the peaks.
Matter overdensity:
The Boltzmann equation for cold dark matter overdensity in the synchronous gauge reads [26]
\[
\delta_{\rm CDM} \equiv \frac{\delta\rho_{\rm CDM}}{\bar{\rho}_{\rm CDM}}, \qquad \dot{\delta}_{\rm CDM} = -\frac{1}{2}\dot{h}. \tag{33}
\]
Using Eqs. 3 and 4, the Friedmann equation, $\mathcal{H}^2 = \frac{8\pi G a^2}{3}\bar{\rho}$, and assuming a matter-dominated Universe with no baryons, one obtains the following equation for the cold dark matter overdensity:
\[
\mathcal{H}\dot{\delta}_{\rm CDM} = -\frac{3\mathcal{H}^2}{2}\delta_{\rm CDM} + k^2 A - k^2\eta, \quad\text{i.e.}\quad
\ddot{\delta}_{\rm CDM} + \left(\frac{\dot{\mathcal{H}}}{\mathcal{H}} + \frac{3}{2}\mathcal{H}\right)\dot{\delta}_{\rm CDM} + 3\dot{\mathcal{H}}\,\delta_{\rm CDM} = -\frac{k^3}{\mathcal{H}}B + \frac{k^2}{\mathcal{H}}\dot{A}. \tag{34}
\]
The above equation clearly shows the role of $A$ and $B$ as driving forces for the matter overdensity. The $k^3$ prefactor makes the first term on the right-hand side dominant on small scales, and this will therefore have a significant effect on the matter fluctuation amplitude at late times. The matter power spectrum will therefore be expected to put strong constraints on modified gravity models.
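As a sanity check of Eq. 34 (ours, not from the paper's pipeline): setting $A = B = 0$ on a matter-dominated background, $a \propto \tau^2$, $\mathcal{H} = 2/\tau$, $\dot{\mathcal{H}} = -2/\tau^2$, the equation reduces to $\ddot\delta + (2/\tau)\dot\delta - (6/\tau^2)\delta = 0$, whose power-law solutions satisfy $n^2 + n - 6 = 0$, recovering the standard GR growing mode $\delta \propto \tau^2 \propto a$.

```python
# Indicial-equation check (ours): with A = B = 0 and H = 2/tau, Eq. 34 becomes
#   delta'' + (2/tau) delta' - (6/tau^2) delta = 0,
# with power-law solutions n = 2 (growing, delta ~ a) and n = -3 (decaying).

def residual(n, tau=1.0):
    """Plug delta = tau**n into delta'' + (2/tau) delta' - (6/tau**2) delta."""
    return (n * (n - 1) + 2 * n - 6) * tau ** (n - 2)

assert abs(residual(2)) < 1e-12   # growing mode
assert abs(residual(-3)) < 1e-12  # decaying mode
assert abs(residual(1)) > 1.0     # a non-solution does not vanish
```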
Markov Chain constraints on A and B
Since A(k, τ ) and B(k, τ ) are free functions, we need to choose some simple cases to investigate. We choose here to focus on the simple cases of A and B being separate constants (i.e. independent of both scale and time). We do not claim that this is in any sense a preferred choice -we simply have to pick something tractable. With better data one can imagine constraining a larger set of parameters, for example describing A and B as piecewise constants or polynomial functions.
We used CosmoMC to constrain constant $A$ and $B$, together with the WMAP-9 [23] and SPT12 [24] CMB data. The amplitudes of the CMB foregrounds were added as additional parameters and were marginalized over for the case of SPT12. The resulting constraints and distributions are shown in Fig. 4. Here we focus entirely on the effects of $A$ and $B$ on the CMB. Hence we turn off the post-processing effects of lensing [36], and ignore constraints from any other astrophysical data-sets. One might conclude from Fig. 4 that general relativity is ruled out by nearly $3\sigma$ using the CMB alone, since a non-zero value of $B$ is preferred. However, adding lensing to the picture will considerably change the results. As was shown in Eq. 34, a non-zero $B$ will change the matter power spectrum so drastically that in a universe with non-zero $B$, lensing will be one of the main secondary effects on the CMB. The results of a Markov chain calculation that includes the effects of lensing (i.e. assuming $B$ was constant not only in the CMB era, but all the way until today) are shown in Fig. 5, and are entirely consistent with GR. The broad constraint on $A$ is mainly due to a strong (anti-)correlation between $A$ and the initial amplitude of the scalar perturbations. Two-dimensional contour plots of $A$ versus $A_s$, the initial amplitude of scalar curvature perturbations, are shown in Fig. 6.
The mean of the likelihood and 68% confidence interval for the six cosmological parameters together with $A$ and $B$ are tabulated in Table I. Note that the simplified case we are considering here treats CMB constraints only. If we really took $B$ as constant for all time, then there would be large effects on the late time growth, affecting the matter power spectrum, and hence tight constraints coming from a relevant observable, such as $\sigma_8$ from cluster abundance today.
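For readers unfamiliar with the machinery, CosmoMC explores the posterior with Metropolis-Hastings sampling. The toy sketch below is ours; the Gaussian "likelihood", widths, starting point and step sizes are all invented for illustration, and it samples only the two parameters $(A, B)$ rather than the full cosmological parameter space.

```python
# Minimal Metropolis-Hastings sketch (illustration only, not CosmoMC itself).
import math, random

random.seed(1)

def log_like(Aval, Bval, sigA=0.17, sigB=0.00024):
    # toy Gaussian posterior centred on the GR values (A, B) = (0, 0);
    # the widths are invented for this example
    return -0.5 * ((Aval / sigA) ** 2 + (Bval / sigB) ** 2)

A, B = 0.05, 0.0001            # arbitrary starting point
chain, accepted = [], 0
for _ in range(20000):
    A_p = A + random.gauss(0.0, 0.17)       # symmetric proposal
    B_p = B + random.gauss(0.0, 0.00024)
    if math.log(random.random()) < log_like(A_p, B_p) - log_like(A, B):
        A, B = A_p, B_p
        accepted += 1
    chain.append((A, B))

mean_A = sum(s[0] for s in chain) / len(chain)
acc_rate = accepted / len(chain)

assert abs(mean_A) < 0.1       # posterior mean of A should be near zero
assert 0.05 < acc_rate < 0.8   # reasonable Metropolis acceptance rate
```

In the real analysis the log-likelihood is of course evaluated by running a Boltzmann code (CAMB, with the modified Eqs. 3 to 6) against the binned CMB spectra.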
Alternative powers of k in B
Examining Eq. 34 reveals that the only term modifying the matter power spectrum in the case of constant $A$ and $B$ is $k^3 B/\mathcal{H}$. This term is important for two reasons. Firstly, it is the only term introducing a $k$ dependence in the cold dark matter amplitude at late times and on sufficiently large scales where one can completely ignore the effect of baryons on the matter power spectra. Secondly, the $k^3$ factor enhances this term significantly on small scales in the case of a constant $B$. Since the amplitude of the matter power spectrum (via lensing effects) was the main source of constraints on $B$, it would seem reasonable to choose $B = \mathcal{H}B_0/k$, where $B_0$ is a dimensionless constant. This should avoid too much power in the matter densities on small scales, and therefore reduce lensing as well. However, this choice will lead to enormous power on the largest scales, as is shown in Fig. 7. In order to match with data, one could choose a form in which $B$ switches from $B = \mathcal{H}B_0/k$ to $B = \mathcal{H}B_0/k_0$, where $k_0$ is some small enough transition scale. We discuss this simply as an alternative to the $B = \text{constant}$ case. There is clearly scope for exploring a wider class of forms for the functions $A(k,\tau)$ and $B(k,\tau)$.
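The trade-off being made here can be made explicit with a two-line computation (ours; the numerical values are arbitrary): for constant $B$ the driving term $k^3B/\mathcal{H}$ of Eq. 34 grows like $k^3$, while the choice $B = \mathcal{H}B_0/k$ softens it to $k^2$.

```python
# Scale-dependence of the Eq. 34 driving term (illustration; arbitrary values).
H, B_const, B0 = 1.0, 1e-3, 1e-3

def drive_const_B(k):    # B = const    ->  term grows like k^3
    return k**3 * B_const / H

def drive_B0_over_k(k):  # B = H*B0/k  ->  term grows like k^2
    return k**3 * (H * B0 / k) / H

# doubling k multiplies the term by 8 in the first case, by 4 in the second
assert abs(drive_const_B(2.0) / drive_const_B(1.0) - 8.0) < 1e-12
assert abs(drive_B0_over_k(2.0) / drive_B0_over_k(1.0) - 4.0) < 1e-12
```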
DISCUSSION
Since a constant A is essentially degenerate with the initial amplitude of the primordial fluctuations, the CMB alone cannot constrain this parameter. On the other hand, constant B seems to be fairly well constrained by the CMB data. However, if B was an oscillating function of time, changing sign from time to time, its total effect on the CMB power spectra would become weaker and the constraints would be broader. According to Eq. 4, a constant B will change η monotonically, while the effect of an oscillating B will partially cancel some of the time. Together, the results of Sec. 4 show that the ISW effect and the growth at relatively recent times (driving the amplitude of matter perturbations) can have huge constraining power for many generic theories of modified gravity. (See Ref. [37] for a recent example). One can consider different positive or negative powers of (H/k) as part of the dependence of B in order to get around the matter constraints, as was discussed in Sec. 4.3.
We have seen that when considering CMB data alone, there seems to be a mild preference for non-zero B. This is essentially because it provides an extra degree of freedom for resolving a mild tension between WMAP and SPT. Nevertheless it remains true that a model with B constant for all time would be tightly constrained by observations of the matter power spectrum at redshift zero. We leave for a future study the question of whether there might be any preference for more general forms for A(k, τ) and B(k, τ) using a combination of Planck CMB data and other astrophysical data-sets.
FIG. 1: Effects of a constant, non-zero A and B on the ISW effect. This plot shows the ISW effect for a specific scale of k = 0.21 Mpc$^{-1}$.

FIG. 2: Effects of a constant B on the CMB power spectra. The plot shows the difference in power for the case of zero B minus the best-fit non-zero B, using WMAP 9 and SPT12 data, while keeping all the rest of the parameters the same. The error band plotted is based on the reported error on the binned CMB power spectra from the WMAP 9 [23] and SPT12 [24] groups.

FIG. 3: Effects of a constant, non-zero A or B on the CMB power spectra. One can see that the two parameters have quite different effects.

FIG. 4: 68 and 95 percent contours of the constants A and B using WMAP 9-year data alone (left) and SPT12 (right), without including lensing effects and neglecting late-time growth effects.

FIG. 5: 68 and 95 percent contours of the constants A and B using WMAP 9-year data alone (left) and WMAP 9 + SPT12 (right), with lensing effects included.

FIG. 6: The strong anti-correlation between parameter A and the initial amplitude, $A_s$, makes the constraints on either one of these two parameters weaker.

FIG. 7: Effect of a non-zero $B_0$ on the CMB power spectra, with the choice $B = \mathcal{H}B_0/k$.
Parameter        | WMAP9          | WMAP9+SPT12    | WMAP9+SPT12+lensing
-----------------|----------------|----------------|--------------------
100 Ω_b h^2      | 2.22 ± 0.05    | 2.10 ± 0.03    | 2.22 ± 0.04
Ω_DM h^2         | 0.118 ± 0.005  | 0.122 ± 0.005  | 0.115 ± 0.004
100 θ            | 1.038 ± 0.002  | 1.040 ± 0.001  | 1.042 ± 0.0010
τ                | 0.086 ± 0.014  | 0.076 ± 0.012  | 0.084 ± 0.012
log(10^10 A_s)   | 3.1 ± 0.2      | 3.1 ± 0.2      | 3.0 ± 0.4
n_s              | 0.96 ± 0.01    | 0.93 ± 0.01    | 0.96 ± 0.01
100 A            | 1 ± 10         | 0.02 ± 10      | 2.7 ± 1.7
1000 B           | 0.44 ± 0.41    | 0.74 ± 0.24    | −0.0097 ± 0.016

TABLE I: The mean likelihood values together with the 68% confidence interval for the usual six cosmological parameters, together with constant A and B, using CMB constraints only.
AcknowledgmentsWe acknowledge many detailed and helpful discussions with our colleague James Zibin. We also had very useful conversations about this and related work with Levon Pogosian. This research was supported by the Natural Sciences and Engineering Research Council of Canada.
[1] F. W. Dyson, A. S. Eddington, and C. Davidson. Royal Society of London Philosophical Transactions Series A, 220:291-333, 1920.
[2] I. I. Shapiro. Physical Review Letters, 13:789-791, December 1964.
[3] R. P. Kerr. In I. Robinson, A. Schild, and E. L. Schucking, editors, Quasi-Stellar Sources and Gravitational Collapse, page 99, 1965.
[4] S. W. Hawking and R. Penrose. Royal Society of London Proceedings Series A, 314:529-548, January 1970.
[5] S. W. Hawking. In D. J. Hegyi, editor, Sixth Texas Symposium on Relativistic Astrophysics, volume 224 of Annals of the New York Academy of Sciences, page 268, 1973.
[6] R. A. Hulse and J. H. Taylor. Astrophys. J. Lett., 195:L51-L53, January 1975.
[7] C. S. Frenk and S. D. M. White. Annalen der Physik, 524:507-534, October 2012. arXiv:1210.0544.
[8] S. Perlmutter, G. Aldering, et al.
[9] T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis. Phys. Rep., 513:1-189, March 2012. arXiv:1106.2476.
[10] A. H. Guth. In Measuring and Modeling the Universe, page 31, 2004. arXiv:astro-ph/0404546.
[11] C. H. Brans and R. H. Dicke. Physical Review, 124:925-935, November 1961.
[12] K. Nordtvedt, Jr. Astrophys. J., 161:1059, September 1970.
[13] S. Nojiri and S. D. Odintsov. Phys. Rev. D, 74(8):086005, October 2006. arXiv:hep-th/0608008.
[14] T. P. Sotiriou and V. Faraoni. Reviews of Modern Physics, 82:451-497, January 2010. arXiv:0805.1726.
[15] P. Hořava. Phys. Rev. D, 79(8):084008, April 2009. arXiv:0901.3775.
[16] E. W. Kolb, D. Lindley, and D. Seckel. Phys. Rev. D, 30:1205-1213, September 1984.
[17] H. Kodama, A. Ishibashi, and O. Seto. Phys. Rev. D, 62(6):064022, September 2000. arXiv:hep-th/0004160.
[18] Planck Collaboration, P. A. R. Ade, N. Aghanim, Arnaud, et al. Astron. Astrophys., 536:A1, December 2011. arXiv:1101.2022.
[19] D. Parkinson, S. Riemer-Sørensen, Blake, et al. ArXiv e-prints, October 2012. arXiv:1210.2130.
[20] K. S. Dawson, D. J. Schlegel, C. P. Ahn, et al. ArXiv e-prints, July 2012. arXiv:1208.0022.
[21] T. Baker, P. G. Ferreira, and C. Skordis. ArXiv e-prints, September 2012. arXiv:1209.2117.
[22] A. Lewis and S. Bridle. Phys. Rev. D, 66(10):103511, November 2002. arXiv:astro-ph/0205436.
[23] G. Hinshaw, D. Larson, et al. ArXiv e-prints, December 2012. arXiv:1212.5226.
[24] K. T. Story, C. L. Reichardt, et al. ArXiv e-prints, October 2012. arXiv:1210.7231.
[25] A. Lewis, A. Challinor, and A. Lasenby. Astrophys. J., 538:473-476, August 2000. arXiv:astro-ph/9911177.
[26] C.-P. Ma and E. Bertschinger. Astrophys. J., 455:7, December 1995. arXiv:astro-ph/9506072.
[27] C. P. Burgess. Living Reviews in Relativity, 7:5, April 2004. arXiv:gr-qc/0311082.
[28] K. A. Malik and D. Wands. ArXiv General Relativity and Quantum Cosmology e-prints, April 1998. arXiv:gr-qc/9804046.
[29] I. Ferreras, N. E. Mavromatos, M. Sakellariadou, and M. F. Yusaf. Phys. Rev. D, 80(10):103506, November 2009. arXiv:0907.1463.
[30] A. Hojjati, L. Pogosian, and G.-B. Zhao. JCAP, 8:5, August 2011. arXiv:1106.4543.
[31] L. Pogosian, A. Silvestri, et al. Phys. Rev. D, 81(10):104023, May 2010. arXiv:1002.2382.
[32] A. Silvestri, L. Pogosian, and R. V. Buniy. Phys. Rev. D, 87(10):104015, May 2013. arXiv:1302.1193.
[33] T. Baker, P. G. Ferreira, C. Skordis, and J. Zuntz. Phys. Rev. D, 84(12):124018, December 2011. arXiv:1107.0491.
[34] J. Zuntz, T. Baker, P. G. Ferreira, and C. Skordis. JCAP, 6:32, June 2012. arXiv:1110.3830.
[35] R. K. Sachs and A. M. Wolfe. Astrophys. J., 147:73, January 1967.
[36] D. Hanson, A. Challinor, et al. Phys. Rev. D, 83(4):043005, February 2011. arXiv:1008.4403.
[37] A. Barreira, B. Li, et al. Phys. Rev. D, 87(10):103511, May 2013. arXiv:1302.6241.
| [] |
arXiv:1805.04583 | doi:10.1063/1.5045184
ENTANGLEMENT BREAKING RANK

SATISH K. PANDEY, VERN I. PAULSEN, JITENDRA PRAKASH, AND MIZANUR RAHAMAN
Abstract. We introduce and study the entanglement breaking rank of an entanglement breaking channel. We show that the problem of computing the entanglement breaking rank of the channel
\[
Z(X) = \frac{1}{d+1}\bigl(X + \mathrm{Tr}(X)\, I_d\bigr), \qquad X \in M_d,
\]
is equivalent to the existence problem of symmetric informationally-complete POVMs.

Date: Last revised: May 15, 2018.

Introduction

The study of separable states is an important topic in quantum information theory and helps to shed light on the nature of entanglement. Recall that a state ρ ∈ M_m ⊗ M_n is called separable if it can be written as a finite convex combination ρ = Σ_i λ_i σ_i ⊗ δ_i, where the σ_i and δ_i are pure states. In general, a separable state can have many such representations [16]. It is then natural to introduce the notion of optimal ensemble cardinality [9] or length [5] of a separable state by defining it to be the minimum number ℓ(ρ) of pure states σ_i ⊗ δ_i required to write the separable state ρ as their convex combination. In [9] it was shown that, in general, the inequality rank(ρ) ≤ ℓ(ρ) can be strict for certain classes of separable states.

The notion of entanglement breaking maps was introduced and studied in [15,21]. We say that a linear map Φ : M_d → M_m is entanglement breaking if the tensor product Φ ⊗ I_n maps states of M_d ⊗ M_n to separable states in M_m ⊗ M_n, for all n ∈ ℕ. An equivalent criterion of entanglement breaking [15] is that Φ admits a Choi-Kraus representation of the form
\[
\Phi(X) = \sum_{k=1}^{K} R_k X R_k^*, \qquad X \in M_d, \tag{1.1}
\]
where the R_k's are rank one matrices in M_{m,d}. Since Choi-Kraus representations are not unique, we define the entanglement breaking rank of Φ to be the minimum K required in such an expression (1.1), and denote it by ebr(Φ). Clearly, the entanglement breaking rank of Φ is never less than its Choi-rank. Another equivalent criterion [15] for a map to be entanglement breaking is that its Choi-matrix C_Φ is separable. If Φ is entanglement breaking, then it is easy to see that ℓ(C_Φ) = ebr(Φ), so that studying entanglement breaking rank is the same as studying length. However, for our purposes it is more natural to study channels instead of states, so we will think in terms of entanglement breaking rank instead of length.
The existence problem of symmetric informationally-complete positive operator-valued measures (SIC POVMs) in an arbitrary dimension d is an open problem and is an active area of research [3,8,13,19,20]. This problem has connections to frame theory and the theory of t-designs [20], and has also been verified numerically in finitely many dimensions [1,12,23]. This problem is equivalent to Zauner's (weaker) conjecture [26,27] about equiangular lines in C^d. Stated in terms of equiangular lines, the problem asks about the existence of d² unit vectors {v_i : 1 ≤ i ≤ d²} ⊆ C^d such that |⟨v_i, v_j⟩|² = 1/(d+1) for all i ≠ j. We refer the reader to [11] for a survey on this subject.
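In d = 2 the required configuration is easy to exhibit: the four Bloch vectors of a regular tetrahedron give equiangular unit vectors in C². The snippet below (our illustration, not from the paper) constructs them and checks the defining condition.

```python
# Four equiangular unit vectors in C^2 from the Bloch-sphere tetrahedron,
# satisfying |<v_i, v_j>|^2 = 1/(d+1) = 1/3 for all i != j (our illustration).
import itertools
import numpy as np

# Bloch vectors at the vertices of a regular tetrahedron
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def bloch_to_vector(r):
    """Unit vector spanning the rank-one projection (I + r . sigma)/2."""
    x, y, z = r
    theta = np.arccos(z)
    phi = np.arctan2(y, x)
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

vs = [bloch_to_vector(r) for r in bloch]
for v in vs:
    assert abs(np.vdot(v, v) - 1) < 1e-12          # unit vectors
for v, w in itertools.combinations(vs, 2):
    assert abs(abs(np.vdot(v, w)) ** 2 - 1 / 3) < 1e-12  # equiangularity
```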
It can be shown [4] that if such a set of vectors $\{v_i\}_{i=1}^{d^2}$ exists in a given dimension d, then the rank one projections $P_i$ onto the span of $v_i$ satisfy:
\[
\frac{1}{d}\sum_{i=1}^{d^2} P_i X P_i = \frac{1}{d+1}\bigl(X + \mathrm{Tr}(X)\, I_d\bigr), \qquad X \in M_d.
\]
Denoting this channel by Z, it is independently known to be entanglement breaking and its Choi-rank is d² [14,18]. Thus, if a SIC POVM exists in dimension d, then the entanglement breaking rank of the channel Z is d².

We prove the converse: if ebr(Z) = d², then there exists a SIC POVM in dimension d. This is a slight relaxation of the Zauner conditions, since the d² rank one matrices involved in the Choi-Kraus representation need not be positive. This leads to the conclusion (Corollary 3.5) that for d ≥ 2, ebr(Z) = d² if and only if a SIC POVM exists in dimension d.
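The d = 2 case of the displayed identity can be checked directly: the projections onto the tetrahedral Bloch vectors form a SIC POVM. The following script (ours) verifies both the equiangularity condition Tr(P_iP_j) = 1/(d+1) and the channel identity on a random matrix X.

```python
# Numerical check (ours) of the identity (1/d) sum_i P_i X P_i = (X + Tr(X) I)/(d+1)
# for d = 2, with P_i = (I + r_i . sigma)/2 the tetrahedral SIC projections.
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
projs = [(I2 + sum(r[a] * sigma[a] for a in range(3))) / 2 for r in bloch]

# equiangularity: Tr(P_i P_j) = 1/(d+1) for i != j
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(np.trace(projs[i] @ projs[j]) - 1 / 3) < 1e-12

X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
lhs = sum(P @ X @ P for P in projs) / 2
rhs = (X + np.trace(X) * I2) / 3
assert np.allclose(lhs, rhs, atol=1e-12)
```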
This result leads us to seek, and find, other channels with the property that a statement about their entanglement breaking rank is equivalent to Zauner's conjecture. In particular, we show that determining if the Werner-Holevo channel [24, Example 3.36],
\[
\Phi_0(X) = \frac{1}{d+1}\bigl(X^T + \mathrm{Tr}(X)\, I_d\bigr), \qquad X \in M_d,
\]
where $X^T$ is the transpose of X, has entanglement breaking rank equal to d² is also equivalent to Zauner's conjecture. This Werner-Holevo channel also serves as an example of a channel for which the entanglement breaking rank is strictly bigger than its Choi rank. This approach to Zauner's conjecture also allows us to prove some perturbative results that would imply Zauner's conjecture, and it leads naturally to a conjecture that appears to be stronger than Zauner's conjecture. We prove that ebr is lower semicontinuous, so that if there are channels arbitrarily near to Z with ebr bounded by d², then Zauner's conjecture is true.
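A small supporting computation (ours, under the convention $C_{\Phi_0} = \sum_{i,j} E_{ij} \otimes \Phi_0(E_{ij})$): this Choi matrix works out to $(\mathrm{SWAP} + I)/(d+1)$, a multiple of the projection onto the symmetric subspace, so the Choi rank of $\Phi_0$ is $d(d+1)/2 < d^2$.

```python
# Choi rank of the Werner-Holevo channel Phi0(X) = (X^T + Tr(X) I)/(d+1),
# computed numerically (ours; uses the convention C = sum_ij E_ij (x) Phi0(E_ij)).
import numpy as np

def choi_rank_wh(d):
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d)); E[i, j] = 1.0
            phi0 = (E.T + np.trace(E) * np.eye(d)) / (d + 1)
            C += np.kron(E, phi0)
    return np.linalg.matrix_rank(C)

for d in (2, 3, 4):
    assert choi_rank_wh(d) == d * (d + 1) // 2   # rank of the symmetric projection
```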
The channel Z is a particular convex combination of the identity channel $I_d$ and the completely depolarizing channel, $\Psi_d(X) = \frac{\mathrm{Tr}(X)}{d} I_d$. Namely,
\[
Z = \frac{1}{d+1} I_d + \frac{d}{d+1}\Psi_d.
\]
The channel $tI_d + (1-t)\Psi_d$ is entanglement breaking for all $0 \le t \le \frac{1}{d+1}$ [14,18]. We conjecture that, for all t with $0 \le t \le \frac{1}{d+1}$, the entanglement breaking rank of each of these channels is d².
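The convex-combination formula above is a one-line check at $t = 1/(d+1)$; here it is for d = 2 with an arbitrary random test matrix (our illustration).

```python
# Check (ours) that (t I_d + (1-t) Psi_d)(X) = (X + Tr(X) I)/(d+1) at t = 1/(d+1).
import numpy as np

d = 2
t = 1 / (d + 1)
rng = np.random.default_rng(7)
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

psi = np.trace(X) * np.eye(d) / d          # completely depolarizing channel
mix = t * X + (1 - t) * psi                # (t I_d + (1-t) Psi_d)(X)
Z = (X + np.trace(X) * np.eye(d)) / (d + 1)
assert np.allclose(mix, Z, atol=1e-12)
```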
We verify this stronger conjecture in dimensions d = 2 and d = 3. Moreover, we show, in these dimensions, that there is a continuous family of $d^2$ rank one matrices $R_i : \big[0, \frac{1}{d+1}\big] \to M_d$, $1 \le i \le d^2$, such that
$$\big(tI_d + (1-t)\Psi_d\big)(X) = \sum_{i=1}^{d^2} R_i(t)\, X\, R_i(t)^*, \qquad X \in M_d.$$
In particular, when $t = \frac{1}{d+1}$, we get a Choi-Kraus representation of Z consisting of $d^2$ rank one matrices, so that (by Theorem 3.3) we get the existence of a SIC POVM in that dimension. We conjecture that such continuous families of rank one matrices exist in all dimensions. Interestingly, we are able to prove that when such families do exist, the $R_i(t)$ cannot be positive semi-definite for $0 \le t < \frac{1}{d+1}$.

Since determining whether $\mathrm{ebr}(Z) = d^2$ is equivalent to Zauner's conjecture, it is interesting to know what bounds on $\mathrm{ebr}(Z)$ can be obtained without relying on Zauner's conjecture. Similar to the SIC POVM existence problem, we have the existence problem of mutually unbiased bases (MUB) in dimension d. For a comprehensive survey and literature we refer the reader to [10]. The MUB existence problem asks whether for each $d \ge 2$ there exists a set of $d+1$ orthonormal bases $\{V_i : 1 \le i \le d+1\}$ of $\mathbb{C}^d$, where $V_i = \{v_{i,j} : 1 \le j \le d\}$, such that $|\langle v_{i,j}, v_{k,l}\rangle|^2 = \frac{1}{d}$ for all $i \ne k$. Unlike Zauner's conjecture, there are infinitely many d for which it is known that $d+1$ MUB exist [2, 17, 25]. We show that the existence of $d+1$ MUB in dimension d implies $\mathrm{ebr}(Z) \le d(d+1)$.
Preliminaries
Let $d \in \mathbb{N}$. We shall let $\mathbb{C}^d$ denote the Hilbert space of d-tuples of complex numbers with the usual inner product. We shall denote the space of $d \times d$ complex matrices by $M_d$. The trace of a matrix $A = [a_{i,j}] \in M_d$ is defined by $\mathrm{Tr}(A) = \sum_{i=1}^{d} a_{i,i}$. We let $I_d$ denote the identity matrix in $M_d$. The space of all linear maps from $M_d$ to $M_m$ is denoted by $L(M_d, M_m)$.
We review some of the definitions and known results that we shall be needing. We begin with stating the SIC POVM existence problem in terms of equiangular vectors in C d .
Definition 2.1. A set of $d^2$ matrices $\{R_i : 1 \le i \le d^2\} \subseteq M_d$ is called a symmetric informationally-complete positive operator-valued measure (SIC POVM) if:
(a) it forms a positive operator-valued measure (POVM), that is, each $R_i$ is positive semi-definite (written $R_i \ge 0$) and $\sum_{i=1}^{d^2} R_i = I_d$;
(b) it is informationally-complete, that is, $\mathrm{span}\{R_i : 1 \le i \le d^2\} = M_d$;
(c) it is symmetric, that is, $\mathrm{Tr}(R_i^2) = \lambda$ for all $1 \le i \le d^2$, and $\mathrm{Tr}(R_i R_j) = \mu$ for all $i \ne j$, for some constants λ and μ; and
(d) $\mathrm{rank}(R_i) = 1$ for all $1 \le i \le d^2$.
The following proposition gives a characterization of a SIC POVM in terms of unit vectors.
Proposition 2.2. A set of matrices $\{R_i\}_{i=1}^{d^2} \subseteq M_d$ is a SIC POVM if and only if there exist $d^2$ unit vectors $\{w_i\}_{i=1}^{d^2} \subseteq \mathbb{C}^d$ such that $R_i = \frac{1}{d} w_i w_i^*$ for each $1 \le i \le d^2$, and $|\langle w_i, w_j\rangle|^2 = \frac{1}{d+1}$ for all $i \ne j$.

Because of Proposition 2.2, we shall call a set $\{w_i\}_{i=1}^{d^2} \subseteq \mathbb{C}^d$ of $d^2$ unit vectors a SIC POVM if $|\langle w_i, w_j\rangle|^2 = \frac{1}{d+1}$ for all $i \ne j$. The constant $\frac{1}{d+1}$ appearing in Proposition 2.2 can be derived independently.

Proposition 2.3. If $\{w_i\}_{i=1}^{d^2} \subseteq \mathbb{C}^d$ is a set of unit vectors such that $|\langle w_i, w_j\rangle|^2 = c \ne 1$ for all $i \ne j$, then $c = \frac{1}{d+1}$.
We now state the SIC POVM existence conjecture in terms of equiangular vectors, which we shall call Zauner's conjecture.

Conjecture 2.4 (Zauner's conjecture). For each positive integer $d \ge 2$, there exist $d^2$ unit vectors $\{w_i\}_{i=1}^{d^2}$ in $\mathbb{C}^d$ such that $|\langle w_i, w_j\rangle|^2 = \frac{1}{d+1}$ for all $i \ne j$.

See [22] for the latest progress in finding these solutions. Many of the numerical solutions to Zauner's conjecture have been found assuming a covariance property. We shall focus on a particular group covariance. For a positive integer d, let $\mathbb{Z}_d = \{0, 1, \dots, d-1\}$ be the cyclic group of order d, let $\omega = \exp\big(\frac{2\pi i}{d}\big)$ be a d-th root of unity, and define the two $d \times d$ matrices
$$U = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1\\ 1 & 0 & \cdots & 0 & 0\\ & \ddots & & & \vdots\\ 0 & 0 & \cdots & 1 & 0\end{pmatrix}, \qquad V = \begin{pmatrix} 1 & 0 & \cdots & 0\\ 0 & \omega & \cdots & 0\\ & & \ddots & \\ 0 & 0 & \cdots & \omega^{d-1}\end{pmatrix}.$$
Thus, U is the (forward) cyclic shift, and V is the diagonal matrix with all the d-th roots of unity on the diagonal. For any $(i,j) \in \mathbb{Z}_d \times \mathbb{Z}_d$, define the discrete Weyl matrices as $W_{i,j} = U^i V^j$. (For properties of the discrete Weyl matrices, we refer the reader to [24, Section 4.1.2].)
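For concreteness, the shift and clock matrices and the resulting Weyl family can be generated in a few lines of numpy; the following sketch (an illustration, not from the paper) checks the standard relations $U^d = V^d = I_d$ and $VU = \omega UV$, and that the $d^2$ Weyl matrices are trace-orthogonal, hence a basis of $M_d$.

```python
import numpy as np

def weyl_matrices(d):
    """Return the d^2 discrete Weyl matrices W[(i, j)] = U^i V^j."""
    omega = np.exp(2j * np.pi / d)
    U = np.roll(np.eye(d), 1, axis=0)        # forward cyclic shift
    V = np.diag(omega ** np.arange(d))       # diag(1, w, ..., w^(d-1))
    W = {(i, j): np.linalg.matrix_power(U, i) @ np.linalg.matrix_power(V, j)
         for i in range(d) for j in range(d)}
    return W, U, V, omega

d = 3
W, U, V, omega = weyl_matrices(d)
I = np.eye(d)
assert np.allclose(np.linalg.matrix_power(U, d), I)
assert np.allclose(np.linalg.matrix_power(V, d), I)
assert np.allclose(V @ U, omega * U @ V)     # Weyl commutation relation
# Trace-orthogonality: Tr(W_a^* W_b) = d * delta_{ab}, so the d^2 Weyl
# matrices form a basis of M_d.
keys = sorted(W)
gram = np.array([[np.trace(W[a].conj().T @ W[b]) for b in keys] for a in keys])
assert np.allclose(gram, d * np.eye(d * d))
```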
We now have the following stronger conjecture, also due to Zauner (Conjecture 2.6): there exists a unit vector $w \in \mathbb{C}^d$ such that the $d^2$ vectors
$$\{W_{i,j}\, w : (i,j) \in \mathbb{Z}_d \times \mathbb{Z}_d\} \subseteq \mathbb{C}^d,$$
where $W_{i,j}$ are the discrete Weyl matrices, form a SIC POVM. In this case, w is called a fiducial vector. It is not known whether Zauner's conjecture implies this stronger conjecture.
We now review some of the standard results from the theory of quantum channels. The proofs may be found in [7,24].
Recall that a linear map $\Omega : M_d \to M_m$ is completely positive and trace preserving (henceforth a quantum channel) if and only if Ω can be expressed as
$$\Omega(X) = \sum_{k=1}^{K} B_k X B_k^*, \qquad X \in M_d, \tag{2.1}$$
for some $B_1, \dots, B_K \in M_{m,d}$ with $\sum_{k=1}^{K} B_k^* B_k = I_d$. The expression (2.1) is called a Choi-Kraus representation of the quantum channel Ω. Choi-Kraus representations of a quantum channel are not unique, and therefore the minimum K possible in (2.1) is called the Choi-rank of Ω, denoted by $\mathrm{cr}(\Omega)$. The Choi-rank of Ω is equal to the rank of the $dm \times dm$ Choi-matrix $C_\Omega = [\Omega(E_{i,j})]_{i,j=1}^{d}$, where $\{E_{i,j} : 1 \le i,j \le d\}$ are the canonical matrix units of $M_d$. If the value of K in expression (2.1) equals the Choi-rank of Ω, then $\{B_k\}_{k=1}^{K} \subseteq M_{m,d}$ is a linearly independent set.
We identify two simple quantum channels and their convex combinations for our purpose.
Definition 2.7. The quantum channel $I_d : M_d \to M_d$ defined by $I_d(X) = X$, for all $X \in M_d$, is called the identity channel. The quantum channel $\Psi_d : M_d \to M_d$ defined by $\Psi_d(X) = \frac{1}{d}\mathrm{Tr}(X)\, I_d$, for all $X \in M_d$, is called the completely depolarizing channel. For $t \in [0,1]$, we define $\Phi_t : M_d \to M_d$ by $\Phi_t = tI_d + (1-t)\Psi_d$. When $t = \frac{1}{d+1}$, we set
$$Z := \Phi_{\frac{1}{d+1}} = \frac{1}{d+1} I_d + \frac{d}{d+1}\Psi_d.$$

A Choi-Kraus representation of the identity channel $I_d$ is simply $I_d(X) = I_d X I_d$, and hence its Choi-rank is 1. Since $C_{\Psi_d} = [\Psi_d(E_{i,j})] = \frac{1}{d} I_{d^2}$, it follows that the Choi-rank of the completely depolarizing channel $\Psi_d$ is $d^2$.

Since the set of quantum channels $\Omega : M_d \to M_d$ forms a convex set in the space $L(M_d, M_d)$, it follows that the map $\Phi_t = tI_d + (1-t)\Psi_d$ is also a quantum channel for every $t \in [0,1]$. When $t \in (0,1)$, it is easy to see that the Choi-rank of $\Phi_t$ is $d^2$. Indeed, notice that the Choi-matrix of $\Phi_t$ is $C_{\Phi_t} = t[E_{i,j}] + \frac{1-t}{d} I_{d^2}$. Let $\xi \in \mathbb{C}^{d^2}$. If $C_{\Phi_t}\xi = 0$, then $[E_{i,j}]\xi = -\frac{1-t}{td}\xi$. But the only eigenvalues of $[E_{i,j}]$ are 0 and d, which implies that $\xi = 0$. Thus $\mathrm{rank}(C_{\Phi_t}) = d^2$.
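This Choi-rank computation is easy to confirm numerically. The sketch below (my own illustration; the helper `choi_matrix` is not from the paper) builds the Choi matrix of a channel block by block and checks the ranks claimed above.

```python
import numpy as np

def choi_matrix(channel, d):
    """Choi matrix C = [channel(E_ij)] assembled as a d^2 x d^2 block matrix."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d)); E[i, j] = 1.0   # canonical matrix unit E_ij
            C[i * d:(i + 1) * d, j * d:(j + 1) * d] = channel(E)
    return C

d, t = 3, 0.2
phi_t = lambda X: t * X + (1 - t) * np.trace(X) * np.eye(d) / d

# C_{Phi_t} = t*[E_ij] + ((1-t)/d) I_{d^2}; its rank is d^2 for t in (0, 1).
assert np.linalg.matrix_rank(choi_matrix(phi_t, d)) == d * d
# Choi rank of the identity channel is 1, of the depolarizing channel d^2.
assert np.linalg.matrix_rank(choi_matrix(lambda X: X, d)) == 1
depol = lambda X: np.trace(X) * np.eye(d) / d
assert np.linalg.matrix_rank(choi_matrix(depol, d)) == d * d
```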
We now introduce the notion of entanglement breaking maps (Definition 2.8) and then list, in Theorem 2.9, some equivalent conditions for a map to be entanglement breaking, suited for our purpose:
(a) The map Φ is entanglement breaking.
(b) The Choi-matrix $C_\Phi = [\Phi(E_{i,j})] \in M_d \otimes M_m$ is a separable state, where $\{E_{i,j} : 1 \le i,j \le d\}$ are the canonical matrix units of $M_d$.
(c) The map Φ has a Choi-Kraus representation $\Phi(X) = \sum_{k=1}^{K} R_k X R_k^*$, $X \in M_d$, where each $R_k$ is a rank one matrix.

If Φ has such a representation with rank one matrices, then for each $1 \le k \le K$ there exist vectors $v_k \in \mathbb{C}^m$ and $w_k \in \mathbb{C}^d$ such that $R_k = v_k w_k^*$, which yields
$$\Phi(X) = \sum_{k=1}^{K} R_k X R_k^* = \sum_{k=1}^{K} v_k w_k^* X w_k v_k^* = \sum_{k=1}^{K}\langle X w_k, w_k\rangle\, v_k v_k^*.$$
We shall use both these forms of Φ interchangeably.

We end this section with the following known result, which characterizes when the maps $\Phi_t = tI_d + (1-t)\Psi_d$ of Definition 2.7 are entanglement breaking.

Proposition 2.11 ([14, 18]). Let $t \in \mathbb{R}$. The map $\Phi_t = tI_d + (1-t)\Psi_d$ is entanglement breaking if and only if $\frac{-1}{d^2-1} \le t \le \frac{1}{d+1}$.
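Proposition 2.11 can also be probed numerically. For these channels the Choi matrix is an isotropic state, for which (a known fact, not proved here) entanglement breaking coincides with complete positivity plus positivity of the partial transpose. The partial transpose of $[E_{i,j}]$ is the flip operator W, so for $t \ge 0$ the matrix $tW + \frac{1-t}{d}I_{d^2}$ has minimal eigenvalue $\frac{1-t}{d} - t$, which is nonnegative exactly when $t \le \frac{1}{d+1}$. The check below is my own sketch under that identification.

```python
import numpy as np

d = 3
flip = np.zeros((d * d, d * d))              # W(xi ⊗ eta) = eta ⊗ xi
for i in range(d):
    for j in range(d):
        flip[i * d + j, j * d + i] = 1.0

def choi_and_pt_minimal_eigs(t):
    # C_{Phi_t} = t*[E_ij] + ((1-t)/d) I, with [E_ij] = d |Omega><Omega|;
    # its partial transpose is t*W + ((1-t)/d) I.
    Omega = np.eye(d).reshape(d * d) / np.sqrt(d)   # maximally entangled vector
    C = t * d * np.outer(Omega, Omega) + (1 - t) / d * np.eye(d * d)
    C_pt = t * flip + (1 - t) / d * np.eye(d * d)
    return np.linalg.eigvalsh(C).min(), np.linalg.eigvalsh(C_pt).min()

eps = 1e-9
for t in [-1 / (d * d - 1), 0.0, 1 / (d + 1)]:       # inside the interval
    c_min, pt_min = choi_and_pt_minimal_eigs(t)
    assert c_min >= -eps and pt_min >= -eps
for t in [-1 / (d * d - 1) - 0.01, 1 / (d + 1) + 0.01]:   # just outside
    c_min, pt_min = choi_and_pt_minimal_eigs(t)
    assert min(c_min, pt_min) < -1e-4
```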
Entanglement Breaking Rank
The main goal of this section is to establish the equivalence of computing entanglement breaking rank of the channel Z to the existence problem of SIC POVMs. Let us begin with the definition of the entanglement breaking rank of an entanglement breaking map.
Definition 3.1. Let $\Phi : M_d \to M_m$ be an entanglement breaking map. The entanglement breaking rank of Φ, denoted by $\mathrm{ebr}(\Phi)$, is the minimum number K of rank one operators $R_k$ such that Φ can be written as $\Phi(X) = \sum_{k=1}^{K} R_k X R_k^*$.

Let $\mathrm{cr}(\Phi)$ denote the Choi-rank of an entanglement breaking channel $\Phi : M_d \to M_m$. We have the following estimate:
$$d \le \mathrm{cr}(\Phi) \le \mathrm{ebr}(\Phi). \tag{3.1}$$
The first inequality follows from the fact that an entanglement breaking channel cannot have fewer than d Choi-Kraus operators in its Choi-Kraus representation [15, Theorem 6].
By Proposition 2.11, it follows that the quantum channel Z (Definition 2.7) is entanglement breaking, and hence it has a Choi-Kraus representation consisting of rank one Choi-Kraus operators. Zauner's conjecture can then be related to the problem of obtaining a minimal Choi-Kraus representation of the quantum channel Z consisting of rank one operators. First we establish a weaker proposition.
Proposition 3.2. Zauner's conjecture is true for dimension d if and only if the quantum channel $Z = \frac{1}{d+1} I_d + \frac{d}{d+1}\Psi_d$ has a Choi-Kraus representation $Z(X) = \sum_{i=1}^{d^2} B_i X B_i^*$, for all $X \in M_d$, where each $B_i \in M_d$ is a rank one positive operator.
Proof. The forward implication is a known result [4]. However, we include a proof for the sake of completeness.
Suppose that Zauner's conjecture is true.
Then for each $d \ge 2$ there exist $d^2$ unit vectors $\{w_i\}_{i=1}^{d^2}$ in $\mathbb{C}^d$ such that $|\langle w_i, w_j\rangle|^2 = \frac{1}{d+1}$ for all $i \ne j$. By Proposition 2.2, the set $\{R_i\}_{i=1}^{d^2} \subseteq M_d$, where $R_i = \frac{1}{d} w_i w_i^*$, forms a SIC POVM. Note that $R_i^2 = \frac{1}{d} R_i$ for every $1 \le i \le d^2$. Define a map $\Phi : M_d \to M_d$ by
$$\Phi(X) = d\sum_{i=1}^{d^2} R_i X R_i^*, \qquad X \in M_d.$$
Clearly, Φ is completely positive, and the following shows that it is unital and trace preserving:
$$d\sum_{i=1}^{d^2} R_i^* R_i = d\sum_{i=1}^{d^2} R_i^2 = d\sum_{i=1}^{d^2}\frac{1}{d} R_i = \sum_{i=1}^{d^2} R_i = I_d.$$
Thus, Φ is a unital quantum channel. It remains to show that Φ = Z, from which the desired Choi-Kraus representation of Z can be obtained by setting $B_i = \sqrt{d}\, R_i$ for all $1 \le i \le d^2$.
The set $\{R_j\}_{j=1}^{d^2}$, being informationally complete, spans $M_d$, and so each $X \in M_d$ may be written as $X = \sum_{j=1}^{d^2} r_j R_j$ for some unique scalars $r_j$. Taking the trace on both sides of $X = \sum_{j=1}^{d^2} r_j R_j$ yields
$$\sum_{j=1}^{d^2} r_j = d\,\mathrm{Tr}(X). \tag{3.2}$$
Next observe that
$$\Phi(R_j) = d\Big(R_j^3 + \sum_{i \ne j} R_i R_j R_i\Big) = d\Big(\frac{1}{d^2} R_j + \frac{1}{d^2(d+1)}\sum_{i \ne j} R_i\Big) = d\Big(\frac{1}{d^2} R_j + \frac{1}{d^2(d+1)}(I_d - R_j)\Big) = \frac{1}{d+1}\Big(R_j + \frac{1}{d} I_d\Big),$$
which implies
$$\Phi(X) = \sum_{j=1}^{d^2} r_j\,\Phi(R_j) = \frac{1}{d+1}\sum_{j=1}^{d^2} r_j R_j + \frac{I_d}{d(d+1)}\sum_{j=1}^{d^2} r_j = \frac{1}{d+1} X + \frac{I_d}{d(d+1)}\, d\,\mathrm{Tr}(X) = \frac{1}{d+1} I_d(X) + \frac{d}{d+1}\Psi_d(X),$$
where we used Equation (3.2) in the third equality. Hence Φ = Z. Conversely, suppose that Z has a Choi-Kraus representation given by
$Z(X) = \sum_{i=1}^{d^2} B_i X B_i^*$, where each $B_i \in M_d$ is a rank one positive operator. Then for every $1 \le i \le d^2$, $B_i = v_i v_i^*$ for some $v_i \in \mathbb{C}^d$.
Since the channel Z is unital, we have
$$I_d = \sum_{i=1}^{d^2} B_i^2 = \sum_{i=1}^{d^2}\|v_i\|^2 B_i.$$
Using this, and using $Z = \frac{1}{d+1} I_d + \frac{d}{d+1}\Psi_d$, on one hand we get
$$Z(B_j) = \frac{1}{d+1}\big(B_j + \|v_j\|^2 I_d\big) = \frac{1}{d+1}\Big(B_j + \sum_{i=1}^{d^2}\|v_j\|^2\|v_i\|^2 B_i\Big) = \frac{1}{d+1}\Big(\big(1+\|v_j\|^4\big)B_j + \sum_{i \ne j}\|v_i\|^2\|v_j\|^2 B_i\Big);$$
and on the other hand, using the Choi-Kraus representation of Z, we get
$$Z(B_j) = \sum_{i=1}^{d^2} B_i B_j B_i = B_j^3 + \sum_{i \ne j} B_i B_j B_i = \|v_j\|^4 B_j + \sum_{i \ne j}|\langle v_i, v_j\rangle|^2 B_i.$$
Comparing the two expressions for $Z(B_j)$, we get
$$\Big(\frac{1+\|v_j\|^4}{d+1} - \|v_j\|^4\Big) B_j + \sum_{i \ne j}\Big(\frac{\|v_i\|^2\|v_j\|^2}{d+1} - |\langle v_i, v_j\rangle|^2\Big) B_i = 0.$$
Since $\{B_i\}_{i=1}^{d^2}$ is a linearly independent set (because the number of Choi-Kraus operators equals the Choi-rank; see the discussion after Definition 2.7), we must have
$$\frac{1+\|v_j\|^4}{d+1} - \|v_j\|^4 = 0, \qquad \frac{\|v_i\|^2\|v_j\|^2}{d+1} - |\langle v_i, v_j\rangle|^2 = 0, \quad \forall\, i \ne j.$$
The first equation yields $\|v_j\|^4 = \frac{1}{d}$, which is constant for all $1 \le j \le d^2$, and using this the second yields $|\langle v_i, v_j\rangle|^2 = \frac{1}{d(d+1)}$ for all $i \ne j$. Then it is easy to see that the normalized vectors $w_i := \frac{v_i}{\|v_i\|}$ satisfy $|\langle w_i, w_j\rangle|^2 = \frac{1}{d+1}$ for all $i \ne j$, so that Zauner's conjecture holds.
The following result shows that the positivity condition on B i in the Proposition 3.2 is redundant and can be dropped.
Theorem 3.3. Zauner's conjecture is true for dimension d if and only if the quantum channel $Z = \frac{1}{d+1} I_d + \frac{d}{d+1}\Psi_d$ has a Choi-Kraus representation $Z(X) = \sum_{i=1}^{d^2} B_i X B_i^*$, for all $X \in M_d$, where each $B_i \in M_d$ is a rank one operator.
Proof. It suffices to prove the backward implication. Suppose that Z has a Choi-Kraus representation given by
$Z(X) = \sum_{i=1}^{d^2} B_i X B_i^*$, where each $B_i \in M_d$ is a rank one operator. Then for every $1 \le i \le d^2$, $B_i = x_i y_i^*$ for some vectors $x_i, y_i \in \mathbb{C}^d$.
Without loss of generality, we may assume that each y i is a unit vector. If we show that for every 1 ≤ i ≤ d 2 , x i = λ i y i for some λ i ∈ C, then the Choi-Kraus representation of Z reduces to a form which consists of rank one positive operators. Then the result follows from Proposition 3.2. We accomplish this in three steps.
We first prove that the set $\{x_i x_i^*\}_{i=1}^{d^2} \subseteq M_d$ is linearly independent.
To do this we note that Z is unital and hence we have
$$I_d = Z(I_d) = \sum_{i=1}^{d^2} B_i B_i^* = \sum_{i=1}^{d^2} x_i y_i^* y_i x_i^* = \sum_{i=1}^{d^2} x_i x_i^*. \tag{3.3}$$
Using this equation, on one hand we get
$$Z(B_j) = \frac{1}{d+1}\big(B_j + \mathrm{Tr}(B_j)\, I_d\big) = \frac{1}{d+1}\Big(B_j + \langle x_j, y_j\rangle\sum_{i=1}^{d^2} x_i x_i^*\Big),$$
and on the other hand we get
$$Z(B_j) = \sum_{i=1}^{d^2} B_i B_j B_i^* = \sum_{i=1}^{d^2}(x_i y_i^*)(x_j y_j^*)(x_i y_i^*)^* = \sum_{i=1}^{d^2}\langle x_j, y_i\rangle\langle y_i, y_j\rangle\, x_i x_i^*, \qquad 1 \le j \le d^2.$$
Comparing the two expressions for $Z(B_j)$, we get
$$\frac{1}{d+1}\Big(B_j + \langle x_j, y_j\rangle\sum_{i=1}^{d^2} x_i x_i^*\Big) = \sum_{i=1}^{d^2}\langle x_j, y_i\rangle\langle y_i, y_j\rangle\, x_i x_i^*,$$
which implies
$$B_j = \sum_{i=1}^{d^2}\big((d+1)\langle x_j, y_i\rangle\langle y_i, y_j\rangle - \langle x_j, y_j\rangle\big)\, x_i x_i^*.$$
This shows that the set $\{x_i x_i^*\}_{i=1}^{d^2}$ spans $\{B_j\}_{j=1}^{d^2}$. But since the Choi-rank of Z is $d^2$, $\{B_j\}_{j=1}^{d^2}$ is a basis for $M_d$. Consequently, the set $\{x_i x_i^*\}_{i=1}^{d^2}$ is a minimal spanning set, and hence a basis for $M_d$.
We next prove that $\|x_j\|^2 = \frac{1}{d}$ for every $1 \le j \le d^2$. Using Equation (3.3), for each $1 \le j \le d^2$, on one hand we get
$$Z(x_j x_j^*) = \frac{1}{d+1}\big(x_j x_j^* + \mathrm{Tr}(x_j x_j^*)\, I_d\big) = \frac{1}{d+1}\Big(x_j x_j^* + \|x_j\|^2\sum_{i=1}^{d^2} x_i x_i^*\Big),$$
and on the other hand, we get
$$Z(x_j x_j^*) = \sum_{i=1}^{d^2} B_i(x_j x_j^*)B_i^* = \sum_{i=1}^{d^2} x_i y_i^* x_j x_j^* y_i x_i^* = \sum_{i=1}^{d^2}|\langle y_i, x_j\rangle|^2\, x_i x_i^*.$$
Comparing the two expressions for $Z(x_j x_j^*)$, we get
$$\frac{1}{d+1}\Big(x_j x_j^* + \|x_j\|^2\sum_{i=1}^{d^2} x_i x_i^*\Big) = \sum_{i=1}^{d^2}|\langle y_i, x_j\rangle|^2\, x_i x_i^*.$$
Because of the linear independence of the set $\{x_i x_i^*\}_{i=1}^{d^2}$, comparing the coefficients of $x_i x_i^*$, it follows that
$$(d+1)\,|\langle y_j, x_j\rangle|^2 = 1 + \|x_j\|^2, \quad \forall j, \tag{3.4}$$
$$(d+1)\,|\langle y_i, x_j\rangle|^2 = \|x_j\|^2, \quad \forall\, i \ne j. \tag{3.5}$$
Using the Cauchy-Schwarz inequality in Equation (3.4), we obtain
$$1 + \|x_j\|^2 = (d+1)\,|\langle y_j, x_j\rangle|^2 \le (d+1)\,\|y_j\|^2\|x_j\|^2 = (d+1)\,\|x_j\|^2,$$
which implies $\frac{1}{d} \le \|x_j\|^2$. Taking the trace of Equation (3.3), we get $\sum_{i=1}^{d^2}\|x_i\|^2 = d$, which in turn yields
$$0 \le \sum_{i=1}^{d^2}\Big(\|x_i\|^2 - \frac{1}{d}\Big) = \sum_{i=1}^{d^2}\|x_i\|^2 - \sum_{i=1}^{d^2}\frac{1}{d} = d - d^2\cdot\frac{1}{d} = 0.$$
This shows that $\|x_j\|^2 = \frac{1}{d}$ for all $1 \le j \le d^2$.

We finish the proof by showing that each $x_i$ is a scalar multiple of $y_i$. Plugging $\|x_j\|^2 = \frac{1}{d}$ into Equations (3.4) and (3.5) readily shows that for $1 \le i,j \le d^2$,
$$|\langle x_i, y_j\rangle|^2 = \begin{cases}\ \frac{1}{d}, & i = j,\\[2pt] \ \frac{1}{d(d+1)}, & i \ne j.\end{cases} \tag{3.6}$$
But
$$\frac{1}{d} = |\langle x_i, y_i\rangle|^2 \le \|x_i\|^2\|y_i\|^2 = \frac{1}{d},$$
so that equality holds everywhere. This implies (from the equality case of the Cauchy-Schwarz inequality) that $x_i = \lambda_i y_i$ for some $\lambda_i \in \mathbb{C}$, and the assertion is proved.
Observe further that
$$\frac{1}{d} = |\langle x_i, y_i\rangle|^2 = |\langle\lambda_i y_i, y_i\rangle|^2 = |\lambda_i|^2.$$
Then for $i \ne j$, from Equation (3.6) we have:
(a)
$$\frac{1}{d(d+1)} = |\langle x_i, y_j\rangle|^2 = |\lambda_i|^2\,|\langle y_i, y_j\rangle|^2 = \frac{1}{d}\,|\langle y_i, y_j\rangle|^2,$$
which implies that $|\langle y_i, y_j\rangle|^2 = \frac{1}{d+1}$, thereby showing that $\{y_i\}_{i=1}^{d^2}$ satisfy Zauner's conjecture; and
(b)
$$\Big|\Big\langle\frac{x_i}{\|x_i\|}, \frac{x_j}{\|x_j\|}\Big\rangle\Big|^2 = \frac{|\lambda_i|^2|\lambda_j|^2}{\|x_i\|^2\|x_j\|^2}\,|\langle y_i, y_j\rangle|^2 = |\langle y_i, y_j\rangle|^2 = \frac{1}{d+1},$$
which shows that $\big\{\frac{x_i}{\|x_i\|}\big\}_{i=1}^{d^2}$ satisfy Zauner's conjecture.
The following corollary is an immediate consequence of Theorem 3.3 together with the fact that $\mathrm{cr}(Z) = d^2$.

Corollary 3.5. For $d \ge 2$, $\mathrm{ebr}(Z) = d^2$ if and only if a SIC POVM exists in dimension d.

Interestingly, for $t \in \big[\frac{-1}{d^2-1}, \frac{1}{d+1}\big)$ (see Proposition 2.11), the channels $\Phi_t : M_d \to M_d$ defined by $\Phi_t = tI_d + (1-t)\Psi_d$ cannot have a Choi-Kraus representation with $d^2$ positive rank one Choi-Kraus operators, which we prove next.

Proposition 3.6. Let $t \in \big[\frac{-1}{d^2-1}, \frac{1}{d+1}\big)$, and let $\Phi_t : M_d \to M_d$ be the quantum channel given by $\Phi_t = tI_d + (1-t)\Psi_d$. Then $\Phi_t$ cannot have a Choi-Kraus representation
$$\Phi_t(X) = \sum_{i=1}^{d^2} R_i X R_i^*,$$
where each $R_i$ is a positive rank one operator.
Proof. We shall follow the arguments of Proposition 3.2. Suppose $\Phi_t$ has a Choi-Kraus representation $\Phi_t(X) = \sum_{i=1}^{d^2} R_i X R_i^*$, where $R_i = v_i v_i^*$ for some vector $v_i \in \mathbb{C}^d$. Since $\Phi_t$ is unital, we have $I_d = \sum_{i=1}^{d^2}\|v_i\|^2 R_i$. Applying $\Phi_t$ to $R_j$,
$$\Phi_t(R_j) = tR_j + \frac{1-t}{d}\,\mathrm{Tr}(R_j)\, I_d = \Big(t + \frac{1-t}{d}\|v_j\|^4\Big) R_j + \sum_{i \ne j}\frac{1-t}{d}\|v_j\|^2\|v_i\|^2 R_i,$$
and also
$$\Phi_t(R_j) = \sum_{i=1}^{d^2} R_i R_j R_i^* = \|v_j\|^4 R_j + \sum_{i \ne j}|\langle v_i, v_j\rangle|^2 R_i.$$
Comparing both expressions for $\Phi_t(R_j)$ and using the linear independence of the $R_i$'s, we get
$$t + \frac{1-t}{d}\|v_j\|^4 = \|v_j\|^4, \qquad \frac{1-t}{d}\|v_j\|^2\|v_i\|^2 = |\langle v_i, v_j\rangle|^2, \quad \forall\, i \ne j.$$
This implies
$$\|v_j\|^4 = \frac{dt}{d+t-1}, \qquad |\langle v_i, v_j\rangle|^2 = \frac{t(1-t)}{d+t-1}, \quad \forall\, i \ne j.$$
Let $w_i = \frac{v_i}{\|v_i\|}$. Then for all $i \ne j$, we have
$$|\langle w_i, w_j\rangle|^2 = \frac{|\langle v_i, v_j\rangle|^2}{\|v_i\|^2\|v_j\|^2} = \frac{d+t-1}{dt}\cdot\frac{t(1-t)}{d+t-1} = \frac{1-t}{d}.$$
But by Proposition 2.3, we must have $\frac{1-t}{d} = \frac{1}{1+d}$, which implies that $t = \frac{1}{d+1}$, a contradiction.
In what follows, we seek other entanglement breaking channels with the property that a statement about their entanglement breaking rank is equivalent to Zauner's conjecture. If $\Phi : M_d \to M_m$ is an entanglement breaking map, and if $\Phi' : M_m \to M_k$ is a positive map, then $\Phi' \circ \Phi$ is also entanglement breaking. This follows readily from Theorem 2.9(d). The following proposition states the relation between their entanglement breaking ranks. Proof. Following Remark 2.10, the map Φ may be expressed as
$$\Phi(X) = \sum_{k=1}^{K}\langle X w_k, w_k\rangle\, v_k v_k^*, \qquad X \in M_d,$$
for some vectors $w_k \in \mathbb{C}^d$ and $v_k \in \mathbb{C}^m$. Without loss of generality, we may assume that $\|v_k\| = 1$ for all $1 \le k \le K$. Then we have
$$\Phi'(\Phi(X)) = \sum_{k=1}^{K}\langle X w_k, w_k\rangle\,\Phi'(v_k v_k^*), \qquad X \in M_d.$$
Since $\Phi'$ is positive, it follows that $\Phi'(v_k v_k^*) \ge 0$ for all $1 \le k \le K$. The spectral decomposition then allows us to write each $\Phi'(v_k v_k^*)$ as a sum of (non-zero) rank one positive operators, and each such sum contains at most $\mathrm{ri}(\Phi')$ terms. Counting the total number of terms in the resulting sum for $\Phi'(\Phi(X))$ yields the required inequality.

Suppose now that the entanglement breaking map $\Phi : M_d \to M_m$ is given by $\Phi(X) = \sum_{k=1}^{K}\langle X w_k, w_k\rangle\, v_k v_k^*$, for all $X \in M_d$, for some $w_k \in \mathbb{C}^d$, $v_k \in \mathbb{C}^m$ with $\|v_k\| = 1$. If $\Phi' : M_m \to M_k$ is a unital quantum channel such that the projections $\{v_k v_k^*\}_{k=1}^{K}$ are in the multiplicative domain of $\Phi'$, then we have $\mathrm{ebr}(\Phi' \circ \Phi) \le \mathrm{ebr}(\Phi)$.

Proof. If a projection $v_k v_k^*$ is in the multiplicative domain of $\Phi'$, then Choi's theorem on the multiplicative domain [6] asserts that $\Phi'(v_k v_k^*)$ is a rank one projection, for all $1 \le k \le K$. The result then follows from Proposition 3.7.
$$\Phi_0(X) = \frac{1}{d+1}\big(X^T + \mathrm{Tr}(X)\, I_d\big), \qquad X \in M_d,$$
where $X^T$ denotes the transpose of X. Thus the following statements are equivalent:
(1) Zauner's conjecture is true.
(2) $\mathrm{ebr}(\Phi_0) = d^2$.

The Werner-Holevo channel $\Phi_0$ serves as an example of a quantum channel whose entanglement breaking rank is strictly greater than its Choi-rank. To see this, first note that by Corollary 3.9 we have $\mathrm{ebr}(\Phi_0) = \mathrm{ebr}(Z) \ge \mathrm{rank}(C_Z) = d^2$. Now it is not hard to see that the Choi-rank of $\Phi_0$ is $\frac{d(d+1)}{2}$. Indeed, the range of the Choi-matrix $C_{\Phi_0}$ is the subspace of $\mathbb{C}^d \otimes \mathbb{C}^d$ spanned by the vectors
$$\Big\{\frac{1}{\sqrt 2}\big(e_i \otimes e_j + e_j \otimes e_i\big) : i \ne j,\ 1 \le i,j \le d\Big\} \cup \{e_i \otimes e_i : 1 \le i \le d\}, \tag{3.7}$$
where $\{e_i : 1 \le i \le d\}$ is the canonical basis of $\mathbb{C}^d$. This is because $C_{\Phi_0}$ has the same range as the projection $P = \frac{1}{2}(I_d \otimes I_d + W)$, where W is the flip-operator defined on $\mathbb{C}^d \otimes \mathbb{C}^d$ by $W(\xi \otimes \eta) = \eta \otimes \xi$. Since $\mathrm{cr}(\Phi_0) = \frac{d(d+1)}{2} < d^2$, its entanglement breaking rank "could" theoretically be strictly less than $d^2$. However, due to Corollary 3.9, we infer that $\mathrm{ebr}(\Phi_0) = \mathrm{ebr}(Z) \ge d^2$, and hence the entanglement breaking rank of $\Phi_0$ is at least $d^2$.
The Werner-Holevo channel is, of course, not the only channel with its entanglement breaking rank strictly greater than its Choi-rank. The following result generalizes [9, Theorem 1] and provides at least one way to find quantum channels with this property. Proof. The proof is obvious.
Lower semicontinuity of entanglement breaking rank
The following result shows that ebr is a lower semicontinuous function. Recall that if X is a metric space, $x_0 \in X$, and $f : X \to \mathbb{R}\cup\{\pm\infty\}$, then we say that f is lower semicontinuous at $x_0$ if and only if whenever $(x_n)_{n=1}^{\infty}$ is a sequence in X which converges to $x_0$, we have $\liminf_{n\to\infty} f(x_n) \ge f(x_0)$.

Proposition 4.1. If $(\Psi_n)_{n=1}^{\infty}$ is a sequence of entanglement breaking maps converging to an entanglement breaking map Ψ, then $\liminf_{n\to\infty}\mathrm{ebr}(\Psi_n) \ge \mathrm{ebr}(\Psi)$; that is, ebr is lower semicontinuous.

Proof. It is enough to show that if $\mathrm{ebr}(\Psi_n) \le k$ for all $n \in \mathbb{N}$, then $\mathrm{ebr}(\Psi) \le k$. With that assumption, for each $n \in \mathbb{N}$ there exist rank one operators $\{R_{i,n} : 1 \le i \le k\}$ such that $\Psi_n(X) = \sum_{i=1}^{k} R_{i,n} X R_{i,n}^*$. Since $\Psi_n(I_d)$ converges to $\Psi(I_d)$, the sequence $(\Psi_n(I_d))$ is bounded, and so there is $c > 0$ such that $\Psi_n(I_d) \le cI_d$ for all $n \in \mathbb{N}$. Hence
$$R_{j,n} R_{j,n}^* \le \sum_{i=1}^{k} R_{i,n} R_{i,n}^* = \Psi_n(I_d) \le cI_d,$$
and so these rank one operators are also bounded. By compactness, we have a subsequence $(n_m)_{m=1}^{\infty}$ such that $\lim_m R_{i,n_m} = R_i$, which has rank at most one. Thus
$$\Psi(X) = \lim_m \Psi_{n_m}(X) = \lim_m \sum_{i=1}^{k} R_{i,n_m} X R_{i,n_m}^* = \sum_{i=1}^{k} R_i X R_i^*,$$
and the result follows.
Proposition 4.1 yields the following corollary.
Corollary 4.2. Let $0 \le t \le \frac{1}{d+1}$, and let $\Phi_t = tI_d + (1-t)\Psi_d$. If
$$\liminf_{t \to \frac{1}{d+1}^-}\mathrm{ebr}(\Phi_t) \le d^2,$$
then Zauner's conjecture is true.
This corollary motivates us to compute $\mathrm{ebr}(\Phi_t)$. We show that in dimensions d = 2 and d = 3, $\mathrm{ebr}(\Phi_t) = d^2$ when $0 \le t \le \frac{1}{d+1}$. In fact, we have the following conjecture, which is stronger than $\mathrm{ebr}(\Phi_t) = d^2$ for all $0 \le t \le \frac{1}{d+1}$.

Conjecture 4.3. Let $d \ge 2$. For $0 \le t \le \frac{1}{d+1}$, consider the channel $\Phi_t = tI_d + (1-t)\Psi_d$. There exists a set of $2d^2$ continuous functions
$$x_i, y_i : \Big[0, \frac{1}{d+1}\Big] \to \mathbb{C}^d, \qquad \|x_i(t)\| = \|y_i(t)\| = 1 \ \text{ for all } 0 \le t \le \frac{1}{d+1},\ 1 \le i \le d^2,$$
such that
$$\Phi_t(X) = \frac{1}{d}\sum_{i=1}^{d^2}\big(x_i(t)\, y_i(t)^*\big)\, X\,\big(x_i(t)\, y_i(t)^*\big)^*, \qquad X \in M_d,$$
for all $0 \le t \le \frac{1}{d+1}$.

We show that for dimensions d = 2 and d = 3 we can indeed find such continuous families of vectors. To find them we assume a covariance property and some ad hoc assumptions. Indeed, instead of $2d^2$ continuous functions, we now wish to find two continuous functions
$$x, y : \Big[0, \frac{1}{d+1}\Big] \to \mathbb{C}^d, \qquad \|x(t)\| = \|y(t)\| = 1 \ \text{ for all } 0 \le t \le \frac{1}{d+1},$$
such that the map defined by
$$\tilde\Phi_t(X) = \frac{1}{d}\sum_{i,j=0}^{d-1}\big(W_{i,j}\,x(t)\big)\big(W_{i,j}\,y(t)\big)^*\, X\,\Big(\big(W_{i,j}\,x(t)\big)\big(W_{i,j}\,y(t)\big)^*\Big)^*$$
satisfies $\tilde\Phi_t = \Phi_t$, where of course $\{W_{i,j} : 0 \le i,j \le d-1\}$ are the discrete Weyl matrices, and $0 \le t \le \frac{1}{d+1}$. To find the functions x and y, we solve a set of simultaneous equations obtained by comparing their Choi-matrices:
$$\tilde\Phi_t(E_{k,l}) = \Phi_t(E_{k,l}), \qquad 0 \le k,l \le d-1,$$
where $\{E_{k,l} : 0 \le k,l \le d-1\}$ are the canonical matrix units of $M_d$.
We establish a couple of lemmas to help in the calculations for computing the functions x and y.
Lemma 4.4. Let $\{W_{i,j} : 0 \le i,j \le d-1\} \subseteq M_d$ be the discrete Weyl matrices. If $E_{p,q} \in M_d$ is any canonical matrix unit with $p \ne q$ and $0 \le p,q \le d-1$, then
$$\sum_{j=0}^{d-1} W_{i,j}^*\, E_{p,q}\, W_{i,j} = 0.$$
Proof. Computation shows that the (r,s)-entry of $\sum_{j=0}^{d-1} W_{i,j}^* E_{p,q} W_{i,j}$ is $\sum_{j=0}^{d-1}\omega^{j(p-q)}\,\delta_{r,p-i}\,\delta_{s,q-i}$. But $\sum_{j=0}^{d-1}\omega^{j(p-q)} = 0$ when $p \ne q$, and hence the result follows.
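Lemma 4.4 is straightforward to confirm by direct computation; the following sketch (not from the paper) checks it exhaustively for d = 3.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
U = np.roll(np.eye(d), 1, axis=0)            # cyclic shift
V = np.diag(omega ** np.arange(d))           # clock matrix
W = {(i, j): np.linalg.matrix_power(U, i) @ np.linalg.matrix_power(V, j)
     for i in range(d) for j in range(d)}

# For every i and every off-diagonal matrix unit E_pq, the sum over j vanishes.
for i in range(d):
    for p in range(d):
        for q in range(d):
            if p == q:
                continue
            E = np.zeros((d, d)); E[p, q] = 1.0
            S = sum(W[i, j].conj().T @ E @ W[i, j] for j in range(d))
            assert np.allclose(S, 0)         # Lemma 4.4
```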
Lemma 4.5. Let $x, y \in \mathbb{C}^d$, and define
$$\Phi(X) = \sum_{i,j=0}^{d-1}(W_{i,j}x)(W_{i,j}y)^*\, X\,(W_{i,j}y)(W_{i,j}x)^* = \sum_{i,j=0}^{d-1}\langle W_{i,j}^*\, X\, W_{i,j}\, y,\ y\rangle\; W_{i,j}\, xx^*\, W_{i,j}^*.$$
If $E_{k,l} \in M_d$ is a canonical matrix unit, then $\Phi(E_{k,k})$ is a diagonal matrix for all $0 \le k \le d-1$; and if $k \ne l$, then the diagonal entries of $\Phi(E_{k,l})$ are all zero.

Proof. Let $x = (x_0, \dots, x_{d-1})$ and $y = (y_0, \dots, y_{d-1})$. If $\{e_j : 0 \le j \le d-1\}$ is the canonical orthonormal basis of $\mathbb{C}^d$, then a computation shows that the (p,q)-entry of $\Phi(E_{k,l})$ is given by
$$\langle\Phi(E_{k,l})\,e_q,\ e_p\rangle = \sum_{i,j=0}^{d-1}\overline{\langle W_{i,j}\,y,\ e_k\rangle}\,\langle W_{i,j}\,y,\ e_l\rangle\;\mathrm{Tr}\big(xx^*\, W_{i,j}^*\, E_{q,p}\, W_{i,j}\big). \tag{4.1}$$
When $0 \le k \le d-1$ and $p \ne q$, using Equation (4.1) we have
$$\langle\Phi(E_{k,k})\,e_q,\ e_p\rangle = \sum_{i,j=0}^{d-1}|\langle W_{i,j}\,y,\ e_k\rangle|^2\;\mathrm{Tr}\big(xx^*\, W_{i,j}^*\, E_{q,p}\, W_{i,j}\big) = \sum_{i=0}^{d-1}|y_{k-i}|^2\;\mathrm{Tr}\Big(xx^*\sum_{j=0}^{d-1} W_{i,j}^*\, E_{q,p}\, W_{i,j}\Big) = 0,$$
where the last equality follows from Lemma 4.4. When $k \ne l$, then using Equation (4.1), the (p,p)-entry of $\Phi(E_{k,l})$ is
$$\langle\Phi(E_{k,l})\,e_p,\ e_p\rangle = \sum_{i,j=0}^{d-1}\omega^{j(l-k)}\,\overline{y_{k-i}}\, y_{l-i}\;\mathrm{Tr}\big(xx^*\, W_{i,j}^*\, E_{p,p}\, W_{i,j}\big) = \sum_{i,j=0}^{d-1}\omega^{j(l-k)}\,\overline{y_{k-i}}\, y_{l-i}\,|x_{p-i}|^2 = \sum_{i=0}^{d-1}\overline{y_{k-i}}\, y_{l-i}\,|x_{p-i}|^2\sum_{j=0}^{d-1}\omega^{j(l-k)} = 0,$$
where the last equality follows from $\sum_{j=0}^{d-1}\omega^{j(l-k)} = 0$, since $k \ne l$.

We now prove Conjecture 4.3 for d = 2 and d = 3. We remark that there are many possible solutions for the functions x and y, but we shall not be occupied with finding all of them.

Theorem 4.6. There exist continuous functions $x, y : \big[0, \frac{1}{3}\big] \to \mathbb{C}^2$ such that $\|x(t)\| = \|y(t)\| = 1$, and such that
$$\Phi_t(X) = \frac{1}{2}\sum_{i,j=0}^{1}(W_{i,j}\,x(t))(W_{i,j}\,y(t))^*\, X\,(W_{i,j}\,y(t))(W_{i,j}\,x(t))^*, \qquad X \in M_2,$$
for all $0 \le t \le \frac{1}{3}$, where $\Phi_t = tI_2 + (1-t)\Psi_2$ and $\{W_{i,j} : 0 \le i,j \le 1\}$ are the discrete Weyl matrices.
Proof. Without loss of generality we may assume that
$$x = \begin{pmatrix} r\\ \sqrt{1-r^2}\, e^{i\theta}\end{pmatrix}, \qquad y = \begin{pmatrix} s\, e^{i\theta_1}\\ \sqrt{1-s^2}\, e^{i\theta_2}\end{pmatrix},$$
where $0 \le r,s \le 1$ and $\theta, \theta_1, \theta_2 \in [0, 2\pi)$, and where we have suppressed the dependence on t. Set
$$a = rs, \qquad b = \sqrt{(1-r^2)(1-s^2)}. \tag{4.2}$$
With the help of Lemma 4.5, computation shows that the channel $\tilde\Phi_t$ built from x and y satisfies
$$\tilde\Phi_t(E_{1,1}) = \begin{pmatrix} a^2+b^2 & 0\\ 0 & 1-(a^2+b^2)\end{pmatrix}, \qquad \tilde\Phi_t(E_{1,2}) = \begin{pmatrix} 0 & 2ab\cos(\theta+\theta_1-\theta_2)\\ 2ab\cos(\theta-\theta_1+\theta_2) & 0\end{pmatrix},$$
$$\tilde\Phi_t(E_{2,1}) = \begin{pmatrix} 0 & 2ab\cos(\theta-\theta_1+\theta_2)\\ 2ab\cos(\theta+\theta_1-\theta_2) & 0\end{pmatrix}, \qquad \tilde\Phi_t(E_{2,2}) = \begin{pmatrix} 1-(a^2+b^2) & 0\\ 0 & a^2+b^2\end{pmatrix}.$$
Comparing these with $\Phi_t(E_{k,l})$, we get the following simultaneous equations:
$$\frac{1+t}{2} = a^2+b^2, \qquad t = 2ab\cos(\theta+\theta_1-\theta_2), \qquad 0 = 2ab\cos(\theta-\theta_1+\theta_2). \tag{4.3}$$
If $t \ne 0$, then the second equation in (4.3) implies that $ab \ne 0$. Therefore the last equation in (4.3) implies that $\cos(\theta-\theta_1+\theta_2) = 0$, which happens only when $\theta-\theta_1+\theta_2 = (2k+1)\frac{\pi}{2}$ for some $k \in \mathbb{N}$. Thus we have $\theta_2-\theta_1 = (2k+1)\frac{\pi}{2}-\theta$, and hence
$$\cos(\theta+\theta_1-\theta_2) = \cos\Big(2\theta - (2k+1)\frac{\pi}{2}\Big) = (-1)^k\sin(2\theta).$$
Therefore the second equation of (4.3) becomes
$$t = (-1)^k\, 2ab\sin(2\theta). \tag{4.4}$$
Now notice that by the AM-GM inequality we have $\frac{a^2+b^2}{2} \ge ab$. Plugging in the values of $a^2+b^2$ and ab from Equations (4.3) and (4.4), respectively, and simplifying,
$$(-1)^k\sin(2\theta) \ge \frac{2t}{1+t}.$$
While we could let θ depend on t, for keeping things simple we choose a value of θ which satisfies the above inequality for all $0 \le t \le \frac{1}{3}$. Since $\frac{2t}{1+t} \in \big[0, \frac{1}{2}\big]$ for $0 \le t \le \frac{1}{3}$, we choose k to be even, say k = 0, and $\theta = \frac{\pi}{4}$, so that the above inequality is satisfied. Then the relation $\theta_2-\theta_1 = (2k+1)\frac{\pi}{2}-\theta$ yields $\theta_2-\theta_1 = \frac{\pi}{4}$. Again, to keep the analysis simpler, we set $\theta_1 = 0$ and $\theta_2 = \frac{\pi}{4}$. Thus,
$$\theta = \frac{\pi}{4}, \qquad \theta_1 = 0, \qquad \theta_2 = \frac{\pi}{4}. \tag{4.5}$$
With these values, the system of equations (4.3) reduces to
$$a^2+b^2 = \frac{1+t}{2}, \qquad 2ab = t. \tag{4.6}$$
A bit of computation shows that one solution to Equations (4.6) is
$$a = \frac{1}{2}\left(\sqrt{\frac{1+3t}{2}} + \sqrt{\frac{1-t}{2}}\right), \qquad b = \frac{1}{2}\left(\sqrt{\frac{1+3t}{2}} - \sqrt{\frac{1-t}{2}}\right). \tag{4.7}$$
Finally, we can invert Equations (4.2) to express r and s in terms of a and b. It is easy to see that $r^2+s^2 = 1+a^2-b^2$ and $2rs = 2a$, so that one solution is
$$r = \frac{1}{2}\left(\sqrt{(1+a)^2-b^2} + \sqrt{(1-a)^2-b^2}\right), \qquad s = \frac{1}{2}\left(\sqrt{(1+a)^2-b^2} - \sqrt{(1-a)^2-b^2}\right). \tag{4.8}$$
With these values, we get an instance of the functions x(t) and y(t) which satisfy the system of equations (4.3). Clearly, x(t) and y(t) are continuous at $t = \frac{1}{3}$. Moreover, in the limit $t \to 0$, Equation (4.8) gives $r = 1$ and $s = \frac{1}{\sqrt 2}$, so that
$$x(0) = \begin{pmatrix}1\\ 0\end{pmatrix}, \qquad y(0) = \frac{1}{\sqrt 2}\begin{pmatrix}1\\ e^{i\pi/4}\end{pmatrix}.$$
It is easily checked that these vectors do satisfy the system of equations (4.3).
Hence, x(t) and y(t) are continuous at t = 0.
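The entire d = 2 construction can be verified numerically. The sketch below (my own check) implements Equations (4.7) and (4.8) with the phases (4.5), guards two square roots against harmless floating-point rounding, and confirms that the resulting Choi-Kraus representation (with the $\frac{1}{2}$ normalization of Conjecture 4.3) reproduces $\Phi_t$ across the whole interval.

```python
import numpy as np

def xy(t):
    # Equations (4.7) and (4.8); theta = pi/4, theta1 = 0, theta2 = pi/4 as in (4.5).
    a = 0.5 * (np.sqrt((1 + 3 * t) / 2) + np.sqrt((1 - t) / 2))
    b = 0.5 * (np.sqrt((1 + 3 * t) / 2) - np.sqrt((1 - t) / 2))
    outer = (1 + a) ** 2 - b ** 2
    inner = max((1 - a) ** 2 - b ** 2, 0.0)   # clip rounding error at t = 1/3
    r = 0.5 * (np.sqrt(outer) + np.sqrt(inner))
    s = 0.5 * (np.sqrt(outer) - np.sqrt(inner))
    ph = np.exp(1j * np.pi / 4)
    x = np.array([r, np.sqrt(max(1 - r ** 2, 0.0)) * ph])
    y = np.array([s, np.sqrt(max(1 - s ** 2, 0.0)) * ph])
    return x, y

U = np.array([[0.0, 1.0], [1.0, 0.0]])
V = np.diag([1.0, -1.0])
Ws = [np.linalg.matrix_power(U, i) @ np.linalg.matrix_power(V, j)
      for i in range(2) for j in range(2)]

rng = np.random.default_rng(1)
for t in np.linspace(0.0, 1 / 3, 7):
    x, y = xy(t)
    kraus = [np.outer(W @ x, (W @ y).conj()) for W in Ws]
    for _ in range(3):
        X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        lhs = sum(K @ X @ K.conj().T for K in kraus) / 2
        rhs = t * X + (1 - t) * np.trace(X) * np.eye(2) / 2
        assert np.allclose(lhs, rhs)
```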
While the proof is complete, we do a few more calculations to show that when $t = \frac{1}{3}$, we do recover a set of vectors which satisfy Zauner's conjecture. We also show how the equiangular property depends on t.

When $t = \frac{1}{3}$, it is straightforward to compute from the above equations that
$$x\Big(\frac{1}{3}\Big) = y\Big(\frac{1}{3}\Big) = \frac{1}{\sqrt 6}\begin{pmatrix}\sqrt{3+\sqrt 3}\\ e^{i\pi/4}\sqrt{3-\sqrt 3}\end{pmatrix},$$
which is indeed one of the fiducial vectors in dimension two mentioned in [20]. To get the other fiducial vector we choose $\theta = \frac{5\pi}{4}$ and replace $r = \sqrt{\frac{3+\sqrt 3}{6}}$ by $\sqrt{1-r^2} = \sqrt{\frac{3-\sqrt 3}{6}}$.

Finally, we compute the angle $|\langle W_{i,j}\,x(t),\ W_{k,l}\,x(t)\rangle|^2$ for all $(i,j) \ne (k,l)$. It suffices to compute $|\langle W_{i,j}\,x(t),\ x(t)\rangle|^2$ for all $(i,j) \ne (0,0)$. Computation shows that
$$A_1(t) := |\langle W_{0,1}\,x(t),\ x(t)\rangle|^2 = \frac{1 - 3t^2 + \sqrt{(1-9t^2)(1-t^2)}}{2},$$
$$A_2(t) := |\langle W_{1,0}\,x(t),\ x(t)\rangle|^2 = |\langle W_{1,1}\,x(t),\ x(t)\rangle|^2 = \frac{1 + 3t^2 - \sqrt{(1-9t^2)(1-t^2)}}{4}.$$
Thus we get two distinct angles as functions of t. When $t = \frac{1}{3}$, we see that both values are $\frac{1}{3}$, which we would expect if the vectors are equiangular. We plot these two functions in Figure 1. We can do a similar analysis for y(t).

Theorem 4.7. There exist continuous functions $x, y : \big[0, \frac{1}{4}\big] \to \mathbb{C}^3$ such that $\|x(t)\| = \|y(t)\| = 1$, and such that
$$\Phi_t(X) = \frac{1}{3}\sum_{i,j=0}^{2}(W_{i,j}\,x(t))(W_{i,j}\,y(t))^*\, X\,(W_{i,j}\,y(t))(W_{i,j}\,x(t))^*, \qquad X \in M_3,$$
for all $0 \le t \le \frac{1}{4}$, where $\Phi_t = tI_3 + (1-t)\Psi_3$, and $\{W_{i,j} : 0 \le i,j \le 2\}$ are the discrete Weyl matrices.
Proof. Consider the functions $u, v : \big[0, \frac{1}{4}\big] \to \mathbb{C}^3$ defined by
$$u(t) = \begin{pmatrix}1\\ \lambda\\ \lambda\end{pmatrix}, \qquad v(t) = \begin{pmatrix}\alpha\\ \beta\\ \beta\end{pmatrix},$$
where λ, α, β are also functions of t, but we have suppressed the dependence upon t. The function α is given by the positive square root of
$$\alpha^2 = \frac{5 + 4t + 4\sqrt{1 + 7t - 8t^2}}{81}.$$
The function λ is a solution of $\lambda(\lambda+1) = \rho$, say $\lambda = \frac{-1+\sqrt{1+4\rho}}{2}$, where ρ is given by
$$\rho = \frac{1 + 2t - \sqrt{1 + 7t - 8t^2}}{1 - 4t}.$$
Finally, β is given by $\beta = -\alpha(\lambda+1)$. Now define $x, y : \big[0, \frac{1}{4}\big] \to \mathbb{C}^3$ as
$$x(t) = \sqrt 3\,\|v(t)\|\, u(t), \qquad y(t) = \frac{v(t)}{\|v(t)\|}.$$
Clearly, $\|y(t)\| = 1$, and another string of calculations shows
$$\|x(t)\|^2 = 3\alpha^2(1 + 2\lambda^2)(2\lambda^2 + 4\lambda + 3) = 3\alpha^2(4\rho^2 + 4\rho + 3) = 1.$$
Computations show that x and y are continuous on the domain. Moreover,
$$x\Big(\frac{1}{4}\Big) = y\Big(\frac{1}{4}\Big) = \begin{pmatrix}\sqrt{2/3}\\ -1/\sqrt 6\\ -1/\sqrt 6\end{pmatrix}.$$
This particular fiducial vector indeed matches the one in [20] with $r_0 = \sqrt{2/3}$ and $\theta_1 = \theta_2 = \pi$.
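As an independent check (mine, not in the paper), one can confirm that $w = \big(\sqrt{2/3}, -1/\sqrt 6, -1/\sqrt 6\big)$ is fiducial for d = 3: its Weyl orbit is equiangular, with all overlaps equal to $\frac{1}{d+1} = \frac{1}{4}$.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
U = np.roll(np.eye(d), 1, axis=0)
V = np.diag(omega ** np.arange(d))

w = np.array([np.sqrt(2 / 3), -1 / np.sqrt(6), -1 / np.sqrt(6)])
assert np.isclose(np.linalg.norm(w), 1.0)

# |<W_ij w, w>|^2 = 1/(d+1) for all (i, j) != (0, 0).
for i in range(d):
    for j in range(d):
        if (i, j) == (0, 0):
            continue
        Wij = np.linalg.matrix_power(U, i) @ np.linalg.matrix_power(V, j)
        assert np.isclose(abs(np.vdot(Wij @ w, w)) ** 2, 1 / (d + 1))
```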
There are again two values of $|\langle W_{i,j}\,x(t),\ x(t)\rangle|^2$ for $0 \le i,j \le 2$ and $(i,j) \ne (0,0)$. Calling these values $A_1(t)$ and $A_2(t)$, we see that
$$A_1(t) := 9\,\|v(t)\|^4(1 - \lambda^2)^2 = 9\alpha^4(2\lambda^2 + 4\lambda + 3)^2(1 - \lambda^2)^2,$$
$$A_2(t) := 9\,\|v(t)\|^4(\lambda - \lambda^2)^2 = 9\alpha^4(2\lambda^2 + 4\lambda + 3)^2(\lambda - \lambda^2)^2,$$
which are graphed in Figure 2.

Figure 2. Plot of $|\langle W_{i,j}\,x(t),\ x(t)\rangle|^2$ for $(i,j) \ne (0,0)$ when d = 3.
Mutually unbiased bases
The existence problem of d + 1 mutually unbiased bases in C d (d ≥ 2) is another major open problem in quantum information theory. In this final section we show that the vectors constituting the d + 1 mutually unbiased bases give rise to a Choi-Kraus representation of the channel Z.
Two orthonormal bases $\{v_i\}_{i=1}^{d}$ and $\{w_j\}_{j=1}^{d}$ of $\mathbb{C}^d$ are called mutually unbiased if $|\langle v_i, w_j\rangle|^2 = \frac{1}{d}$ for all $1 \le i,j \le d$. (It is easy to see that if $|\langle v_i, w_j\rangle|^2 = c$ is constant for all $1 \le i,j \le d$, then $c = \frac{1}{d}$.)

Let m(d) be the maximum number of possible mutually unbiased bases in $\mathbb{C}^d$. It is an open question to decide how m(d) depends on d. It is conjectured that $m(d) = d+1$ for all $d \ge 2$. When $d = p^n$, where p is prime and n is any natural number, then indeed $m(p^n) = p^n + 1$ [2, 17]. However, even for d = 6, which is not of the form $p^n$, it is not known whether $m(6) = 7$. In fact, only three mutually unbiased bases have been found so far for dimension six.
In what follows, our aim is to show that if there exist $d+1$ mutually unbiased bases $V_i = \{v_{i,j} : 1 \le j \le d\}$ ($1 \le i \le d+1$) for $\mathbb{C}^d$, then the channel defined by
$$\Phi(X) = \frac{1}{d+1}\sum_{i=1}^{d+1}\sum_{j=1}^{d} P_{i,j}\, X\, P_{i,j}, \qquad X \in M_d,$$
where $P_{i,j}$ is the projection onto $\mathrm{span}\{v_{i,j}\}$, is nothing but the channel Z.
Lemma 5.2. For $1 \le i \le d+1$, let $V_i = \{v_{i,j} : 1 \le j \le d\}$ be a collection of $d+1$ mutually unbiased bases of $\mathbb{C}^d$. If $P_{i,j} = v_{i,j}\, v_{i,j}^*$ for all i, j, then the set
$$\mathcal{P} = \{P_{1,j} : 1 \le j \le d\} \cup \{P_{i,j} : 2 \le i \le d+1,\ 1 \le j \le d-1\} \subseteq M_d$$
is a basis for $M_d$.

Proof. Since $|\mathcal{P}| = d^2$, it is sufficient to show that it is linearly independent in $M_d$. Using the fact that $\{V_i\}_{i=1}^{d+1}$ are mutually unbiased bases, for $1 \le i,k \le d+1$ and $1 \le j,l \le d$, we have
$$\mathrm{Tr}(P_{i,j}\, P_{k,l}) = |\langle v_{i,j},\ v_{k,l}\rangle|^2 = \begin{cases}\ 1, & i = k,\ j = l,\\ \ 0, & i = k,\ j \ne l,\\ \ \frac{1}{d}, & i \ne k. \end{cases} \tag{5.1}$$
0 = λ 1,d | v 1,d , v 1,l | 2 + d+1 i=1 d−1 j=1 λ i,j | v i,j , v 1,l | 2 = λ 1,l + 1 d d+1 i=2 d−1 j=1 λ i,j ,λ k,l = −1 d λ1,d + d+1 i=1 i =k d−1 j=1 λ i,j . (5.6)
Again since the right hand side of Equation (5.6) is independent of l, we conclude that λ k,1 = λ k,2 = ... = λ k,d−1 =: λ k . (5.7)
However, for any 2 ≤ k ≤ d + 1, using Equation (5.3),
λ k = λ k,l = −1 d λ1,d + d+1 i=1 i =k d−1 j=1 λ i,j = −1 d λ 1,d + d+1 i=1 d−1 j=1 λ i,j − d−1 j=1 λ k,j = −1 d − d−1 j=1 λ k,j = d − 1 d λ k ,
which yields λ k = 0. Then Equation (5.2) reduces to 0 = λ 1,1 P 1,1 + ... + λ 1,d P 1,d = λ 1 (P 1,1 + ... + P 1,d ) = λ 1 I d ,
and thus λ 1 = 0. Hence we have shown that λ i,j = 0 for all 1 ≤ i ≤ d + 1 and 1 ≤ j ≤ d − 1, and λ 1,d = 0.
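For d = 2 the lemma can be checked directly with the three eigenbases of the Pauli matrices (a numerical sketch; this particular choice of bases is an assumption, not from the paper):

```python
import numpy as np

# Three mutually unbiased bases of C^2: eigenbases of the Pauli Z, X, Y matrices.
bases = [
    np.eye(2),
    np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),
]

def proj(v):
    return np.outer(v, v.conj())

# The set P of Lemma 5.2: all d projections from the first basis and the
# first d - 1 projections from each remaining basis; |P| = d^2 = 4.
d = 2
P = [proj(bases[0][:, j]) for j in range(d)] + \
    [proj(B[:, j]) for B in bases[1:] for j in range(d - 1)]

# Linear independence of the 4 matrices in M_2 <=> the vectorizations have rank 4.
rank = np.linalg.matrix_rank(np.array([p.flatten() for p in P]))
```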
Theorem 5.3. For 1 ≤ i ≤ d + 1, let V_i = {v_{i,j} : 1 ≤ j ≤ d} be a collection of d + 1 mutually unbiased bases of C^d. If P_{i,j} = v_{i,j}v_{i,j}^* for all i, j, then the linear map Φ : M_d → M_d defined by
$$\Phi(X) = \frac{1}{d+1}\sum_{i=1}^{d+1}\sum_{j=1}^{d} P_{i,j}\,X\,P_{i,j}, \qquad X \in M_d,$$
is a unital quantum channel, and Φ = Z.
Proof. Clearly, the map Φ is completely positive. It is trace preserving and unital since
$$\frac{1}{d+1}\sum_{i=1}^{d+1}\sum_{j=1}^{d} P_{i,j} = \frac{1}{d+1}\sum_{i=1}^{d+1} I_d = I_d. \tag{5.8}$$
Using Relations (5.1), it follows that for 1 ≤ k ≤ d + 1 and 1 ≤ l ≤ d,
$$\Phi(P_{k,l}) = \frac{1}{d+1}\sum_{i=1}^{d+1}\sum_{j=1}^{d} |\langle v_{i,j}, v_{k,l}\rangle|^2\, P_{i,j} = \frac{1}{d+1}\,(P_{k,l} + I_d).$$
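Since the set 𝒫 of Lemma 5.2 spans M_d, the computation above forces Φ = Z on all of M_d. A quick numerical confirmation for d = 2, taking the Pauli eigenbases as the three mutually unbiased bases and Z(X) = (X + Tr(X) I_d)/(d + 1) (an illustration, not part of the proof):

```python
import numpy as np

bases = [
    np.eye(2),
    np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),
]
projs = [np.outer(B[:, j], B[:, j].conj()) for B in bases for j in range(2)]

def phi(X):
    """Phi(X) = (1/(d+1)) * sum_{i,j} P_ij X P_ij, here with d = 2."""
    return sum(P @ X @ P for P in projs) / 3.0

def Z(X):
    """The channel Z(X) = (X + Tr(X) I_d) / (d + 1)."""
    d = X.shape[0]
    return (X + np.trace(X) * np.eye(d)) / (d + 1)

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
channels_agree = np.allclose(phi(X), Z(X))
```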
Definition 2.5. For a positive integer d, let Z_d = {0, 1, 2, ..., d − 1} be the cyclic group of order d. Let ω = exp(2πi/d) be a d-th root of unity. Define the following two d × d matrices
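The definition of the two matrices was lost in extraction; in the standard convention they are the clock matrix Z (diagonal of powers of ω) and the shift matrix X, with the discrete Weyl matrices given by W_{i,j} = X^i Z^j. A sketch, under that assumption, verifying the basic commutation relation:

```python
import numpy as np

def clock_shift(d):
    """Clock matrix Z = diag(1, omega, ..., omega^(d-1)) and shift matrix X
    with X e_k = e_{k+1 mod d} (standard convention; an assumption here)."""
    omega = np.exp(2j * np.pi / d)
    Z = np.diag(omega ** np.arange(d))
    X = np.roll(np.eye(d), 1, axis=0)
    return Z, X

d = 3
Z, X = clock_shift(d)
omega = np.exp(2j * np.pi / d)
# Z X = omega * X Z is the relation underlying the d^2 Weyl matrices X^i Z^j.
relation_holds = np.allclose(Z @ X, omega * (X @ Z))
```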
Conjecture 2.6 (Strong Zauner conjecture). For each positive integer d ≥ 2, there exists a unit vector w ∈ C^d such that the d² vectors,
Definition 2.8. A linear map Φ : M d → M m is called entanglement breaking if for all n ∈ N, the n-th amplification map Φ ⊗ I n : M d ⊗ M n → M m ⊗ M n maps all states (entangled or not) in M d ⊗ M n to separable states in M m ⊗ M n .
Theorem 2.9 ([15]). Let Φ : M_d → M_m be a linear map. Then the following are equivalent.
(d) The linear map Φ′ ∘ Φ : M_d → M_k is completely positive for every positive map Φ′ : M_m → M_k.
(e) The linear map Φ ∘ Φ′ : M_k → M_m is completely positive for every positive map Φ′ : M_k → M_d.
Moreover, the set of all entanglement breaking maps Φ : M_d → M_m forms a convex set in L(M_d, M_m).
Remark 2.10. If Φ : M_d → M_m is an entanglement breaking map, then by Theorem 2.9(c), there exists a Choi-Kraus representation of Φ,
Proposition 3.2. Zauner's conjecture is true if and only if for each positive integer d ≥ 2, the quantum channel Z defined by
Theorem 3.3. Zauner's conjecture is true if and only if for each positive integer d ≥ 2, the quantum channel Z defined by
Corollary 3.5. Zauner's conjecture holds if and only if ebr(Z) = d² for all d ≥ 2.
Proposition 3.7. Let Φ : M_d → M_m be an entanglement breaking map and let Φ′ : M_m → M_k be a positive map. Then ebr(Φ′ ∘ Φ) ≤ ebr(Φ)·ri(Φ′), where ri(Φ′) = max{rank(Φ′(xx*)) : ‖x‖ = 1}.
Corollary 3.8. Let Φ : M_d → M_m be an entanglement breaking map. Set K = ebr(Φ) and suppose Φ(X) =
Let Φ : M_d → M_d be an entanglement breaking map, T : M_d → M_d the transpose map, and for a given unitary U ∈ M_d let Ad_U : M_d → M_d be the map Ad_U(X) = U X U*. Since T and Ad_U are both positive, it follows that T ∘ Φ and Ad_U ∘ Φ are entanglement breaking. The following result determines the entanglement breaking rank of these two channels.
Corollary 3.9. Let Φ : M_d → M_d be an entanglement breaking map. Let T : M_d → M_d be the transpose map. Given a unitary U ∈ M_d, let Ad_U : M_d → M_d be the map Ad_U(X) = U X U*. Then ebr(T ∘ Φ) = ebr(Φ) = ebr(Ad_U ∘ Φ).
Proof. It is clear that ri(T) = 1 and ri(Ad_U) = 1. Using these and Proposition 3.7, we have
$$\mathrm{ebr}(\Phi) = \mathrm{ebr}(T \circ T \circ \Phi) \le \mathrm{ebr}(T \circ \Phi) \le \mathrm{ebr}(\Phi),$$
$$\mathrm{ebr}(\Phi) = \mathrm{ebr}(\mathrm{Ad}_{U^*} \circ \mathrm{Ad}_U \circ \Phi) \le \mathrm{ebr}(\mathrm{Ad}_U \circ \Phi) \le \mathrm{ebr}(\Phi).$$
Thus we have equalities everywhere and the corollary follows.
Corollary 3.10. Let d ≥ 2 and Z = (1/(d+1)) I_d + (d/(d+1)) Ψ_d. Then the following are equivalent. (a) Zauner's conjecture is true. (b) ebr(Z) = d². (c) ebr(T ∘ Z) = d², where T is the transpose map. (d) ebr(Ad_U ∘ Z) = d², for any unitary U ∈ M_d.
Remark 3.11. Note that the channel T ∘ Z is precisely the Werner-Holevo channel [24, Example 3.36]
Proposition 3.12. Let Φ : M_d → M_m be an entanglement breaking map and let Φ′ : M_m → M_k be a positive map. If ebr(Φ) = ebr(Φ′ ∘ Φ) and cr(Φ) = cr(Φ′ ∘ Φ), then either ebr(Φ) > cr(Φ) or ebr(Φ′ ∘ Φ) > cr(Φ′ ∘ Φ).
Proposition 4.1. Let (Ψ_n)_{n=1}^∞ be a sequence of entanglement breaking maps, where Ψ_n : M_d → M_m, and suppose that Ψ_n → Ψ. Then ebr(Ψ) ≤ lim inf_n ebr(Ψ_n).
Lemma 4.5. Let x, y ∈ C^d be unit vectors and let {W_{i,j} : 0 ≤ i, j ≤ d − 1} ⊆ M_d be the discrete Weyl matrices. Let Φ : M_d → M_d be a linear map defined by
Figure 1. Plot of $|\langle W_{i,j}\,x(t), x(t)\rangle|^2$ for $(i,j) \neq (0,0)$ when $d = 2$. For dimension 3, we simply state the vectors.
Definition 5.1. Let V = {v_1, ..., v_d} and W = {w_1, ..., w_d} be two orthonormal bases of C^d. We say that V and W are mutually unbiased if |⟨v_i, w_j⟩|² = 1/d for all 1 ≤ i, j ≤ d.
The trace relations in (5.1) take the values 0 if i = k and j ≠ l, 1 if i = k and j = l, and 1/d if i ≠ k; the scalars in the linear combination (5.2) are {λ_{i,j} : 1 ≤ i ≤ d + 1, 1 ≤ j ≤ d − 1} ∪ {λ_{1,d}}. Since the right hand side of Equation (5.4) is independent of l, we conclude that λ_{1,1} = λ_{1,2} = ⋯ = λ_{1,d} =: λ_1. (5.5)
Theorem 5.3. For 1 ≤ i ≤ d + 1, let V_i = {v_{i,j} : 1 ≤ j ≤ d} be a collection of d + 1 mutually unbiased bases of C^d. If P_{i,j} = v_{i,j}v_{i,j}^* for all i, j, then the linear map Φ : M_d → M_d defined by
The existence of SIC POVMs for all dimensions d ≥ 2 is still an open question; however, in some specific dimensions (d = 1–21, 24, 28, 30, 31, 35, etc.) analytic solutions for SIC sets have been found. See
The subspace spanned by the vectors mentioned in (3.7) is called the symmetric subspace and has dimension d(d+1)/2 [24, Example 6.10].
Since d² > d(d+1)/2 for all d ≥ 2, we have
$$\mathrm{ebr}(\Phi_0) \ge d^2 > \frac{d(d+1)}{2} = \mathrm{cr}(\Phi_0),$$
and the assertion follows.
Note that since cr(Φ_0) = d(d+1)/2 and d² > d(d+1)/2
[1] D. M. Appleby, Symmetric informationally complete-positive operator valued measures and the extended Clifford group, J. Math. Phys. 46 (2005), no. 5, 052107, 29. MR2142983
[2] S. Bandyopadhyay, P. O. Boykin, V. Roychowdhury, and F. Vatan, A new proof for the existence of mutually unbiased bases, Algorithmica 34 (2002), no. 4, 512-528. Quantum computation and quantum cryptography. MR1943521
[3] P. Busch, Informationally complete sets of physical quantities, Internat. J. Theoret. Phys. 30 (1991), no. 9, 1217-1227. MR1122026
[4] C. M. Caves, Symmetric informationally complete POVMs, 1999.
[5] L. Chen and D. Ž. Doković, Dimensions, lengths, and separability in finite-dimensional quantum systems, J. Math. Phys. 54 (2013), no. 2, 022201, 13. MR3076372
[6] M. D. Choi, A Schwarz inequality for positive linear maps on C*-algebras, Illinois J. Math. 18 (1974), 565-574. MR0355615
[7] M. D. Choi, Completely positive linear maps on complex matrices, Linear Algebra and Appl. 10 (1975), 285-290. MR0376726
[8] G. M. D'Ariano, P. Perinotti, and M. F. Sacchi, Informationally complete measurements and group representation, J. Opt. B: Quantum Semiclass. Opt. 6 (2004), no. 6, S487-S491.
[9] D. P. DiVincenzo, B. M. Terhal, and A. V. Thapliyal, Optimal decompositions of barely separable states, J. Modern Opt. 47 (2000), no. 2-3, 377-385. Physics of quantum information. MR1756861
[10] T. Durt, B.-G. Englert, I. Bengtsson, and K. Życzkowski, On mutually unbiased bases, Int. J. Quantum Inf. 8 (2010), no. 4, 535-640.
[11] C. Fuchs, M. Hoang, and B. Stacey, The SIC question: History and state of play, Axioms 6 (2017), no. 4, 21.
[12] M. Grassl, Tomography of quantum states in small dimensions, Proceedings of the Workshop on Discrete Tomography and its Applications, 2005, pp. 151-164. MR2301093
[13] S. G. Hoggar, 64 lines from a quaternionic polytope, Geom. Dedicata 69 (1998), no. 3, 287-289. MR1609397
[14] M. Horodecki and P. Horodecki, Reduction criterion of separability and limits for a class of distillation protocols, Phys. Rev. A 59 (1999), 4206-4216.
[15] M. Horodecki, P. W. Shor, and M. B. Ruskai, Entanglement breaking channels, Rev. Math. Phys. 15 (2003), no. 6, 629-641. MR2001114
[16] L. P. Hughston, R. Jozsa, and W. K. Wootters, A complete classification of quantum ensembles having a given density matrix, Phys. Lett. A 183 (1993), no. 1, 14-18. MR1248347
[17] I. D. Ivanović, Geometrical description of quantal state determination, J. Phys. A 14 (1981), no. 12, 3241-3245. MR639558
[18] L. Lami and M. Huber, Bipartite depolarizing maps, J. Math. Phys. 57 (2016), no. 9, 092201, 19. MR3548198
[19] P. W. H. Lemmens and J. J. Seidel, Equiangular lines, J. Algebra 24 (1973), 494-512. MR0307969
[20] J. M. Renes, R. Blume-Kohout, A. J. Scott, and C. M. Caves, Symmetric informationally complete quantum measurements, J. Math. Phys. 45 (2004), no. 6, 2171-2180. MR2059685
[21] M. B. Ruskai, Qubit entanglement breaking channels, Rev. Math. Phys. 15 (2003), no. 6, 643-662. MR2001115
[22] A. J. Scott, SICs: Extending the list of solutions, ArXiv e-prints (March 2017), arXiv:1703.03993.
[23] A. J. Scott and M. Grassl, Symmetric informationally complete positive-operator-valued measures: a new computer study, J. Math. Phys. 51 (2010), no. 4, 042203, 16. MR2662471
[24] J. Watrous, The theory of quantum information, Cambridge University Press, Cambridge, 2018.
[25] W. K. Wootters and B. D. Fields, Optimal state-determination by mutually unbiased measurements, Ann. Physics 191 (1989), no. 2, 363-381. MR1003014
[26] G. Zauner, Quantendesigns: Grundzüge einer nichtkommutativen Designtheorie, Dissertation, Universität Wien (1999).
[27] G. Zauner, Quantum designs: foundations of a noncommutative design theory, Int. J. Quantum Inf. 9 (2011), no. 1, 445-507. MR2931102
| [] |
[
"Effective descent for differential operators",
"Effective descent for differential operators"
] | [
"Elie Compoint ",
"Marius Van Der Put ",
"Jacques-Arthur Weil "
] | [] | [] | A theorem of N. Katz [Ka] p.45, states that an irreducible differential operator L over a suitable differential field k, which has an isotypical decomposition over the algebraic closure of k, is a tensor product L = M ⊗ k N of an absolutely irreducible operator M over k and an irreducible operator N over k having a finite differential Galois group. Using the existence of the tensor decomposition L = M ⊗ N , an algorithm is given in [C-W], which computes an absolutely irreducible factor F of L over a finite extension of k. Here, an algorithmic approach to finding M and N is given, based on the knowledge of F . This involves a subtle descent problem for differential operators which can be solved for explicit differential fields k which are C 1 -fields. | 10.1016/j.jalgebra.2010.02.040 | [
"https://arxiv.org/pdf/1001.0153v1.pdf"
] | 115,161,928 | 1001.0153 | 8f2aba7328c5be8d02cb3c5c4bb501f1b87b8440 |
Effective descent for differential operators
31 Dec 2009
Elie Compoint
Marius Van Der Put
Jacques-Arthur Weil
Effective descent for differential operators
31 Dec 2009Dedicated to the memory of Jerry Kovacic. September 2009
A theorem of N. Katz [Ka] p.45, states that an irreducible differential operator L over a suitable differential field k, which has an isotypical decomposition over the algebraic closure of k, is a tensor product L = M ⊗ k N of an absolutely irreducible operator M over k and an irreducible operator N over k having a finite differential Galois group. Using the existence of the tensor decomposition L = M ⊗ N , an algorithm is given in [C-W], which computes an absolutely irreducible factor F of L over a finite extension of k. Here, an algorithmic approach to finding M and N is given, based on the knowledge of F . This involves a subtle descent problem for differential operators which can be solved for explicit differential fields k which are C 1 -fields.
An operator L is called irreducible if it does not factor over k, and absolutely irreducible if it does not factor over the algebraic closure k̄. Here we are interested in the following special situation:
L is irreducible and L factors over k̄ as a product F_1 ⋯ F_s of s > 1 equivalent (monic) absolutely irreducible operators.
There are algorithms for factoring L over k, i.e., as an element of k[∂] ([H1, H2, vdP-S]). Algorithms for finding factors of order 1 in k̄[∂] are proposed in [S-U, H-R-U-W, vdP-S]. An algorithm for finding factors of arbitrary order in k̄[∂] is given in [C-W].
According to N. Katz [Ka], Proposition 2.7.2, p. 45, the assumption on L implies that L is a tensor product M ⊗ N of monic operators in k[∂] such that M is absolutely irreducible and the irreducible operator N has a finite differential Galois group (or equivalently, all its solutions are algebraic over k). We will present a quick, down-to-earth proof of this in terms of differential modules over k. Further, we note that the converse statement is immediate because N decomposes over k̄ as a "direct sum" (least common left multiple) of operators ∂ − f′/f with f ∈ k̄*.
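This least-common-left-multiple picture can be made concrete (a hypothetical example, not from the paper): over k = C(x), the operator N = ∂² + (1/x)∂ − 1/(4x²) has the algebraic solutions x^(1/2) and x^(−1/2), so over k̄ it is the least common left multiple of ∂ − 1/(2x) and ∂ + 1/(2x). A numerical check that both solutions are annihilated:

```python
import numpy as np

# Sample operator with finite differential Galois group over C(x):
#   N = D^2 + (1/x) D - 1/(4 x^2),
# solution space spanned by the algebraic functions x^(1/2), x^(-1/2).
def apply_N(y, dy, d2y, x):
    return d2y(x) + dy(x) / x - y(x) / (4 * x**2)

xs = np.linspace(0.5, 5.0, 20)

# y = x^(1/2):  y' = x^(-1/2)/2,  y'' = -x^(-3/2)/4
r1 = apply_N(lambda x: x**0.5, lambda x: 0.5 * x**-0.5,
             lambda x: -0.25 * x**-1.5, xs)
# y = x^(-1/2): y' = -x^(-3/2)/2, y'' = 3 x^(-5/2)/4
r2 = apply_N(lambda x: x**-0.5, lambda x: -0.5 * x**-1.5,
             lambda x: 0.75 * x**-2.5, xs)

both_annihilated = np.allclose(r1, 0) and np.allclose(r2, 0)
```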
The absolute factorization algorithm in [C-W] uses the existence of this tensor decomposition to ensure its correctness but, in full generality, the problem of computing M and N is left open there. For M or N of small order, methods for detecting and computing M and N are given in [C-W, N-vdP] and particularly in [H3], where additional references can be found. We will illustrate this (and propose another method) at the end of the paper.
The problem which we address is to produce the operators M, N ∈ k[∂], for M and N of arbitrary order, by some decision procedure. It follows from L = M ⊗ N that F_1 ∈ k̄[∂] is equivalent to M (i.e., F_1 descends to k). Let K_1 ⊃ k be the smallest Galois extension such that F_1 ∈ K_1[∂] (or equivalently, K_1 is the field extension of k generated by all the coefficients of all F_i). One might think that the equivalence between F_1 and M, seen as elements of K_1[∂], takes place over K_1. However, in general, a (finite) extension K′ ⊃ K_1 is needed for this equivalence. The title of this paper refers to this descent problem 1.
Our method for finding K′ is as follows. First the smallest extension K ⊃ K_1 is computed which guarantees that the factors F_1, ..., F_s are equivalent over the field K. Using these equivalences, a certain 2-cocycle c for Gal(K/k) with values in C*, i.e. the obstruction for the descent of F_1, is computed. Since k is a C1-field, the 2-cocycle c becomes trivial over a finite (cyclic) computable extension K′ ⊃ K: we give a construction of K′ in section 3.1.2, particularly part (4) of Remark 3.4, which produces first order operators having the same obstruction to descent and for which the problem can be solved. Finally, once K′ is found, the computation of M, N is easily completed.
In the sequel we will use differential modules because these are more natural for the problem. A translation in terms of differential operators is presented at the moment that actual algorithms are involved since the latter are frequently phrased in terms of differential operators.
The following notation is used. The trivial 1-dimensional differential module over a field K is Ke with ∂e = 0. This module will be denoted by 1 or 1_K. For a differential module A over K of dimension a, one writes det A for the 1-dimensional module Λ^a A.
A version of Katz' theorem
In the proof we will use the following notion of twist of a differential module. Let Gal(K/k) be the Galois group of any Galois extension K of k (finite or infinite). Let A be a differential module over K and σ ∈ Gal(K/k). The twist σA is equal to A as an additive group, has the same ∂ as A, but its structure as a K-vector space is given by λ ∗ a := σ^{-1}(λ)·a for λ ∈ K, a ∈ A.
The elements σ ∈ Gal(K/k) act in a natural way on K[∂] by the formula σ(Σ_n a_n ∂^n) = Σ_n σ(a_n)∂^n. If one presents A as K[∂]/K[∂]F (with F monic), then σA = K[∂]/K[∂]σ(F).
An isomorphism φ(σ) : σA → A can also be interpreted as a C-linear bijection Φ(σ) : A → A, commuting with ∂ and such that Φ(σ)(λa) = σ(λ)·Φ(σ)(a). In other words, Φ(σ) is a σ-linear isomorphism.
Proposition 2.1 Let L be a differential module over k. Suppose that:
(1) The field of constants C of k is algebraically closed, k has characteristic zero and k is a C1-field.
(2) L is irreducible and L̄ := k̄ ⊗_k L decomposes as a direct sum ⊕_{i=1}^{s} A_i of isomorphic irreducible differential modules over k̄.
Then there are modules M, N over k such that L ≅ M ⊗_k N, M is absolutely irreducible and the irreducible module N has a finite differential Galois group. The pair (M, N) is unique up to a change (M ⊗_k D, N ⊗_k D^{-1}), where D has dimension 1 and D^{⊗t} is the trivial module for some t ≥ 1.
Proof. Write A = A_1 and let Gal denote the Galois group of k̄/k. For any σ ∈ Gal, the twisted module σA is a submodule of σ(k̄ ⊗_k L). As the latter module is isomorphic to k̄ ⊗_k L, there is a σ-linear isomorphism Φ(σ) : A → A.
This induces a 2-cocycle c for Gal with values in C*, defined by Φ(στ) = c(σ, τ)·Φ(σ)·Φ(τ). Since k is a C1-field, the 2-cocycle is trivial ([Se], II-9, §3.2). After multiplying the isomorphisms {Φ(σ)} by suitable elements of C*, one obtains descent data {Φ(σ) | σ ∈ Gal} satisfying the descent condition Φ(στ) = Φ(σ)·Φ(τ) for all σ, τ ∈ Gal(k̄/k). Define M := {a ∈ A | Φ(σ)a = a for all σ ∈ Gal}.
It is easily verified that M is a differential module over k and that the canonical morphism M̄ := k̄ ⊗_k M → A is an isomorphism. Consider now Hom_∂(M̄, L̄). This is a vector space, isomorphic to C^s, provided with an action of Gal. Then k̄ ⊗_C Hom_∂(M̄, L̄) is a trivial differential module over k̄ provided with an action of Gal. It is a submodule of the differential module Hom(M̄, L̄). Taking invariants under Gal one obtains a differential module N := (k̄ ⊗_C Hom_∂(M̄, L̄))^Gal over k which is a submodule of Hom(M, L). The canonical morphism k̄ ⊗ N → k̄ ⊗_C Hom_∂(M̄, L̄) is an isomorphism and thus the differential Galois group of N is finite.
The canonical morphism of differential modules M ⊗_k Hom(M, L) → L, namely m ⊗ ℓ ↦ ℓ(m), can be restricted to a morphism f : M ⊗_k N → L. By construction the induced morphism k̄ ⊗_k (M ⊗_k N) → L̄ is an isomorphism, and thus so is f.
The descent data {Φ(σ) | σ ∈ Gal} for A are not unique. They can be changed into {h(σ)Φ(σ) | σ ∈ Gal}, where h : Gal → C* is any continuous homomorphism. We note that the image of h is a subgroup μ_t of the t-th roots of unity for some t. Consider the trivial differential module k̄e with ∂e = 0 and with Gal-action given by σe = h(σ)e for all σ ∈ Gal. By taking the invariants under Gal one obtains a 1-dimensional module D over k such that the canonical morphism k̄ ⊗ D → k̄e is an isomorphism and respects the actions of Gal. Further, D^{⊗t} is the trivial differential module 1. The differential module obtained with the new descent data can be seen to be M ⊗_k D, and thus N will be changed into N ⊗ D^{-1}. From this observation the last statement of the proposition follows.
Algorithmic approach
First we make the relation between differential modules and differential operators explicit. Let A be a differential module and a ∈ A a cyclic vector. One associates to this the monic differential operator op(A, a) of minimal degree satisfying op(A, a)(a) = 0; there is a natural bijection between the submodules of A and the monic left hand factors of op(A, a) in k[∂]. Let k′ be an algebraic extension of k. Then the above bijection extends to a bijection between the (monic) factorizations of op(A, a) in k′[∂] and the submodules of k′ ⊗_k A.
As before, L denotes an irreducible differential module over k such that k̄ ⊗ L is a direct sum of s > 1 copies of an absolutely irreducible differential module.
Choose ℓ ∈ L, ℓ ≠ 0. Since L is irreducible, ℓ is a cyclic vector. We assume the knowledge of a factorization op(L, ℓ) = F·R with monic F, R ∈ k̄[∂] and F absolutely irreducible, given by [C-W]. Using this information we will describe the computation leading to a tensor product decomposition L = M ⊗ N.
The special case dim M = 1
Assume that the irreducible L is equal to M ⊗_k N with dim M = 1, dim N = s > 1 and k̄ ⊗_k N trivial. Thus the Picard-Vessiot extension K+ of N is a finite extension of k and can be considered as a subfield of k̄. The (covariant) solution space V of N is equal to ker(∂, K+ ⊗_k N). The differential Galois group G+ = Gal(K+/k) acts on V and there is a canonical isomorphism K+ ⊗_C V → K+ ⊗_k N. Moreover, M̄ := k̄ ⊗_k M is not a trivial module (equivalently, the differential Galois group of M is infinite and then equal to the multiplicative group G_m).
There is a trivial way to produce a decomposition L = M ⊗_k N. Indeed, write op(L, ℓ) = ∂^s + a_{s−1}∂^{s−1} + ⋯ + a_0. Then the tensor product decomposition op(L, ℓ) = (∂ + a_{s−1}/s) ⊗ (∂^s + b_{s−2}∂^{s−2} + ⋯ + b_0), for suitable elements b_i ∈ k, already solves the problem, since (as one easily sees) all the solutions of ∂^s + b_{s−2}∂^{s−2} + ⋯ + b_0 are in k̄. However, the aim of this subsection is to describe, in this easy situation, an algorithm for obtaining the pair (M, N), up to a change (M ⊗ D, D^{−1} ⊗ N), which works with small modifications for the general case.
After fixing a non-zero element ℓ ∈ L, the module is represented by the monic operator op(L, ℓ). The first step is to produce the smallest subfield K ⊂ k̄ (containing k) such that op(L, ℓ) decomposes in K[∂] as a product F_1 ⋯ F_s of (monic) equivalent operators of degree 1.
The (monic) left hand factors F = ∂ + u ∈ k̄[∂] of op(L, ℓ) correspond to the 1-dimensional submodules of k̄ ⊗ L = M̄ ⊗_{k̄} (k̄ ⊗_k N) = M̄ ⊗_{k̄} (k̄ ⊗_C V), and these are the M̄ ⊗_{k̄} (k̄ ⊗_C W), where W runs through the set of 1-dimensional subspaces of V. The same can be done with k̄ replaced by K+. Therefore u ∈ K+ and K_0 := k(u) ⊂ K+. More precisely, let St(W) ⊂ G+ be the stabilizer of W. This is the subgroup of G+ leaving F invariant and thus K_0 = (K+)^{St(W)} ⊂ K+.
Now we suppose that a (monic) left hand factor F = ∂ + u of op(L, ℓ) is known and explain how to obtain K from this. Let K_1 ⊂ k̄ be the normal closure of K_0. Then K_1 ⊗_k L contains a 1-dimensional submodule that we will again call D. The submodule Σ_{σ∈Gal(K_1/k)} σ(D) of K_1 ⊗_k L is invariant under the action of Gal(K_1/k). Since L is irreducible, Σ_{σ∈Gal(K_1/k)} σ(D) = K_1 ⊗_k L, and it follows that K_1 ⊗_k L is a direct sum of 1-dimensional submodules. As a consequence, op(L, ℓ) factors as F_1 ⋯ F_s in K_1[∂]. A priori, the factors F_i = ∂ + u_i ∈ K_1[∂] need not be equivalent. For i < j we consider a non-zero element f_ij ∈ k̄ satisfying f′_ij/f_ij = u_i − u_j. Put K = K_1({f_ij}). Then K ⊃ k is the smallest field such that op(L, ℓ) factors as F_1 ⋯ F_s ∈ K[∂], where the monic degree one factors F_i are equivalent.
Clearly K ⊂ K+. From the condition that K is minimal such that K ⊗_k N is a direct sum of isomorphic 1-dimensional submodules and the irreducibility of N, it follows easily that the center Z of G+ ⊂ GL(V) is the finite cyclic group (C*·id_V) ∩ G+ and that K = (K+)^Z.
Remarks 3.1. (1) The field K and the above algorithm for K do not change if L is replaced by D ⊗_k L, where D is a 1-dimensional module satisfying D^{⊗t} = 1 for some t ≥ 1.
(2) The fields K_0 and K_1 depend on the given left hand factor of degree 1 of op(L, ℓ). We illustrate this by an example where the differential Galois group G+ ⊂ GL(C³) is generated by the matrices
$$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}$$
with a⁶ = b⁶ = c⁶ = 1. If this left hand factor corresponds to Ce_1 (or Ce_2 or Ce_3), then its stabilizer is the subgroup of the diagonal matrices in G+. Otherwise it is just the center Z. In the first case K_1 ≠ K and in the last one K_1 = K. 2
The 2-cocycle c and descent fields
Now we have arrived at the situation where K ⊗_k L is a direct sum of s copies of a known 1-dimensional differential module D = M ⊗_k E over K, where E is an, a priori, unknown 1-dimensional submodule of K ⊗_k N. We want to produce a field extension K′ ⊃ K such that K′ ⊗_K D descends to k. Let D correspond to the operator ∂ − u. Then, for σ ∈ Gal(K/k), the operator ∂ − σ(u) corresponds to σD. There are elements f_σ ∈ K* such that σ(u) − u = f′_σ/f_σ. One obtains a 2-cocycle c for Gal(K/k) with values in C* by
$$f_{\sigma\tau} = c(\sigma,\tau)\, f_\sigma\, \sigma(f_\tau) \quad \text{for all } \sigma, \tau \in \mathrm{Gal}(K/k).$$
The class of the 2-cocycle c in H 2 (Gal(K/k), C * ) is the obstruction for the descent of D (or, equivalently, for the descent of E).
Indeed, if D descends to k, then one can represent D by ∂ − u with u ∈ k. On the other hand, if the class of c is trivial, then after changing the {f_σ} one has f_{στ} = f_σ·σ(f_τ). By Hilbert 90, there exists F ∈ K* with f_σ = σ(F)/F for all σ ∈ Gal(K/k). Then ∂ − u is equivalent to ∂ − u + F′/F. Further, u − F′/F lies in k, since it is invariant under Gal(K/k).
The exact sequence 1 → Z → G+ → Gal(K/k) → 1 (with projection pr) induces a 2-cocycle with values in Z, in the following way. Let φ : Gal(K/k) → G+ be a section, i.e., pr ∘ φ(g) = g for all g ∈ Gal(K/k). Then d, defined by φ(g_1 g_2) = d(g_1, g_2)φ(g_1)φ(g_2), is a 2-cocycle with values in Z. The class of d in H²(Gal(K/k), Z) is independent of the choice of the section φ. As before, Z is identified with a subgroup of C*. Thus d induces an element of H²(Gal(K/k), C*). We note that the homomorphism H²(Gal(K/k), Z) → H²(Gal(K/k), C*) is, in general, not injective.
Lemma 3.2. The cocycles c and d have the same image in H²(Gal(K/k), C*). This image does not depend on the choice of N. Let s = dim N. Then the image of c^s in H²(Gal(K/k), C*) is trivial.
Proof. Let D, as before, be given by ∂ − u with u ∈ K. Write u = F′/F with F ∈ (K+)*. Let φ : G → G+ be a section. Then σ(u) − u = (φ(σ)F)′/(φ(σ)F) − F′/F = f′_σ/f_σ, where f_σ := φ(σ)F/F. One easily sees that f_σ is invariant under Z and thus f_σ ∈ K*. The equality f_{στ} = c(σ, τ) f_σ σ(f_τ) implies φ(στ)F = c(σ, τ)φ(σ)φ(τ)F. Hence d(σ, τ) = c(σ, τ). Replacing N by (∂ − v) ⊗_k N with v ∈ k induces the change of ∂ − u into ∂ − u − v. Since σ(u + v) − (u + v) = σ(u) − u, the element c and its image in H²(Gal(K/k), C*) are unchanged.
Suppose that N is chosen such that det N = 1. The cocycle d has values in μ_s, since dim N = s. Thus the image of d^s in H²(Gal(K/k), C*) is trivial, and the same holds for c^s. 2
Definition 3.3. Let (K, c) be a Galois extension K/k together with a 2-cocycle c for Gal(K/k) with values in C* such that c^s is a trivial 2-cocycle. A descent field for (K, c) is a Galois extension K′ ⊃ k containing K, such that the induced 2-cocycle c′ for Gal(K′/k), defined by c′(g_1, g_2) = c(pr g_1, pr g_2), yields a trivial element in H²(Gal(K′/k), C*). Here pr : Gal(K′/k) → Gal(K/k) denotes the natural map. 2
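The passage from a group extension with central kernel to the 2-cocycle d can be tried out on a finite toy model (an illustration with hypothetical names, not from the paper): realize the extension 1 → {±1} → Q_8 → Z/2 × Z/2 → 1 inside GL(2, C), pick a set-theoretic section φ, and read off d(g, h) from φ(gh) = d(g, h)φ(g)φ(h):

```python
import numpy as np

# Toy central extension  1 -> {±1} -> Q8 -> Z/2 x Z/2 -> 1,
# with the quaternion group Q8 realized inside GL(2, C).
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]])
phi = {(0, 0): np.eye(2), (1, 0): qi, (0, 1): qj, (1, 1): qi @ qj}

def mul(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def d(g, h):
    """Central factor in phi(g h) = d(g, h) phi(g) phi(h); here d(g, h) = ±1."""
    M = phi[mul(g, h)] @ np.linalg.inv(phi[g] @ phi[h])
    return int(round(M[0, 0].real))   # M is ± the identity

G = list(phi)
# d satisfies the 2-cocycle identity (the action on the central {±1} is trivial):
cocycle_ok = all(d(g, h) * d(mul(g, h), l) == d(h, l) * d(g, mul(h, l))
                 for g in G for h in G for l in G)
# The section is not a homomorphism: phi((1,0))^2 = -I, so d takes the value -1.
```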
The assumption that k is a C1-field implies the existence of a descent field for every pair (K, c). Indeed, by [Se], II §3, the cohomological dimension of Gal(k̄/k) is 1 and therefore H²(Gal(k̄/k), μ∞) = {1}, where μ∞ denotes the torsion subgroup of C*. One has H²(Gal(K/k), C*) = H²(Gal(K/k), μ∞) and lim→ H²(Gal(K′/k), μ∞) = H²(Gal(k̄/k), μ∞) = {1}, where the direct limit is taken over all Galois extensions K′ ⊃ k containing K. Hence for every class c ∈ H²(Gal(K/k), C*) there exists a Galois extension K′ ⊃ k, containing K, such that the image of c in H²(Gal(K′/k), C*) is 1. Our contribution is now to produce a descent field for (K, c) by some algorithm.
A decision procedure constructing a descent field for (K, c)
Since k is a C1-field, H²(Gal(K/k), K*) is trivial ([Se], II-9, §3.2) and this implies the existence of elements {f_σ | σ ∈ Gal(K/k)} ⊂ K* satisfying f_{στ} = c(σ, τ)·f_σ·σ(f_τ) for all σ, τ ∈ Gal(K/k).
Suppose that c is given in the form f_{στ} = c(σ, τ) f_σ σ(f_τ) with {f_σ} ⊂ K*. (This is trivially true in the present case dim M = 1. For the general case, Remarks 3.4 part (4) describes a decision procedure producing suitable {f_σ}, a key step in the construction.) Then the following algorithm produces a descent field.
One has
$$\frac{f_{\sigma\tau}'}{f_{\sigma\tau}} = \frac{f_\sigma'}{f_\sigma} + \sigma\Big(\frac{f_\tau'}{f_\tau}\Big),$$
and since H¹(Gal(K/k), K) = 0 there is an element v ∈ K such that f′_σ/f_σ = σ(v) − v for all σ ∈ Gal(K/k); explicitly
$$v = \frac{-1}{[K:k]}\sum_{\tau \in \mathrm{Gal}(K/k)} \frac{f_\tau'}{f_\tau}.$$
One observes that c is the obstruction for descent of the operator ∂ − v. Further, −mv = G′/G with G = Π_{τ∈Gal(K/k)} f_τ and m = [K : k]. Hence the field K(G^{1/m}) contains the Picard-Vessiot field of ∂ − v and is a descent field.
A less brutal way to compute a descent field is as follows. Since the cocycle c^s is trivial, there are computable elements {d(σ) | σ ∈ Gal(K/k)} ⊂ C* satisfying d(στ) = c(σ, τ)^s d(σ)d(τ). The elements {f_σ^s/d(σ)} form a 1-cocycle. Since H¹(Gal(K/k), K*) = {1}, one can effectively compute F ∈ K* such that f_σ^s/d(σ) = σ(F)/F for all σ ∈ Gal(K/k) (see [Se2], Chapitre X, §1, Prop. 2). One observes that v − (1/s)·F′/F ∈ k, since it is invariant under Gal(K/k). The field extension K′ = K(F^{1/s}) has the property that ∂ − v is equivalent to ∂ − v + (1/s)·F′/F over K′. Hence K′ is a descent field. 2
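The invariance asserted in the last step can be verified in one line (a check added for completeness): for σ ∈ Gal(K/k), logarithmic differentiation of σ(F) = (f_σ^s/d(σ))·F gives
$$\sigma\!\Big(\frac{F'}{F}\Big) = \frac{(\sigma F)'}{\sigma F} = s\,\frac{f_\sigma'}{f_\sigma} + \frac{F'}{F},$$
since the constant d(σ) has derivative zero; hence
$$\sigma\!\Big(v - \frac{1}{s}\frac{F'}{F}\Big) = v + \frac{f_\sigma'}{f_\sigma} - \frac{1}{s}\Big(s\,\frac{f_\sigma'}{f_\sigma} + \frac{F'}{F}\Big) = v - \frac{1}{s}\frac{F'}{F}.$$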
Remarks 3.4
(1) We note that the above algorithm proves, by considering 1-dimensional differential modules over K, the existence of a descent field, using only that H²(Gal(K/k), K*) = {1} (see Lemma 4.1 for the general statement).
(2). Instead of assuming that the 2-cocycle c s is trivial, we may consider a class c ∈ H 2 (Gal(K/k), µ ∞ ), where µ ∞ ⊂ C * denotes, as before, the group of the roots of unity. Any finite group G occurs as some Gal(K/k). Therefore the group H 2 (Gal(K/k), µ ∞ ) is in general not trivial and the descent problem, i.e., finding an extension K ′ ⊃ K such that the image of c in H 2 (Gal(K ′ /k), µ ∞ ) is 1, is non trivial. However for a cyclic Gal(K/k) one has H 2 (Gal(K/k), µ ∞ ) = {1} ( [Se2], VIII, §4). In particular, non trivial examples for the descent problem tend to be complicated.
(3). We do not know how to compute or characterize all minimal descent fields for a given pair (K, c).
(4). Computing elements f σ ∈ K * satisfying f στ = c(σ, τ )·f σ · σ f τ appears to be far from trivial. A possible method, which uses explicitly the C 1 -property of k, is the following. Starting with the 2-cocycle c, there is a well known construction (see [G-S]) of an algebra A = ⊕ σ∈G K[σ], where G = Gal(K/k), of dimension m = #G = [K : k] over K, defined by the rules:
[σ] · λ = σ(λ) · [σ] for λ ∈ K, σ ∈ G, [σ 1 σ 2 ] = c(σ 1 , σ 2 )[σ 1 ] · [σ 2 ].
Then A is a central simple algebra with center k. Since k is a C 1 -field there exists an isomorphism I : A → Matr(m, k). The latter algebra can be identified (by standard Galois theory) with the group algebra K[G] = ⊕ K·σ. Suppose that we knew already elements f_σ ∈ K* satisfying f_{στ} = c(σ, τ)·f_σ·σ(f_τ). Then I₀ : A → K[G], given by I₀(Σ λ_σ[σ]) = Σ λ_σ f_σ·σ, is an isomorphism and is also K-linear. By the Skolem-Noether theorem, any isomorphism I of k-algebras has the form I(Σ λ_σ[σ]) = x⁻¹{Σ λ_σ f_σ·σ}x, where x is an invertible element of Matr(m, k). Our aim is to compute a K-linear isomorphism and in that case x commutes with K and therefore belongs to K*. Now I has the form I(Σ λ_σ[σ]) = Σ λ_σ f_σ·(σ(x)/x)·σ and thus any K-linear isomorphism A → K[G] has the form [σ] ↦ g_σ σ for suitable elements g_σ ∈ K*. It follows that g_{στ} = c(σ, τ)·g_σ·σ(g_τ).
The computation of the isomorphism I uses the reduced norm Norm of A (see e.g. [F, Pi] or [R] section 4, which adapts to C 1 fields). With respect to a basis of A over k, the reduced norm is a homogeneous form of degree m in m² variables. Again the C 1 property of k asserts that there are non trivial solutions a ∈ A, a ≠ 0 for Norm(a) = 0. An explicit calculation of such an a is possible (but rather expensive). Applying this several times one obtains the isomorphism I : A → Matr(m, k).
Example 3.5 Consider the case where K ⊃ k has degree 2 and c is a 2-cocycle for G = {1, σ} with values in K*. One easily sees that the 2-cocycle can be given by c(1, 1) = c(1, σ) = c(σ, 1) = 1 and c(σ, σ) = α⁻¹ ∈ k*. The K-linear morphism φ : A := K[1] ⊕ K[σ] → K[G] = K ⊕ Kσ should have the form φ([1]) = 1, φ([σ]) = f σ with f ∈ K*. We have to find f. Now the condition is α = f·σ(f). Write K = k ⊕ kw with w² ∈ k* and write f = a + bw. Then we have to solve a² − b²w² = α. Consider the equation X₁² − X₂²w² − X₃²α = 0. By the C 1 -property of k there is a solution (x₁, x₂, x₃) ≠ 0. Now x₃ ≠ 0, since w² ∈ k* is not a square. Then we can normalize to x₃ = 1 and the problem is solved.
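The multiplication rules of Remark 3.4 part (4) can be made concrete in the degree-2 situation of Example 3.5. The following sketch is our illustration, not from the text: it takes K = Q(i) over k = Q (with K-elements modeled as Python complex numbers), σ = complex conjugation, and cocycle value c(σ, σ) = −1. The resulting algebra A = K[1] ⊕ K[σ] is the Hamilton quaternion algebra, and the code checks the relations [σ]² = −1 and [σ]·λ = σ(λ)·[σ]:

```python
# Crossed-product algebra A = K[1] + K[sigma] for K = Q(i) over k = Q.
# Rules from the text: [g]*lam = g(lam)*[g] and [g1 g2] = c(g1,g2)[g1][g2],
# i.e. [g1]*[g2] = c(g1,g2)^{-1} [g1 g2]; here c takes values in {+1,-1},
# so taking the inverse is immaterial.

G = ['1', 's']  # the Galois group {1, sigma}

def act(g, lam):
    """Galois action on K = Q(i): sigma is complex conjugation."""
    return lam.conjugate() if g == 's' else lam

def comp(g1, g2):
    """Group composition in {1, sigma}."""
    return '1' if g1 == g2 else 's'

def c(g1, g2):
    """2-cocycle with c(sigma, sigma) = -1 and value 1 otherwise."""
    return -1 if (g1, g2) == ('s', 's') else 1

def mul(x, y):
    """Multiply x, y in A, stored as dicts {group element: coefficient in K}."""
    out = {g: 0j for g in G}
    for g1, l1 in x.items():
        for g2, l2 in y.items():
            out[comp(g1, g2)] += l1 * act(g1, l2) / c(g1, g2)
    return out

i_elt = {'1': 1j, 's': 0j}       # the element i of K
j_elt = {'1': 0j, 's': 1 + 0j}   # the element [sigma]

print(mul(j_elt, j_elt))  # [sigma]^2 = -1, the quaternion relation
print(mul(j_elt, i_elt))  # [sigma]*i = conj(i)*[sigma] = -i*[sigma]
```

With c ≡ 1 the same code produces the split algebra (Matr(2, k) in disguise); the non-trivial cocycle value is exactly what twists it into a division algebra, matching the role of c as an obstruction discussed above.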
(5) For dim N = 2 and M of any dimension we will give in the Appendix an easier algorithm for descent fields, not using the 2-cocycle c explicitly (and recall the former algorithms of [H3, C-W, N-vdP]). (6) Let again a Galois extension k ⊂ K with Galois group G and 2-cocycle c ∈ H 2 (G, C * ) be given. The 2-cocycle class has finite order (dividing s) and corresponds to a short exact sequence 1 → Z → G + → G → 1, where Z is a finite cyclic group, lying in the center of G + . Suppose that the Galois extension k ⊂ K + with group G + is such that (K + ) Z = K. Then the image of c in H 2 (G + , C * ) is trivial and the descent condition holds for the field K + . The C 1 -property of the field k guarantees the existence of K + , however there seems to be no explicit algorithm, based on the C 1 -property, producing an K + . Examples 4.4 are based on this remark. (7). Finally, we note that H 2 (Gal(K/k), µ s ) = 1 if g.c.d.([K : k], s) = 1. In that case there is no field extension needed for the descent.
Description of the algorithm for the general case
We will search for a decomposition L = M ⊗ N with det N = 1. The module k̄ ⊗ L can be written as M ⊗_k (k̄ ⊗_k N) = M ⊗_k (k̄ ⊗_C V), where V is the solution space of N.
The absolutely irreducible left hand factors F of op(L, ℓ) correspond to the 1-dimensional subspaces W of V . We suppose that at least one F is given. Let K 0 ⊃ k denote the field extension generated by the coefficients of F . Let K 1 be the normal closure of K 0 . For each σ ∈ Gal(K 1 /k) one considers the absolutely irreducible left hand factor σ(F ). This factor is over k equivalent to F . We have to compute the field extension of K 1 needed for this equivalence.
An algorithm in terms of differential modules (which easily translates in terms of differential operators) is based upon the following lemma.
Lemma 3.6 Let A be an irreducible differential module over k̄. Then the differential module Hom(A, A) over k̄ has only one 1-dimensional submodule, namely k̄ · id_A.
Proof. It is possible to prove this by using [N-vdP] and irreducible representations of semi-simple Lie algebras.
However, a more down-to-earth proof is the following. Let V be the solution space of A. This is a C-vector space of dimension equal to a := dim_{k̄} A, equipped with a faithful irreducible action of the differential Galois group G of A. The group G is connected since k̄ is algebraically closed. A 1-dimensional submodule of Hom(A, A) corresponds to a 1-dimensional subspace Cf of Hom_C(V, V), invariant under the action of G. There is a homomorphism c : G → C* such that gfg⁻¹ = c(g)·f holds for all g ∈ G.
The kernel of f is G-invariant and is {0} since the representation is irreducible. The action of G on Hom(Λ^a V, Λ^a V) is trivial. In particular, the isomorphism Λ^a(f) : Λ^a V → Λ^a V is invariant under G. Also g(Λ^a(f))g⁻¹ = c(g)^a · Λ^a(f) and c(g)^a = 1. Since G is connected, c(g) = 1 for all g ∈ G. Thus f is G-invariant and is a multiple of id_V since the representation is irreducible. □

Corollary 3.7 The differential module
T (σ) := Hom(K 1 [∂]/K 1 [∂]σ(F ), K 1 [∂]/K 1 [∂]F )
over K₁ has a single 1-dimensional submodule A(σ). Moreover, the Picard-Vessiot field of A(σ) is a finite cyclic extension K′₁ ⊂ k̄ of K₁.
Proof. S := k̄ ⊗_{K₁} T(σ) is isomorphic to Hom(M̄, M̄), where M̄ := k̄ ⊗_k M. By Lemma 3.6, S has a unique 1-dimensional submodule, say, B. By uniqueness, B is invariant under the action of Gal(k̄/K₁) on S and has therefore the form k̄ ⊗_{K₁} A(σ) for some submodule A(σ) of T(σ). The uniqueness of A(σ) is clear. Further k̄ ⊗ A(σ) is isomorphic to the trivial differential module k̄ · id_{M̄}. Thus the Picard-Vessiot field K′₁ is a finite extension of K₁ and this extension is cyclic since A(σ) has dimension 1.
2
By factorization the 1-dimensional submodule A(σ) can be obtained. The Picard-Vessiot field of A(σ) is a finite cyclic extension K′₁ ⊂ k̄ of K₁. Then ker(∂, K′₁ ⊗ T(σ)) has dimension 1 over C and a generator φ(σ) of this kernel is an isomorphism φ(σ) :
K ′ 1 [∂]/K ′ 1 [∂]σ(F ) → K ′ 1 [∂]/K ′ 1 [∂]F .
The field K ⊂ k̄ is the compositum of the Picard-Vessiot fields of all A(σ). We note that K is the field called "stabilisateur" in [C-W]. The isomorphisms φ(σ) :
K[∂]/K[∂]σ(F ) → K[∂]/K[∂]
F are now also known; they are K-rational solutions of the modules K ⊗_{K₁} A(σ). The 2-cocycle c for Gal(K/k) with values in C* has the property that c^s is trivial by the assumption that det N = 1. Then, as in Subsection 3.1, one can construct a cyclic extension K′ ⊃ K such that the module
K ′ [∂]/K ′ [∂]F descends to k. The result is called M .
The module N is obtained by computing the unique irreducible direct summand of M * ⊗ k L having dimension s. Indeed, this direct summand of
M * ⊗ k L = M * ⊗ k M ⊗ k N = Hom(M, M ) ⊗ k N is (k · id M ) ⊗ k N ∼ = N .
An example for the construction of a descent field
Consider the irreducible operator
L = ∂⁴ + ((−4 + 8z)/(z² − 1))∂³ + ((53z² − 40z − 1)/(4(z² − 1)²))∂² + ((5z³ − z² − 13z − 3)/(2(z² − 1)³))∂ + (61z² + 64z + 67)/(16(z² − 1)⁴).
The algorithm of [C-W] produces the following absolutely irreducible right-hand factor
L₁ = ∂² + ((3u + 4z² − 26z + 24)/(2(z² − 1)(4z − 5)))∂ + (((−3z − 6)u + 45z² − 40z − 13)/(4(z² − 1)²(4z − 5))),
where u² = z² − 1. It is isomorphic to its conjugate L₂ over K = k(Φ) with Φ⁴ − 2zΦ² + 1 = 0 (or Φ = √(z − u)). Explicitly, there exists S ∈ K[∂] such that L₂·R = S·L₁ with
R = ((1 − 2z + 2u)Φ(4z − 5)/(2(z² − 1)))∂ − z − 2,
i.e. R maps a solution of L₁ to a solution of L₂. Let G = Gal(K/k), acting via σ₁(Φ) = Φ, σ₂(Φ) = −Φ, σ₃(Φ) = 1/Φ, σ₄(Φ) = −1/Φ. The 2-cocycle c is given by c(σ₂, σ₃) = c(σ₂, σ₄) = c(σ₄, σ₃) = c(σ₄, σ₄) = −1 (and c(σ_i, σ_j) = 1 otherwise). Remark 3.4 part (4) (see also Example 4.3 in the Appendix) produces the elements f_σ which are, respectively, 1, 1, Φ, Φ; hence the construction from section 3.2.1 shows that K′ = K(√Φ) is a descent field. An annihilating operator for √Φ is
N = ∂² + (z/(z² − 1))∂ − (1/16)(z² − 1)⁻¹
(one could find it by writing K ′ as a k[∂]-module and decomposing it). Decomposing L ⊗ N ⋆ (over k), we then obtain
M = ∂² − (2/(z − 1))∂ + (35z + 37)/(16(z + 1)(z − 1)²)
and L 1 is isomorphic over K ′ to M . At the end of the appendix, we give alternative (easier) methods to handle such small order examples.
Appendix
The following lemma makes the relation between 2-cocycles and descent for 1-dimensional differential modules more explicit. We present some examples and an algorithm producing descent fields for the case dim N = 2.
Lemma 4.1 Let k be a differential field having the properties: the field of constants C is algebraically closed and has characteristic 0; k is a C 1 -field.
Let K/k be a Galois extension (finite or infinite). The collection H(K) of the (isomorphy classes of the) 1-dimensional differential modules A over K, satisfying σ A ∼ = A for all σ ∈ Gal(K/k), forms a group with respect to the operation tensor product. Let h(K) ⊂ H(K) denote the subgroup consisting of the modules of the form K ⊗ k B, where B is a 1-dimensional differential module over k.
There is a canonical isomorphism H(K)/h(K) → H 2 (Gal(K/k), C * ).
Proof. The first statement is obvious. Let the differential module Ke with ∂e = ue lie in H(K). Then for any σ ∈ Gal(K/k) there is an element f_σ ∈ K* such that σ(u) − u = f′_σ/f_σ. Define the 2-cocycle c by f_{στ} = c(σ, τ)·f_σ·σ(f_τ) for all σ, τ ∈ Gal(K/k). Replacing the f_σ by d(σ)f_σ, with d(σ) ∈ C*, changes the 2-cocycle into an equivalent one. Tensoring Ke with an element of h(K) changes u into u + v with v ∈ k and this does not affect the f_σ. Thus the above construction defines a homomorphism H(K)/h(K) → H²(Gal(K/k), C*).
This homomorphism is injective since the triviality of the 2-cocycle class c implies that f_{στ} = f_σ·σ(f_τ). Since H¹(Gal(K/k), K*) = {1} there is an F ∈ K* with f_σ = σ(F)/F for all σ. Thus σ(u) − u = σ(F′/F) − F′/F, and ũ := u − F′/F is invariant under Gal(K/k) and belongs to k. Now Ke = K·ẽ with ẽ := F⁻¹e and ∂ẽ = ũẽ. Thus Ke belongs to h(K).
The homomorphism is surjective. Indeed, consider a 2-cocycle c for Gal(K/k) with values in C*. Since H²(Gal(K/k), K*) = {1} there are elements f_σ ∈ K* such that f_{στ} = c(σ, τ)·f_σ·σ(f_τ) for all σ, τ ∈ Gal(K/k). Then f′_{στ}/f_{στ} = f′_σ/f_σ + σ(f′_τ/f_τ) and since H¹(Gal(K/k), K) = {0} there is an element u ∈ K such that σ(u) − u = f′_σ/f_σ for all σ ∈ Gal(K/k). Thus the class of c is the image of the module Ke with ∂e = ue, belonging to H(K). □
Example 4.2 Let k = C(z) and K = C(t) with t 2 = z. Let σ be the non trivial element in Gal(K/k). The module Ke with ∂e = ue belongs to h(K) if and only if
u = w + (1/(2t)) Σ_{α≠0} d_α·√α/(z − α) with all d_α ∈ Z and w ∈ k.
(We note that √ α denotes an arbitrary choice of a square root for α ∈ C * ).
This follows from the computation: if K* ∋ F = t^{n₀} Π_{β≠0} (t − β)^{n_β}, then
F′/F = n₀/(2t²) + Σ_{β²≠0} (n_β + n_{−β})/(2(t² − β²)) + (1/(2t)) Σ_{β²≠0} (n_β − n_{−β})β/(t² − β²).
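The grouping in this computation can be checked numerically. The following sketch is ours, not from the text; the exponents n0 = 1, nb = 3, nmb = −1 and the root b = 2 are arbitrary choices. It compares the plain logarithmic derivative, taken with respect to z via ′ = (1/(2t)) d/dt (since z = t²), against the grouped expression:

```python
# Numerical check of the displayed formula for F'/F, where ' = d/dz with
# z = t^2.  Take F = t^{n0} (t - b)^{nb} (t + b)^{nmb}.
n0, nb, nmb, b = 1, 3, -1, 2.0

def lhs(t):
    # (1/(2t)) times the logarithmic derivative of F with respect to t
    dlog = n0 / t + nb / (t - b) + nmb / (t + b)
    return dlog / (2 * t)

def rhs(t):
    # the grouped expression from the text, with beta = b and alpha = b^2
    return (n0 / (2 * t**2)
            + (nb + nmb) / (2 * (t**2 - b**2))
            + (1 / (2 * t)) * (nb - nmb) * b / (t**2 - b**2))

t = 1.5
print(abs(lhs(t) - rhs(t)) < 1e-12)  # the two expressions agree
```

The agreement holds identically in t, which is exactly what makes the even part (in k) and the odd part (the (1/(2t))-multiple) of F′/F visible.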
Consider now Ke with ∂e = ue, belonging to H(K). Write u = a + (1/(2t))·b with a, b ∈ k. By assumption −(1/t)·b = σ(u) − u has the form G′/G for some G ∈ K*. Write G = t^{m₀} Π_{β≠0} (t − β)^{m_β}.

Example 4.3 The group D₂ ≅ (Z/2Z)² has the property that H²(D₂, C*) contains an element of order 2. In order to obtain this element for an obstruction to descent we consider the following differential fields
k = C(z) ⊂ K = C(t) ⊂ K′ = C(s) with z = t² + t⁻², t = s².
The group Gal(K/k) = {1, a, b, ab} ≅ D₂ where the elements a, b are given by a(t) = −t and b(t) = t⁻¹. One considers the differential module Ke with ∂e = ue and u = 1/(4(t² − t⁻²)) = s′/s. One observes that a(u) − u = 0 and b(u) − u = (t⁻¹)′/t⁻¹. Thus we may take f₁ = 1, f_a = 1, f_b = t⁻¹, f_{ab} = t⁻¹. The corresponding 2-cocycle c has values in {±1} and f_{ab} = c(a, b)·f_a·a(f_b) holds with c(a, b) = −1. It is easily verified that c ∈ H²(D₂, C*) is not trivial. The module K′ ⊗_K Ke is trivial since u = s′/s and therefore descends to k. In particular K′ is a descent field for (K, c).
2
We note that for a finite group G, acting trivially on C * , the cohomology group H 2 (G, C * ) is called the Schur multiplier of G. This group is well studied, see [Su]. Suppose that G + is given as a finite irreducible subgroup of GL(V ) where dim C V = n > 1 and that the center Z of G + is non trivial. Further, assume that a Galois extension K + ⊃ k = C(z) with Galois group G + is given. Then K := (K + ) Z is a Galois extension of k with group G := G + /Z.
Consider the differential module K + ⊗ C V over K + , defined by ∂(f ⊗v) = f ′ ⊗ v for f ∈ K + , v ∈ V . This is a trivial differential module. The action of
G + on K + ⊗ C V is defined by σ(f ⊗ v) = σ(f ) ⊗ σ(v)
. This action commutes with ∂.
Define N := (K + ⊗ C V ) G + . This is an irreducible differential module over k with Picard-Vessiot field K + . The subfield K is the smallest field such that K ⊗ k N is a direct sum of isomorphic copies of a 1-dimensional differential module D over K. In particular σ D ∼ = D for all σ ∈ Gal(K/k). The 2-cocycle attached to D is non trivial if there is no subgroup H ⊂ G + mapping bijectively to G. More precisely, K + is a smallest field over which the 2-cocycle becomes trivial if and only if no proper subgroup H of G + maps surjectively to G.
Take now any absolutely irreducible differential module M of dimension m > 1 over k. Then L := M ⊗ k N has the required properties. For dim N = 2 there is a rich choice of examples and there are similar explicit cases for n = 3, see [vdP-U]. 2
Example 4.5 Algorithms for the descent field for the case dim N = 2. Let L = M ⊗ N be given with M absolutely irreducible and an irreducible N with dim N = 2, det N = 1 and finite differential Galois group G⁺. For this case, methods for finding N (hence the descent field) and M are proposed, e.g., in [H3, C-W, N-vdP] (and references therein). Below is another nice method, adapted to our case. We have G⁺ ∈ {D_k^{SL₂}, A₄^{SL₂}, S₄^{SL₂}, A₅^{SL₂}} ⊂ SL(2, C) with center Z = {±id} and there is no proper subgroup of G⁺ mapping onto G := G⁺/Z ∈ {D_k, A₄, S₄, A₅}. According to Lemma 3.2, the 2-cocycle class c = d ∈ H²(G, C*) is non trivial. The method of Section 3 provides the field K with Gal(K/k) = G, from the data op(L, ℓ) and an absolutely irreducible monic left hand factor F of op(L, ℓ).
The (unknown) group G⁺ is a subgroup of SL(V) with dim_C V = 2. Write W = sym²V and let S ∈ sym²(W) be a generator of the kernel of sym²(W) → sym⁴(V). Then S is a non degenerate symmetric form of degree two. The homomorphism ψ : SL(V) → SL(W), defined by A ↦ A ⊗_s A, has kernel {±id} and its image is {B ∈ SL(W) | S is invariant under B}. One observes that ψ(G⁺) = G. Conversely, for any subgroup G ⊂ SL(W) preserving the form S, one has that ψ⁻¹(G) = G⁺.
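The homomorphism ψ can be written down explicitly for dim V = 2. The following sketch is our illustration, not from the text: in the basis {e₁², e₁e₂, e₂²} of W = sym²V (an assumed choice of basis), ψ(A) is the 3×3 matrix below, and the code checks that ψ is multiplicative, that its kernel contains ±id, and that it preserves the discriminant form β² − 4αγ (our concrete stand-in for the invariant form S):

```python
# psi : SL(V) -> SL(W), W = sym^2 V, given by A -> A (x)_s A, for dim V = 2.
# Matrices act on column vectors; A = [[a, b], [c, d]] sends e1 to a e1 + c e2.

def psi(A):
    (a, b), (c, d) = A
    return [[a*a,   a*b,       b*b],
            [2*a*c, a*d + b*c, 2*b*d],
            [c*c,   c*d,       d*d]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def disc(v):
    """Discriminant of the binary form alpha e1^2 + beta e1 e2 + gamma e2^2."""
    alpha, beta, gamma = v
    return beta * beta - 4 * alpha * gamma

A = [[1, 1], [0, 1]]   # a unipotent element of SL(2, Z)
B = [[0, -1], [1, 0]]  # another element of SL(2, Z)
negA = [[-x for x in row] for row in A]
v = [3, -2, 5]

print(psi(A) == psi(negA))                          # ker(psi) contains -id
print(psi(matmul(A, B)) == matmul(psi(A), psi(B)))  # psi is a homomorphism
print(disc(apply(psi(B), v)) == disc(v))            # the form is preserved
```

The quadratic form preserved by the image is exactly what cuts the image of ψ out of SL(W), mirroring the statement above.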
Let a ∈ K be a general element, then the orbit Ga is a basis of K/k and the C-vector space with basis Ga is the regular representation of G. This vector space contains an irreducible representation W of G of dimension three. In each of the cases for G, there exists a (unique) non degenerate symmetric form S on W which is invariant under G.
The unique monic differential operator T₃ ∈ K[∂] of degree 3 which is 0 on W belongs to k[∂] because W is invariant under G. This operator (or the corresponding differential module) is equivalent to the second symmetric power of an operator T₂ ∈ k[∂] (which can be found using [H3]). Let K̃ denote the Picard-Vessiot field of T₂. Then [K̃ : K] = 2. Let V ⊂ K̃ denote the space of solutions of T₂. Then W = {v₁v₂ | v₁, v₂ ∈ V} = sym²V. The differential Galois group of T₂ is ψ⁻¹(G) and thus isomorphic to G⁺. Hence K̃ is a descent field for (K, d) and then also for (K, c). Using this descent field one computes M and N.
Yet another observation (though less practical) is that J.J. Kovacic's fundamental algorithm for order 2 equations, [Ko], could also be applied to L = M ⊗ N . For example, the symmetric power sym m+1 (L) contains an irreducible factor over k, which is projectively isomorphic to M , for m = 2, 4, 6, 12 for the cases G + = D SL 2 k , A SL 2 4 , S SL 2 4 , A SL 2 5 . More refined factorisation patterns may be established for each of these cases.
Example 4.6 The referee's example. The operator
L₄ = ∂⁴ + (6z/(z² − 1))∂³ + ((1971z² − 947)/(288(z² − 1)²))∂² + (27z/(32(z² − 1)²))∂ + 9/(4096(z² − 1)²)
has the absolutely irreducible right hand factor
L₂ = ∂² + ((3z − α + 1)/(6(z² − 1)))∂ + 3/(64(z² − 1)),
where α is a root of T⁴ + 12(z − 1)T² − 32(z − 1)T − 12(z − 1)² = 0. There are the following methods: (1). By [C-W, H3]. L₄ = M ⊗ N with det M = det N = 1. The two factors of Λ²L₄ = sym²(M) ⊕ sym²(N) are easily computed and, using [H3], one finds M and N.
(2). By [N-vdP], Theorem 6.2. One computes F ∈ sym 2 (L 4 ) with ∂F = 0, F = 0 and a 2-dimensional isotropic subspace for F . From the last part of the proof of [N-vdP], Theorem 6.2 one reads off M and N .
(3). Example 4.5 works here as follows. K₀ = k(α) and its normal closure K₁ has Galois group A₄ and K = K₁. The C-vector space W spanned by A₄α has dimension 3. The operator T₃ = ∂³ + a₂∂² + a₁∂ + a₀ ∈ k[∂] with solution space W is determined by the equation T₃(α) = 0. This yields
T₃ = ∂³ + ((3z − 1)/((z + 1)(z − 1)))∂² + ((27z + 5)/(36(z + 1)²(z − 1)²))∂ − (9z + 23)/(36(z + 1)²(z − 1)³).
The operator
T₂ = ∂² + ((3z − 1)/(3(z + 1)(z − 1)))∂ − (3z − 11)/(48(z + 1)(z − 1)²)
satisfies sym²(T₂) = T₃ and its Picard-Vessiot field (an extension of k of degree 24) is a descent field. A minimum polynomial of an algebraic solution of T₂ is
P = Y⁸ + (1/3)(z − 1)Y⁴ + (4/27)(z − 1)Y² − (1/108)(z − 1)².
□

Remarks 2.2 The non unicity of the pair (M, N) can be restricted by the condition that det N = 1. Then only a change (M ⊗ D, N ⊗ D⁻¹) is possible with D^{⊗s} = 1.
Let op(A, a) ∈ k[∂] denote the monic operator of minimal degree satisfying op(A, a)a = 0. The morphism k[∂] → A, which maps 1 to a, induces an isomorphism k[∂]/k[∂]op(A, a) → A. Let B ⊂ A be a submodule. This yields a factorization op(A, a) = LR, where R ∈ k[∂] is the monic operator of minimal degree such that Ra ∈ B. One observes that R = op(A/B, a + B). Further L ∈ k[∂] is the operator of minimal degree satisfying Lb = 0, where b := Ra. Clearly b is a cyclic vector for B and L = op(B, b). Moreover, any factorization op(A, a) = LR with monic L, R corresponds in this way to a unique submodule B ⊂ A, namely B = k[∂]Ra.
Then, using the above formula, one finds that m₀ = 0 and m_{−β} = −m_β. Thus b = Σ_α d_α·√α/(z − α) with integers d_α, where α = β² (and some choice of √α is made). According to the above result, one has that Ke lies in h(K). Therefore H(K)/h(K) = {0}. This is in accordance with H²(Gal(K/k), C*) = {1} (since Gal(K/k) is cyclic). □
Examples 4.4 A construction of many examples of the type L = M ⊗ k N under consideration, involving a non trivial descent problem, is the following.
Techniques for arithmetic descent were proposed in [H-P], where the case of a differential field k with a non-algebraically closed field of constants is handled.
Acknowledgments

The authors are grateful to Mark van Hoeij and Lajos Ronyai and to the referee for interesting suggestions and for providing Example 4.6 leading to an improvement of our paper.

It factors over k(α) as …, which illustrates the fact that K⁺ is obtained from K₁ by adjunction of a square root, here √(−α/6) (in fact, adjoining any solution of T₂ would do). Continuation of our method (decomposing L₄ ⊗ T₂* over k) then yields …, and we may check that …. As solutions of both M and T₂ can be expressed in terms of special functions (e.g. using the methods of van Hoeij), this allows to solve L₄ in terms of algebraic and special functions.
References

[C-W] É. Compoint and J.-A. Weil - Absolute reducibility of differential operators and Galois groups, J. Algebra 275 (2004), no. 1, 77-105.

[F] T. Fisher - How to trivialise a central simple algebra, http://www.faculty.jacobs-university.de/mstoll/workshop2007/fisher2.pdf; see also http://www.warwick.ac.uk/staff/J.E.Cremona/descent/second/

[G-S] Ph. Gille and T. Szamuely - Central simple algebras and Galois cohomology, Cambridge Studies in Advanced Mathematics, Vol. 101, 2006.

[H-R-U-W] M. van Hoeij, J-F. Ragot, F. Ulmer and J-A. Weil - Liouvillian solutions of linear differential equations of order three and higher, J. Symbolic Comput. 28 (1999), 589-609.

[H-P] M. van Hoeij and M. van der Put - Descent for differential modules and skew fields, J. Algebra 296 (2006), no. 1, 18-55.

[H1] M. van Hoeij - Rational solutions of the mixed differential equation and its application to factorization of differential operators, ISSAC 1996, 219-225.

[H2] M. van Hoeij - Factorization of differential operators with rational functions coefficients, J. Symbolic Comput. 24 (1997), 537-561.

[H3] M. van Hoeij - Solving third order linear differential equations in terms of second order equations, ISSAC 2007, 355-360, ACM, 2007. http://www.math.fsu.edu/~hoeij/files/ReduceOrder/

[K] N. Katz - Exponential Sums and Differential Equations, Annals of Mathematics Studies 124, Princeton University Press, 1990.

[Ko] J. J. Kovacic - An algorithm for solving second order linear homogeneous differential equations, Journal of Symbolic Computation 2 (1986), 3-43.

[N-vdP] K.A. Nguyen and M. van der Put - Solving differential equations, to be published in the Tate volume in PAMQ. Also arXiv.org/abs/0810.4039.

[Pi] J. Pilnikova - Trivializing a central simple algebra of degree 4 over the rational numbers, Journal of Symbolic Computation 42 (2007), 587-620.

[vdP-S] M. van der Put and M.F. Singer - Galois theory of linear differential equations, Grundlehren Volume 328, Springer Verlag, 2003.

[vdP-U] M. van der Put and F. Ulmer - Differential equations and finite groups, J. Algebra 226 (2000), 920-966.

[R] L. Ronyai - Computing the structure of finite algebras, Journal of Symbolic Computation 9 (1990), no. 3, 355-373.

[Se1] J.-P. Serre - Cohomologie Galoisienne, Lecture Notes in Mathematics 5, Springer Verlag, 1973.

[Se2] J.-P. Serre - Corps locaux, Hermann, Paris, 1968.

[S-U] M.F. Singer and F. Ulmer - Linear differential equations and products of linear forms, J. Pure Appl. Algebra 117/118 (1997), 549-563.

[Su] M. Suzuki - Group Theory I, Comprehensive Studies in Mathematics, vol. 248, Springer-Verlag, Berlin, 1986.
[
"Identifying and characterizing extrapolation in multivariate response data",
"Identifying and characterizing extrapolation in multivariate response data"
] | [
"Meridith L Bartley [email protected] \nDepartment of Statistics\nPennsylvania State University\nUniversity ParkPennsylvaniaUnited States of America\n",
"Ephraim M Hanks 1☯ \nDepartment of Statistics\nPennsylvania State University\nUniversity ParkPennsylvaniaUnited States of America\n",
"Erin M Schliep \nDepartment of Statistics\nUniversity of Missouri\nColumbiaMissouri\n\nUnited States of America\n\n",
"Patricia A Soranno \nDepartment of Fisheries and Wildlife\nMichigan State University\nEast LansingMichigan\n\nUnited States of America\n\n",
"Tyler Wagner \nU.S. Geological Survey\nPennsylvania Cooperative Fish and Wildlife Research Unit\nPennsylvania State University\nUniversity ParkPennsylvaniaUnited States of America\n"
] | [
"Department of Statistics\nPennsylvania State University\nUniversity ParkPennsylvaniaUnited States of America",
"Department of Statistics\nPennsylvania State University\nUniversity ParkPennsylvaniaUnited States of America",
"Department of Statistics\nUniversity of Missouri\nColumbiaMissouri",
"United States of America\n",
"Department of Fisheries and Wildlife\nMichigan State University\nEast LansingMichigan",
"United States of America\n",
"U.S. Geological Survey\nPennsylvania Cooperative Fish and Wildlife Research Unit\nPennsylvania State University\nUniversity ParkPennsylvaniaUnited States of America"
] | [] | Faced with limitations in data availability, funding, and time constraints, ecologists are often tasked with making predictions beyond the range of their data. In ecological studies, it is not always obvious when and where extrapolation occurs because of the multivariate nature of the data. Previous work on identifying extrapolation has focused on univariate response data, but these methods are not directly applicable to multivariate response data, which are common in ecological investigations. In this paper, we extend previous work that identified extrapolation by applying the predictive variance from the univariate setting to the multivariate case. We propose using the trace or determinant of the predictive variance matrix to obtain a scalar value measure that, when paired with a selected cutoff value, allows for delineation between prediction and extrapolation. We illustrate our approach through an analysis of jointly modeled lake nutrients and indicators of algal biomass and water clarity in over 7000 inland lakes from across the Northeast and Mid-west US. In addition, we outline novel exploratory approaches for identifying regions of covariate space where extrapolation is more likely to occur using classification and regression trees. The use of our Multivariate Predictive Variance (MVPV) measures and multiple cutoff values when exploring the validity of predictions made from multivariate statistical models can help guide ecological inferences. | 10.1371/journal.pone.0225715 | null | 189,928,143 | 1906.07036 | c4b83cd09bb35fdfd2541d47d3b5f3b6dd0e1a8d |
Identifying and characterizing extrapolation in multivariate response data
Meridith L Bartleyid [email protected]
Department of Statistics
Pennsylvania State University
University ParkPennsylvaniaUnited States of America
Ephraim M Hanks 1☯
Department of Statistics
Pennsylvania State University
University ParkPennsylvaniaUnited States of America
Erin M Schliep
Department of Statistics
University of Missouri
ColumbiaMissouri
United States of America
Patricia A Soranno
Department of Fisheries and Wildlife
Michigan State University
East LansingMichigan
United States of America
Tyler Wagner
U.S. Geological Survey
Pennsylvania Cooperative Fish and Wildlife Research Unit
Pennsylvania State University
University ParkPennsylvaniaUnited States of America
Identifying and characterizing extrapolation in multivariate response data
RESEARCH ARTICLE ☯ These authors contributed equally to this work. *
Faced with limitations in data availability, funding, and time constraints, ecologists are often tasked with making predictions beyond the range of their data. In ecological studies, it is not always obvious when and where extrapolation occurs because of the multivariate nature of the data. Previous work on identifying extrapolation has focused on univariate response data, but these methods are not directly applicable to multivariate response data, which are common in ecological investigations. In this paper, we extend previous work that identified extrapolation by applying the predictive variance from the univariate setting to the multivariate case. We propose using the trace or determinant of the predictive variance matrix to obtain a scalar value measure that, when paired with a selected cutoff value, allows for delineation between prediction and extrapolation. We illustrate our approach through an analysis of jointly modeled lake nutrients and indicators of algal biomass and water clarity in over 7000 inland lakes from across the Northeast and Mid-west US. In addition, we outline novel exploratory approaches for identifying regions of covariate space where extrapolation is more likely to occur using classification and regression trees. The use of our Multivariate Predictive Variance (MVPV) measures and multiple cutoff values when exploring the validity of predictions made from multivariate statistical models can help guide ecological inferences.
Introduction
The use of ecological modeling to translate observable patterns in nature into quantitative predictions is vital for scientific understanding, policy making, and ecosystem management. However, generating valid predictions requires robust information across a well-sampled system which is not always feasible given constraints in gathering and accessing data. Extrapolation is defined as a prediction from a model that is a projection, extension, or expansion of an estimated model (e.g. regression equation, or Bayesian hierarchical model) beyond the range of the data set used to fit that model [1]. When we use a model fit on available data to predict a value or values at a new location, it is important to consider how dissimilar this new observation is to previously observed values. If some or many covariate values of this new point are dissimilar enough from those used when the model was fitted (i.e. either because they are outside the range of individual covariates or because they are a novel combination of covariates) predictions at this point may be unreliable. Fig 1, adapted from work by Filstrup et al. [2], illustrates this risk with a simple linear regression between the log transformed measurements of total phosphorous (TP) and chlorophyll a (Chl a) in U.S. lakes. The data shown in blue were used to fit a linear model with the estimated regression line shown in the same color. While the selected range of data may be reasonably approximated with a linear model, the linear trend does not extend into more extreme values, and thus our model and predictions are no longer appropriate. While ecologists and other scientists know the risks associated with extrapolating beyond the range of their data, they are often tasked with making predictions beyond the range of the available data in efforts to understand processes at broad scales, or to make predictions about the effects of different policies or management actions in new locations.
Forbes and Calow [3] discuss the double-edged sword of supporting cost-effective progress while exhibiting caution about potentially misleading results that would hinder environmental protections. They outline the need for extrapolation to balance these goals in ecological risk assessment. Other works [4][5][6] explore strategies for addressing the problem of ecological extrapolation, often in space and time, across applications in management tools and estimation practices. Previous work on identifying extrapolation includes Cook's early work on detecting outliers within a simple linear regression setting [7] and recent extensions to GLMs and similar models by Conn et al. [8].
The work of Conn et al. defines extrapolation as making predictions that occur outside of a generalized independent variable hull (gIVH), defined by the estimated predictive variance of the mean at observed data points. This definition allows for predictions to be either interpolations (inside the hull) or extrapolations (outside the hull). However, the work of Conn et al. [8] is restricted to univariate response data, which does not allow for the application of these methods to multivariate response models. This is an important limitation because many ecological and environmental research problems are inherently multivariate in nature. Elith and Leathwick [9] note the need for additional extrapolation assessments of fit in the context of using species distribution models (SDMs) for forecasting across different spatial and temporal scales. Mesgaran et al. [10] developed a new tool for identifying extrapolation using the Mahalanobis distance to detect and quantify the degree of dissimilarity for points either outside the univariate range or forming novel combinations of covariates.
In our paper, we present a general framework for quantifying and evaluating extrapolation in multivariate response models that can be applied to a broad class of problems. Our approach may be succinctly summarized as follows:
1. Fit an appropriate model to available multi-response data.
2. Choose a numeric measure associated with extrapolation that provides a scalar value in a multivariate setting.
3. Choose a cutoff or range of cutoffs for extrapolation/interpolation.

4. Given a cutoff, identify locations that are extrapolations.
5. Explore where extrapolations occur. Use this knowledge to help inform future analyses and predictions.
We draw on extensive tools for measures of leverage and influential points to inform decisions of a cutoff between extrapolation and interpolation. We illustrate our framework through an application of this approach on jointly modeled lake nutrients, productivity, and water clarity variables in over 7000 inland lakes from across the Northeast and Mid-west US.
Predicting lake nutrient and productivity variables
Inland lake ecosystems are threatened by cultural eutrophication, with excess nutrients such as nitrogen (N) and phosphorus (P) resulting in poor water quality, harmful algal blooms, and negative impacts to higher trophic levels [11]. Inland lakes are also critical components in the global carbon (C) cycle [12]. Understanding the water quality in lakes allows for informed ecosystem management and better predictions of the ecological impacts of environmental change. Water quality measurements are collected regularly by federal, state, local, and tribal governments, as well as citizen-science groups trained to sample water quality.
The LAGOS-NE database is a multi-scaled geospatial and temporal database for thousands of inland lakes in 17 of the most lake-rich states in the eastern Mid-west and the Northeast of the continental United States [13]. This database includes a variety of water quality measurements and variables that describe a lake's ecological context at multiple scales and across multiple dimensions (such as hydrology, geology, land use, and climate).
Wagner and Schliep [14] jointly modelled lake nutrient, productivity, and clarity variables and found strong evidence these nutrient-productivity variables are dependent. They also found that predictive performance was greatly enhanced by explicitly accounting for the multivariate nature of these data. Filstrup et al. [2] more closely examined the relationship between Chl a and TP and found nonlinear models fit the data better than a log-linear model. Most notably for this work, the relationship of these variables differ in the extreme values of the observed ranges; while a linear model may work for a moderate range of these data it is imperative that caution is shown before extending results to more extreme values (i.e., to extremely nutrient-poor or nutrient-rich lakes).
In this study, following Wagner and Schliep, we consider four variables: total phosphorous (TP), total nitrogen (TN), Chl a, and Secchi disk depth (Secchi) as joint response variables of interest. Each lake may have observations for all four of these variables, or only a subset. Fig 2 shows response variable availability (fully observed, partially observed, or missing) for each lake in the data set. A partially observed set of response variables for a lake indicates that at least one, but not all, of the water quality measures were sampled. We consider several covariates at the individual lake and watershed scales as explanatory variables, including maximum depth (m), mean base flow (%), mean runoff (mm/yr), road density (km/ha), elevation (m), stream density (km/ha), the ratio of watershed area to lake area, and the proportion of forested and agricultural land in each lake's watershed.

Fig 2. Left: map of inland lake locations with full, partial, or missing response variables. Missing response variables are lakes where all water quality measures have not been observed, while partial status indicates only some lake response variables are unobserved. Covariates were quantified for all locations. Right: subset of data status (observed or missing) for each response variable. All spatial plots in this paper were created using the Maps package [15] in R to provide an outline of US states.
https://doi.org/10.1371/journal.pone.0225715.g002

One goal among many for developing this joint model is to be able to predict TN concentrations for all lakes across this region, and eventually the entire continental US. Our objective is to identify and characterize when predictions of these multivariate lake variables are extrapolations. To this end, we will review and develop methods for identifying and characterizing extrapolation in multivariate settings.
Materials and methods
Review of current work
Cook's independent variable hull. As this work builds upon the work of Cook [7] and Conn et al. [8], we start with a review of their independent variable hull (IVH) and generalized independent variable hull (gIVH) approaches. Cook's work focuses on the identification of influential points in a linear regression setting. A linear regression model is written as
y = Xβ + ε   (1)
where y = [y_1, ..., y_n]′ denotes a vector of n univariate observed responses, X denotes the covariate matrix with an intercept, β are the covariate coefficients, and ε are independent, mean-zero normally distributed residuals. Throughout this paper, we use bold lowercase letters to denote vectors and bold uppercase letters to denote matrices. The predicted value of y may be calculated as

ŷ = Xβ̂   (2)

where β̂ may be replaced with its OLS estimate (β̂ = (X′X)^{-1}X′y) to obtain

ŷ = X(X′X)^{-1}X′y   (3)
Equivalently, the hat matrix, H = X(X′X)^{-1}X′, when multiplied by the observed y vector, produces the predicted values. The predicted response for observation i can be written as a linear combination of the n response variables,

ŷ_i = h_i1 y_1 + h_i2 y_2 + ... + h_ii y_i + ... + h_in y_n,  for i = 1, ..., n.   (4)
The diagonal elements of the hat matrix (h_ii = x_i′(X′X)^{-1}x_i) are called leverages, and while they depend only on the explanatory variables, they indicate the influence observations, y_i, have on their own predicted values, ŷ_i. A higher leverage h_ii indicates a higher influence of y_i in determining the model fitted response ŷ_i. This relationship means leverage values are useful quantities to explore when looking for influential points. The corresponding residual vector is r = y − ŷ = (I − H)y. Building on confidence ellipsoids for multiple coefficients, Cook's Distance, D_i, is a measure to explore the individual contribution of the i-th data point in a linear regression analysis. This measure may be calculated by

D_i = (β̂_(i) − β̂)′X′X(β̂_(i) − β̂) / (p s²) = (t_i²/p) · (h_ii / (1 − h_ii))   (5)
where p represents the number of parameters, s² is r′r/(n − p), and t_i is the i-th studentized residual. We use β̂_(i) to indicate the estimate of the β vector without the i-th data point. With all other values held constant, this measure increases as a function of the ratio of h_ii over 1 − h_ii, which depends only on the design points within X. As such, Cook defines his independent variable hull (IVH) as the smallest convex set containing all of the design points. Let h denote the maximum diagonal element of the hat matrix (i.e., h = max(diag(H))); then a new observation, x_0, is within this defined IVH whenever

x_0′(X′X)^{-1}x_0 ≤ h   (6)
and predicting at a point beyond the hull will imply an extrapolation. The hat matrix and its diagonals are useful diagnostics for finding outliers in a linear regression setting. Similarly, the Mahalanobis distance (MD) [16] can be used for identifying outliers. MD and leverage are monotonically related, as the scale-invariant squared MD may be represented by

MD_i² = (x_i − x̄)Σ̂^{-1}(x_i − x̄)′ = (l − 1)(h_ii − 1/l)   (7)

where x_i is a data point (with p total covariate observations), x̄ is the mean vector for all x (i.e., x̄ = (1/l) Σ_{i=1}^{l} x_i, where l is the number of observed lakes), and Σ̂ is the sample covariance matrix. We assume x̄ = 0 without loss of generality. This relationship assumes the model matrix X includes an intercept and makes use of the following partitioning

(X′X)^{-1} = [ 1/l , 0′ ; 0 , (1/(l − 1))Σ̂^{-1} ]   (8)
We may work backwards from h_ii to obtain

h_ii = (1, x_i) [ 1/l , 0′ ; 0 , (1/(l − 1))Σ̂^{-1} ] (1, x_i)′ = 1/l + (1/(l − 1)) x_i Σ̂^{-1} x_i′ = 1/l + MD_i²/(l − 1).   (9)
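The leverage/Mahalanobis relation in Eqs 7 through 9 is straightforward to verify numerically. A minimal numpy sketch on synthetic, centered covariates (not the lake data; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
l, p = 50, 3                             # l lakes, p covariates
Xc = rng.normal(size=(l, p))
Xc -= Xc.mean(axis=0)                    # center so the mean vector is 0, as in the text
X = np.column_stack([np.ones(l), Xc])    # design matrix with an intercept

H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
h = np.diag(H)                           # leverages h_ii

S_inv = np.linalg.inv(np.cov(Xc, rowvar=False))   # inverse sample covariance
md2 = np.einsum('ij,jk,ik->i', Xc, S_inv, Xc)     # squared Mahalanobis distances

# Eq 9: h_ii = 1/l + MD_i^2 / (l - 1)
assert np.allclose(h, 1.0 / l + md2 / (l - 1))

# IVH check (Eq 6) for a new point x0: inside the hull iff x0'(X'X)^{-1}x0 <= max h_ii
x0 = np.concatenate([[1.0], rng.normal(size=p)])
inside = x0 @ np.linalg.inv(X.T @ X) @ x0 <= h.max()
```

The centering step mirrors the x̄ = 0 assumption above; without it, the identity holds only after replacing x_i with x_i − x̄.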
This definition remains useful without any underlying distributional assumption on the data. For example, empirically obtained quantile cutoff values can serve reasonably well as thresholds for declaring outliers. However, for multivariate-normal data, the squared MD can be transformed into probabilities using a chi-squared cumulative probability distribution [17], such that points that have a very high probability of not belonging to the distribution could be classified as outliers. In either scenario, outliers can be detected using only predictor variables by calculating x_0′(X′X)^{-1}x_0 and comparing with max(diag(X(X′X)^{-1}X′)).

Conn's generalized IVH. The work of Cook does not immediately extend to generalized linear models (GLMs), where the assumption of Gaussian errors is relaxed. To extend to the GLM case, Conn et al. (2015) define a generalized independent variable hull (gIVH) for a generalized linear model,
y_i ~ f_y(g^{-1}(μ_i))   (10)
where f_y denotes a probability density or mass function, g gives the necessary link function, and μ_i is a linear predictor (e.g., μ_i = x_i′β). The gIVH is then the set of prediction locations whose predictive variance of the mean does not exceed the maximum over the observed locations,

var(ŷ_p) ≤ max_{o ∈ L_O} var(ŷ_o)   (11)

where p ∈ L_P, ŷ_p = g^{-1}(x_p′β̂) corresponds to the mean prediction at p, L_O denotes the set of locations where data are observed, and ŷ_o denotes predictions of observations at x_o ∈ L_O. In addition to this approach of using o and p to index observed and predicted locations, respectively, we will also in this paper use i to index the collective set of locations (i.e., i ∈ L_O ∪ L_P).
The variance of this predictive mean when a non-identity link is used may be found using the delta method, which may be written as

var(ŷ_i) = var(g(μ_i)) ≈ Δ var(μ_i) Δ′   (12)
where Δ is a matrix of partial derivatives of the function g(μ) with respect to its parameters, evaluated at the estimates, μ̂.

Prediction variance. The IVH approach of Cook's work uses only the design matrix, X, to calculate the hat matrix, H. Since the hat matrix is not always well defined for more complicated models, prediction variance may be substituted as a boundary for Conn et al.'s gIVH. This Prediction Variance (PV) approach requires the design matrix, X, in addition to the response variable vector, y. Finding the prediction variance under a univariate response model is accomplished either by direct calculation of var(ŷ) or through posterior predictive inference, resulting in a single scalar value for each location.
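The delta-method variance in Eq 12 can be checked numerically. A sketch using an inverse-logit mean function with toy values (the estimate and its variance are made up, not taken from any fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)

mu_hat, var_mu = 0.4, 0.01               # toy linear-predictor estimate and its variance
inv_logit = lambda m: 1.0 / (1.0 + np.exp(-m))

# Delta method: var of the transformed mean ~ (d inv_logit / d mu)^2 * var(mu)
p_hat = inv_logit(mu_hat)
grad = p_hat * (1.0 - p_hat)             # derivative of the inverse logit at mu_hat
var_delta = grad**2 * var_mu

# Monte Carlo check of the same quantity
draws = inv_logit(rng.normal(mu_hat, np.sqrt(var_mu), size=200_000))
var_mc = draws.var()
```

The two variance values agree closely here because var(μ) is small; the delta method is a first-order approximation and degrades as the curvature of the link matters more.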
Writing our linear predictor generally as
μ = Xβ   (13)

where X is the design matrix and β is the vector of unknown parameters to be estimated, we find

var(μ̂) = X var(β̂) X′   (14)

Under a linear model,

β̂ = (X′X)^{-1}X′y   (15)

where the distribution of β̂ is

β̂ ~ N(β, σ²(X′X)^{-1})   (16)

and thus var(μ̂) = σ²X(X′X)^{-1}X′ is proportional to the hat matrix used in Cook's IVH criterion.
While this extrapolation approach can be applied under both frequentist and Bayesian inference, we focus on a Bayesian setting in which we may calculate the prediction variance using the posterior predictive distribution of an out-of-sample observation, y_p, given the observed data, y. Using [·] to denote a probability distribution, this distribution is
[y_p | y] = ∫ [y_p | θ][θ | y] dθ   (17)
where θ = (β, σ²) in our linear model and [θ | y] is the posterior distribution. We may approximate the posterior predictive distribution through MCMC by sampling y_p^(a) ~ [y_p | θ^(a)] using θ^(a) at each iteration (a = 1, ..., A) of the algorithm. With the posterior predictive distribution and with observed covariates x_p at each new location, we may calculate μ^(a) = x_p′β^(a) at each MCMC iteration, and Monte Carlo predictive inference can be obtained using ŷ_p^(a) = μ^(a) for the converged MCMC samples. The prediction variance may be approximated by

var(ŷ_p | y) ≈ (1/A) Σ_{a=1}^{A} (ŷ_p^(a) − E(ŷ_p | y))²   (18)
With this sample-based calculation of prediction variance for our measure of extrapolation we can easily extend this univariate approach to the multivariate setting.
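The sample-based approximation in Eq 18 can be sketched on simulated data; for simplicity the sketch treats σ as known and uses a flat prior on β, so the posterior of β is available in closed form (an assumption made here for illustration, not part of the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 80, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
sigma = 0.7
y = X @ beta_true + rng.normal(0.0, sigma, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Posterior-style draws of beta (flat prior on beta, sigma treated as known)
A = 5_000
beta_draws = rng.multivariate_normal(beta_hat, sigma**2 * XtX_inv, size=A)

x_p = np.array([1.0, 0.5, -1.0])             # covariates at a new location
mu_draws = beta_draws @ x_p                  # mu^(a) = x_p' beta^(a)

var_mc = mu_draws.var()                      # Eq 18 with yhat^(a) = mu^(a)
var_exact = sigma**2 * x_p @ XtX_inv @ x_p   # closed-form variance of the mean
```

With enough draws, the Monte Carlo value recovers the closed-form σ² x_p′(X′X)^{-1}x_p, which is exactly the quantity that links the PV approach back to Cook's hat matrix.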
Extension to the multivariate case
Building upon this previous work, we aim to extend measures of extrapolation to handle predictions of multivariate data. We illustrate this using the inland lake nutrient and productivity data. Following the multivariate linear model developed by Wagner and Schliep [18], the joint nutrient-productivity model can be collectively written as:
y_i = Bx_i + ε_i,  ε_i ~iid N(0, Σ)   (19)

where y_i denotes a column vector of a matrix, Y, where y_ni is the value of the n-th lake nutrient-productivity variable for lake i. For each lake i,

y_i = [TN_i, TP_i, CHL_i, Secchi_i]′

y_i = Bx_i + ε_i  ⟺  y_ni = b_n′x_i + ε_ni   (20)

where B is a matrix of coefficients such that b_n′ is a row vector where b_nq is the coefficient of the q-th predictor variable for the n-th lake nutrient response variable. The notation x_i represents a q × 1 vector of predictor variables for lake i. Here, again for lake i, ε_i ~ N(0, Σ), where Σ is an n × n covariance matrix capturing the dependence between nutrient-productivity variables not accounted for by the regression. We assume multivariate errors are independent and identically distributed across lakes. Following Wagner and Schliep (2018), we take a Bayesian approach and specify priors for all model parameters:

b_rc ~iid N(0, 100),  r = 1, ..., n;  c = 1, ..., q
Σ ~ IW(I, q + 1)   (21)
Predictions of different response types covary in multivariate models, complicating our definition of a gIVH (see Eq 11) which relies on finding a maximum univariate value. Where a univariate model would yield a scalar prediction variance (Eq 18), a multivariate model will have a prediction covariance matrix. We propose capturing the size of a covariance matrix using univariate measures. Note this is similar to A-optimality and D-optimality criteria used in experimental design [19].
Further, using our novel numeric measure of extrapolation, we aim to take advantage of the multivariate response variable information to explore when we may identify an additional observation's (i.e. covariates for a new lake location) predictions as extrapolations for all response values, jointly. We also present an approach to identify when we cannot trust a prediction for only a single response variable at either a new lake location, or a currently partially sampled lake. The latter identification would be useful for a range of applications in ecology. For example, in the inland lakes project, one important goal is to predict TN because this essential nutrient is not well-sampled across the study extent, and yet is important for understanding nutrient dynamics and for informing eutrophication management strategies for inland lakes. In this case, to accommodate TN not being observed (i.e. sampled) as often as some other water quality variables, we can leverage the knowledge gained from samples of other water quality measures taken more often than TN (e.g. Secchi disk depth [20] is a common measure of water clarity obtained on site, while other water quality measurements require samples to be sent to a lab for analysis). We first outline our approach for identifying extrapolated new observations using a measure of predictive variance for lakes that have been fully or partially sampled and used to fit a model. Then, we describe how this approach can be applied to the prediction of TN in lakes for which it has not been sampled.
Multivariate extrapolation measures. Using available data for both complete and partial measurements of water quality at inland lake observations (here, Y = {y_o; o ∈ L_O}) and corresponding covariates of these sampled locations (X), we first fit an appropriate model to obtain estimates for parameters needed for prediction (here, B̂ and Σ̂). With these values, in addition to covariates that correspond with unsampled locations, we may either directly calculate the prediction variance or, in a Bayesian setting, simulate it via posterior predictive inference. We denote this prediction variance with V_i, where

V_i = var(ŷ_i | Y) = var(Bx_i | Y)   (22)

Each V_i is a square matrix for a sampled or unobserved location (i.e., from the combined sets L_O and L_P, respectively), with dimensions equal to the number of response variables in the model. As in the univariate case, we propose to characterize extrapolation by comparing prediction variances of unobserved lakes with corresponding prediction variances of observed lakes. To obtain a scalar value representation of each covariance matrix we propose using the trace or determinant. In this paper, we will refer to these multivariate posterior variance (MVPV) measures for each inland lake observation with respect to how this scalar value representation is calculated:
MVPV(tr)_i = tr(V_i) = Σ_{n=1}^{4} V_i[n, n],  where V_i[n, n] is the n-th diagonal element of V_i

MVPV(D)_i = |V_i| = Π_{n=1}^{4} λ_in,  where λ_in is the n-th eigenvalue of V_i   (23)
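A quick numeric check of the two scalar summaries in Eq 23 on a toy prediction covariance matrix (synthetic values, not from the lake model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 4x4 prediction covariance matrix V_i for a single lake
A = rng.normal(size=(4, 4))
V = A @ A.T + 0.1 * np.eye(4)            # symmetric positive definite

mvpv_tr = np.trace(V)                    # MVPV(tr): sum of the diagonal elements
mvpv_det = np.linalg.det(V)              # MVPV(D): product of the eigenvalues

eig = np.linalg.eigvalsh(V)
assert np.isclose(mvpv_tr, eig.sum()) and np.isclose(mvpv_det, eig.prod())

# The trace ignores correlation: zeroing the off-diagonals leaves it unchanged,
# while the determinant changes (Hadamard's inequality for positive definite matrices)
D = np.diag(np.diag(V))
assert np.isclose(np.trace(D), mvpv_tr)
assert np.linalg.det(D) > mvpv_det
```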
The trace (tr) of an n × n square matrix V is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right). This does not take into account the correlation between variables and is not a scale-invariant measure. As the response variables for the inland lakes example are log transformed, we chose to explore the use of this measure for obtaining a scalar value extrapolation measure. The determinant (D) takes into account the correlations among pairs of variables and is scale-invariant. In this paper, we explore both approaches by quantifying extrapolation using our multivariate model of the LAGOS-NE lake data set.

Conditional single variable extrapolation measures. The chosen numeric measure of MV extrapolation includes information from the entire set of responses. In the inland lake example, this could be used to identify unsampled lakes where predictions of the whole vector of response variables (TN, TP, Chl a, Secchi) are extrapolations. However, even when a joint model is appropriate, there are important scientific questions that can be answered with prediction of a single variable.
To focus on a single response variable (taken to be the n-th variable without loss of generality) conditioned on others, we now define the conditional multivariate predictive variance (CMVPV) as
CMVPV_i = var(ŷ_ni | y_−ni, Y)   (24)
Because in our model the multivariate errors ε_i are independently and identically distributed across lakes, var(ε_ni) = σ_n. As σ_n is constant across all lakes, we can use either var(y_ni | Y) or var(ŷ_ni | Y) to characterize extrapolation. While the variances are different, the conclusions about extrapolation will be the same, as both observed and unobserved lakes will have the same constant added.
As the inland lake data are modelled with a multivariate normal (MVN) distribution, we may use results for the conditional MVN distribution. If y_i is jointly normally distributed as
y_i ~ N(μ_i, Σ)   (26)
where μ_i = Bx_i, and if we condition the response for one nutrient measure for lake i on all other available nutrient measures for that lake, then
[y_ni | y_−ni = a, Y] ~ N(m̄, Σ̄)   (27)

For a lake observation that has been fully sampled for all four measures, we may compartmentalize the covariance matrix Σ for use in calculating the scalar values m̄ and Σ̄ for [y_1i | y_(2,3,4)i = a, Y]:

Σ = [ S_11 , Σ̃_12 ; Σ̃_21 , Σ̃_22 ]   (28)

Within this new configuration of Σ, S_11 does not change; Σ̃_12 is the 1 × 3 submatrix of Σ containing S_12, S_13, and S_14; Σ̃_21 is the 3 × 1 submatrix containing S_21, S_31, and S_41; and Σ̃_22 is the 3 × 3 submatrix containing the remaining elements of Σ. Using this partitioned Σ we may obtain m̄ and Σ̄:

m̄ = μ_ni + Σ̃_12 Σ̃_22^{-1} (a − μ_−ni)
Σ̄ = S_11 − Σ̃_12 Σ̃_22^{-1} Σ̃_21   (29)
Any of the four response variables may be considered to be variable 1, and so this general partition approach may be used for any variable conditioned on all others. The values of μ_−ni and Σ are determined by the availability of data for the three variables we are conditioning on. These water quality measures can be fully, partially, or not observed.
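The partitioned calculation in Eq 29 can be sketched directly with the standard conditional-normal identities (the covariance, mean, and observed values below are illustrative, not estimates from the lake data):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 0.5 * np.eye(4)        # toy PD covariance for (TN, TP, Chl a, Secchi)
mu_i = np.array([0.2, -0.1, 0.4, 1.0])   # Bx_i for one lake (made-up values)
a = np.array([0.0, 0.3, 1.2])            # observed values of variables 2, 3, 4

S11 = Sigma[0, 0]
S12 = Sigma[0, 1:]                       # 1 x 3 block
S21 = Sigma[1:, 0]                       # 3 x 1 block
S22_inv = np.linalg.inv(Sigma[1:, 1:])   # inverse of the 3 x 3 block

# Conditional mean and variance of y_1i given y_(2,3,4)i = a (Eq 29)
m_bar = mu_i[0] + S12 @ S22_inv @ (a - mu_i[1:])
S_bar = S11 - S12 @ S22_inv @ S21

# Conditioning on the other responses can only reduce the variance
assert 0 < S_bar < S11
```

The Schur-complement form of Σ̄ also equals 1/(Σ^{-1})[1,1], a handy cross-check on the partition arithmetic.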
In instances where none of the other measures has been observed, we may still proceed to calculate var(ŷ_ni | Y) as done for the MVPV. In order for this measure of variance to be comparable to other CMVPV values, we must add var(ε_ni) = σ_n. This Conditional MVPV (CMVPV) measure results in a single scalar value for each location, i ∈ L_O ∪ L_P, that may be used as outlined above to diagnose extrapolation.
Cutoffs vs continuous measures
With our selection of multivariate prediction variance measures (MVPV(tr), MVPV(D), and CMVPV), we may proceed by choosing a cutoff or range of cutoffs for delineating between extrapolation and interpolation. The role of a cutoff value or criterion in identifying and characterizing extrapolation is to delineate between prediction and extrapolation: where (among covariate values, time, space, etc.) may we expect our model to provide accurate prediction values, versus where should we exhibit caution when using model-based predictions. A key decision for whether or not we label a prediction as an extrapolation (and thus identify the location as a potentially unreliable extension of our model beyond the data) is the measure used as a boundary cutoff. Previous work [8,21] has used the maximum prediction variance as the cutoff of the g(IVH). However, many datasets contain outliers and influential points: data locations very different from the rest of the data. Choosing a cutoff for extrapolation based on the most extreme outlier in a data set will result in a very conservative definition of extrapolation for many datasets. We thus suggest (and illustrate below) that a range of extrapolation cutoffs be explored, resulting in a more complete understanding of potential extrapolation. Each cutoff value we propose is a function of the scalar valued prediction variance representations of MVPV(D or tr) and CMVPV for observed locations (L_O) only, denoted collectively here by v_o. We examine the following cutoff options:
1. Maximum predictive variance (Cook, Conn)
k_max = max_{o ∈ L_O}(v_o)   (30)
2. Leverage-informed maximum predictive variance
k_lev = max_{o ∈ L_O,−lev}(v_o)

3. Quantile value

k_r = q_r(v_o; o ∈ L_O)
The leverage-informed cutoff value is calculated from the set of observations L_O,−lev, where potential influential points (as determined based only on the covariate values, X) have been removed. For quantile-based cutoffs we suggest the 0.99 and 0.95 quantiles of the prediction variances from observed locations. These cutoffs are less conservative than the maximum predictive variance, which may be considered the 100% quantile value (i.e., a smaller cutoff value results in more unobserved locations identified as places where the empirical model may not be trusted).
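The three cutoff options can be sketched on toy MVPV values; the gamma-distributed values, the inflated "influential" lakes, and the leverage flags below are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
v_o = rng.gamma(2.0, 1.0, size=500)      # toy MVPV values at observed lakes
v_o[:3] += 15.0                          # a few outlying, influential lakes
high_leverage = np.zeros(500, dtype=bool)
high_leverage[:3] = True                 # flagged from the covariates alone

k_max = v_o.max()                        # maximum predictive variance (Eq 30)
k_lev = v_o[~high_leverage].max()        # leverage-informed maximum
k_95 = np.quantile(v_o, 0.95)            # 95% quantile cutoff

# Less conservative cutoffs flag more unsampled locations as extrapolations
v_p = rng.gamma(2.0, 1.0, size=100)      # toy MVPV values at unsampled lakes
n_flagged = [int((v_p > k).sum()) for k in (k_max, k_lev, k_95)]
```

Because k_95 ≤ k_lev ≤ k_max here, the flagged counts are nondecreasing across the three cutoffs, mirroring the conservatism ordering described above.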
Identifying locations as extrapolations
With the (C)MVPV values and cutoff choice in hand, determining which locations (observed or unobserved) are extrapolations is straightforward and results in a binary (yes/no) value. We refer to this delineation as our extrapolation index (e):
e_p^k = 1 if v_p > k; 0 otherwise   (31)
where k represents the cutoff choice obtained using v o . Each extrapolation index value is a function of the scalar value prediction variance representations of MVPV(D or tr) and CMVPV, for predicted locations (L P ) only denoted collectively here by v p . While this binary formulation allows for a simple way to determine whether or not we may diagnose a point as being an extrapolation, it does not allow for much nuance. Should a prediction with its predictive variance just beyond the boundary of the IVH be considered as untrustworthy as one with a predictive variance well beyond the boundary? We thus propose a numeric measure of extrapolation calculated by dividing predictive variance values for predicted locations by the cutoff value to generate a Relative MVPV (RMVPV) measurement:
R^k MVPV_p = v_p / k   (32)
R^k MVPV values greater than 1 would be considered extrapolations; in addition, the larger the value, the less trustworthy we would consider its prediction to be. This approach does not change which locations are identified as extrapolations, since the binary extrapolation index described above can be calculated from the RMVPV as

e_p^k = 1 if R^k MVPV_p > 1; 0 otherwise   (33)
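Eqs 32 and 33 amount to a single division and a threshold; a minimal sketch with made-up MVPV values:

```python
import numpy as np

v_p = np.array([0.8, 1.4, 3.9, 0.2])     # toy MVPV values at predicted locations
k = 1.3                                  # a chosen cutoff

rmvpv = v_p / k                          # relative measure (Eq 32)
e = (rmvpv > 1).astype(int)              # binary extrapolation index (Eq 33)
```

Here the second and third locations are flagged, and the third (RMVPV = 3.0) would be treated as considerably less trustworthy than the second (RMVPV just above 1).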
Choosing IVH vs PV
With several methods of identifying extrapolations available, we now provide additional guidance on choosing between the various options. Cook's approach of using the maximum leverage value to define the IVH boundary may be useful for either a univariate or a joint model in a linear regression framework. However, as it depends on covariate values alone, it lacks any influence of the response data. Conn et al.'s gIVH introduces the use of posterior predictive variance instead of the hat matrix to define the hull boundary in the case of a generalized model. One possible limitation of predictive variance approaches to obtain an extrapolation index arises under certain generalized models. Models with constrained supports (i.e. binary, Poisson, etc.) may exhibit decreased posterior variation when predictions are near the edges of the support. For example, in the binary case with a single covariate, if
y_i ~ Bern(p_i),  logit(p_i) = b_0 + b_1 x_i   (34)
then as x_i → ∞ (x_i → −∞), p_i → 1 (p_i → 0), and var(y_i | p_i) = p_i(1 − p_i) → 0. Thus, extreme points on the outside range of the observed values may have tiny predicted variance. This artificial decrease in variance may mask the identification of potentially extrapolated data points when using PV methods. Missing these extrapolations may also hinder our ability to characterize the covariate space, limiting the ability to provide reliable predictions. Thus, in models where prediction variance decreases as means go to extreme values, such as Binomial, Beta, or Uniform distributions, we recommend IVH over PV approaches, where this masking of extrapolation locations does not occur. We use the inland lake data set (see Predicting lake nutrient and productivity variables) to illustrate predicting joint response variables at unobserved lake locations.
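The masking effect in Eq 34 is easy to see numerically with made-up coefficients:

```python
import numpy as np

b0, b1 = 0.0, 2.0                         # toy logistic coefficients
x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))  # inverse logit of Eq 34
var_y = p * (1.0 - p)                     # Bernoulli variance

# The most extreme covariate values have the SMALLEST variance, so a
# variance-based hull would not flag them, even though they lie far
# outside the bulk of the data
```

Here the variance peaks at x = 0 (p = 0.5) and collapses toward zero at x = ±4, which is exactly why the IVH, based on covariates alone, is preferred over PV for such models.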
Visualization and interpretation
Exploring data and taking a principled approach to identifying potential extrapolation points is often aided by visualization (and interpretation) of data and predictions. With the LAGOS data we examine spatial plots of the lakes and their locations coded by extrapolation vs prediction. Plotting this for multiple cutoff choices (as in Fig 3) is useful to explore how this choice can influence which locations are considered extrapolations. This is important from both an ecological and management perspective. For instance, if potential areas are identified as having many extrapolations, this might suggest that specific lake ecosystems or landscapes have characteristics influencing processes governing nutrient dynamics in lakes not well captured by previously collected data, and thus may require further investigation. In addition to an exploration of possible extrapolation in physical space (through the plot in Fig 3), we also examine possible extrapolation in covariate space. Using either of the binary/numeric Extrapolation Index values, we propose a Classification and Regression Tree (CART) analysis with the extrapolation values as the response. Our classification approach allows for further insight into what covariates may be influential in determining whether a newly observed location is too dissimilar to existing ones. A CART model allows for the identification of regions in covariate space where predictions are suspect and may inform future sampling efforts, as the available data has not fully characterized all lakes.
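The CART step can be illustrated without any tree library: a single-split decision stump (the root node of a CART tree) on a toy, perfectly separable extrapolation index. The shoreline covariate, the 26 km threshold, and the data are all synthetic, chosen only to echo the kind of split this analysis produces:

```python
import numpy as np

def best_split(x, e):
    """Return the single-covariate threshold minimizing weighted Gini
    impurity of the binary extrapolation index e, as a CART root node would."""
    best_t, best_gini = None, np.inf
    for t in np.unique(x)[:-1]:
        gini = 0.0
        for g in (e[x <= t], e[x > t]):
            m = g.mean()
            gini += len(g) * (1.0 - (m**2 + (1.0 - m)**2))
        gini /= len(e)
        if gini < best_gini:
            best_t, best_gini = t, gini
    return best_t, best_gini

rng = np.random.default_rng(7)
shoreline = rng.uniform(0.0, 60.0, size=300)   # toy covariate (km)
e = (shoreline > 26.0).astype(int)             # perfectly separable toy index
t, gini = best_split(shoreline, e)             # recovers the separating threshold
```

A full CART analysis (e.g., with R's rpart, which the paper's workflow in R would naturally use) repeats this search over all covariates and recursively within each branch; the stump shows the core mechanic of how covariate regions prone to extrapolation are carved out.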
Model fitting
The joint nutrient-productivity model (see Extension to the multivariate case) was fit using MCMC in R [22]. We ran the MCMC algorithm for 20,000 iterations and used the coda package to analyze MCMC output and check for convergence [23]. Full conditional updates were available for all parameters (B, Σ, and Z); thus Gibbs updates were specified. We generated posterior predictions of lake nutrient levels across the entirety of observed and unobserved lake locations as
y_i ~ N(Bx_i, Σ),  i = 1, ..., n   (35)
and calculated multivariate prediction variance values as described in Multivariate extrapolation measures.
Results
Fig 3. As the cutoff values become more conservative in nature, the number of extrapolations identified increases. This figure shows the level of cutoff that first identifies a location as an extrapolation (e.g., red squares are locations first flagged using the 99% cutoff, but they would also be included in the extrapolations found with the 95% cutoff).

This increasing number of extrapolations identified highlights the importance of exploring different choices for a cutoff value. When the maximum value or the leverage-informed maximum of the predictive variance measure (k_max and k_lev) are used as cutoffs for determining when a prediction for an unsampled lake location should not be fully trusted, zero lakes are identified as extrapolations. Exploratory data analysis (see S1 Fig) indicates that for each of the lakes identified as extrapolations, the values are within the distribution of the data, with only a few exceptions. Rather than a few key variables standing out, it appears to be some combination of variables that makes a lake an extrapolation. To further characterize the type of lake more likely to be identified as an extrapolation, we used a CART model with our binary extrapolation index results using the MVPV(D) and the 0.95 quantile cutoff. This approach can help identify regions in the covariate space where extrapolations are more likely to occur (Fig 4). This CART analysis suggests the most important factors associated with extrapolation include shoreline length, elevation, stream density, and lake SDF. For example, a lake with a shoreline greater than 26 kilometers and above a certain elevation (≥ 279 m) is likely to be identified as an extrapolation when using this model to obtain predictions. This type of information is useful for ecologists trying to model lake nutrients because it suggests lakes with these types of characteristics may behave differently than other lakes.
In fact, lake perimeter, SDF, and elevation have been shown to be associated with reservoirs relative to natural lakes [24]. Although it is beyond the scope of our paper to fully explore this notion, because our existing database does not differentiate between natural lakes and reservoirs, these results lend support to our approach and conclusions. We also employed the conditional single-variable extrapolation through predictive variance approach, to leverage all information known about a lake when considering whether a prediction of a single response variable (e.g. TN, as explored here) is an extrapolation (Fig 5). The four cutoff criteria resulted in 0, 2, 73, and 386 of the 5031 lake TN predictions being identified as extrapolations. To characterize the type of lake more likely to be identified as an extrapolation we used a CART model with the 95% cutoff criterion. CART revealed that the most important factors associated with extrapolation were latitude, maximum depth, and watershed to lake size ratio. Latitude may be expected, as many of the lakes without measures for TN are located in the northern region. An additional visualization and table exploring extrapolated lakes and their covariate values may be found in S1 Tables.
Discussion
We have presented different approaches for identifying and characterizing potential extrapolation points within multivariate response data. Ecological research is often faced with the challenge of explaining processes at broad scales with limited data. Financial, temporal, and logistical restrictions often prevent research efforts from fully exploring an ecosystem or ecological setting. Rather, ecologists rely on predictions made from a select amount of available data that may not fully represent the breadth of a system of study. By better understanding when extrapolation is occurring, scientists may avoid making unsound inferences. In our inland lakes example we addressed the issue of large-scale predictions to fill in missing data using the joint linear model presented by Wagner and Schliep [18]. With our novel approach for identifying and characterizing extrapolation in a multivariate setting, we were able to provide numeric measures associated with extrapolation (MVPV, CMVPV, R(C)MVPV), allowing for focus on predictions for all response variables or for a single response variable while conditioning on others. Each of these measures, when paired with a cutoff criterion, identifies novel locations that are extrapolations. Our recommendations for visualization and interpretation of these extrapolated lakes are useful for future analyses and predictions which inform policy and management decisions. Insight into identified extrapolations and their characteristics suggests additional sampling locations to consider for future work. In this analysis we found that certain lakes, such as lakes located at relatively higher elevations in our study area, are more likely to be identified as extrapolations. The available data may thus not fully represent these types of lakes, resulting in them being poorly predicted, or identified as extrapolations.
The tools outlined in this work provide novel insights into identifying and characterizing extrapolations in multivariate response settings. Further extensions of this work are available but not explored in this paper. In addition to the A- and D-optimality approaches (trace and determinant, respectively) used to obtain scalar value representations of the covariance matrices, one may also explore the utility of E-optimality (maximum eigenvalue) as an additional criterion. This approach would focus on examining variance in the first principal component of the predictive variance matrix and, like the trace, this variance is not a scale-invariant measure. Our work takes advantage of posterior predictive inference under a Bayesian setting to obtain an estimate of the variance of the predictive mean response vector for each lake. However, a frequentist approach using simulation-based methods may also provide an estimate of this variance through non-parametric or parametric bootstrapping (a comparison of the two for spatial abundance estimates may be found in Hedley and Buckland [25]), and the extrapolation coefficients may be obtained through the trace and/or determinant of this variance.
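The three optimality criteria reduce a predictive covariance matrix to a scalar in different ways, and a toy calculation shows why MVPV(D) is preferable when responses live on different measurement scales (the covariances below are hypothetical numbers, not values from the paper):

```python
import numpy as np

def a_opt(S): return np.trace(S)                # A-optimality (MVPV(tr))
def d_opt(S): return np.linalg.det(S)           # D-optimality (MVPV(D))
def e_opt(S): return np.linalg.eigvalsh(S)[-1]  # E-optimality (largest eigenvalue)

# predictive covariances for two hypothetical lakes; the second response
# is on a much smaller scale than the first
S1 = np.diag([1.0, 0.04])
S2 = np.diag([2.0, 0.01])

# change the units of response 2 by a factor of 10 (variance by 100)
R = np.diag([1.0, 10.0])
S1r, S2r = R @ S1 @ R, R @ S2 @ R

# trace-based ranking of the two lakes flips under the unit change...
print(a_opt(S1) < a_opt(S2), a_opt(S1r) < a_opt(S2r))  # -> True False
# ...while the determinant rescales every lake by the same factor, so the
# ranking (and hence any quantile-based cutoff) is unaffected
print(d_opt(S1) > d_opt(S2), d_opt(S1r) > d_opt(S2r))  # -> True True
print(e_opt(S1r))                                      # -> 4.0
```

The last line is the E-optimality scalar mentioned above, which inherits the same scale sensitivity as the trace.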
This work results in the identification of extrapolated lake locations as well as further understanding of the unique covariate space they occupy. The resulting caution shown when using joint nutrient models to estimate water quality variables at lakes with partially or completely unsampled measures is necessary for larger goals such as estimating the overall combined levels of varying water qualities in all US inland lakes. In addition, under- or overestimating concentrations of key nutrients such as TN and TP can potentially lead to misinformed management strategies, which may have deleterious effects on water quality and the lake ecosystem. The identification of lake and landscape characteristics associated with extrapolation locations can further our understanding of natural and anthropogenic sources of nutrients in lakes not well represented in the sampled population. In our database, TP is sampled more than TN, which is likely due to the conventional wisdom that inland waters are P-limited, with P contributing the most to eutrophication [26]. However, nitrogen has been shown to be an important nutrient in eutrophication in some lakes and some regions [27], and may be as important to sample to fully understand lake eutrophication. Our results show it is possible to predict TN if other water quality variables are available, but it would be better if it were sampled more often.
The joint model used in this work can be improved upon in several regards: no spatial component is included, response variables are averages over several years' worth of data and thus temporal variation is not considered, and data from different years are given equal weight. The model we use to fit these data may be considered a simple one, but the novel approach presented here may be applied to more complicated models. In a sample-based approach using a Bayesian framework, the MVPV and CMVPV values obtained come from the MCMC samples and are thus independent of model design choices.
Deeper understanding of where extrapolation is occurring will allow researchers to propagate this uncertainty forward. Follow-up analyses using model-based predictions need to acknowledge that some predictions are less trustworthy than others. This approach and our analysis here show that while a model may be able to produce an estimate and a confidence or prediction interval, that does not mean the truth is captured, nor does the assumed relationship persist, especially outside the range of observed data. The methods outlined here will serve to guide future scientific inquiries involving joint distribution models.
Supporting information
Fig 1. Fit of the Chl a-TP relationship for inland lakes using linear regression. A 95% confidence interval of the mean is included around the regression line. Dashed red lines represent the 95% prediction interval. Areas shaded in darker grey indicate regions of extrapolation (using the maximum leverage value (h_ii) to identify the boundaries). https://doi.org/10.1371/journal.pone.0225715.g001
Identifying and characterizing extrapolation in multivariate response data. PLOS ONE | https://doi.org/10.1371/journal.pone.0225715 December 5, 2019
Fig 2. Left: map of inland lake locations with full, partial, or missing response variables. Missing response variables are lakes where all water quality measures have not been observed, while partial status indicates only some lake response variables are unobserved. Covariates were quantified for all locations. Right: subset of data status (observed or missing) for each response variable. All spatial plots in this paper were created using the maps package [15] in R to provide outlines of US states.
1. Finding the joint posterior distribution [B, Σ | Y]
2. Calculating the posterior predictive variance at in-sample lakes
3. Calculating the posterior predictive variance at out-of-sample lakes
4. Identifying extrapolations by comparing out-of-sample MVPV values to a cutoff value chosen using the in-sample values.
Fig 3. Identification of prediction vs extrapolation locations of LAGOS-NE lakes. Four cutoff approaches are compared and presented. Lakes in orange diamonds and red triangles indicate those where predictions were beyond the 99% and 95% cutoff values, respectively, and thus considered extrapolations. The color and shape of extrapolated lake locations are determined by which cutoff value first identifies the prediction at that location as an extrapolation. https://doi.org/10.1371/journal.pone.0225715.g003
Fitting our multivariate joint linear model to the 8,910 lakes resulted in most lakes' predictions remaining within the extrapolation index cutoff and thus not being identified as extrapolations. We explored the use of both the trace and the determinant for obtaining a scalar value representation of the multivariate posterior predictive variance, in addition to four cutoff criteria. Using MVPV(tr) with these cutoffs (max value, leverage max, 0.99 quantile, and 0.95 quantile) resulted in 0, 1, 9, and 33 multivariate response predictions being identified as extrapolations, respectively. In contrast, using MVPV(D) values combined with the four cutoffs resulted in 0, 0, 8, and 37 predictions identified as extrapolations. Unless all response variables are on the same scale we recommend the use of MVPV(D) over MVPV(tr). However, if a scale-invariant measure is not necessary, exploring the use of MVPV(tr) in addition to MVPV(D) may reveal single response variables that are of interest to researchers for further exploration using our conditional MVPV approach. Fig 3 shows the spatial locations of lakes where the collective model predictions for TP, TN, Chl a, and Secchi depth have been identified as extrapolations using MVPV(D) combined with the cutoff measures.
Fig 4. CART model results showing which variables may be useful in identifying extrapolations for inland lakes. Each level of nodes includes the thresholds and variables used to sort the data. Node color indicates whether the majority of sorted inland lake locations were identified as predictions (blue) or extrapolations (red). The first row of numbers in a node indicates the number of lakes identified as predictions (right) or extrapolations (left) that have been sorted into this node. The second row of numbers indicates the percentage of lakes identified as predictions (left) or extrapolations (right), with the terminal nodes (square nodes) including the percentage of records sorted by the decision tree. https://doi.org/10.1371/journal.pone.0225715.g004
Fig 5. Identification and locations of prediction vs extrapolation of the single response variable, TN. Four cutoff approaches are compared and presented. Lakes in blue circles represent locations where TN predictions have not been identified as extrapolations for any cutoff choice. Lakes in red squares, orange triangles, and yellow diamonds indicate those where predictions were beyond the cutoff values and thus considered extrapolations. The color and shape of extrapolated lake locations are determined by which cutoff value first identifies the prediction at that location as an extrapolation. https://doi.org/10.1371/journal.pone.0225715.g005
S1 Fig. Violin plots of covariate densities with extrapolation points plotted. (PDF)
S1 Tables. Tables of covariate values for lakes identified as extrapolations using MVPV(D) and CMVPV for TN. (TEX)
Using Cook's IVH boundary connection to predictive variance, Conn et al. define the gIVH as the set of all predicted locations L_P for which

    var(ŷ_p | y) <= max_{o ∈ L_O} [var(ŷ_o | y)] ,   (11)
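For a linear model, the predictive variance appearing in Eq (11) is proportional to Cook's leverage, so the gIVH boundary can be computed directly. A small sketch (illustrative Python with a synthetic one-covariate design; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])  # sampled design
XtX_inv = np.linalg.inv(X.T @ X)

def pred_var(x, sigma2=1.0):
    """Variance of the estimated mean response at covariate vector x,
    sigma2 * x' (X'X)^{-1} x (proportional to the leverage of x)."""
    return sigma2 * x @ XtX_inv @ x

in_vals = np.array([pred_var(x) for x in X])
givh_cutoff = in_vals.max()   # Eq (11): max predictive variance over observed sites

x_inside = np.array([1.0, 5.0])    # within the sampled covariate range
x_outside = np.array([1.0, 25.0])  # far beyond it
print(pred_var(x_inside) <= givh_cutoff,
      pred_var(x_outside) <= givh_cutoff)  # -> True False
```

Only the second location falls outside the gIVH, i.e. would be treated as an extrapolation.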
where y_{-ni} are the response variables for the i-th lake observation being conditioned upon. With the Bayesian approach detailed above, we can obtain sample realizations of the conditional MVN distribution of [y_{ni} | y_{-ni}, Y] for all MCMC iterations; however, we still need a way to obtain a univariate prediction variance. While one could directly sample from [ŷ_{ni} | y_{-ni}, Y], we suggest the following relationship:

    var(y_{ni} | Y) = var(ŷ_{ni} + ε_{ni} | Y) = var(ŷ_{ni} | Y) + var(ε_{ni} | Y) = var(ŷ_{ni} | Y) + σ̂²_n

The covariance matrix Σ is partitioned in the following way,

    Σ = | Σ_11  Σ_12  Σ_13  Σ_14 |
        | Σ_21  Σ_22  Σ_23  Σ_24 |   →   | Σ̃_11  Σ̃_12 |
        | Σ_31  Σ_32  Σ_33  Σ_34 |       | Σ̃_21  Σ̃_22 |
        | Σ_41  Σ_42  Σ_43  Σ_44 |

where the tilde blocks collect the entries for the response being predicted (Σ̃_11) and for the responses being conditioned upon (Σ̃_22).
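The conditional variance used by the CMVPV follows from this partition via the standard MVN conditioning (Schur complement) formula, Σ̃_11 - Σ̃_12 Σ̃_22^{-1} Σ̃_21. A minimal sketch with a toy 4 x 4 covariance (illustrative Python, not the authors' code; the numbers and the response ordering are invented):

```python
import numpy as np

def conditional_cov(Sigma, keep, cond):
    """Covariance of the MVN block `keep` given the block `cond`:
    S11 - S12 S22^{-1} S21 (the Schur complement of the partition)."""
    S11 = Sigma[np.ix_(keep, keep)]
    S12 = Sigma[np.ix_(keep, cond)]
    S22 = Sigma[np.ix_(cond, cond)]
    return S11 - S12 @ np.linalg.inv(S22) @ S12.T

# toy 4x4 response covariance (e.g. TP, TN, Chl a, Secchi; illustrative values)
A = np.array([[2.0, 0.5, 0.3, 0.1],
              [0.5, 1.0, 0.4, 0.2],
              [0.3, 0.4, 1.5, 0.3],
              [0.1, 0.2, 0.3, 1.0]])

# variance of the second response after conditioning on the other three
v = conditional_cov(A, keep=[1], cond=[0, 2, 3])[0, 0]
print(round(v, 3))
```

Conditioning on correlated responses can only shrink the variance, which is why the CMVPV of a response with observed companions is smaller than its unconditional MVPV.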
Acknowledgments

We thank the LAGOS Continental Limnology Research Team for helpful discussions throughout the process of this manuscript. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

Disclaimer

This draft manuscript is distributed solely for purposes of scientific peer review. Its content is deliberative and predecisional, so it must not be disclosed or released by reviewers. Because the manuscript has not yet been approved for publication by the US Geological Survey (USGS), it does not represent any official finding or policy.

Author Contributions
1. Miller JR, Turner MG, Smithwick EAH, Dent CL, Stanley EH. Spatial extrapolation: the science of predicting ecological patterns and processes. BioScience. 2004;54(4):310-320. https://doi.org/10.1641/0006-3568(2004)054[0310:SETSOP]2.0.CO;2
2. Filstrup CT, Wagner T, Soranno PA, Stanley EH, Stow CA, Webster KE, et al. Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes. Limnology and Oceanography. 2014;59(5):1691-1703.
3. Forbes V, Calow P. Extrapolation in ecological risk assessment: balancing pragmatism and precaution in chemical controls legislation. BioScience. 2002;225(3):152-161.
4. Freckleton RP. The problems of prediction and scale in applied ecology: the example of fire as a management tool. Journal of Applied Ecology. 2004;41(4):599-603.
5. Colwell RK, Coddington JA. Estimating terrestrial biodiversity through extrapolation. Philosophical Transactions of the Royal Society B: Biological Sciences. 1994;345(1311):101-118. https://doi.org/10.1098/rstb.1994.0091
6. Peters DPC, Herrick JE, Urban DL, Gardner RH, Breshears DD. Strategies for ecological extrapolation. Oikos. 2004;106(3):627-636.
7. Cook RD. Detection of influential observation in linear regression. Technometrics. 1977;19(1):15-18.
8. Conn PB, Johnson DS, Boveng PL. On extrapolating past the range of observed data when making statistical predictions in ecology. PLoS ONE. 2015.
9. Elith J, Leathwick JR. Species distribution models: ecological explanation and prediction across space and time. Annual Review of Ecology, Evolution, and Systematics. 2009;40(1):677-697. https://doi.org/10.1146/annurev.ecolsys.110308.120159
10. Mesgaran MB, Cousens RD, Webber BL. Here be dragons: a tool for quantifying novelty due to covariate range and correlation change when projecting species distribution models. Diversity and Distributions. 2014;20(10):1147-1159.
11. Carpenter SR, Caraco NF, Correll DL, Howarth RW, Sharpley AN, Smith VH. Nonpoint pollution of surface waters with phosphorus and nitrogen. Ecological Applications. 1998;8:559-568.
12. Tranvik LJ, Cole JJ, Prairie YT. The study of carbon in inland waters: from isolated ecosystems to players in the global carbon cycle. Limnology and Oceanography Letters. 2018;3(3):41-48.
13. Soranno PA, Bacon LC, Beauchene M, Bednar KE, Bissell EG, Boudreau CK, et al. LAGOS-NE: a multi-scaled geospatial and temporal database of lake ecological context and water quality for thousands of U.S. lakes. GigaScience. 2017:1-22. https://doi.org/10.1093/gigascience/gix101 PMID: 29053868
14. Wagner T, Schliep EM. Combining nutrient, productivity, and landscape-based regressions improves predictions of lake nutrients and provides insight into nutrient coupling at macroscales. Limnology and Oceanography. 2018;63(6):2372-2383.
15. Becker RA, Wilks AR, Brownrigg R, Minka TP. maps: Draw Geographical Maps. R package version 2.3-6; 2013.
16. Mahalanobis PC. On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India. 1936;2:49-55.
17. Etherington TR. Mahalanobis distances and ecological niche modelling: correcting a chi-squared probability error. PeerJ. 2019.
18. Wagner T, Schliep EM. Combining nutrient, productivity, and landscape-based regressions improves predictions of lake nutrients and provides insight into nutrient coupling at macroscales. Limnology and Oceanography. 2018.
19. Gentle JE. Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer; 2007.
20. Lottig NR, Wagner T, Henry EN, Cheruvelil KS, Webster KE, Downing JA, et al. Long-term citizen-collected data reveal geographical patterns and temporal trends in lake water clarity. PLoS ONE. 2014.
21. Cook RD. Influential observations in linear regression. Journal of the American Statistical Association. 1979.
22. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2016. http://www.R-project.org/
23. Plummer M, Best N, Cowles K, Vines K. CODA: Convergence Diagnosis and Output Analysis for MCMC. R News. 2006;6(1):7-11.
24. Doubek JP, Carey CC. Catchment, morphometric, and water quality characteristics differ between reservoirs and naturally formed lakes on a latitudinal gradient in the conterminous United States. Inland Waters. 2017.
25. Hedley SL, Buckland ST. Spatial models for line transect sampling. Journal of Agricultural, Biological, and Environmental Statistics. 2004.
26. Conley DJ, Paerl HW, Howarth RW, Boesch DF, Seitzinger SP, Havens KE, et al. Controlling eutrophication: nitrogen and phosphorus. Science. 2009.
27. Paerl HW, Xu H, McCarthy MJ, Zhu G, Qin B, Li Y, et al. Controlling harmful cyanobacterial blooms in a hyper-eutrophic lake (Lake Taihu, China): the need for a dual nutrient (N & P) management strategy. Water Research. 2011.
TASI 2009 Lectures - Flavor Physics

Oram Gedalia and Gilad Perez
Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot, Israel

18 May 2010 (arXiv:1005.3106, https://arxiv.org/pdf/1005.3106v1.pdf)

The standard model picture of flavor and CP violation is now experimentally verified, hence strong bounds on the flavor structure of new physics follow. We begin by discussing in detail the unique way that flavor conversion and CP violation arise in the standard model. The description provided is based on a spurion, symmetry oriented, analysis, and a covariant basis for describing flavor transition processes is introduced, in order to make the discussion transparent for non-experts. We show how to derive model independent bounds on generic new physics models. Furthermore, we demonstrate, using the covariant basis, how recent data and LHC projections can be applied to constrain models with an arbitrary mechanism of alignment. Next, we discuss the various limits of the minimal flavor violation framework and their phenomenological aspects, as well as the implications to the underlying microscopic origin of the framework. We also briefly discuss aspects of supersymmetry and warped extra dimension flavor violation. Finally we speculate on the possible role of flavor physics in the LHC era.

The standard model flavor sector

The SM quarks furnish three representations of the SM gauge group, SU(3) × SU(2) × U(1): Q(3, 2)_{+1/6}, U(3, 1)_{+2/3} and D(3, 1)_{-1/3}, where Q, U, D stand for the SU(2) weak doublet, up type and down type singlet quarks, respectively. Flavor physics is related to the fact that the SM consists of three replications/generations/flavors of these three representations.
Introduction
Flavors are replications of states with identical quantum numbers. The standard model (SM) consists of three such replications of the five fermionic representations of the SM gauge group. Flavor physics describes the non-trivial spectrum and interactions of the flavor sector. What makes this field particularly interesting is that the SM flavor sector is rather unique, and its special characteristics make it testable and predictive. 1 Let us list a few of the SM's unique flavor predictions:
• It contains a single CP violating parameter. 2
• Flavor conversion is driven by three mixing angles.
• To leading order, flavor conversion proceeds through weak charged current interactions.
• To leading order, flavor conversion involves left handed (LH) currents.
• CP violating processes must involve all three generations.
• The dominant flavor breaking is due to the top Yukawa coupling, hence the SM possesses a large approximate global flavor symmetry (as shown below, technically it is given by U(2)_Q × U(2)_U × U(1)_t × U(3)_D).
In the last four decades or so, a huge effort was invested towards testing the SM predictions related to its flavor sector. Recently, due to the success of the B factories, the field of flavor physics has made a dramatic progress, culminated in Kobayashi and Maskawa winning the Nobel prize. It is now established that the SM contributions drive the observed flavor and CP violation (CPV) in nature, via the Cabibbo-Kobayashi-Maskawa (CKM) [1,2] description. To verify that this is indeed the case, one can allow new physics (NP) to contribute to various clean observables, which can be calculated precisely within the SM. Analyses of the data before and after the B factories data have matured [3,4,5,6], demonstrating that the NP contributions to these clean processes cannot be bigger than O (30%) of the SM contributions [7,8]. Very recently, the SM passed another non-trivial test. The neutral D meson system (for formalism see e.g. [9,10,11,12,13] and refs. therein) bears two unique aspects among the four neutral meson system (K, D, B, B s ): (i) The long distance contributions to the mixing are orders of magnitude above the SM short distance ones [14,15], thus making it difficult to theoretically predict the width and mass splitting. (ii) The SM contribution to the CP violation in the mixing amplitude is expected to be below the permil level [16], hence D 0 − D 0 mixing can unambiguously signal new physics if CPV is observed. Present data [17,18,19,20,21,22,23,24] implies that generic CPV contributions can be only of O (20%) of the total (un-calculable) contributions to the mixing amplitudes, again consistent with the SM null prediction.
We have just given rather solid arguments for the validity of the SM flavor description. What else is there to say then? Could this be the end of the story? We have several important reasons to think that flavor physics will continue to play a significant role in our understanding of microscopical physics at and beyond the reach of current colliders. Let us first mention a few examples that demonstrate the role that flavor precision tests played in the past:
• The smallness of Γ(K L → µ + µ − )/Γ(K + → µ + ν) led to predicting a fourth quark (the charm) via the discovery of the GIM mechanism [25].
• The size of the mass difference in the neutral Kaon system, ∆m K , led to a successful prediction of the charm mass [26].
• The size of ∆m B led to a successful prediction of the top mass (for a review see [27] and refs. therein).
This partial list demonstrates the power of flavor precision tests in terms of being sensitive to short distance dynamics. Even in view of the SM successful flavor story, it is likely that there are missing experimental and theoretical ingredients, as follows:
• Within the SM, as mentioned, there is a single CP violating parameter. We shall see that the unique structure of the SM flavor sector implies that CP violating phenomena are highly suppressed. Baryogenesis, which requires a sizable CP violating source, therefore cannot be accounted for by the SM CKM phase. Measurements of CPV in flavor changing processes might provide evidence for additional sources coming from short distance physics.
• The SM flavor parameters are hierarchical, and most of them are small (excluding the top Yukawa and the CKM phase), which is denoted as the flavor puzzle. This peculiarity might stem from unknown flavor dynamics. Though it might be related to very short distance physics, we can still get indirect information about its nature via combinations of flavor precision and high p T measurements.
• The SM fine tuning problem, which is related to the quadratic divergence of the Higgs mass, generically requires new physics at, or below, the TeV scale. If such new physics has a generic flavor structure, it would contribute to flavor changing neutral current (FCNC) processes orders of magnitude above the observed rates. Putting it differently, the flavor scale at which NP is allowed to have a generic flavor structure is required to be larger than O(10^5) TeV, in order to be consistent with flavor precision tests. Since this is well above the electroweak symmetry breaking scale, it implies an "intermediate" hierarchy puzzle (cf. the little hierarchy [28,29] problem). We use the term "puzzle" and not "problem" since in general, the smallness of the flavor parameters, even within NP models, implies the presence of approximate symmetries. One can imagine, for instance, a situation where the suppression of the NP contributions to FCNC processes is linked with the SM small mixing angles and small light quark Yukawas [4,5]. In such a case, this intermediate hierarchy is resolved in a technically natural way, or radiatively stable manner, and no fine tuning is required. 3

The SM consists of three replications/generations/flavors of these three representations, and the flavor sector of the SM is described via the following part of the SM Lagrangian:
L_F = q̄_i D̸ q_j δ_ij + (Y_U)_{ij} Q̄_i U_j H_U + (Y_D)_{ij} Q̄_i D_j H_D + h.c. ,   (1)
where D̸ ≡ D_μ γ^μ, with D_μ a covariant derivative, q = Q, U, D, and, within the SM with a single Higgs, H_U = iσ_2 H_D^* (however, the reader should keep in mind that at present the nature and content of the SM Higgs sector is unknown); i, j = 1, 2, 3 are flavor indices.
If we switch off the Yukawa interactions, the SM would possess a large global flavor symmetry, G_SM, 4

G_SM = U(3)_Q × U(3)_U × U(3)_D .   (2)
Inspecting Eq. (1) shows that the only non-trivial flavor dependence in the Lagrangian is in the form of the Yukawa interactions. It is encoded in a pair of 3 × 3 complex matrices, Y U,D .
The SM quark flavor parameters
Naively one might think that the number of the SM flavor parameters is given by 2 × 9 = 18 real numbers and 2 × 9 = 18 imaginary ones, the elements of Y U,D . However, some of the parameters which appear in the Yukawa matrices are unphysical. A simple way to see that (see e.g. [30,31,32] and refs. therein) is to use the fact that a flavor basis transformation,
Q → V_Q Q ,   U → V_U U ,   D → V_D D ,   (3)
leaves the SM Lagrangian invariant, apart from redefinition of the Yukawas,
Y_U → V_Q Y_U V_U† ,   Y_D → V_Q Y_D V_D† ,   (4)
where V_i is a 3 × 3 unitary rotation matrix. Each of the three rotation matrices V_{Q,U,D} contains three real parameters and six imaginary ones (the former correspond to the three generators of the SO(3) group, the latter to the remaining six generators of the U(3) group). We know, however, that physical observables do not depend on our choice of basis. Hence, we can use these rotations to eliminate unphysical flavor parameters from Y_{U,D}. Out of the 18 real parameters, we can remove 9 (3 × 3) ones. Out of the 18 imaginary parameters, we can remove 17 (3 × 6 − 1) ones. We cannot remove all the imaginary parameters, due to the fact that the SM Lagrangian conserves a U(1)_B symmetry. 5 Thus, there is a linear combination of the diagonal generators of G_SM which is unbroken even in the presence of the Yukawa matrices, and hence cannot be used to remove the extra imaginary parameter. An explicit calculation shows that the 9 real parameters correspond to 6 masses and 3 CKM mixing angles, while the imaginary parameter corresponds to the celebrated CKM CPV phase. To see this, we can define a mass basis where Y_{U,D} are both diagonal. This can be achieved by applying a bi-unitary transformation to each of the Yukawas:
Q_{u,d} → V_{Q_{u,d}} Q_{u,d} ,   U → V_U U ,   D → V_D D ,   (5)
which leaves the SM Lagrangian invariant, apart from redefinition of the Yukawas,
Y_U → V_{Q_u} Y_U V_U† ,   Y_D → V_{Q_d} Y_D V_D† .   (6)
The difference between the transformations used in Eqs. (3) and (4) and the ones above (5,6), is in the fact that each component of the SU (2) weak doublets (denoted as Q u ≡ U L and Q d ≡ D L ) transforms independently. This manifestly breaks the SU (2) gauge invariance, hence such a transformation makes sense only for a theory in which the electroweak symmetry is broken. This is precisely the case for the SM, where the masses are induced by spontaneous electroweak symmetry breaking via the Higgs mechanism. Applying the above transformation amounts to "moving" to the mass basis. The SM flavor Lagrangian, in the mass basis, is given by (in a unitary gauge),
L_F^m = q̄_i D̸_NC q_j δ_ij + (ū_L, c̄_L, t̄_L) diag(y_u, y_c, y_t) (u_R, c_R, t_R)^T (v + h) + [(u, c, t) → (d, s, b)] + (g_2/√2) ū_{Li} γ^μ (V_CKM)_{ij} d_{Lj} W^+_μ + h.c. ,   (7)
where the subscript NC stands for the neutral current interactions of the gluons, the photon and the Z gauge boson, W^± are the charged electroweak gauge bosons, h is the physical Higgs field, v ≈ 176 GeV, m_i = y_i v and V_CKM is the CKM matrix
V_CKM = V_{Q_u} V_{Q_d}† .   (8)
In general, the CKM is a 3 × 3 unitary matrix, with 6 imaginary parameters. However, as evident from Eq. (7), the charged current interactions are the only terms which are not invariant under individual quark vectorial U(1)^6 field redefinitions,

u_i, d_j → e^{iθ_{u_i,d_j}} u_i, d_j .   (9)
The diagonal part of this transformation corresponds to the classically conserved baryon current, while the non-diagonal, U (1) 5 , part of the transformation can be used to remove 5 out of the 6 phases, leaving the CKM matrix with a single physical phase. Notice also that a possible permutation ambiguity for ordering the CKM entries is removed, given that we have ordered the fields in Eq. (7) according to their masses, light fields first. This exercise of explicitly identifying the mass basis rotation is quite instructive, and we have already learned several important issues regarding how flavor is broken within the SM (we shall derive the same conclusions using a spurion analysis in a symmetry oriented manner in Sec. 3):
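The counting above can be checked numerically. The sketch below (assuming Python with numpy; all variable names are illustrative, not from the source) generates generic complex Yukawa matrices, performs the bi-unitary diagonalization of Eqs. (5)-(6) via a singular value decomposition, and extracts the six masses plus a unitary CKM matrix whose single physical phase shows up in a rephasing-invariant combination:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic complex 3x3 Yukawas: 18 real + 18 imaginary parameters in total.
Y_u = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Y_d = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Bi-unitary diagonalization Y = U diag(y) W^dagger realizes Eqs. (5)-(6);
# the singular values y are the physical Yukawa eigenvalues (6 masses).
U_u, y_u, _ = np.linalg.svd(Y_u)
U_d, y_d, _ = np.linalg.svd(Y_d)
masses = np.concatenate([y_u, y_d])          # m_i = y_i * v

# The leftover misalignment of the two LH rotations is the CKM matrix, Eq. (8).
V_ckm = U_u.conj().T @ U_d

# 5 of its 6 phases can be removed by the rephasings of Eq. (9); the surviving
# physical phase is exposed by a rephasing-invariant quartet combination:
J = np.imag(V_ckm[0, 0] * V_ckm[1, 1]
            * np.conj(V_ckm[0, 1]) * np.conj(V_ckm[1, 0]))

assert np.allclose(V_ckm.conj().T @ V_ckm, np.eye(3))
assert np.all(masses > 0) and len(masses) == 6
assert abs(J) > 1e-12   # generic Yukawas: the physical phase does not vanish
```

The ten surviving parameters (six singular values, three angles and one phase) match the counting 36 − 26 of broken flavor generators.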
• Flavor conversions only proceed via the three CKM mixing angles.
• Flavor conversion is mediated via the charged current electroweak interactions.
• The charge current interactions only involve LH fields.
Even after removing all the unphysical parameters, there are various possible forms for the CKM matrix. For example, the parameterization used by the particle data group [33] is

V_CKM =
  (  c_12 c_13                                s_12 c_13                               s_13 e^{-iδ_KM}  )
  ( -s_12 c_23 - c_12 s_23 s_13 e^{iδ_KM}     c_12 c_23 - s_12 s_23 s_13 e^{iδ_KM}    s_23 c_13        )
  (  s_12 s_23 - c_12 c_23 s_13 e^{iδ_KM}    -c_12 s_23 - s_12 c_23 s_13 e^{iδ_KM}    c_23 c_13        ) ,   (10)

where c_ij ≡ cos θ_ij and s_ij ≡ sin θ_ij. The three sin θ_ij are the three real mixing parameters, while δ_KM is the Kobayashi-Maskawa phase.
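As a sanity check, the standard parameterization can be coded directly. The sketch below (assuming Python with numpy; function name and the round input angles are illustrative, not the source's fitted values) confirms that the matrix is exactly unitary for any choice of the four parameters, and that the single phase survives in a rephasing-invariant quartet:

```python
import numpy as np

def ckm_standard(t12, t23, t13, delta):
    """CKM matrix in the standard (PDG) parameterization."""
    c12, s12 = np.cos(t12), np.sin(t12)
    c23, s23 = np.cos(t23), np.sin(t23)
    c13, s13 = np.cos(t13), np.sin(t13)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

# Roughly PDG-sized inputs, assumed here purely for illustration.
t12, t23, t13, delta = 0.227, 0.042, 0.0037, 1.2
V = ckm_standard(t12, t23, t13, delta)

assert np.allclose(V.conj().T @ V, np.eye(3))   # unitary for any angles/phase
# Rephasing-invariant CPV quartet Im(V_us V_cb V_ub* V_cs*):
J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
```

For this parameterization the quartet evaluates analytically to c_12 c_23 c_13^2 s_12 s_23 s_13 sin δ, the combination that reappears below as J_KM.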
CP violation
The SM predictive power picks up once CPV is considered. We have already proven that the SM flavor sector contains a single CP violating parameter. Once presented with a SM Lagrangian where the Yukawa matrices are given in a generic basis, it is not trivial to determine whether CP is violated or not. This is even more challenging when discussing beyond the SM dynamics, where new CP violating sources might be present. A brute force way to establish that CP is violated would be to show that no field redefinitions would render a Lagrangian real. For example, consider a Lagrangian with a single Yukawa matrix,
L_Y = Y_{ij} ψ̄_L^i φ ψ_R^j + Y_{ij}^* ψ̄_R^j φ† ψ_L^i ,   (11)
where φ is a scalar and ψ i X is a fermion field. A CP transformation exchanges the operators
ψ̄_L^i φ ψ_R^j ↔ ψ̄_R^j φ† ψ_L^i ,   (12)
but leaves their coefficients, Y ij and Y * ij , unchanged, since CP is a linear unitary non-anomalous transformation. This means that CP is conserved if
Y_{ij} = Y_{ij}^* .   (13)
This is, however, not a basis independent statement. Since physical observables do not depend on a specific basis choice, it is enough to find a basis in which the above relation holds. 6 Sometimes the brute force way is tedious and might be rather complicated. A more systematic approach would be to identify a phase reparameterization invariant, or basis independent, quantity that vanishes in the CP conserving limit. As discovered in [34,35], for the SM case one can define the following quantity
C_SM = det[Y_D Y_D†, Y_U Y_U†] ,   (14)
and the SM is CP violating if and only if

Im C_SM ≠ 0 .   (15)
It is straightforward to prove that CP can be violated only if the number of generations is three or more. Hence, within the SM, where CP is broken explicitly in the flavor sector, any CP violating process must involve all three generations. This is an important condition, which implies strong predictive power. Furthermore, all CPV observables are correlated, since they are all proportional to a single CP violating parameter, δ_KM. Finally, it is worth mentioning that CPV observables are related to interference between different processes, and hence are measurements of amplitude ratios. Thus, in various known cases, they turn out to be cleaner and easier to interpret theoretically.
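The "three or more generations" statement is easy to verify numerically. In the sketch below (numpy assumed; names illustrative), Im C_SM vanishes identically for two generations, while generic three-generation Yukawas give a non-zero value:

```python
import numpy as np

rng = np.random.default_rng(1)

def im_c(n):
    """Im det[Y_D Y_D^dagger, Y_U Y_U^dagger] for random complex n x n Yukawas."""
    Yu = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Yd = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Au, Ad = Yu @ Yu.conj().T, Yd @ Yd.conj().T
    return np.linalg.det(Ad @ Au - Au @ Ad).imag

# Two generations: the commutator of two hermitian 2x2 matrices is
# anti-hermitian and traceless, so its determinant is real -- CP is conserved.
assert abs(im_c(2)) < 1e-8

# Three generations: generic Yukawas violate CP.
assert abs(im_c(3)) > 1e-8
```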
The flavor puzzle
Now that we have precisely identified the SM physical flavor parameters, it is interesting to ask for their experimental values (in the MS-bar scheme) [33]:

m_u = 1.5-3.3 MeV ,  m_d = 3.5-6.0 MeV ,  m_s ≈ 105 MeV ,  m_c ≈ 1.27 GeV ,  m_b ≈ 4.2 GeV ,  m_t ≈ 170 GeV ,
|V_CKM^us| ≈ 0.23 ,  |V_CKM^cb| ≈ 0.041 ,  |V_CKM^ub| ≈ 0.0039 ,  |V_CKM^td| ≈ 0.0087 ,  |V_CKM^ts| ≈ 0.040 ,  δ_KM ≈ 1.2 ,   (16)

where V_CKM^ij corresponds to the magnitude of the ij entry in the CKM matrix, δ_KM is the CKM phase, only uncertainties bigger than 10% are shown, numbers are given to two-digit precision and the V_CKM^ti entries involve indirect information (a detailed description and refs. can be found in [33]).
Inspecting the actual numerical values for the flavor parameters given in Eq. (16), shows a peculiar structure. Most of the parameters, apart from the top mass and the CKM phase, are small and hierarchical. The amount of hierarchy can be characterized by looking at two different classes of observables:
• Hierarchies between the masses, which are not related to flavor converting processes -as a measure of these hierarchies, we can just estimate what is the size of the product of the Yukawa coupling square differences (in the mass basis)
(m_t² − m_c²)(m_t² − m_u²)(m_c² − m_u²)(m_b² − m_s²)(m_b² − m_d²)(m_s² − m_d²) / v^12 = O(10^-17) .
• Hierarchies in the mixing which mediate flavor conversion -this is related to the tiny misalignment between the up and down Yukawas; one can quantify this effect in a basis independent fashion as follows. A CP violating quantity, associated with V CKM , that is independent of parametrization [34,35], J KM , is defined through
Im[ V_CKM^ij V_CKM^kl V_CKM^il* V_CKM^kj* ] = J_KM Σ_{m,n=1..3} ε_ikm ε_jln ,  with  J_KM = c_12 c_23 c_13² s_12 s_23 s_13 sin δ_KM ≃ λ⁶ A² η = O(10^-5) ,   (17)
where i, j, k, l = 1, 2, 3. We see that even though δ KM is of order unity, the resulting CP violating parameter is small, as it is "screened" by small mixing angles. If any of the mixing angles is a multiple of π/2, then the SM Lagrangian becomes real. Another explicit way to see that Y U and Y D are quasi aligned is via the Wolfenstein parametrization of the CKM matrix, where the four mixing parameters are (λ, A, ρ, η), with λ = |V us | = 0.23 playing the role of an expansion parameter [36]:
V_CKM =
  (  1 − λ²/2          λ              Aλ³(ρ − iη)  )
  ( −λ                 1 − λ²/2       Aλ²          )
  (  Aλ³(1 − ρ − iη)  −Aλ²            1            )  + O(λ⁴) .   (18)
Basically, to zeroth order, the CKM matrix is just the unit matrix! As we shall discuss further below, both kinds of hierarchies described in the bullets above lead to suppression of CPV. Thus, a nice way to quantify the amount of hierarchy, in both masses and mixing angles, is to compute the value of the reparameterization invariant measure of CPV introduced in Eq. (14):
C_SM = J_KM (m_t² − m_c²)(m_t² − m_u²)(m_c² − m_u²)(m_b² − m_s²)(m_b² − m_d²)(m_s² − m_d²) / v^12 = O(10^-22) .   (19)
This tiny value of C SM that characterizes the flavor hierarchy in nature would be of order 10% in theories where Y U,D are generic order one complex matrices. The smallness of C SM is something that many flavor models beyond the SM try to address. Furthermore, SM extensions that have new sources of CPV tend not to have the SM built-in CP screening mechanism. As a result, they give too large contributions to the various observables that are sensitive to CP breaking. Therefore, these models are usually excluded by the data, which is, as mentioned, consistent with the SM predictions.
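Plugging in rough, PDG-like numbers (assumed here for illustration, not the source's exact inputs) reproduces the orders of magnitude quoted above: the mass-difference product is O(10^-17), J_KM is O(10^-5), and their product gives C_SM = O(10^-22):

```python
# Rough MS-bar quark masses in GeV and Wolfenstein parameters
# (illustrative, PDG-like values assumed for this estimate).
mu, mc, mt = 0.002, 1.27, 170.0
md, ms, mb = 0.005, 0.10, 4.2
v = 176.0
lam, A, eta = 0.23, 0.81, 0.35

mass_hierarchy = ((mt**2 - mc**2) * (mt**2 - mu**2) * (mc**2 - mu**2)
                  * (mb**2 - ms**2) * (mb**2 - md**2) * (ms**2 - md**2)) / v**12
J_KM = lam**6 * A**2 * eta          # leading Wolfenstein expression, Eq. (17)
C_SM = J_KM * mass_hierarchy        # Eq. (19)

assert 1e-18 < mass_hierarchy < 1e-16   # O(10^-17)
assert 1e-6 < J_KM < 1e-4               # O(10^-5)
assert 1e-23 < C_SM < 1e-21             # O(10^-22)
```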
Spurion analysis of the SM flavor sector
In this part we shall try to be more systematic in understanding the way flavor is broken within the SM. We shall develop a spurion, symmetry-oriented description for the SM flavor structure, and also generalize it to NP models with similar flavor structure, that goes under the name minimal flavor violation (MFV).
Understanding the SM flavor breaking
It is clear that if we set the Yukawa couplings of the SM to zero, we restore the full global flavor group,
G_SM = U(3)_Q × U(3)_U × U(3)_D .
In order to be able to better understand the nature of flavor and CPV within the SM, in the presence of the Yukawa terms, we can use a spurion analysis as follows. Let us formally promote the Yukawa matrices to spurion fields, which transform under G SM in a manner that makes the SM invariant under the full flavor group (see e.g. [37] and refs. therein). From the flavor transformation given in Eqs. (3,4), we can read the representation of the various fields under G SM (see illustration in Fig. 1)
Fields:  Q(3, 1, 1) ,  U(1, 3, 1) ,  D(1, 1, 3) ;   Spurions:  Y_U(3, 3̄, 1) ,  Y_D(3, 1, 3̄) .   (20)
The flavor group is broken by the "background" value of the spurions Y U,D , which are bi-fundamentals of G SM . It is instructive to consider the breaking of the different flavor groups separately (since Y U,D are bi-fundamentals, the breaking of quark doublet and singlet flavor groups are linked together, so this analysis only gives partial information to be completed below). Consider the quark singlet flavor group, U (3) U × U (3) D , first. We can construct a polynomial of the Yukawas with simple transformation properties under the flavor group. For instance, consider the objects
A_{U,D} ≡ Y_{U,D}† Y_{U,D} − (1/3) tr(Y_{U,D}† Y_{U,D}) 1_3 .   (21)
Under the flavor group A U,D transform as
A_{U,D} → V_{U,D} A_{U,D} V_{U,D}† .   (22)
Thus, A_{U,D} are adjoints of U(3)_{U,D} and singlets of the rest of the flavor group [while tr(Y_{U,D}† Y_{U,D}) are flavor singlets]. Via similarity transformations, we can bring A_U and A_D to diagonal form. Thus, we learn that the background value of each of the Yukawa matrices separately breaks U(3)_{U,D} down to a residual U(1)³_{U,D} group, as illustrated in Fig. 2. Let us now discuss the breaking of the LH flavor group. We can, in principle, apply the same analysis to the LH flavor group, U(3)_Q, via defining the adjoints (in this case we have two independent ones),

A_{Q_u,Q_d} ≡ Y_{U,D} Y_{U,D}† − (1/3) tr(Y_{U,D} Y_{U,D}†) 1_3 .   (23)
However, in this case the breaking is more involved, since A Q u,d are adjoints of the same flavor group. This is a direct consequence of the SU (2) weak gauge interaction, which relates the two components of the SU (2) doublets. This actually motivates one to extend the global flavor group as follows. If we switch off the electroweak interactions, the SM global flavor group is actually enlarged to [38]
G_SM^weakless = U(6)_Q × U(3)_U × U(3)_D ,   (24)
since now each SU (2) doublet, Q i , can be split into two independent flavors, Q u,d i , with identical SU (3) × U (1) gauge quantum numbers [39]. This limit, however, is not very illuminating, since it does not allow for flavor violation at all. To make a progress, it is instructive to distinguish the W 3 neutral current interactions from the W ± charged current ones, as follows: The W 3 couplings are flavor universal, which, however, couple up and down quarks separately. The W ± couplings, g ± 2 , link between the up and down LH quarks. In the presence of only W 3 couplings, the residual flavor group is given by 7
G_SM^exten = U(3)_{Q_u} × U(3)_{Q_d} × U(3)_U × U(3)_D .   (25)
In this limit, even in the presence of the Yukawa matrices, flavor conversion is forbidden. We have already seen explicitly that only the charged currents link between different flavors (see Eq. (7)). It is thus evident that to formally characterize flavor violation, we can extend the flavor group from G_SM → G_SM^exten, where now we break the quark doublets into their isospin components, U_L, D_L, and add another spurion, g_2^±:

Fields:  U_L(3, 1, 1, 1) ,  D_L(1, 3, 1, 1) ,  U(1, 1, 3, 1) ,  D(1, 1, 1, 3) ;
Spurions:  g_2^±(3, 3̄, 1, 1) ,  Y_U(3, 1, 3̄, 1) ,  Y_D(1, 3, 1, 3̄) .   (26)
Flavor breaking within the SM occurs only when G SM exten is fully broken via the Yukawa background values, but also due to the fact that g ± 2 has a background value. Unlike Y U,D , g ± 2 is a special spurion in the sense that its eigenvalues are degenerate, as required by the weak gauge symmetry. Hence, it breaks the U (3) Q u × U (3) Q d down to a diagonal group, which is nothing but U (3) Q . We can identify two bases where g ± 2 has an interesting background value: The weak interaction basis, in which the background value of g ± 2 is simply a unit matrix 8
(g_2^±)_int ∝ 1_3 ,   (27)
and the mass basis, where (after removing all unphysical parameters) the background value of g ± 2 is the CKM matrix
(g_2^±)_mass ∝ V_CKM .   (28)
Now we are in a position to understand the way flavor conversion is obtained in the SM. Three spurions must participate in the breaking: Y U,D and g ± 2 . Since g ± 2 is involved, it is clear that generation transitions must involve LH charged current interactions. These transitions are mediated by the spurion backgrounds, A Q u ,Q d (see Eq. (23)), which characterize the breaking of the individual LH flavor symmetries,
U(3)_{Q_u} × U(3)_{Q_d} → U(1)³_{Q_u} × U(1)³_{Q_d} .   (29)
Flavor conversion occurs because of the fact that in general we cannot diagonalize simultaneously A Q u ,Q d and g ± 2 , where the misalignment between A Q u and A Q d is precisely characterized by the CKM matrix. This is illustrated in Fig. 3, where it is shown that the flavor breaking within the SM goes through collective breaking [40] -a term often used in the context of little Higgs models (see e.g. [41] and refs. therein). We can now combine the LH and RH quark flavor symmetry breaking to obtain the complete picture of how flavor is broken within the SM. As we saw, the breaking of the quark singlet groups is rather trivial. It is, however, linked to the more involved LH flavor breaking, since the Yukawa matrices are bi-fundamentals -the LH and RH flavor breaking are tied together. The full breaking is illustrated in Fig. 4.
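The collective nature of the breaking can be made concrete numerically (numpy assumed; names illustrative, not from the source): rotating to the basis where A_Q_d is diagonal leaves A_Q_u non-diagonal, and the residual rotation that diagonalizes it is precisely the CKM-like misalignment:

```python
import numpy as np

rng = np.random.default_rng(2)
Yu = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Yd = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# LH adjoints of Eq. (23); the trace part is kept, it drops out of commutators.
AQu, AQd = Yu @ Yu.conj().T, Yd @ Yd.conj().T

# Down mass basis: rotate both adjoints with the matrix diagonalizing A_Qd.
_, Vd = np.linalg.eigh(AQd)
AQd_m = Vd.conj().T @ AQd @ Vd          # diagonal by construction
AQu_m = Vd.conj().T @ AQu @ Vd          # NOT diagonal: misalignment remains

# The residual rotation that diagonalizes A_Qu in this basis plays the role
# of the CKM matrix (up to quark rephasings):
_, Vu = np.linalg.eigh(AQu_m)
assert np.allclose(Vu.conj().T @ AQu_m @ Vu, np.diag(np.linalg.eigvalsh(AQu)))

off = AQu_m - np.diag(np.diag(AQu_m))
assert np.linalg.norm(off) > 1e-6       # generic Yukawas: no co-diagonalization
```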
A comment on description of flavor conversion in physical processes
The above spurion structure allows us to describe SM flavor converting processes. The reader might be confused, however, since we have argued above that flavor converting processes must involve the three spurions, A_{Q_u,Q_d} and g_2^±. It is well known that the rates for charged current processes, which are described via conversion of a down quark into an up one (and vice versa), say in beta decay or b → u transitions, are only suppressed by the corresponding CKM entry, i.e. by g_2^±. What happened to the dependence on A_{Q_u,Q_d}? The key point here is that in a typical flavor precision measurement, the experimentalists produce mass eigenstates (for example a neutron or a B meson), and thus the fields involved are chosen to be in the mass basis. For example, a b → c process is characterized by producing a B meson which decays into a charmed one. Hence, both A_{Q_u} and A_{Q_d} participate, being forced to be diagonal, but in a nonlinear way. Physically, we can characterize this by writing an operator

O_{b→c} = c̄_mass (g_2^±)_{cb} b_mass ,   (30)

where both the b_mass and c_mass quarks are mass eigenstates. Note that this is consistent with the transformation rules for the extended flavor group, G_SM^exten, given in Eqs. (25) and (26), where the fields involved belong to different representations of the extended flavor group.
The situation is different when FCNC processes are considered. In such a case, a typical measurement involves mass eigenstate quarks belonging to the same representation of G_SM^exten. For example, processes that mediate B_d^0 − B̄_d^0 oscillations, due to the tiny mass difference Δm_{B_d} between the two mass eigenstates (first measured by the ARGUS experiment [42]), are described via the following operator, omitting the spurion structure for simplicity,
O_{Δm_{B_d}} = ( b̄_mass d_mass )² .   (31)
Obviously, this operator cannot be generated by SM processes, as it violates the G_SM^exten symmetry explicitly. Since it involves flavor conversion (it violates b number by two units, hence denoted as Δb = 2, and belongs to the ΔF = 2 class of FCNC processes), it must involve some power of g_2^±. A single power of g_2^± connects a LH down quark to a LH up one, so the leading contribution should go like D̄_L^i (g_2^±)_{ik} (g_2^{±*})_{kj} D_L^j (i, k, j = 1, 2, 3). Hence, as expected, this process is mediated at least at one loop. This alone would not work either, since we can always rotate the down quark fields into the mass basis, and simultaneously rotate the up type quarks (away from their mass basis) so that g_2^± ∝ 1_3. These manipulations define the interaction basis, which is not unique (see Eq. (27)). Therefore, the leading flavor invariant spurion that mediates an FCNC transition has to involve the up type Yukawa spurion as well. A naive guess would be
O_{Δm_{B_d}} ∝ [ b̄_mass (g_2^±)_{bk} (A_{Q_u})_{kl} (g_2^{±*})_{ld} d_mass ]² ∼ [ b̄_mass ( m_t² V_CKM^tb V_CKM^td* + m_c² V_CKM^cb V_CKM^cd* ) d_mass ]² ,   (32)
where it is understood that (A Q u ) kl is evaluated in the down quark mass basis (tiny corrections of order m 2 u are neglected in the above). This expression captures the right flavor structure, and is correct for a sizeable class of SM extensions. However, it is actually incorrect in the SM case. The reason is that within the SM, the flavor symmetries are strongly broken by the large top quark mass [40]. The SM corresponding amplitude consists of a rather non-trivial and non-linear function of A Q u , instead of the above naive expression (see e.g. [43] and refs. therein), which assumes only the simplest polynomial dependence of the spurions. The SM amplitude for ∆m B d is described via a box diagram, and two out of the four powers of masses are canceled, since they appear in the propagators.
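Within the naive polynomial spurion expression of Eq. (32), one can check directly that the top term dominates, despite V_tb V_td being the smallest CKM combination involved (illustrative masses and CKM magnitudes assumed below; recall that the actual SM box amplitude has a different, non-linear mass dependence):

```python
# Illustrative quark masses (GeV) and CKM magnitudes, assumed for this estimate.
m = {"u": 0.002, "c": 1.27, "t": 170.0}
V = {("t", "b"): 1.0,    ("t", "d"): 0.0087,
     ("c", "b"): 0.041,  ("c", "d"): 0.23,
     ("u", "b"): 0.0039, ("u", "d"): 0.97}

# Magnitude of each up-type contribution m_i^2 |V_ib V_id| in Eq. (32):
terms = {q: m[q]**2 * V[(q, "b")] * V[(q, "d")] for q in "uct"}

# The top term wins by orders of magnitude: mass enhancement beats the
# CKM suppression of the off-diagonal top entries.
assert terms["t"] > 100 * terms["c"] > 100 * terms["u"]
```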
The SM approximate symmetry structure
In the above we have considered the most general breaking pattern. However, as discussed, the essence of the flavor puzzle is the large hierarchies in the quark masses, the eigenvalues of Y U,D and their approximate alignment. Going back to the spurions that mediate the SM flavor conversions defined in Eqs. (21) and (23), we can write them as
A_{U,D} = diag(0, 0, y_{t,b}²) − (y_{t,b}²/3) 1_3 + O(m_{c,s}²/m_{t,b}²) ,
A_{Q_u,Q_d} = diag(0, 0, y_{t,b}²) − (y_{t,b}²/3) 1_3 + O(m_{c,s}²/m_{t,b}²) + O(λ²) ,   (33)
where in the above we took advantage of the fact that m_{c,s}²/m_{t,b}², λ² = O(10^{-5,-4,-2}) are small. The hierarchies in the quark masses are translated into an approximate residual RH U(2)_U × U(2)_D flavor group (see Fig. 5), implying that RH currents which involve light quarks are very small.
We have so far only briefly discussed the role of FCNCs. In the above we have argued, both based on an explicit calculation and in terms of a spurion analysis, that at tree level there are no flavor violating neutral currents, since they must be mediated through the W ± couplings or g ± 2 . In fact, this situation, which is nothing but the celebrated GIM mechanism [25], goes beyond the SM to all models in which all LH quarks are SU (2) doublets and all RH ones are singlets. The Z boson might have flavor changing couplings in models where this is not the case.
Can we guess what is the leading spurion structure that induces FCNC within the SM, say which mediates the b → dνν decay process via an operator O b→dνν ? The process changes b quark number by one unit (belongs to ∆F = 1 class of FCNC transitions). It clearly has to contain down type LH quark fields (let us ignore the lepton current, which is flavor-trivial; for effects related to neutrino masses and lepton number breaking in this class of models see e.g. [44,45,46,47,48,49,50,51,52,53,54]). Therefore, using the argument presented when discussing ∆m B d (see Eq. (32)), the leading flavor invariant spurion that mediates FCNC would have to involve the up type Yukawa spurion as well
O_{b→dν̄ν} ∝ D̄_L^i (g_2^±)_{ik} (A_{Q_u})_{kl} (g_2^{±*})_{lj} D_L^j × ν̄ν .   (34)
The above considerations demonstrate how the GIM mechanism removes the SM divergencies from various one loop FCNC processes, which are naively expected to be log divergent. The reason is that the insertion of A Q u is translated to quark mass difference insertion. It means that the relevant one loop diagram has to be proportional to m 2 i − m 2 j (i = j). Thus, the superficial degree of divergency is lowered by two units, which renders the amplitude finite. 9 Furthermore, as explained above (see also Eq. (37)), we can use the fact that the top contribution dominates the flavor violation to simplify the form of O b→dνν
O_{b→dν̄ν} ∼ [ g_2⁴ / (16π² M_W²) ] b̄_L V_CKM^tb V_CKM^td* d_L × ν̄ν ,   (35)
where we have added a one loop suppression factor and an expected weak scale suppression. This rough estimation actually reproduces the SM result up to a factor of about 1.5 (see e.g. [43,55,56,57]). We thus find that down quark FCNC amplitudes are expected to be highly suppressed due to the smallness of the top off-diagonal entries of the CKM matrix. Parameterically, we find the following suppression factor for transition between the ith and jth generations:
b → s ∝ V_CKM^tb V_CKM^ts ∼ λ² ,   b → d ∝ V_CKM^tb V_CKM^td ∼ λ³ ,   s → d ∝ V_CKM^td V_CKM^ts ∼ λ⁵ ,   (36)
where for the ΔF = 2 case one simply needs to square the parametric suppression factors. This simple exercise illustrates how powerful the SM FCNC suppression mechanism is. The gist of it is that the rate of SM FCNC processes is small, since they occur at one loop, and more importantly because they are suppressed by the top CKM off-diagonal entries, which are very small. Furthermore, since
V_CKM^{ts,td} ≫ m_{c,u}²/m_t² ,   (37)
in most cases the dominant flavor conversion effects are expected to be mediated via the top Yukawa coupling. 10 We can now understand how the SM uniqueness related to suppression of flavor converting processes arises:
• RH currents for light quarks are suppressed due to their small Yukawa couplings (them being light).
• Flavor transition occurs to leading order only via LH charged current interactions.
• To leading order, flavor conversion is only due to the large top Yukawa coupling.
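The parametric suppressions of Eq. (36) follow directly from the Wolfenstein form of Eq. (18). A minimal check (numpy assumed; the Wolfenstein inputs are illustrative round numbers, not fitted values):

```python
import numpy as np

lam, A, rho, eta = 0.23, 0.81, 0.14, 0.35

# Wolfenstein form of the CKM matrix, Eq. (18), valid up to O(lambda^4).
V = np.array([
    [1 - lam**2 / 2, lam, A * lam**3 * (rho - 1j * eta)],
    [-lam, 1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2, 1],
])

# Parametric suppression of down-type FCNC transitions, Eq. (36):
b_to_s = abs(V[2, 2] * V[2, 1])   # |V_tb V_ts| ~ lambda^2
b_to_d = abs(V[2, 2] * V[2, 0])   # |V_tb V_td| ~ lambda^3
s_to_d = abs(V[2, 0] * V[2, 1])   # |V_td V_ts| ~ lambda^5

# Each combination sits within an O(1) factor of the quoted lambda power.
for val, power in [(b_to_s, 2), (b_to_d, 3), (s_to_d, 5)]:
    assert 0.1 * lam**power < val < 10 * lam**power
```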
Covariant description of flavor violation
The spurion language discussed in the previous section is useful in understanding the flavor structure of the SM. In the current section we present a covariant formalism, based on this language, that enables one to express physical observables in an explicitly basis independent form. This formalism, introduced in [58,59], can later be used to analyze NP contributions to such observables, and to obtain model independent bounds based on experimental data. We focus only on the LH sector.
Two generations
We start with the simpler two generation case, which is actually very useful in constraining new physics, as a result of the richer experimental precision data. Any hermitian traceless 2 × 2 matrix can be expressed as a linear combination of the Pauli matrices σ i . This combination can be naturally interpreted as a vector in three dimensional real space, which applies to A Q d and A Q u . We can then define a length of such a vector, a scalar product, a cross product and an angle between two vectors, all of which are basis independent 11 :
|A⃗| ≡ √[tr(A²)/2] ,   A⃗ · B⃗ ≡ tr(AB)/2 ,   A⃗ × B⃗ ≡ −(i/2)[A, B] ,   cos(θ_AB) ≡ A⃗ · B⃗ / (|A⃗||B⃗|) = tr(AB) / √[tr(A²) tr(B²)] .   (38)
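The dictionary of Eq. (38) between 2×2 traceless hermitian matrices and real 3-vectors can be verified explicitly (numpy assumed; helper names are illustrative):

```python
import numpy as np

# Pauli matrices: a basis for traceless hermitian 2x2 matrices.
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])

def to_vec(M):
    """Pauli-basis components of a traceless hermitian 2x2 matrix."""
    return np.real([np.trace(M @ s) / 2 for s in sig])

def length(M): return np.sqrt(np.real(np.trace(M @ M)) / 2)
def dot(M, N): return np.real(np.trace(M @ N)) / 2
def cross(M, N): return -0.5j * (M @ N - N @ M)

rng = np.random.default_rng(3)
def rand_adjoint():
    return np.tensordot(rng.normal(size=3), sig, axes=1)

A, B = rand_adjoint(), rand_adjoint()

# The matrix operations of Eq. (38) reproduce ordinary 3-vector algebra:
assert np.isclose(length(A), np.linalg.norm(to_vec(A)))
assert np.isclose(dot(A, B), to_vec(A) @ to_vec(B))
assert np.allclose(to_vec(cross(A, B)), np.cross(to_vec(A), to_vec(B)))
```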
These definitions allow for an intuitive understanding of the flavor and CP violation induced by a new physics source, based on simple geometric terms. Consider a dimension six SU (2) L -invariant operator, involving only quark doublets,
(C_1/Λ_NP²) O_1 = (1/Λ_NP²) [ Q̄_i (X_Q)_{ij} γ_μ Q_j ] [ Q̄_k (X_Q)_{kl} γ^μ Q_l ] ,   (39)
where Λ NP is some high energy scale. 12 X Q is a traceless hermitian matrix, transforming as an adjoint of SU (3) Q (or SU (2) Q for two generations), so it "lives" in the same space as A Q d and A Q u .
In the down sector, for example, the operator above is relevant for flavor violation through K − K̄ mixing. To analyze its contribution, we define a covariant orthonormal basis for each sector, with the following unit vectors:

Â_{Q_u,Q_d} ≡ A_{Q_u,Q_d} / |A_{Q_u,Q_d}| ,   Ĵ ≡ (A_{Q_d} × A_{Q_u}) / |A_{Q_d} × A_{Q_u}| ,   Ĵ_{u,d} ≡ Â_{Q_u,Q_d} × Ĵ .   (40)
Then the contribution of the operator in Eq. (39) to Δc = 2 and Δs = 2 processes is given by the misalignment between X_Q and A_{Q_u,Q_d}, which is equal to

C_1^{D,K} = | X⃗_Q × Â_{Q_u,Q_d} |² .   (41)
This result is manifestly invariant under a change of basis. The meaning of Eq. (41) can be understood as follows: we can choose an explicit basis, for example the down mass basis, where A_{Q_d} is proportional to σ_3. Δs = 2 transitions are induced by the off-diagonal element of X_Q, so that C_1^K = |(X_Q)_12|². Furthermore, |(X_Q)_12| is simply the combined size of the σ_1 and σ_2 components of X_Q. Its size is given by the length of X⃗_Q times the sine of the angle between X⃗_Q and A⃗_{Q_d} (see Fig. 6). This is exactly what Eq. (41) describes.

Figure 6: The contribution of X_Q to K⁰ − K̄⁰ mixing, Δm_K, given by the solid blue line. In the down mass basis, Â_{Q_d} corresponds to σ_3, Ĵ is σ_2 and Ĵ_d is σ_1. The figure is taken from [59].
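A quick numerical check of Eq. (41) in the down mass basis (numpy assumed; X_Q is a random adjoint, not a specific model):

```python
import numpy as np

sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])
cross = lambda M, N: -0.5j * (M @ N - N @ M)      # Eq. (38) cross product
norm2 = lambda M: np.real(np.trace(M @ M)) / 2    # squared length

rng = np.random.default_rng(4)
x = rng.normal(size=3)
X_Q = np.tensordot(x, sig, axes=1)   # generic traceless hermitian NP source

A_hat = sig[2]                       # down mass basis: A_Qd proportional to sigma_3

# Eq. (41): the Delta s = 2 coefficient is the misalignment of X_Q with A_Qd,
# which in this basis equals the squared off-diagonal element of X_Q.
C_K = norm2(cross(X_Q, A_hat))
assert np.isclose(C_K, abs(X_Q[0, 1])**2)
```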
Next we discuss CPV, which is given by
Im C_1^{K,D} = 2 ( X⃗_Q · Ĵ ) ( X⃗_Q · Ĵ_{u,d} ) .   (42)
The above expression is easy to understand in the down mass basis, for instance. In addition to diagonalizing A_{Q_d}, we can also choose A_{Q_u} to reside in the σ_1 − σ_3 plane (Fig. 7) without loss of generality, since there is no CPV in the SM for two generations. As a result, all of the potential CPV originates from X_Q in this basis. C_1^K is the square of the off-diagonal element of X_Q, (X_Q)_12, thus Im C_1^K is simply twice its real part (σ_1 component) times its imaginary part (σ_2 component). In this basis we have Ĵ ∝ σ_2 and Ĵ_d ∝ σ_1, which proves the validity of Eq. (42).

Figure 7: CP violation in the Kaon system induced by X_Q. Im(C_1^K) is twice the product of the two solid orange lines. Note that the angle between A⃗_{Q_d} and A⃗_{Q_u} is twice the Cabibbo angle, θ_C. The figure is taken from [59].
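Eq. (42) can be checked the same way; up to the orientation convention chosen for Ĵ (which can flip the overall sign), the covariant expression reproduces twice the product of the real and imaginary parts of (X_Q)_12:

```python
import numpy as np

sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])
dot = lambda M, N: np.real(np.trace(M @ N)) / 2   # Eq. (38) scalar product

rng = np.random.default_rng(5)
x = rng.normal(size=3)
X_Q = np.tensordot(x, sig, axes=1)

# Down mass basis, A_Qu in the sigma_1 - sigma_3 plane: J_hat is along sigma_2
# and J_d_hat along sigma_1 (signs depend on the orientation of J_hat).
J_hat, Jd_hat = sig[1], sig[0]

# Eq. (42): |Im C_1^K| = |2 Re(X_12) Im(X_12)|
ImC = 2 * dot(X_Q, J_hat) * dot(X_Q, Jd_hat)
assert np.isclose(abs(ImC), abs((X_Q[0, 1]**2).imag))
```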
An interesting conclusion can be inferred from the analysis above: in addition to the known necessary condition for CPV in two generations [23],

X_J ∝ tr( X_Q [A_{Q_d}, A_{Q_u}] ) ≠ 0 ,   (43)
we identify a second necessary condition, exclusive for ∆F = 2 processes:
X_{J_{u,d}} ∝ tr( X_Q [ A_{Q_u,Q_d} , [A_{Q_d}, A_{Q_u}] ] ) ≠ 0 .   (44)
These conditions are physically transparent and involve only observables.
Three generations
4.2.1 Approximate U(2)_Q limit of massless light quarks
For three generations, a simple 3D geometric interpretation does not naturally emerge anymore, as the relevant space is characterized by the eight Gell-Mann matrices 13 . A useful approximation appropriate for third generation flavor violation is to neglect the masses of the first two generation quarks, where the breaking of the flavor symmetry is characterized by [U (3)/U (2)] 2 [40]. This description is especially suitable for the LHC, where it would be difficult to distinguish between light quark jets of different flavor. In this limit, the 1-2 rotation and the phase of the CKM matrix become unphysical, and we can, for instance, further apply a U (2) rotation to the first two generations to "undo" the 1-3 rotation. Therefore, the CKM matrix is effectively reduced to a real matrix with a single rotation angle, θ, between an active light flavor (say, the 2nd one) and the 3rd generation,
θ ≅ √(θ_13² + θ_23²) ,   (45)
where θ 13 and θ 23 are the corresponding CKM mixing angles. The other generation (the first one) decouples, and is protected by a residual U (1) Q symmetry. This can be easily seen when writing A Q d and A Q u in, say, the down mass basis
A_{Q_d} = (y_b²/3)
  ( −1   0   0 )
  (  0  −1   0 )
  (  0   0   2 ) ,   A_{Q_u} = y_t²
  (  ♠   0   0 )
  (  0   ♠   ♠ )
  (  0   ♠   ♠ ) ,   (46)
where ♠ stands for a non-zero real entry. The resulting flavor symmetry breaking scheme is depicted in Fig. 5, where we now focus only on the LH sector. An interesting consequence of this approximation is that a complete basis cannot be defined covariantly, since A_{Q_u,Q_d} in Eq. (46) clearly span only a part of the eight dimensional space. More concretely, we can identify four directions in this space: Ĵ and Ĵ_{u,d} from Eq. (40), and either one of the two orthogonal pairs
Â_{Q_u,Q_d}  and  Ĉ_{u,d} ≡ 2 Ĵ × Ĵ_{u,d} − √3 Â_{Q_u,Q_d} ,   (47)

or

D̂_{Q_u,Q_d} ≡ Ĵ × Ĵ_{u,d}  and  Ĵ_Q ≡ √3 Ĵ × Ĵ_{u,d} − 2 Â_{Q_u,Q_d} .   (48)
Note that Ĵ_Q corresponds to the conserved U(1)_Q generator, so it commutes with both A_{Q_d} and A_{Q_u}, and takes the same form in both bases 14 . There are four additional directions, collectively denoted as D⃗, which transform as a doublet under the CKM (2-3) rotation, and do not mix with the other generators. The fact that these cannot be written as combinations of A_{Q_u,Q_d} stems from the approximation, introduced above, of neglecting the light quark masses. Without this assumption, it is possible to span the entire space using the Yukawa matrices [60,61,62]. Although this can be done in several ways, in the next subsection we focus on a realization in which the basis elements have a clear physical meaning. It is interesting to notice that a given traceless adjoint object X in the three generation flavor space has an inherent SU(2) symmetry (that is, two identical eigenvalues) if and only if it satisfies
(tr X²)^{3/2} = √6 tr X³ .  (49)
In this case it must be a unitary rotation of either Λ₈ or its permutations (Λ₈ ± √3 Λ₃)/2, which form an equilateral triangle in the Λ₃-Λ₈ plane (see Fig. 8).
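The degeneracy condition of Eq. (49) is easy to check numerically. A minimal sketch (the examples below are illustrative; the sign convention assumes the doubled eigenvalue is the smaller one, so that tr X³ > 0, as for the permutations of Λ₈ with eigenvalues (2, −1, −1)/√3):

```python
import numpy as np

def su2_condition(X):
    # Eq. (49): (tr X^2)^(3/2) = sqrt(6) tr X^3 holds for a traceless
    # Hermitian X with two degenerate (smaller) eigenvalues
    t2 = np.trace(X @ X).real
    t3 = np.trace(X @ X @ X).real
    return np.isclose(t2**1.5, np.sqrt(6) * t3)

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(M)  # random unitary rotation

# a permutation of Lambda_8: eigenvalues (2, -1, -1)/sqrt(3), traceless
X_deg = U @ np.diag([2.0, -1.0, -1.0]) / np.sqrt(3) @ U.conj().T
# traceless but with a non-degenerate spectrum
X_gen = U @ np.diag([3.0, -1.0, -2.0]) @ U.conj().T

print(su2_condition(X_deg))  # True
print(su2_condition(X_gen))  # False
```

Since both traces are basis independent, the check works in any unitarily rotated frame, which is the point of the covariant formulation.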
As before, we wish to characterize the flavor violation induced by X_Q in a basis independent form. The simplest observable we can construct is the overall flavor violation of the third generation quark, that is, its decay to any quark of the first two generations. This can be written as

( (2/√3) |X_Q × Â_{Q_u,Q_d}| )² ,  (50)

which extracts |(X_Q)₁₃|² + |(X_Q)₂₃|² in each basis.

¹⁴ The meaning of these basis elements can be understood from the following: In the down mass basis we have Â_{Q_d} = −Λ₈, Ĵ = Λ₇, Ĵ_d = Λ₆ and Ĉ_d = Λ₃. The alternative diagonal generators from Eq. (48) are Ã_{Q_d} = (Λ₃ − √3 Λ₈)/2 = diag(0, −1, 1) and Ĵ_Q = (√3 Λ₃ + Λ₈)/2 = diag(2, −1, −1)/√3. It is then easy to see that Ĵ_Q commutes with the effective CKM matrix, which is just a 2-3 rotation, and that it corresponds to the U(1)_Q generator, diag(1, 0, 0), after trace subtraction and proper normalization.

(Figure 8 caption: Â_{Q_d} and Ã_{Q_u} were schematically added; their angle to the Λ₈ axis is actually much smaller than what appears in the plot. The figure is taken from [59].)
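A quick numerical illustration of why Ĵ_Q is flavor conserving in this limit: with the textures of Eq. (46), built from a pure 2-3 CKM rotation, the generator Ĵ_Q = diag(2, −1, −1)/√3 commutes with both A_{Q_d} and A_{Q_u}. The mixing angle and the choice y_b = y_t = 1 below are illustrative assumptions:

```python
import numpy as np

theta = 0.04  # effective 2-3 mixing angle of Eq. (45); illustrative value
c, s = np.cos(theta), np.sin(theta)
V = np.array([[1.0, 0.0, 0.0],
              [0.0,   c,   s],
              [0.0,  -s,   c]])  # CKM reduced to a real 2-3 rotation

# textures of Eq. (46) in the down mass basis, with y_b = y_t = 1
A_Qd = np.diag([-1.0, -1.0, 2.0]) / 3.0
A_Qu = V.T @ np.diag([0.0, 0.0, 1.0]) @ V - np.eye(3) / 3.0

J_Q = np.diag([2.0, -1.0, -1.0]) / np.sqrt(3)  # the U(1)_Q generator

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(J_Q, A_Qd), 0))  # True
print(np.allclose(comm(J_Q, A_Qu), 0))  # True
```

The commutators vanish because Ĵ_Q is proportional to the identity on the active 2-3 block, so the first generation indeed decouples.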
No U(2)_Q limit - complete covariant basis
It is sufficient to restore the masses of the second generation quarks in order to describe the full flavor space. A simplifying step to accomplish this is to define the following object: we take the n-th power of Y_D Y_D^†, remove the trace, normalize and take the limit n → ∞. This is denoted by Â^n_{Q_d}:

Â^n_{Q_d} ≡ lim_{n→∞} [ (Y_D Y_D^†)^n − 𝟙 tr((Y_D Y_D^†)^n)/3 ] / | (Y_D Y_D^†)^n − 𝟙 tr((Y_D Y_D^†)^n)/3 | ,  (51)
and we similarly define n Q u . Once we take the limit n → ∞, the small eigenvalues of Q u ,Q d go to zero, and the approximation assumed before is formally reproduced. As before, we compose the following basis elements:
Ĵ^n ≡ (Â^n_{Q_d} × Â^n_{Q_u}) / |Â^n_{Q_d} × Â^n_{Q_u}| ,  Ĵ^n_d ≡ (Â^n_{Q_d} × Ĵ^n) / |Â^n_{Q_d} × Ĵ^n| ,  Ĉ^n_d ≡ 2 Ĵ^n × Ĵ^n_d − √3 Â^n_{Q_d} ,  (52)
which are again identical to the previous case. The important observation for this case is that the U(1)_Q symmetry is now broken. Consequently, the U(1)_Q generator, Ĵ_Q, does not commute with A_{Q_d} and A_{Q_u} anymore (nor does Ĉ^n_d, which is different from Ĵ_Q only by normalization and a shift by A_{Q_d}, see Eqs. (47) and (48)). It is thus expected that the commutation relation [A_{Q_d}, Ĉ^n_d] (where A_{Q_d} now contains also the strange quark mass) would point to a new direction, which could not be obtained in the approximation used before. Further commutations with the existing basis elements should complete the description of the flavor space.
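The limiting construction of Eq. (51) is straightforward to verify numerically. A sketch with illustrative (assumed) down-type Yukawa eigenvalues, showing that the n-th power converges to the traceless, normalized projector on the heaviest (bottom) direction:

```python
import numpy as np

yd, ys, yb = 2.8e-5, 5.5e-4, 2.4e-2  # illustrative Yukawa eigenvalues
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(M)
YY = U @ np.diag([yd**2, ys**2, yb**2]) @ U.conj().T  # Y_D Y_D^dagger

def A_hat_n(YY, n):
    # Eq. (51) at finite n: n-th power, trace removed, normalized
    P = np.linalg.matrix_power(YY, n)
    P = P - np.eye(3) * np.trace(P) / 3
    return P / np.linalg.norm(P)

# the n -> infinity limit: traceless projector on the bottom direction
target = U @ (np.diag([0.0, 0.0, 1.0]) - np.eye(3) / 3) @ U.conj().T
target = target / np.linalg.norm(target)

print(np.allclose(A_hat_n(YY, 40), target))  # True
```

Convergence is geometric in (y_s/y_b)^{2n}, so even moderate n reproduces the massless-light-quark limit to machine precision.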
We thus define

D̂₂ ≡ (Ã_{Q_d} × Ĉ^n_d) / |Ã_{Q_d} × Ĉ^n_d| .  (53)
In order to understand the physical interpretation, note that D̂₂ does not commute with A_{Q_d}, so it must induce flavor violation, yet it does commute with Â^n_{Q_d}. The latter can be identified as a generator of a U(1) symmetry for the bottom quark (it is proportional to diag(0, 0, 1) in its diagonal form, without removing the trace), so this fact means that D̂₂ preserves this symmetry. Therefore, it must represent a transition between the first two generations of the down sector.
We further define

D̂₁ ≡ (Ã_{Q_d} × D̂₂) / |Ã_{Q_d} × D̂₂| ,  D̂₄ ≡ (Ĵ^n_d × D̂₂) / |Ĵ^n_d × D̂₂| ,  D̂₅ ≡ (Ĵ^n × D̂₂) / |Ĵ^n × D̂₂| ,  (54)
which complete the basis. All of these do not commute with A_{Q_d}, thus producing down flavor violation. D̂₁ commutes with Â^n_{Q_d}, so it is of the same status as D̂₂. The last two elements, D̂₄,₅, are responsible for third generation decays, similarly to Ĵ^n and Ĵ^n_d. More concretely, the latter two involve transitions between the third generation and what was previously referred to as the "active" generation (a linear combination of the first two), while D̂₄,₅ mediate transitions to the orthogonal combination. It is of course possible to define linear combinations of these four basis elements, such that the decays to the strange and the down mass eigenstates are separated, but we do not proceed with this derivation. It is also important to note that this basis is not completely orthogonal.
In order to give a sense of the physical interpretation of the different basis elements, it is helpful to see their decomposition in terms of Gell-Mann matrices, in the down mass basis (writing only the dependence of the leading terms on λ and η, and omitting for simplicity O(1) factors such as the Wolfenstein parameter A). This is given by

D̂₁ ∼ {−1, η, 0, 0, 0, 0, 0, 0} ,
D̂₂ ∼ {−η, −1, 0, 0, 0, 0, 0, 0} ,
Ĉ^n_d ∼ {2λ, −2ηλ, 1, 0, 0, 0, 0, 0} ,
D̂₄ ∼ {0, 0, 0, −1, η, −λ, −ηλ³, 0} ,
D̂₅ ∼ {0, 0, 0, −η, −1, ηλ³, −λ, 0} ,
Ĵ^n_d ∼ {0, 0, 0, −λ, ηλ, 1, ηλ², 0} ,
Ĵ^n ∼ {0, 0, 0, −ηλ, −λ, −ηλ², 1, 0} ,
Â^n_{Q_d} = {0, 0, 0, 0, 0, 0, 0, −1} ,  (55)
where the values in each set of curly brackets stand for the Λ 1 , . . . , Λ 8 components. This shows which part of an object each basis element extracts under a dot product, relative to the down sector. For instance, the leading term inD 1 is Λ 1 , therefore it represents the real part of a 2 → 1 transition.
Similarly, it is also useful to see the leading term decomposition of A_{Q_u} in the down mass basis,

A_{Q_u} ∼ { −λy_c² − λ⁵y_t² , ηλ⁵y_t² , −(y_c² + λ⁴y_t²)/2 , λ³y_t² , −ηλ³y_t² , −λ²y_t² , −ηλ⁴y_t² , −y_t²/√3 } .  (56)
Finally, an instructive exercise is to decompose A Q u in this covariant "down" basis, since A Q u is a flavor violating source within the SM. Focusing again only on leading terms, we have
A_{Q_u} · { D̂₁, D̂₂, Ĉ^n_d, D̂₄, D̂₅, Ĵ^n_d, Ĵ^n, Â^n_{Q_d} } ∼ { λy_c² + λ⁵y_t² , λy_c² , (y_c² + λ⁴y_t²)/2 , λ³y_c² , λ³y_c² , λ²y_t² , 0 , y_t²/√3 } .  (57)
This shows the different types of flavor violation in the down sector within the SM. It should be mentioned that the D̂₂ and D̂₅ projections of A_{Q_u} vanish when the CKM phase is taken to zero, and also when either of the CKM mixing angles is zero or π/2. Therefore these basis elements can be interpreted as CP violating, together with Ĵ^n.
In order to derive model independent bounds in the next section, we use the simpler description based on the approximate U (2) Q symmetry, rather than the full basis.
Model independent bounds
In order to describe NP effects in flavor physics, we can follow two main strategies: (i) build an explicit ultraviolet completion of the model, and specify which are the new fields beyond the SM, or (ii) analyze the NP effects using a generic effective theory approach, by integrating out the new heavy fields. The first approach is more predictive, but also more model dependent. We follow this approach in Secs. 7 and 8 in two well-motivated SM extensions. In this and the next section we adopt the second strategy, which is less predictive but also more general.
Assuming the new degrees of freedom to be heavier than SM fields, we can integrate them out and describe NP effects by means of a generalization of the Fermi Theory. The SM Lagrangian becomes the renormalizable part of a more general local Lagrangian. This Lagrangian includes an infinite tower of operators with dimension d > 4, constructed in terms of SM fields and suppressed by inverse powers of an effective scale Λ > M W :
L_eff = L_SM + Σ_{d>4} Σ_i (C^{(d)}_i / Λ^{d−4}) O^{(d)}_i(SM fields) .  (58)
This general bottom-up approach allows us to analyze all realistic extensions of the SM in terms of a limited number of parameters (the coefficients of the higher dimensional operators). The drawback of this method is the impossibility to establish correlations of NP effects at low and high energies; the scale Λ defines the cutoff of the effective theory. However, correlations among different low energy processes can still be established by implementing specific symmetry properties, such as the MFV hypothesis (Sec. 6). The experimental tests of such correlations allow us to test/establish general features of the new theory, which hold independently of the dynamical details of the model. In particular, B, D and K decays are extremely useful in determining the flavor symmetry breaking pattern of the NP model.
∆F = 2 transitions
The starting point for this analysis is the observation that in several realistic NP models, we can neglect non-standard effects in all cases where the corresponding effective operator is generated at tree level within the SM. This general assumption implies that the experimental determination of the CKM matrix via tree level processes is free from the contamination of NP contributions. Using this determination, we can unambiguously predict meson-antimeson mixing and FCNC amplitudes within the SM and compare it with data, constraining the couplings of the ∆F = 2 operators in Eq. (58).
From short distance physics to observables
In order to derive bounds on the microscopic dynamics, one needs to take into account the fact that the experimental input is usually given at the energy scale in which the measurement is performed, while the bound is presented at some other scale (say 1 TeV). Moreover, the contributing higher dimension operators mix, in general. Finally, all such processes include long distance contributions (that is, interactions at the hadronic level) in actual experiments. Therefore, a careful treatment of all these effects is required. For completeness, we include here all the necessary information needed in order to take the above into account. A complete set of four quark operators relevant for ∆F = 2 transitions is given by
Q^{q_iq_j}_1 = q̄^α_{jL} γ_μ q^α_{iL} q̄^β_{jL} γ^μ q^β_{iL} ,  Q^{q_iq_j}_2 = q̄^α_{jR} q^α_{iL} q̄^β_{jR} q^β_{iL} ,  Q^{q_iq_j}_3 = q̄^α_{jR} q^β_{iL} q̄^β_{jR} q^α_{iL} ,
Q^{q_iq_j}_4 = q̄^α_{jR} q^α_{iL} q̄^β_{jL} q^β_{iR} ,  Q^{q_iq_j}_5 = q̄^α_{jR} q^β_{iL} q̄^β_{jL} q^α_{iR} ,  (59)
where i, j are generation indices and α, β are color indices¹⁵. There are also operators Q̃^{q_iq_j}_{1,2,3}, which are obtained from Q^{q_iq_j}_{1,2,3} by the exchange L ↔ R, and the results given for the latter apply to the former as well.
The Wilson coefficients of the above operators, C_i(Λ), are obtained in principle by integrating out all new particles at the NP scale¹⁶. Then they have to be evolved down to the hadronic scales µ_b = m_b = 4.6 GeV for bottom mesons, µ_D = 2.8 GeV for charmed mesons and µ_K = 2 GeV for Kaons. We denote the Wilson coefficients at the relevant hadronic scale, which are the measured observables, as ⟨M|L_eff|M̄⟩_i, where M represents a meson (note that ⟨M|L_eff|M̄⟩ has dimension of [mass]). These should be functions of the Wilson coefficients at the NP scale, C_i(Λ), of the running of α_s between the NP and the hadronic scales, and of the hadronic matrix elements of the meson, ⟨M|Q^{q_iq_j}_r|M̄⟩ (here q_iq_j stand for the quarks that compose the meson M). For bottom and charmed mesons, the analytic formula that describes this relation is given by [7,63]
⟨M|L_eff|M̄⟩_i = Σ_{j=1}^{5} Σ_{r=1}^{5} ( b^{(r,i)}_j + η c^{(r,i)}_j ) η^{a_j} (C_i(Λ)/Λ²) ⟨M|Q^{q_iq_j}_r|M̄⟩ ,  (60)

where η ≡ α_s(Λ)/α_s(m_t), and a_j, b^{(r,i)}_j, c^{(r,i)}_j are the so-called magic numbers; for bottom mesons,

c^{(22)}_i = (0, −0.18, −0.003, 0, 0),
b^{(23)}_i = (0, −0.493, 0.18, 0, 0),    c^{(23)}_i = (0, −0.014, 0.008, 0, 0),
b^{(32)}_i = (0, −0.044, 0.035, 0, 0),   c^{(32)}_i = (0, 0.005, −0.012, 0, 0),
b^{(33)}_i = (0, 0.011, 0.54, 0, 0),     c^{(33)}_i = (0, 0, 0.028, 0, 0),
b^{(44)}_i = (0, 0, 0, 2.87, 0),         c^{(44)}_i = (0, 0, 0, −0.48, 0.005),
b^{(45)}_i = (0, 0, 0, 0.961, −0.22),    c^{(45)}_i = (0, 0, 0, −0.25, −0.006),
b^{(54)}_i = (0, 0, 0, 0.09, 0),         c^{(54)}_i = (0, 0, 0, −0.013, −0.016),
b^{(55)}_i = (0, 0, 0, 0.029, 0.863),    c^{(55)}_i = (0, 0, 0, −0.007, 0.019).  (61)
The hadronic matrix elements are
⟨B_q|Q^{bq}_1|B̄_q⟩ = (1/3) m_{B_q} f²_{B_q} B^B_1 ,
⟨B_q|Q^{bq}_2|B̄_q⟩ = −(5/24) [m_{B_q}/(m_b + m_q)]² m_{B_q} f²_{B_q} B^B_2 ,
⟨B_q|Q^{bq}_3|B̄_q⟩ = (1/24) [m_{B_q}/(m_b + m_q)]² m_{B_q} f²_{B_q} B^B_3 ,
⟨B_q|Q^{bq}_4|B̄_q⟩ = (1/4) [m_{B_q}/(m_b + m_q)]² m_{B_q} f²_{B_q} B^B_4 ,
⟨B_q|Q^{bq}_5|B̄_q⟩ = (1/12) [m_{B_q}/(m_b + m_q)]² m_{B_q} f²_{B_q} B^B_5 ,  (62)
where q = d, s, and the other inputs needed here are [7,33]
m_{B_d} = 5.279 GeV , f_{B_d} = 0.2 GeV , m_{B_s} = 5.366 GeV , f_{B_s} = 0.262 GeV , m_b = 4.237 GeV ,
B^B_1 = 0.88 , B^B_2 = 0.82 , B^B_3 = 1.02 , B^B_4 = 1.15 , B^B_5 = 1.99 .  (63)
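For orientation, Eqs. (62)-(63) can be evaluated directly. A minimal sketch for the B_d system (the light quark mass m_d, numerically negligible here, is an assumed input):

```python
# Matrix elements of Eq. (62) with the inputs of Eq. (63), for B_d
m_B, f_B, m_b, m_d = 5.279, 0.2, 4.237, 0.005  # GeV; m_d is an assumed value
B = [0.88, 0.82, 1.02, 1.15, 1.99]             # bag parameters B^B_1..B^B_5
r = (m_B / (m_b + m_d)) ** 2                   # the (m_B/(m_b+m_q))^2 factor
pref = [1.0 / 3.0, -5.0 * r / 24.0, r / 24.0, r / 4.0, r / 12.0]
elems = [p * m_B * f_B**2 * Bi for p, Bi in zip(pref, B)]
for i, e in enumerate(elems, 1):
    print(f"<B_d|Q_{i}|B_d~> = {e:+.4f} GeV^3")
```

Note that the matrix elements carry dimension [mass]³, consistent with Eq. (60), where C_i(Λ)/Λ² supplies the remaining inverse powers of mass.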
For the D meson, the a_i magic numbers are as in Eq. (61), while the others, beginning with b^{(11)}_i, are given in [7].
Generic bounds from meson mixing
We now move to the actual derivation of bounds on new physics from ∆F = 2 transitions. It is interesting to note that only fairly recently has the data begun to disfavor models with only LH currents, but with new sources of flavor and CPV [3,4,5], characterized by a CKM-like suppression [65,66,67]. In fact, this is precisely the way that one can test the success of the Kobayashi-Maskawa mechanism for flavor and CP violation [4,5,6,7,8,68,69,70,71,72,73,74,75]. We start with the B d system, where the recent improvement in measurements has been particularly dramatic, as an example. The NP contributions to B 0 d mixing can be expressed in terms of two parameters, h d and σ d , defined by
M^d_{12} = (1 + h_d e^{2iσ_d}) M^{d,SM}_{12} ,  (71)
where M^{d,SM}_{12} is the dispersive part of the B⁰_d − B̄⁰_d mixing amplitude in the SM. In order to constrain deviations from the SM in these processes, one can use measurements which are directly proportional to M^d_{12} (magnitude and phase). The relevant observables in this case are ∆m_{B_d} and the CPV in decay with and without mixing in B⁰_d → ψK, S_{ψK}. These processes are characterized by hard GIM suppression, and proceed, within the SM, via one loop (see Eqs. (35) and (36)). In the presence of NP, they can be written as (see e.g. [30,31,32]):
∆m_{B_d} = ∆m^{SM}_{B_d} |1 + h_d e^{2iσ_d}| ,  S_{ψK} = sin[ 2β + arg(1 + h_d e^{2iσ_d}) ] .  (72)
The fact that the SM contributions to these processes involve CKM elements which are not measured directly prevents one from independently constraining the NP contributions. Yet the situation was dramatically improved when the BaBar and Belle experiments managed to measure CPV processes which, within the SM, are mediated via tree level amplitudes. The information extracted from these CP asymmetries in B± → DK± and B → ρρ is probably hardly affected by new physics. The most recent bounds (ignoring the 2σ anomaly in B → τν) are [76,77]

h_d ≲ 0.3  and  π ≲ 2σ_d ≲ 2π .  (73)
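The NP dependence of Eq. (72) is simple to explore numerically. A sketch, with an assumed value for the CKM angle β:

```python
import numpy as np

beta = 0.38  # CKM angle beta in radians (assumed input)

def bd_observables(h_d, sigma_d):
    # Eq. (72): NP rescaling of Delta m_Bd and shift of S_psiK
    z = 1.0 + h_d * np.exp(2j * sigma_d)
    return abs(z), np.sin(2 * beta + np.angle(z))

print(bd_observables(0.0, 0.0))  # SM limit: (1.0, sin 2*beta)
print(bd_observables(0.3, 2.0))  # h_d at the bound of Eq. (73)
```

Scanning (h_d, σ_d) over the allowed region in this way reproduces the qualitative statement above: sizable NP is still allowed if its phase is nearly aligned with the SM amplitude.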
Another example where recent progress has been achieved is in measurements of CPV in D 0 − D 0 mixing, which led to an important improvement of the NP constraints. However, in this case the SM contributions are unknown [14,15], and the only robust SM prediction is the absence of CPV [16]. The three relevant physical quantities related to the mixing can be defined as
y₁₂ ≡ |Γ₁₂|/Γ ,  x₁₂ ≡ 2|M₁₂|/Γ ,  φ₁₂ ≡ arg(M₁₂/Γ₁₂) ,  (74)
where M₁₂, Γ₁₂ are the total dispersive and absorptive parts of the D⁰ − D̄⁰ amplitude, respectively. Fig. 9 shows (in grey) the allowed region in the x^{NP}_{12}/x − sin φ^{NP}_{12} plane. x^{NP}_{12} corresponds to the NP contributions and x ≡ (m₂ − m₁)/Γ, with m_i and Γ being the masses of the neutral D meson eigenstates and their average width, respectively. The pink and yellow regions correspond to the ranges predicted by, respectively, the linear MFV and general MFV classes of models [18] (see Sec. 6 for details). We see that the absence of observed CP violation removes a sizable fraction of the possible NP parameter space, in spite of the fact that the magnitude of the SM contributions cannot be computed! An updated analysis of ∆F = 2 constraints has been presented in [7]. The main conclusions drawn from this analysis can be summarized as follows:
Figure 9: The allowed region, shown in grey, in the x^{NP}_{12}/x₁₂ − sin φ^{NP}_{12} plane. The pink and yellow regions correspond to the ranges predicted by, respectively, the linear MFV and general MFV classes of models [18].
(i) In all the three accessible short distance amplitudes (K 0 -K 0 , B d -B d , and B s -B s ) the magnitude of the NP amplitude cannot exceed the SM short distance contribution. The latter is suppressed by both the GIM mechanism and the hierarchical structure of the CKM matrix,
A^{∆F=2}_{SM} ≈ (G²_F m²_t / 16π²) [ (V^{CKM}_{ti})* V^{CKM}_{tj} ]² × ⟨M̄|(Q̄_{Li} γ_μ Q_{Lj})²|M⟩ × F(M²_W/m²_t) ,  (75)
where F is a loop function of O(1). As a result, NP models with TeV scale flavored degrees of freedom and O(1) effective flavor mixing couplings are ruled out. To set explicit bounds, let us consider for instance the LH ∆F = 2 operator Q 1 from Eq. (59), and rewrite it as
Σ_{i≠j} (c_{ij}/Λ²) (Q̄_{Li} γ_μ Q_{Lj})² ,  (76)
where the c_{ij} are dimensionless couplings. The condition |A^{∆F=2}_{NP}| < |A^{∆F=2}_{SM}| implies [78]

Λ > 4.4 TeV × |c_{ij}|^{1/2} / |(V^{CKM}_{ti})* V^{CKM}_{tj}| ∼
  1.3 × 10⁴ TeV × |c_{sd}|^{1/2} ,
  5.1 × 10² TeV × |c_{bd}|^{1/2} ,
  1.1 × 10² TeV × |c_{bs}|^{1/2} .  (77)
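The scaling of Eq. (77) can be reproduced directly from approximate CKM magnitudes (the numerical values below are assumed, PDG-like central values):

```python
import numpy as np

V_td, V_ts, V_tb = 8.6e-3, 4.04e-2, 1.0  # assumed CKM magnitudes

def Lambda_bound_TeV(Vti, Vtj, c_ij=1.0):
    # Eq. (77): Lambda > 4.4 TeV * |c_ij|^(1/2) / |V*_ti V_tj|
    return 4.4 * np.sqrt(abs(c_ij)) / abs(Vti * Vtj)

print(f"sd: {Lambda_bound_TeV(V_td, V_ts):.1e} TeV")  # ~1.3e4
print(f"bd: {Lambda_bound_TeV(V_td, V_tb):.1e} TeV")  # ~5.1e2
print(f"bs: {Lambda_bound_TeV(V_ts, V_tb):.1e} TeV")  # ~1.1e2
```

The hierarchy of the three bounds simply tracks the CKM suppression of the corresponding SM amplitudes.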
The strong bounds on Λ for generic c ij of order 1 is a manifestation of what in many specific frameworks (supersymmetry, technicolor, etc.) goes by the name of the flavor problem: if we insist that the new physics emerges in the TeV region, then it must possess a highly non-generic flavor structure.
(ii) In the case of B_d-B̄_d and K⁰-K̄⁰ mixing, where both CP conserving and CP violating observables are measured with excellent accuracy, there is still room for a sizable NP contribution (relative to the SM one), provided that it is, to a good extent, aligned in phase with the SM amplitude (O(0.01) for the K system and O(0.3) for the B_d system). This is because the theoretical errors in the observables used to constrain the phases, S_{B_d→ψK} and ε_K, are smaller than the theoretical uncertainties in ∆m_{B_d} and ∆m_K, which constrain the magnitude of the mixing amplitudes.
Table 1: Bounds on representative dimension six ∆F = 2 operators (taken from [78], and the last line is from [58,59]). Bounds on Λ are quoted assuming an effective coupling 1/Λ², or, alternatively, the bounds on the respective c_{ij}'s assuming Λ = 1 TeV. Observables related to CPV are separated from the CP conserving ones with semicolons. In the B_s system we only quote a bound on the modulo of the NP amplitude derived from ∆m_{B_s} (see text). For the definition of the CPV observables in the D system see Ref. [10].

Operator             | Bounds on Λ in TeV (c_{ij} = 1) | Bounds on c_{ij} (Λ = 1 TeV) | Observables
                     |    Re     |    Im     |     Re     |     Im      |
(s_L γ_μ d_L)²       | 9.8 × 10² | 1.6 × 10⁴ | 9.0 × 10⁻⁷ | 3.4 × 10⁻⁹  | ∆m_K ; ε_K
(s_R d_L)(s_L d_R)   | 1.8 × 10⁴ | 3.2 × 10⁵ | 6.9 × 10⁻⁹ | 2.6 × 10⁻¹¹ | ∆m_K ; ε_K
(c_L γ_μ u_L)²       | 1.2 × 10³ | 2.9 × 10³ | 5.6 × 10⁻⁷ | 1.0 × 10⁻⁷  | ∆m_D ; |q/p|, φ_D
(c_R u_L)(c_L u_R)   | 6.2 × 10³ | 1.5 × 10⁴ | 5.7 × 10⁻⁸ | 1.1 × 10⁻⁸  | ∆m_D ; |q/p|, φ_D
(b_L γ_μ d_L)²       | 5.1 × 10² | 9.3 × 10² | 3.3 × 10⁻⁶ | 1.0 × 10⁻⁶  | ∆m_{B_d} ; S_{ψK_S}
(b_R d_L)(b_L d_R)   | 1.9 × 10³ | 3.6 × 10³ | 5.6 × 10⁻⁷ | 1.7 × 10⁻⁷  | ∆m_{B_d} ; S_{ψK_S}
(b_L γ_μ s_L)²       | 1.1 × 10² |           | 7.6 × 10⁻⁵ |             | ∆m_{B_s}
(b_R s_L)(b_L s_R)   | 3.7 × 10² |           | 1.3 × 10⁻⁵ |             | ∆m_{B_s}
(t_L γ_μ u_L)²       |    12     |           | 7.1 × 10⁻³ |             | pp → tt
(iii) In the case of B s -B s mixing, the precise determination of ∆m Bs does not allow large deviations in modulo with respect to the SM. The constraint is particularly severe if we consider the ratio ∆m B d /∆m Bs , where hadronic uncertainties cancel to a large extent. However, the constraint on the CPV phase is quite poor. Present data from CDF [79] and D0 [80] indicate a large central value for this phase, contrary to the SM expectation. The errors are, however, still large, and the disagreement with the SM is at about the 2σ level. If the disagreement persists, and becomes statistically significant, this would not only signal the presence of physics beyond the SM, but would also rule out a whole subclass of MFV models (see Sec. 6).
(iv) The resulting constraints in the D system discussed above are second only to those from ε_K, and unlike the case of ε_K, they are controlled by experimental statistics, and could possibly be significantly improved in the near future.
To summarize this discussion, a detailed list of constraints derived from ∆F = 2 observables is shown in Table 1, where we quote the bounds for two representative sets of dimension six operators -the left-left operators (present also in the SM) and operators with a different chirality, which arise in specific SM extensions (Q 1 and Q 4 from Eq. (59), respectively). The bounds on the latter are stronger, especially in the Kaon case, because of the larger hadronic matrix elements and enhanced renormalization group evolution (RGE) contributions. The constraints related to CPV correspond to maximal phases, and are subject to the requirement that the NP contributions are smaller than 30% (60%) of the total contributions [4,5] in the B d (K) system (see Eq. (73)). Since the experimental status of CP violation in the B s system is not yet settled, we simply require that the NP contributions would be smaller than the observed value of ∆m Bs (for less naive treatments see e.g. [7,81], and for a different type of ∆F = 2 analysis see [82]).
Robust bounds immune to alignment mechanisms
There are two interesting features for models that can provide flavor-related suppression factors: degeneracy and alignment. The former means that the operators generated by the NP are flavoruniversal, that is diagonal in any basis, thus producing no flavor violation. Alignment, on the other hand, occurs when the NP contributions are diagonal in the corresponding quark mass basis. In general, low energy measurements can only constrain the product of these two factors. An interesting exception occurs, however, for the left-left (LL) operators of the type defined in Eq. (39), where there is an independent constraint on the level of degeneracy [23]. The crucial point is that operators involving only quark doublets cannot be simultaneously aligned with both the down and the up mass bases. For example, we can take X Q from Eq. (39) to be proportional to A Q d . Then it would be diagonal in the down mass basis, but it would induce flavor violation in the up sector. Hence, these types of theories can still be constrained by measurements. The "best" alignment is obtained by choosing the NP contribution such that it would minimize the bounds from both sectors. The strength of the resulting constraint, which is the weakest possible one, is that it is unavoidable in the context of theories with only one set of quark doublets. Here we briefly discuss this issue, and demonstrate how to obtain such bounds.
Two generation ∆F = 2 transitions
As mentioned before, the strongest experimental constraints involve transitions between the first two generations. When studying NP effects, ignoring the third generation is often a good approximation to the physics at hand. Indeed, even when the third generation does play a role, a two generations framework is applicable, as long as there are no strong cancelations with contributions related to the third generation. Hence, for this analysis we can use the formalism of Sec. (4.1).
The operator defined in Eq. (39), when restricted to the first two generations, induces mixing in the K and D systems, and possibly also CP violation. We can use the covariant bases defined in Eq. (40) to parameterize X_Q,

X_Q = L [ X^{u,d} Â_{Q_u,Q_d} + X^J Ĵ + X^{J_{u,d}} Ĵ_{u,d} ] ,  (78)
and the two bases are related through
X^u = cos 2θ_C X^d − sin 2θ_C X^{J_d} ,  X^{J_u} = −sin 2θ_C X^d − cos 2θ_C X^{J_d} ,  (79)
while X J remains invariant. We choose the X i coefficients to be normalized,
(X^d)² + (X^J)² + (X^{J_d})² = (X^u)² + (X^J)² + (X^{J_u})² = 1 ,  (80)
such that L signifies the "length" of X Q under the definitions in Eq. (38),
L = |X_Q| = ( X^{(2)}_Q − X^{(1)}_Q ) / 2 ,  (81)
where X 1,2 Q are the eigenvalues of X Q before removing the trace. Plugging Eqs. (78) and (79) into Eq. (41), we obtain expressions for the contribution of X Q to ∆m K and ∆m D , without CPV,
C^K_1 = L² [ (X^J)² + (X^{J_d})² ] ,
C^D_1 = (L²/2) [ 2(X^J)² + (X^d)² + (X^{J_d})² + ( (X^{J_d})² − (X^d)² ) cos 4θ_C + 2 X^d X^{J_d} sin 4θ_C ] .  (82)
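The expanded form of C^D_1 in Eq. (82) is just the double-angle rewriting of L²[(X^{J_u})² + (X^J)²] with X^{J_u} taken from Eq. (79); a quick numerical check of this identity:

```python
import numpy as np

rng = np.random.default_rng(2)
L, Xd, XJ, XJd, th = rng.random(5)  # random coefficients and Cabibbo angle

XJu = -np.sin(2 * th) * Xd - np.cos(2 * th) * XJd  # Eq. (79)
lhs = L**2 * (XJu**2 + XJ**2)
rhs = 0.5 * L**2 * (2 * XJ**2 + Xd**2 + XJd**2
                    + (XJd**2 - Xd**2) * np.cos(4 * th)
                    + 2 * Xd * XJd * np.sin(4 * th))
print(np.isclose(lhs, rhs))  # True
```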
In order to minimize both contributions, we first need to set X^J = 0. Next we define

tan α ≡ X^{J_d}/X^d ,  r_{KD} ≡ √[ (C^K_1)_exp / (C^D_1)_exp ] ,  (83)
where the experimental constraints (C^K_1)_exp and (C^D_1)_exp can be extracted from Table 1. Then the weakest bound is obtained for

tan α = r_{KD} sin(2θ_C) / [ 1 + r_{KD} cos(2θ_C) ] ,  (84)
and is given by
L ≤ 3.8 × 10⁻³ (Λ_NP / 1 TeV) .  (85)
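The number in Eq. (85) can be reproduced by scanning over the alignment angle. A sketch using the Table 1 constraints at Λ_NP = 1 TeV, with the D-system contribution written as L² sin²(2θ_C − α) (a relative sign convention assumed here) and an assumed Cabibbo angle:

```python
import numpy as np

C_K, C_D = 9.0e-7, 5.6e-7  # Table 1: Re parts of the (sd) and (cu) LL operators
theta_C = 0.227            # Cabibbo angle in radians (assumed)

# With X^J = 0, X^d = cos(alpha), X^{J_d} = sin(alpha):
#   K system:  C_1^K = L^2 sin^2(alpha)
#   D system:  C_1^D = L^2 sin^2(2 theta_C - alpha)
alpha = np.linspace(1e-4, np.pi / 2, 200001)
L_allowed = np.minimum(np.sqrt(C_K) / np.abs(np.sin(alpha)),
                       np.sqrt(C_D) / np.abs(np.sin(2 * theta_C - alpha)))
i = np.argmax(L_allowed)
print(f"alpha* = {alpha[i]:.3f},  L_max = {L_allowed[i]:.2e}")  # ~3.8e-3

# analytic optimum, Eq. (84), with r_KD = sqrt(C_K/C_D)
r = np.sqrt(C_K / C_D)
alpha_star = np.arctan(r * np.sin(2 * theta_C) / (1 + r * np.cos(2 * theta_C)))
print(f"Eq. (84): alpha = {alpha_star:.3f}")
```

At the optimum both the K and D constraints bind simultaneously, which is why the "best" alignment sits between the two mass bases rather than exactly on either one.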
A similar process can be carried out for the CPV in K and D mixing, by plugging Eqs. (78) and (79) into Eq. (42). Now we do not set X^J = 0, otherwise there would be no CPV (since X_Q would reside in the same plane as A_{Q_d} and A_{Q_u}). Moreover, there are many types of models in which we can tweak the alignment, but we do not control the phase (we do not expect the NP to be CP-invariant), hence they might give rise to CPV. The weakest bound in this case, as a function of X^J, is given by
L ≤ 3.4 × 10⁻⁴ / [ (X^J)² − (X^J)⁴ ]^{1/4} × (Λ_NP / 1 TeV) .  (86)
The combination of the above two bounds is presented in Fig. 10. We should note that L is simply the difference between the eigenvalues of X Q (see Eq. (81)), thus the bounds above put limits on the degeneracy of the NP contribution.
Third generation ∆F = 1 transitions
Similar to the analysis of the previous subsection, we can use other types of processes to obtain model independent constraints on new physics. Here we consider flavor violating decays of third generation quarks in both sectors, utilizing the three generations framework discussed in Sec. 4.2.
Since the existing bound on top decay is rather weak, we use the projection for the LHC bound, assuming that no positive signal is obtained.
We focus on the following operator
O^h_LL = i Q̄_i γ^μ (X_Q)_{ij} Q_j (H† D↔_μ H) + h.c. ,  (87)
which contributes at tree level to both top and bottom decays [83]. We omit an additional operator for quark doublets,

O^u_LL = i Q̄₃ H D̸ H† Q₂ − i Q̄₃ D̸ H H† Q₂ .

The experimental inputs relevant for these decays are

Br(B → X_s ℓ⁺ℓ⁻)_{1 GeV² < q² < 6 GeV²} = (1.61 ± 0.51) × 10⁻⁶ ,  Br(t → (c, u)Z) < 5.5 × 10⁻⁵ ,  (88)
where the latter corresponds to the prospect of the LHC bound in the absence of signal for 100 fb −1 at a center of mass energy of 14 TeV. We adopt the weakest limits on the coefficient of the operator in Eq. (87), C h LL , derived in [83]:
Br(B → X_s ℓ⁺ℓ⁻) ⟶ C^h_LL|_b < 0.018 (Λ_NP/1 TeV)² ,  Br(t → (c, u)Z) ⟶ C^h_LL|_t < 0.18 (Λ_NP/1 TeV)² ,  (89)
and define r_{tb} ≡ C^h_LL|_t / C^h_LL|_b. The NP contribution can be decomposed in the covariant bases as

X_Q = L [ X^{u,d} Â_{Q_u,Q_d} + X^J Ĵ + X^{J_{u,d}} Ĵ_{u,d} + X^{J_Q} Ĵ_Q + Σ_i X^D_i D̂_i ] ,  (90)
where again the coefficients are normalized such that L = |X_Q|. The contribution of X_Q to third generation decays is given by Eq. (50). The weakest bound for a fixed L is obtained, as before, by finding a direction of X_Q that minimizes the contributions to C^h_LL|_t and C^h_LL|_b, thus constituting the "best" alignment. However, since Ĵ_Q commutes with A_{Q_u}, A_{Q_d}, as discussed above, it does not contribute to third generation decay in either sector. In other words, X_Q ∝ Ĵ_Q is not constrained by such a process. On the other hand, any component of X_Q may also generate flavor violation among the first two generations (when their masses are switched back on), which is more strongly constrained. Specifically, the bound that stems from the case of X_Q ∝ Ĵ_Q is [59]

L < 0.59 (Λ_NP / 1 TeV)² ;  Λ_NP > 1.7 TeV ,  (91)
where the latter is for L = 1. This is stronger than the limit given below for other forms of X_Q, hence this does not constitute optimal alignment. To conclude this issue, all directions that contribute to first two generations flavor and CPV at O(λ), that is Ĵ_Q, D̂ and Ã_{Q_u,Q_d}, are not favorable in terms of alignment [59]. The induced third generation flavor violation, after removing these contributions, is then given by

(4/3) |X_Q × Â_{Q_u,Q_d}|² = (X^J)² + (X^{J_{u,d}})² ,  (92)
and in order to see this in a common basis, we express X Ju as
X^{J_u} = cos 2θ X^{J_d} + sin 2θ X̃^d ,  (93)
with θ as defined in Eq. (45). From this it is clear that X J contributes the same to both the top and the bottom decay rates, so it should be set to zero for optimal alignment. Thus the best alignment is obtained by varying α, defined as before by
tan α ≡ X^{J_d} / X̃^d .  (94)
Here we use X̃^d, which is the coefficient of Ã_{Q_d}, instead of X^d, since the former does not produce flavor violation among the first two generations to leading order (up to O(λ⁵)). We now consider two possibilities: (i) complete alignment with the down sector; (ii) the best alignment satisfying the bounds of Eq. (89), which gives the weakest unavoidable limit. Note that we can also consider up alignment, but it would give a stronger bound than down alignment, as a result of the stronger experimental constraints in the down sector. The bounds for these cases are [58,59]

(i) α = 0 :  L < 2.5 (Λ_NP/1 TeV)² ;  Λ_NP > 0.63 (7.9) TeV ,
(ii) α = √3 θ/(1 + r_{tb}) :  L < 2.8 (Λ_NP/1 TeV)² ;  Λ_NP > 0.6 (7.6) TeV ,  (95)
as shown in Fig. 11, where in parentheses we give the strong coupling bound, in which the coefficient of the operators in Eqs. (39) and (87) is assumed to be 16π². Note that these are weaker than the bound in Eq. (91).
It is important to mention that the optimized form of X Q generates also c → u decay at higher order in λ, which might yield stronger constraints than the top decay. However, the resulting bound from the former is actually much weaker than the one from the top [59]. Therefore, the LHC is indeed expected to strengthen the model independent constraints.
Third generation ∆F = 2 transitions
Finally, we analyze ∆F = 2 transitions involving the bottom and the top. For simplicity, we only consider complete alignment with the down sector
X_Q = L Â_{Q_d} ,  (96)
as the constraints from this sector are much stronger. This generates top flavor violation in the up sector, and also D⁰ − D̄⁰ mixing at higher order. Yet there is no top meson, as the top quark decays too rapidly to hadronize. Instead, we analyze the process pp → tt (related to mixing by crossing symmetry) [58,59], which is most appropriate for the LHC. It should be emphasized, however, that in this case the parton distribution functions of the proton strongly break the approximate U(2)_Q symmetry of the first two generations. The simple covariant basis introduced in Sec. 4.2.1, which is based on this approximate symmetry, cannot be used as a result. Furthermore, this LHC process is dominated by uu → tt, so we focus only on the operator involving up (and not charm) quarks.
The bound that would stem from this process at the LHC was evaluated in [58,59] to be
C^{tt}_1 < 7.1 × 10⁻³ (Λ_NP / 1 TeV)² ,  (97)
for 100 fb⁻¹ at 14 TeV. Since the form of X_Q that we consider also contributes to transitions between the first two generations, we should additionally take into account the experimental constraints in the D system, given in Table 1 (we use the CPV observable). The contribution of X_Q to these processes is calculated by applying a CKM rotation to Eq. (96). CPV in the D system is then given by Im[(X_Q)₁₂²], and |(X_Q)₁₃|² describes uu → tt.
Note that we have
(X_Q)₁₂ ≅ −√3 L V^{CKM}_{ub} (V^{CKM}_{cb})* ,  (X_Q)₁₃ ≅ −√3 L V^{CKM}_{ub} (V^{CKM}_{tb})* .  (98)
The resulting bounds are
L < 12 (Λ_NP / 1 TeV) ;  Λ_NP > 0.08 (1.0) TeV ,  (99)
for uu → tt and
L < 1.8 (Λ_NP / 1 TeV) ;  Λ_NP > 0.57 (7.2) TeV ,  (100)
for D mixing. The limits in Eqs. (99) and (100) can be further weakened by optimizing the alignment between the down and the up sectors, as in the previous subsection. Since this would only yield a marginal improvement of about 10%, we do not analyze this case in detail.
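As a cross-check of Eq. (99): combining Eq. (98) with the bound C^{tt}_1 < 7.1 × 10⁻³ gives |(X_Q)₁₃|² = 3 L² |V_ub V_tb|² < C^{tt}_1. A sketch with assumed CKM magnitudes:

```python
import numpy as np

V_ub, V_tb = 4.1e-3, 1.0  # assumed CKM magnitudes
C_tt = 7.1e-3             # LHC bound of Eq. (97), at Lambda_NP = 1 TeV

# Eq. (98): |(X_Q)_13| = sqrt(3) L |V_ub V_tb|; require |(X_Q)_13|^2 < C_tt
L_max = np.sqrt(C_tt / 3) / (V_ub * V_tb)
print(f"L < {L_max:.0f}")  # ~12, matching Eq. (99)
```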
To conclude, we learn that for ∆F = 2 processes, the existing bound is stronger than the one which will be obtained at the LHC for top quarks, as opposed to ∆F = 1 case considered above.
Minimal flavor violation
As we have seen above, SM extensions with general flavor structure are strongly constrained by measurements. This is a consequence of the fact that, within the SM, flavor conversion and CP violation arise in a unique and suppressed manner. It is therefore valuable to investigate beyond the SM theories, where the breaking of the global flavor symmetries is induced by the same source as in the SM. In such models, which go under the name of minimal flavor violation (MFV), flavor violating interactions are only generated by the SM Yukawa couplings (see e.g. [37,87,88,89,90,91]). Although we only consider here the quark sector, the notion of MFV can be extended also to the lepton sector. However, there is no unique way to define the minimal sources of flavor symmetry breaking, if one wants to keep track of non-vanishing neutrino masses [92,93,94].
In addition to the suppression of FCNCs, there are two important aspects of the MFV framework: First, low energy flavor conversion processes can be described by a small set of operators in an effective Lagrangian, without reference to a specific model. Furthermore, MFV arises naturally as a low energy limit of a sizable class of models, in which the flavor hierarchy is generated at a scale much higher than other dynamical scales. Examples of microscopic theories which flow to MFV at the IR are supersymmetric models with gauge or anomaly mediation [95,96,97,98] and a certain class of warped extra dimension models [99,100,101,102,103,104].
The basic idea can be described in the language of effective field theory, without the need of referring to a specific framework. MFV models can have very different microscopic dynamics, yet by definition they all share a common origin of flavor breaking: the SM Yukawa matrices. After integrating out the NP degrees of freedom, we expect to obtain a low energy effective theory which involves only the SM fields and a set of higher dimension, Lorentz and gauge invariant, operators suppressed by the NP scale Λ_MFV. Since flavor is broken only via the SM Yukawas, we can study the flavor breaking of the MFV framework by the following simple prescription: We construct the most general set of higher dimensional operators which, in addition to being Lorentz and gauge invariant, are also required to be flavor invariant, using the spurion analysis introduced above. A simple example of such an operator is the one from Eq. (39), where the matrix that mixes the generations is a combination of the appropriate Yukawa matrices,
O^MFV_1 = (1/Λ²_MFV) [ Q̄_i (a_u A_Qu + a_d A_Qd + …)_ij γ^µ Q_j ]² .  (101)
Here the dots represent higher order terms in A_Qu and A_Qd. It is important to realize that, quite often in models which exhibit MFV-like behavior, the Yukawa couplings are accompanied by constant factors, so that they appear as x_U Y_U and x_D Y_D. These factors may arise from loop suppression, RGE running, etc. In general they are not necessarily small: for example, large logs from the RGE flow may compensate for a loop suppression. We should thus consider these "effective" Yukawa couplings x_i Y_i, rather than just Y_i, as the relevant spurions, since operators would generically involve f(x_U Y_U, x_D Y_D).
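The defining property behind Eq. (101) — that any function of A_Qu and A_Qd transforms covariantly under the flavor group, so the operator built from it is flavor invariant — can be verified numerically. The sketch below uses random "Yukawa" spurions and random unitary rotations; all names and values are hypothetical illustrations, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

# Arbitrary complex "Yukawa" spurions (any values work for this check)
YU = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
YD = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def flavor_structure(YU, YD, au=0.7, ad=0.3):
    # a_u A_Qu + a_d A_Qd with A_Q = Y Y^dag (traceless part omitted;
    # it does not affect the covariance property)
    return au * YU @ YU.conj().T + ad * YD @ YD.conj().T

VQ, VU, VD = random_unitary(3), random_unitary(3), random_unitary(3)
# Under G_SM: Y_U -> VQ Y_U VU^dag, Y_D -> VQ Y_D VD^dag,
# hence the structure rotates as M -> VQ M VQ^dag and Qbar M Q is invariant
M_rot = flavor_structure(VQ @ YU @ VU.conj().T, VQ @ YD @ VD.conj().T)
assert np.allclose(M_rot, VQ @ flavor_structure(YU, YD) @ VQ.conj().T)
print("flavor structure transforms covariantly")
```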
To gain further insight into the structure of MFV models, it is useful to classify the framework according to the strength of breaking of the individual flavor group components. Since, within the SM, and also as suggested by the data, the only source of CP breaking is the CKM phase, it is also useful to extend G_SM of Eq. (2) to include CP as a discrete group:
G^CP_SM = U(3)_Q × U(3)_U × U(3)_D × CP .  (102)
The low energy phenomenology of MFV models can be divided as follows (see below for details):
(i) Small effective Yukawas — The SM flavor group G^CP_SM is approximately preserved by the NP dynamics, in the sense that all the effective Yukawa couplings are small,

x_U y^i_U , x_D y^i_D ≪ 1 .  (103)

The remaining cases, with a large effective top and/or bottom Yukawa, are discussed in the following subsections. Obviously, within the MFV framework, a built-in approximate U(2) symmetry for the first two generations is guaranteed for the low energy phenomenology [40]. As we shall see, the models which belong to the MFV class, especially in cases (i)-(iii), enjoy much of the protection against large flavor violation that we have found to exist in the SM case, and therefore tend to be consistent with current flavor precision measurements (for reviews see e.g. [55,56,57] and refs. therein).
MFV with small effective bottom Yukawa
Here we deal with the first two cases, (i)+(ii), where x_D y_b ≪ 1. In the following we absorb the x factors into the Yukawas for simplicity of notation.
Small effective top Yukawa
If we are interested in SM processes where the typical energy scale is much lower than Λ MFV and the NP is not strongly coupled, then we expect that the dominant non-SM flavor violation would arise from the lowest order higher dimension operators. For processes involving quark fields, the leading operators are of dimension six. Consider, for instance, the following ∆F = 2 MFV Lagrangian:
L^{∆F=2}_MFV = (1/Λ²_MFV) [ Q̄_i (a_u A_Qu + a_d A_Qd)_ij Q_j ]² + (1/Λ²_MFV) [ Q̄_i (b A_Qu Y_D)_ij D_j ]² + … ,  (104)
where we write a LL operator and a LR operator for down quarks, and we assume that they are both suppressed by the same MFV scale 17 . We can immediately reach two conclusions: First, the LR operator is subdominant, since its lowest order flavor violating contribution contains three Yukawa matrices, compared to two for the LL operator (note that a term of the form Q i (Y D ) ij D j does not induce down flavor conversion). Next, we only need to take into account the leading terms, as a result of the small effective Yukawas. Therefore, this case can be named linear MFV (LMFV) [40].
Let us, for instance, focus on flavor violation in the down sector, which is more severely constrained. We want to estimate what is the size of flavor violation which is mediated by L ∆F =2 MFV , restricting ourselves to the first operator for now. The experimental information is obtained by looking at the dynamics (masses, mass differences, decay, time evolution etc.) of down type mesons, hence we can just look at the form that L ∆F =2 MFV takes in the down quark mass basis. By definition A Q d is then diagonal, and does not mediate flavor violation, but A Q u is not diagonal and is given by
(A_Qu)_down = V_CKM diag(0, 0, y²_t) V†_CKM − (y²_t/3) 1₃ + O(m²_c/m²_t) ≈ y²_t V^CKM_ti V^CKM*_tj ,  (105)
where we take advantage of the approximate U (2) symmetry discussed before. As expected, we find that within the MFV framework, FCNC processes are suppressed by roughly the same amount as the SM processes, and therefore are typically consistent with present data, at least to leading order.
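Eq. (105) can be checked numerically: rotating diag(0, y²_c, y²_t) into the down mass basis with an approximate Wolfenstein CKM matrix reproduces the rank-one top approximation up to O(m²_c/m²_t) relative corrections. All numerical inputs below are rough assumed values for the sketch:

```python
import numpy as np

# Wolfenstein-parameterized CKM matrix to O(lambda^3) (approximate inputs)
lam, A, rho, eta = 0.225, 0.82, 0.14, 0.35
V = np.array([
    [1 - lam**2 / 2,                    lam,            A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,    1.0],
])
yt, yc = 0.94, 0.0073  # rough top and charm Yukawas

# Exact up-Yukawa structure in the down basis vs. the top-only approximation
# of Eq. (105) (the flavor-blind trace term is dropped)
A_down = V.conj().T @ np.diag([0.0, yc**2, yt**2]) @ V
approx = yt**2 * np.outer(V[2].conj(), V[2])  # V[2] = (V_td, V_ts, V_tb)

rel = abs(A_down[0, 2] - approx[0, 2]) / abs(approx[0, 2])
print(rel)  # O(m_c^2/m_t^2) -- far below a percent
```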
Within the LMFV framework, several of the constraints used to determine the CKM matrix (and, in particular, the unitarity triangle) are not affected by NP [90,91]. In this context, NP effects are negligible not only in tree level processes but also in a few clean observables sensitive to loop effects, such as the time dependent CP asymmetry in B_d → ψK_{L,S}. Indeed, the structure of the basic flavor changing coupling which results from Eqs. (104) and (105) implies that the weak CPV phase of the (b̄d)² operator, related to B_d − B̄_d mixing, is arg[(V^CKM_td V^CKM*_tb)²], exactly as in the SM. This construction provides a natural (a posteriori) justification of why no NP effects have been observed in the quark sector: By construction, most of the clean observables measured at B factories are insensitive to NP effects in the LMFV framework.
In Table 2 we report a few representative examples of bounds on higher dimensional operators in the LMFV framework. For simplicity, only the leading spurion dependence is shown in the left-hand column. The built-in CKM suppression leads to bounds on the effective NP scale not far from the TeV region. These bounds are very similar to the bounds on flavor conserving operators derived from precision electroweak tests. This observation reinforces the conclusion that a deeper study of rare decays is definitely needed in order to clarify the flavor problem: The experimental precision on the clean FCNC observables required to obtain bounds more stringent than those derived from precision electroweak tests (and possibly to discover new physics) is typically in the 1 − 10% range. Table 3 demonstrates that discriminating between the SM and a theory with LMFV behavior is expected to be a difficult task.
Large effective top Yukawa
The consequence of a large effective top Yukawa, x_U y_t ≳ 1, is the need to take into account higher order terms in the up Yukawa matrix, and to resum all these terms into a single effective contribution.
One subtlety does arise in this case: Contributions to 1 → 2 transitions which proceed through the charm and the top are correlated within LMFV, but are independent in the current case (see Sec. 6.3 below, and specifically the discussion around Eqs. (123) and (124)). Distinguishing between these cases can be achieved by comparing K⁺ → π⁺νν̄ and the CPV decay K_L → π⁰νν̄, or via ε_K. This needs to be accomplished, both theoretically and experimentally, to the level of O(m²_c/m²_t). Unfortunately, the smallness of this difference prevents tests of the first in the near future, while the second is masked by long distance contributions at the level of a few percent [110]. Nevertheless, the ability to discriminate between these two cases is of high theoretical importance, since it yields information about short distance physics (such as the mediation scale of supersymmetry breaking, via the size of the logs or the anomalous dimensions) well beyond the direct reach of near future experiments.

Table 2: Bounds on the scale of representative operators in the LMFV framework, and the observables from which they derive.

Operator | Bound on Λ_MFV | Observables
H†_D D̄_R Y†_D A_Qu σ_µν Q_L (e F_µν) | 6.1 TeV | B → X_s γ, B → X_s ℓ⁺ℓ⁻
(1/2) (Q̄_L A_Qu γ_µ Q_L)² | 5.9 TeV | ε_K, ∆m_Bd, ∆m_Bs
H†_D D̄_R Y†_D A_Qu σ_µν T^a Q_L (g_s G^a_µν) | 3.4 TeV | B → X_s γ, B → X_s ℓ⁺ℓ⁻
(Q̄_L A_Qu γ_µ Q_L)(Ē_R γ^µ E_R) | 2.7 TeV | B → X_s ℓ⁺ℓ⁻, B_s → µ⁺µ⁻
i (Q̄_L A_Qu γ_µ Q_L) H†_U D_µ H_U | 2.3 TeV | B → X_s ℓ⁺ℓ⁻, B_s → µ⁺µ⁻
(Q̄_L A_Qu γ_µ Q_L)(L̄_L γ^µ L_L) | 1.7 TeV | B → X_s ℓ⁺ℓ⁻, B_s → µ⁺µ⁻
(Q̄_L A_Qu γ_µ Q_L)(e D^µ F_µν) | 1.5 TeV | B → X_s ℓ⁺ℓ⁻

Table 3: Experimental bounds, LMFV predictions and SM expectations for a few rare decay observables.

Observable | Experimental bound | LMFV | SM
A_CP(B → X_s γ) | < 6% @ 95% CL | < 0.02 | < 0.01
B(B_d → µ⁺µ⁻) | < 1.8 × 10⁻⁸ | < 1.2 × 10⁻⁹ | 1.3(3) × 10⁻¹⁰
B(B → X_s τ⁺τ⁻) | — | < 5 × 10⁻⁷ | 1.6(5) × 10⁻⁷
B(K_L → π⁰νν̄) | < 2.6 × 10⁻⁸ @ 90% CL | < 2.9 × 10⁻¹⁰ | 2.9(5) × 10⁻¹¹
Large bottom Yukawa
The effects of a large effective bottom Yukawa usually appear in two Higgs doublet models (such as supersymmetry), but they can also be found in other NP frameworks without an extended Higgs sector, where x D y b is of order one due to a large value of x D . In any case, we can still assume that the Yukawa couplings are the only irreducible breaking sources of the flavor group.
For concreteness, we analyze the case of a two Higgs doublet model, which is described by the Lagrangian in Eq. (1) (focusing only on the quark sector) with independent H U and H D . This Lagrangian is invariant under an extra U (1) symmetry with respect to the one Higgs case -a symmetry under which the only charged fields are D (charge +1) and H D (charge −1). This symmetry, denoted U (1) PQ , prevents tree level FCNCs, and implies that Y U,D are the only sources of flavor breaking appearing in the Yukawa interaction (similar to the one Higgs doublet scenario). By assumption, this also holds for all the low energy effective operators. This is sufficient to ensure that flavor mixing is still governed by the CKM matrix, and naturally guarantees a good agreement with present data in the ∆F = 2 sector. However, the extra symmetry of the Yukawa interaction allows us to change the overall normalization of Y U,D with interesting phenomenological consequences in specific rare modes.
The normalization of the Yukawa couplings is controlled by the ratio of the vacuum expectation values of the two Higgs fields, or by the parameter
tan β = ⟨H_U⟩ / ⟨H_D⟩ .  (106)
For tan β ≫ 1, the smallness of the b quark (and τ lepton) mass can be attributed to the smallness of 1/tan β, rather than to the corresponding Yukawa coupling. As a result, for tan β ≫ 1 we can no longer neglect the down type Yukawa coupling. Moreover, the U(1)_PQ symmetry cannot be exact: it has to be broken at least in the scalar potential, in order to avoid the presence of a massless pseudoscalar Higgs. Even if the breaking of U(1)_PQ and G_SM are decoupled, the presence of U(1)_PQ breaking sources can have important implications for the structure of the Yukawa interaction, especially if tan β is large [37,111,112,113].
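The statement that the bottom Yukawa becomes O(1) at large tan β can be illustrated with the type-II two Higgs doublet relation m_b = y_b v cos β / √2; the running mass value used below is a rough assumption:

```python
import math

v = 246.0   # electroweak vev, GeV
mb = 2.9    # running bottom mass near the weak scale, GeV (rough assumption)

def yb(tan_beta):
    # Type-II relation: m_b = y_b * v * cos(beta) / sqrt(2)
    beta = math.atan(tan_beta)
    return math.sqrt(2) * mb / (v * math.cos(beta))

print(yb(1.0), yb(50.0))  # tiny at tan(beta) ~ 1, O(1) at tan(beta) ~ 50
```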
Since the b quark Yukawa coupling becomes O(1), the large tan β regime is particularly interesting for helicity-suppressed observables in B physics. One of the clearest phenomenological consequences is a suppression (typically in the 10 − 50% range) of the B → τν decay rate with respect to its SM expectation [114,115,116]. Potentially measurable effects in the 10 − 30% range are expected also in B → X_s γ [117,118,119] and ∆M_Bs [120,121]. Given the present measurements of B → τν, B → X_s γ and ∆M_Bs, none of these effects seems to be favored by the data. However, present errors are still sizable compared to the estimated NP effects.
The most striking signature could arise from the rare decays B_{s,d} → ℓ⁺ℓ⁻, whose rates could be enhanced over the SM expectations by more than one order of magnitude [122,123,124]. Dramatic effects are also possible in the up sector. The leading contribution of the LL operator to D − D̄ mixing is given by
C^cu_1 ∝ [ y²_s V^CKM*_cs V^CKM_us + (1 + r_GMFV) y²_b V^CKM*_cb V^CKM_ub ]² ∼ 3 × 10⁻⁸ ζ₁ ,  (107)
for tan β ∼ m_t/m_b, where r_GMFV accounts for the necessary resummation of the down Yukawa and is expected to be an order one number. In such a case, the simple relation between the contributions from the strange and bottom quarks does not apply [40]. We thus have
ζ₁ = e^{2iγ} + 2 r_sb e^{iγ} + r²_sb ∼ 1.7i + r_GMFV [2.4i − 1 − 0.7 r_GMFV (1 + i)] ,  r_sb ≡ (y²_s/y²_b) (V^CKM_us V^CKM_cs)/(V^CKM_ub V^CKM_cb) ∼ 0.5 ,  (108)
where γ ≈ 67° is the relevant phase of the unitarity triangle. We thus learn that MFV models with two Higgs doublets can contribute to D − D̄ mixing up to O(0.1) for very large tan β, assuming a TeV NP scale. Moreover, the CPV part of these contributions is not suppressed compared to the CP conserving part, and can provide a measurable signal. In Fig. 9 we show in pink (yellow) the range predicted by the LMFV (GMFV) class of models. The GMFV yellow band is obtained by scanning the range r_GMFV ∈ (−1, +1) (but keeping the magnitude of C^cu_1 fixed for simplicity). Sizable contributions to top FCNCs can also emerge for large tan β. For an MFV scale of ∼ 1 TeV, this can lead to Br(t → cX) ∼ O(10⁻⁵) [40], which may be within the reach of the LHC.
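The estimate r_sb ∼ 0.5 in Eq. (108) can be reproduced with rough running quark masses and CKM magnitudes; all inputs are assumptions of this sketch:

```python
# Rough running masses near the top mass scale and CKM magnitudes (assumed)
ms, mb = 0.055, 2.9                       # GeV
Vus, Vcs, Vub, Vcb = 0.225, 0.973, 3.7e-3, 41e-3

r_sb = (ms / mb) ** 2 * (Vus * Vcs) / (Vub * Vcb)
print(r_sb)  # close to the ~0.5 quoted in Eq. (108)
```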
General MFV
The breaking of the G SM flavor group and the breaking of the discrete CP symmetry are not necessarily related, and we can add flavor diagonal CPV phases to generic MFV models [60,61,125]. Because of the experimental constraints on electric dipole moments (EDMs), which are generally sensitive to such flavor diagonal phases [61], in this more general case the bounds on the NP scale are substantially higher with respect to the "minimal" case, where the Yukawa couplings are assumed to be the only breaking sources of both symmetries [37].
If tan β is large, the inclusion of flavor diagonal phases has interesting effects also in flavor changing processes [126,127,128]; the main consequences, derived in a model independent manner, are summarized in [40]. We now analyze this general MFV case, where both the top and bottom effective Yukawas are large and flavor diagonal phases are present, in detail. We emphasize the differences between the LMFV case and the non-linear MFV (NLMFV) one. It is shown below that even in the general scenario there is a systematic expansion in the small quantities V^CKM_td, V^CKM_ts and the light quark masses, while resumming in y_t and y_b. This is achieved via a parametrization borrowed from non-linear σ-models.^18 Namely, in the limit of vanishing weak gauge coupling (or m_W → ∞), U(3)_Q is enhanced to U(3)_{Qu} × U(3)_{Qd}, and the misalignment between the two factors is only communicated once the weak interaction is turned on. It should be stressed that while below we implicitly assume a two Higgs doublet model, to allow for a large bottom Yukawa coupling, this assumption is not necessary and the analysis is essentially model independent.
As discussed in Sec. 3.3, the breaking of the flavor group is dominated by the top and bottom Yukawa couplings. Yet here we also assume that the relevant off-diagonal elements of V_CKM are small, so the residual approximate symmetry is H_SM = U(2)_Q × U(2)_U × U(2)_D × U(1)₃, where the U(1)₃ factor acts on the third generation. The broken symmetry generators live in the G_SM/H_SM cosets. It is useful to factor them out of the Yukawa matrices, so we parameterize
Y_{U,D} = e^{iρ̄_Q} e^{±iχ̄/2} Ỹ_{U,D} e^{−iρ̄_{U,D}} ,  (109)

where the reduced Yukawa spurions, Ỹ_{U,D}, are

Ỹ_{U,D} = ( φ_{U,D}  0 ; 0  y_{t,b} ) .  (110)
18 Another non-linear parameterization of MFV was presented in [129].
Here φ_{U,D} are 2 × 2 complex spurions, while χ̄ and ρ̄_i, i = Q, U, D, are the 3 × 3 matrices spanned by the broken generators. Explicitly,

χ̄ = ( 0  χ ; χ†  0 ) ,  ρ̄_i = ( 0  ρ_i ; ρ†_i  θ_i ) ,  i = Q, U, D ,  (111)
where χ and ρ_i are two dimensional vectors. The ρ̄_i shift under the broken generators, and therefore play the role of spurion "Goldstone bosons"; thus the ρ_i have no physical significance. On the other hand, χ parameterizes the misalignment of the up and down Yukawa couplings, and therefore corresponds to V^CKM_td and V^CKM_ts in the low energy effective theory (see Eq. (119)). Under the flavor group, the above spurions transform as
e^{iρ̄_i} → V_i e^{iρ̄_i} U†_i ,  e^{iχ̄} → U_Q e^{iχ̄} U†_Q ,  Ỹ_i → U_Q Ỹ_i U†_i .  (112)
Here U_i = U_i(V_i, ρ̄_i) are (reducible) unitary representations of the unbroken flavor subgroup U(2)_i × U(1)₃:

U_i = ( U^{2×2}_i  0 ; 0  e^{iϕ_Q} ) ,  i = Q, U, D .  (113)

For V_i ∈ H_SM, U_i = V_i. Note that χ [ρ_i] are fundamentals of U(2)_Q [U(2)_i] carrying charge −1 under U(1)₃, while φ_{U,D} are bi-fundamentals of U(2)_Q × U(2)_{U,D}.
As a final step we also redefine the quark fields by modding out the "Goldstone spurions":

ũ_L = e^{−iχ̄/2} e^{−iρ̄_Q} u_L ,  d̃_L = e^{iχ̄/2} e^{−iρ̄_Q} d_L ,  (114)
ũ_R = e^{−iρ̄_U} u_R ,  d̃_R = e^{−iρ̄_D} d_R .  (115)
The latter form reducible representations of H_SM. Concentrating here and below on the down sector, we therefore define d̃_{L,R} = (d̃^(2)_{L,R}, b̃_{L,R}), where d̃^(2)_{L,R} stands for the first two generations. With the redefinitions above, invariance under the full flavor group is captured by invariance under the unbroken flavor subgroup H_SM (see e.g. [130]). Thus, GMFV can be described, without loss of generality, as a formally H_SM-invariant expansion in φ_{U,D} and χ. This is a straightforward generalization of the known effective field theory description of spontaneous symmetry breaking [130]. The only difference in our case is that Y_U and Y_D are not aligned, as manifested by χ ≠ 0. Since the background field values of the relevant spurions are small, we can expand in them.
We are now in a position to write down the flavor structure of quark bilinears from which low energy flavor observables can be constructed. We work to leading order in the spurions that break H SM , but to all orders in the top and bottom Yukawa couplings. Beginning with the LL bilinears, to second order in χ and φ U,D , one finds (omitting gauge and Lorentz indices)
b̄_L b_L ,  d̄^(2)_L d^(2)_L ,  d̄^(2)_L φ_U φ†_U d^(2)_L ,  (116)
d̄^(2)_L χ b_L ,  b̄_L χ†χ b_L ,  d̄^(2)_L χχ† d^(2)_L .  (117)
The first two bilinears in Eq. (116) are diagonal in the down quark mass basis, and do not induce flavor violation. In this basis, the Yukawa couplings take the form
Y U = V CKM † diag(m u , m c , m t ) , Y D = diag(m d , m s , m b ) .(118)
This corresponds to the spurions taking the background values ρ̄_Q = χ̄/2, ρ̄_{U,D} = 0 and φ_D = diag(m_d, m_s)/m_b, while flavor violation is induced via

χ† = i (V^CKM_td , V^CKM_ts) ,  φ_U = V^{CKM(2)†} diag(m_u/m_t , m_c/m_t) ,  (119)

where V^{CKM(2)} stands for a two generation CKM matrix. In terms of the Wolfenstein parameter λ, the flavor violating spurions scale as χ ∼ (λ³, λ²) and (φ_U)_12 ∼ λ⁵. Note that the redefined down quark fields, Eqs. (114,115), coincide with the mass eigenstate basis, d̃_{L,R} = d_{L,R}, for the above choice of spurion background values.
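The quoted Wolfenstein scalings, χ ∼ (λ³, λ²) and (φ_U)_12 ∼ λ⁵, can be checked numerically with rough CKM and quark mass inputs (all assumed values for this sketch):

```python
lam = 0.225                  # Wolfenstein parameter
Vtd, Vts = 8.1e-3, 39e-3     # CKM magnitudes (rough)
mc, mt = 0.62, 173.0         # running charm mass at m_t and top mass, GeV

# chi ~ (lambda^3, lambda^2): both ratios should be O(1)
r_td, r_ts = Vtd / lam**3, Vts / lam**2
# (phi_U)_12 ~ lambda^5: roughly the Cabibbo angle times m_c/m_t
phiU_12 = lam * mc / mt
r_phi = phiU_12 / lam**5

print(r_td, r_ts, r_phi)  # all of order one
```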
The LR and RR bilinears which contribute to flavor mixing are in turn (at leading order in the χ and φ_{U,D} spurions)

d̄^(2)_L χ b_R ,  d̄^(2)_L χχ†φ_D d^(2)_R ,  b̄_L χ†φ_D d^(2)_R ,  (120)
d̄^(2)_R φ†_D χ b_R ,  d̄^(2)_R φ†_D χχ†φ_D d^(2)_R .  (121)
To make contact with the more familiar MFV notation, consider down quark flavor violation from LL bilinears. We can then expand in the Yukawa couplings,
Q̄ [ a₁ Y_U Y†_U + a₂ (Y_U Y†_U)² ] Q + [ b₂ Q̄ Y_U Y†_U Y_D Y†_D Q + h.c. ] + … ,  (122)
with a_{1,2} = O(x^{2,4}_U) and b₂ = O(x²_U x²_D). Note that the LMFV limit corresponds to a₁ ≫ a₂, b₂, and the NLMFV limit to a₁ ∼ a₂ ∼ b₂. While a_{1,2} are real, the third operator in Eq. (122) is not Hermitian, and b₂ can be complex [60], introducing a new CP violating phase beyond the SM phase. The leading flavor violating terms in Eq. (122) for the down quarks are
d̄^i_L [ (a₁ + a₂ y²_t) ξ^t_ij + a₁ ξ^c_ij ] d^j_L + [ b₂ y²_b d̄^i_L ξ^t_ib b_L + h.c. ] = [ c_b d̄^(2)_L χ b_L + h.c. ] + c_t d̄^(2)_L χχ† d^(2)_L + c_c d̄^(2)_L φ_U φ†_U d^(2)_L ,  (123)
where ξ^k_ij = y²_k V^CKM*_ki V^CKM_kj with i ≠ j. On the right-hand side we have used the general parameterization in Eqs. (116,117), with
c_b ≃ a₁ y²_t + a₂ y⁴_t + b₂ y²_b ,  c_t ≃ a₁ y²_t + a₂ y⁴_t ,  c_c ≃ a₁ ,  (124)
to leading order. The contribution of the c_c bilinear to flavor changing transitions is O(1%) compared to that of the c_t bilinear, and can thus be neglected in practice. A novel feature of NLMFV is the potential for observable CPV from RH currents, to which we return below. Other important distinctions can be readily understood from Eq. (123). In NLMFV (with large tan β) the extra flavor diagonal CPV phase Im(c_b) can be large, leading to observable deviations in the B_{d,s} − B̄_{d,s} mixing phases, while no such deviations arise in LMFV. Another example is b → sνν̄ and s → dνν̄ transitions, which receive contributions only from a single operator in Eq. (123) multiplied by the neutrino currents. Thus, new contributions to B → X_s νν̄, B → Kνν̄ vs. K_L → π⁰νν̄, K⁺ → π⁺νν̄ are correlated in LMFV (where c_b ≃ c_t), see e.g. [109,131,132], but are independent in NLMFV with large tan β. O(1) effects in the rates would correspond to an effective scale Λ_MFV ∼ 3 TeV in the four fermion operators, with smaller effects scaling like 1/Λ_MFV due to interference with the SM contributions. Other interesting NLMFV effects involving the third generation, such as large deviations in Br(B_{d,s} → µ⁺µ⁻) and b → sγ, arise in the minimal supersymmetric standard model (MSSM) at large tan β, where resummation is required [117,118,120,133].
Assuming MFV, new CPV effects can be significant if and only if the UV theory contains new flavor diagonal CP sources. The proof is as follows. If no flavor diagonal phases are present, CPV only arises from the CKM phase. In the exact U(2)_Q limit, the CKM phase can be removed and the theory becomes CP invariant (at all scales). The only spurions that break the U(2)_Q flavor symmetry are φ_{U,D} and χ. CPV in operators linear in χ is directly proportional to the CKM phase (see Eq. (123)). Any additional contributions are suppressed by at least [φ†_U φ_U, φ†_D φ_D] ∼ (m_s/m_b)² (m_c/m_t)² sin θ_C ∼ 10⁻⁹, and are therefore negligible.
Flavor diagonal weak phases in NLMFV can lead to new CPV effects in 3 → 1 and 3 → 2 decays. An example is the ∆B = 1 electromagnetic and chromomagnetic dipole operators constructed from the first bilinear in Eq. (120). The operators are not Hermitian, hence their Wilson coefficients can contain new CPV phases. Without new phases, the untagged direct CP asymmetry in B → X_{d,s} γ would essentially vanish due to the residual U(2) symmetry, as in the SM [134], and the B → X_s γ asymmetry would be less than a percent. However, in the NLMFV limit (large y_b), non-vanishing phases can yield significant CPV in both the untagged and the B → X_s γ decays, and the new CPV contributions to B → X_s γ and B → X_d γ would be strongly correlated. Supersymmetric examples of this kind were studied in [135,136,137], where new phases were discussed.
Next, consider the NLMFV ∆b = 2 effective operators. They are not Hermitian, hence their Wilson coefficients C_i/Λ²_MFV can also contain new CP violating phases. The operators can be divided into two classes: class-1, which does not contain light RH quarks [(d̄^(2)_L χ b_{L,R})², …]; and class-2, which does [(d̄^(2)_R φ†_D χ b_L)², …]. Based on the analysis of [138,139], we conclude that class-1 yields the same weak phase shift in B_d − B̄_d and B_s − B̄_s mixing relative to the SM. The class-1 contribution would dominate if Λ_MFV is comparable for all the operators. For example, in the limit of equal Wilson coefficients C_i/Λ²_MFV, the class-2 contribution to B_s − B̄_s mixing would be ≈ 5% of class-1. The maximal allowed magnitude of CPV in the B_d system is smaller than roughly 20%. Quantitatively, for Im(C_i) ≈ 1, this corresponds to Λ_MFV ≈ 18 TeV for the leading class-1 operator, which applies to the B_s system as well. Thus, sizable CPV in the B_s system would require class-2 contributions, with O(1) CPV corresponding to Λ_MFV ≈ 1.5 TeV for the leading class-2 operator. Conversely, barring cancelations, within NLMFV models NP CPV in B_s − B̄_s mixing provides an upper bound on NP CPV in B_d − B̄_d mixing.
For 2 → 1 transitions, the new CPV phases come suppressed by powers of m_{d,s}/m_b. All the 2 → 1 bilinears in Eqs. (116), (117), (120) and (121) are Hermitian, with the exception of d̄^(2)_L χχ†φ_D d^(2)_R. This provides the leading contribution to ε_K from a non-SM phase, coming from the operator O_LR = (d̄^(2)_L χχ†φ_D d^(2)_R)². Its contribution is ≈ 2% of that of the SM operator O_LL = (d̄^(2)_L χχ† d^(2)_L)² for comparable Wilson coefficients C_LR, C_LL over Λ²_MFV. For C_LL, Im(C_LR) ≈ 1, a new contribution to ε_K at 50% of the measured value would correspond to Λ_MFV ≈ 5 TeV for O_LL and Λ_MFV ≈ 0.8 TeV for O_LR.
Note that the above new CPV effects can only be sizable in the large tan β limit. They arise from non-Hermitian operators (such as the second operator in (122)), and are therefore of higher order in the Y D expansion. Whereas we have been working in the large tan β limit, it is straightforward to incorporate the small tan β limit (discussed above in Sec. 6.1.2) into our formalism. In that case the flavor group is broken down to U (2) Q × U (2) U × U (1) t × U (3) D , and the expansion in Eq. (109) no longer holds. In particular, resummation over y b is not required. Flavor violation is described by linearly expanding in the down type Yukawa couplings, from which it follows that contributions proportional to the bottom Yukawa are further suppressed beyond the SM CKM suppression.
It should also be pointed out that NLMFV differs from the next-to-MFV framework [4,5], since the latter exhibits additional spurions at low energy.
MFV in covariant language
The covariant formalism described in Sec. 4 enables us to offer further insight on the MFV framework. In the LMFV case, the NP source X Q from Eq. (39) or Eq. (87) is a linear combination of the A Q d and A Q u "vectors", naturally with O(1) coefficients at most. Hence we can immediately infer that no new CPV sources exist, as all vectors are on the same plane, and that the induced flavor violation is small (recall that the angle between A Q u and A Q d is small -O(λ 2 )). These conclusions are of course already known, but they emerge naturally when using the covariant language.
In the GMFV scenario, X_Q is a general function of A_Qu and A_Qd. We can alternatively express it in terms of the covariant basis introduced in Sec. 4.2.2, since this basis is constructed using only A_Qu and A_Qd. Then, it is easy to see that an arbitrary function of the Yukawa matrices could produce any kind of flavor and CP violation [60,61,62]. However, the directions denoted by D̂ require higher powers of the Yukawas, so their contribution is generically much smaller (in [60] it was noticed that some directions, which we identify as D̂, are not generated via RGE flow). Therefore, the induced flavor and CP violation tend to be restricted to the submanifold which corresponds to the U(2)_Q limit (that is, the directions denoted by Q̂_u, Q̂_d, Ĵ, Ĵ_{u,d} and Ĉ_{u,d}).
Supersymmetry
Supersymmetric models provide, in general, new sources of flavor violation, for both the quark and the lepton sectors. The main new sources are the supersymmetry breaking soft mass terms for squarks and sleptons and the trilinear couplings of a Higgs field with a squark-antisquark or slepton-antislepton pairs. Let us focus on the squark sector. The new sources of flavor violation are most commonly analyzed in the basis in which the corresponding (down or up) quark mass matrix and the neutral gaugino vertices are diagonal. In this basis, the squark masses are not necessarily flavor-diagonal, and have the form
q̃*_Mi (M²_q̃)^{MN}_{ij} q̃_Nj = ( q̃*_Li  q̃*_Rk ) ( (M²_q̃)_{Lij}  A^q_{il} v_q ; A^q_{jk} v_q  (M²_q̃)_{Rkl} ) ( q̃_Lj ; q̃_Rl ) ,  (125)
where M, N = L, R label chirality, and i, j, k, l = 1, 2, 3 are generation indices. (M²_q̃)_L and (M²_q̃)_R are the supersymmetry breaking squark masses-squared. The A^q parameters enter in the trilinear scalar couplings A^q_{ij} H_q q̃_{Li} q̃*_{Rj}, where H_q (q = u, d) is the q-type Higgs boson and v_q = ⟨H_q⟩. In this basis, flavor violation takes place through one or more squark mass insertions. Each mass insertion brings with it a factor of (δ^q_ij)_MN ≡ (M²_q̃)_{MN,ij}/m̃²_q, where m̃²_q is a representative q-squark mass scale. Physical processes therefore constrain
[(δ^q_ij)_MN]_eff ∼ max[ (δ^q_ij)_MN , (δ^q_ik)_MP (δ^q_kj)_PN , … , (i ↔ j) ] .  (126)
For example,
[(δ^d_12)_LR]_eff ∼ max[ A^d_12 v_d/m̃²_d , (M²_d̃)_{L1k} A^d_{k2} v_d/m̃⁴_d , A^d_{1k} v_d (M²_d̃)_{Rk2}/m̃⁴_d , … , (1 ↔ 2) ] .  (127)
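A minimal sketch of Eq. (126): the effective insertion is the maximum over single and double insertion chains. Chirality labels are suppressed for simplicity, and all numerical inputs below are hypothetical:

```python
# Toy implementation of Eq. (126): effective mass insertion between
# generations i and j, from single and double insertion chains.
def delta_eff(delta, i, j):
    chains = [abs(delta[(i, j)]), abs(delta[(j, i)])]  # single insertions
    for k in (1, 2, 3):
        if k not in (i, j):
            # double insertions through the intermediate generation k
            chains.append(abs(delta[(i, k)] * delta[(k, j)]))
            chains.append(abs(delta[(j, k)] * delta[(k, i)]))
    return max(chains)

# Hypothetical single-insertion values
delta = {(1, 2): 1e-4, (2, 1): 5e-5, (1, 3): 0.05, (3, 1): 0.05,
         (2, 3): 0.04, (3, 2): 0.04}
print(delta_eff(delta, 1, 2))  # the double insertion via the third generation wins
```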
Note that the contributions with two or more insertions may be less suppressed than those with only one. In terms of mass basis parameters, the (δ q ij ) M M 's stand for a combination of mass splittings and mixing angles:
(δ^q_ij)_MM = (1/m̃²_q) Σ_α (K^q_M)_{iα} (K^q_M)*_{jα} ∆m̃²_{qα} ,  (128)
where K^q_M is the mixing matrix in the coupling of the gluino (and similarly for the bino and neutral wino) to q_{Li} − q̃_{Mα}; m̃²_q = (1/3) Σ³_{α=1} m̃²_{q_Mα} is the average squark mass-squared, and ∆m̃²_{qα} = m̃²_{qα} − m̃²_q. Things simplify considerably when the following two conditions are satisfied [140,141], which means that a two generation effective framework can be used (for simplicity, we omit here the chirality index):
|K_ik K*_jk| ≪ |K_ij K*_jj| ,  |K_ik K*_jk ∆m̃²_{q_k q_i}| ≪ |K_ij K*_jj ∆m̃²_{q_j q_i}| ,  (129)
where there is no summation over i, j, k, and where ∆m̃²_{q_j q_i} = m̃²_{q_j} − m̃²_{q_i}. Then, the contribution of the intermediate q̃_k can be neglected and, furthermore, to a good approximation, K_ii K*_ji + K_ij K*_jj = 0. For these cases, we obtain a simpler expression for the mass insertion term:
(δ^q_ij)_MM = (∆m̃²_{q_j q_i}/m̃²_q) (K^q_M)_ij (K^q_M)*_jj .  (130)
In the non-degenerate case, in particular relevant for alignment models, it is useful to take, instead of m̃_q, the mass scale m̃_{q_ij} = (m̃_{q_i} + m̃_{q_j})/2 [142], which better approximates the full expression. We also define
δ^q_ij = √[ (δ^q_ij)_LL (δ^q_ij)_RR ] .  (131)
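Eqs. (130) and (131) can be illustrated with a toy two generation spectrum. The masses and mixing angles below are hypothetical inputs, and δ^q_12 is taken as the geometric mean of the LL and RR insertions:

```python
import math

# Eq. (130): mass insertion from a squark mass splitting and a mixing angle
def delta_MM(m1, m2, K_ij, K_jj):
    m_avg_sq = ((m1 + m2) / 2.0) ** 2          # (m~_{q_ij})^2 of the text
    return (m2**2 - m1**2) / m_avg_sq * K_ij * K_jj.conjugate()

# Hypothetical squark masses (GeV) and mixing matrix elements
d_LL = delta_MM(480.0, 520.0, 0.02 + 0j, 0.99 + 0j)
d_RR = delta_MM(480.0, 520.0, 0.05 + 0j, 0.99 + 0j)

delta_12 = math.sqrt(abs(d_LL * d_RR))         # Eq. (131), geometric mean
print(abs(d_LL), abs(d_RR), delta_12)
```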
The new sources of flavor and CP violation contribute to FCNC processes via loop diagrams involving squarks and gluinos (or electroweak gauginos, or higgsinos). If the scale of the soft supersymmetry breaking is below a TeV, and if the new flavor violation is of order one, and/or if the phases are of order one, then these contributions could be orders of magnitude above the experimental bounds. Imposing that the supersymmetric contributions do not exceed the phenomenological constraints leads to conditions of the form (δ^q_ij)_MM ≪ 1. Such constraints imply that either quasi-degeneracy (∆m̃²_{q_j q_i} ≪ (m̃_{q_ij})²) or alignment (|K^q_ij| ≪ 1), or a combination of the two mechanisms, is at work. Table 4 presents the constraints obtained in Refs. [17,18,143,144] as they appear in [140]. Wherever relevant, a phase suppression of order 0.3 in the mixing amplitude is allowed, namely we quote the stronger between the bounds on Re(δ^q_ij) and 3 Im(δ^q_ij). The dependence of these bounds on the average squark mass m̃_q, the ratio x ≡ m²_g̃/m̃²_q, as well as the effect of arbitrary strong CP violating phases, can be found in [140].
Table 4: Constraints on the (δ^q_ij)_MM parameters, based on Refs. [143], [17] and [144]. Table 5: Constraints on the (δ^q_ij)_LR and δ^q_ii parameters, based on, respectively, Refs. [143], [17], [144] and [147] (with the relation between the neutron and quark EDMs as in [148]).

For large tan β, some constraints are modified from those in Table 4. For instance, the effects of neutral Higgs exchange in B_s and B_d mixing give, for tan β = 30 and x = 1 (see [140,145,146] and refs. therein for details):
$$\delta^d_{13} < 0.01\,\frac{M_{A^0}}{200\,{\rm GeV}}\,, \qquad \delta^d_{23} < 0.04\,\frac{M_{A^0}}{200\,{\rm GeV}}\,, \tag{132}$$
where M A 0 denotes the pseudoscalar Higgs mass, and the above bounds scale roughly as (30/ tan β) 2 .
The experimental constraints on the $(\delta^q_{ij})_{LR}$ parameters in the quark-squark sector are presented in Table 5. The bounds are the same for $(\delta^q_{ij})_{LR}$ and $(\delta^q_{ij})_{RL}$, except for $(\delta^d_{12})_{MN}$, where the bound for $MN = LR$ is 10 times weaker. Very strong constraints apply to the phase of $(\delta^q_{11})_{LR}$ from EDMs. For $x = 4$ and a phase smaller than 0.1, the EDM constraints on $(\delta^{u,d}_{11})_{LR}$ are weakened by a factor $\sim 6$.
While, in general, the low energy flavor measurements constrain only the combinations of the suppression factors from degeneracy and from alignment, such as Eq. (130), an interesting exception occurs when combining the measurements of K 0 -K 0 and D 0 -D 0 mixing to test the first two generation squark doublets (based on the analysis in Sec. 5.2.1). Here, for masses below the TeV scale, some level of degeneracy is unavoidable [23]:
$$\frac{m_{\tilde Q_2} - m_{\tilde Q_1}}{m_{\tilde Q_2} + m_{\tilde Q_1}} \le \begin{cases} 0.034 & \text{maximal phases} \\ 0.27 & \text{vanishing phases} \end{cases} \tag{133}$$
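To get a feel for Eq. (133), the short sketch below translates the fractional degeneracy bound into an absolute mass splitting for TeV-scale squarks; the 1 TeV reference mass is an assumption for illustration.

```python
# Back-of-the-envelope reading of Eq. (133) (illustrative): the bound
# (m2 - m1)/(m2 + m1) <= b implies m2 - m1 <= 2*b*m_avg for m_avg = (m1+m2)/2.

def max_splitting(m_avg_GeV, bound):
    """Maximal allowed absolute splitting given a fractional degeneracy bound."""
    return 2.0 * bound * m_avg_GeV

for label, b in [("maximal phases", 0.034), ("vanishing phases", 0.27)]:
    print(f"{label}: splitting < {max_splitting(1000.0, b):.0f} GeV")
```

For maximal phases, TeV squark doublets of the first two generations must be degenerate to within roughly 70 GeV.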
Similarly, using $\Delta F = 1$ processes involving the third generation (Sec. 5.2.2), the following bound is obtained [59]
$$\frac{m^2_{\tilde Q_2} - m^2_{\tilde Q_3}}{\left(m_{\tilde Q_2} + m_{\tilde Q_3}\right)^2} < 20 \left(\frac{m_{\tilde Q}}{100\,{\rm GeV}}\right)^2, \tag{134}$$
which is rather weak and insignificant in practice. The bound that stems from $\Delta F = 2$ third generation processes (Sec. 5.2.3) is [58,59]
$$\frac{m^2_{\tilde Q_1} - m^2_{\tilde Q_3}}{\left(m_{\tilde Q_1} + m_{\tilde Q_3}\right)^2} < 0.45 \left(\frac{m_{\tilde Q}}{100\,{\rm GeV}}\right)^2. \tag{135}$$
Note that the latter limit is actually determined by CPV in D mixing (see discussion in Sec. 5.2.3).
It should be mentioned that by carefully tuning the squark and gluino masses, one finds a region in parameter space where the above bounds can be ameliorated [149]. The strong constraints in Tables 4 and 5 can be satisfied if the mediation of supersymmetry breaking to the MSSM is MFV. In particular, if at the scale of mediation, the supersymmetry breaking squark masses are universal, and the A-terms (couplings of squarks to the Higgs bosons) vanish or are proportional to the Yukawa couplings, then the model is phenomenologically safe. Indeed, there are several known mechanisms of mediation that are MFV (see, e.g. [150]). In particular, gauge-mediation [95,96,151,152], anomaly-mediation [97,98], and gaugino-mediation [153] are such mechanisms. (The renormalization group flow in the MSSM with generic MFV soft-breaking terms at some high scale has recently been discussed in Refs. [60,154].) On the other hand, we do not expect gravity-mediation to be MFV, and it could provide subdominant, yet observable flavor and CP violating effects [155].
Extra Dimensions
Models of extra dimensions come in a large variety, and the corresponding phenomenology, including the implications for flavor physics, changes from one extra dimension framework to another. Yet, as in the supersymmetric case, one can classify the new sources of flavor violation which generically arise:
Bulk masses -If the SM fields propagate in the bulk of the extra dimensions, they can have bulk vector-like masses. These mass terms are of particular importance to flavor physics, since they induce fermion localization which may yield hierarchies in the low energy effective couplings. Furthermore, the bulk masses, which define the extra dimension interaction basis, do not need to commute with the Yukawa matrices, and hence might induce contributions to FCNC processes, similarly to the squark soft masses-squared in supersymmetry.
Cutoff, UV physics -Since, generically, higher dimensional field theories are non-renormalizable, they rely on unspecified microscopic dynamics to provide UV completion of the models. Hence, they can be viewed as effective field theories, and the impact of the UV physics is expected to be captured by a set of operators suppressed by the framework dependent cutoff scale. Without precise knowledge of the short distance dynamics, the additional operators are expected to carry generic flavor structure and contribute to FCNC processes. This is somewhat similar to "gravity mediated" contributions to supersymmetry breaking soft terms, which are generically expected to have an anarchic flavor structure, and are suppressed by the Planck scale.
"Brane"-localized terms -The extra dimensions have to be compact, and typically contain defects and boundaries of smaller dimensions [in order, for example, to yield a chiral low energy four dimension (4D) theory]. These special points might contain different microscopical degrees of freedom. Therefore, generically, one expects that a different and independent class of higher dimension operators may be localized to this singular region in the extra dimension manifold. (These are commonly denoted 'brane terms', even though, in most cases, they have very little to do with string theory). The brane-localized terms can, in principle, be of anarchic flavor structure, thus providing new flavor and CP violating sources. One important class of such operators are brane kinetic terms: their impact is somewhat similar to that of non-canonical kinetic terms, which generically arise in supersymmetric flavor models.
We focus on flavor physics of five dimension (5D) models, with bulk SM fields, since most of the literature focuses on this class. Furthermore, the new flavor structure that arises in 5D models captures most of the known effects of extra dimension flavor models. Assuming a flat extra dimension, the energy range, Λ 5D R (where Λ 5D is the 5D effective cutoff scale and R is the extra dimension radius with the extra dimension coordinate y ∈ (0, πR)), for which the 5D effective field theory holds, can be estimated as follows. Since gauge couplings in extra dimensional theories are dimensionful, i.e. α 5D has mass dimension −1, a rough guess (which is confirmed, up to order one corrections, by various naive dimensional analysis methods) is [156] Λ 5D ∼ 4π/α 5D . Matching this 5D gauge coupling to a 4D coupling of the SM at leading order, 1/g 2 = πR/g 2 5D , we obtain
$$\Lambda_{5D} R \sim \frac{4}{\alpha} \sim 30\,. \tag{136}$$
For a compactification scale near the TeV, this corresponds to a cutoff
$$\Lambda_{5D} \sim 10^2\,{\rm TeV} \ll \Lambda_K \sim 2\times 10^5\,{\rm TeV}\,, \tag{137}$$
where Λ K is the scale required to suppress the generic contributions to K , discussed above (see Table 1). The above discussion ignores the possibility of splitting the fermions in the extra dimension. In split fermion models, different bulk masses are assigned to different generations, which gives rise to different localizations of the fermions in the extra dimension. Consequently, they have different couplings to the Higgs, in a manner which may successfully address the SM flavor puzzle [157]. Separation in the extra dimension may suppress the contributions to K from the higher dimension cutoff-induced operators. As shown in Table 1, the most dangerous operator is
$$O^4_K = \frac{1}{\Lambda^2_{5D}}\,(\bar s_L d_R)(\bar s_R d_L)\,. \tag{138}$$
This operator contains $s$ and $d$ fields of both chiralities. As a result, in a large class of split fermion models, the overlap suppression would be similar to that accounting for the smallness of the down and strange 4D Yukawa couplings. The integration over the 5D profiles of the four quarks may yield a suppression factor of $\mathcal{O}(m_d m_s/v^2) \sim 10^{-9}$. Together with the naive scale suppression, $1/\Lambda^2_{5D}$, the coefficient of $O^4_K$ can be sufficiently suppressed to be consistent with the experimental bound.
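The two order-of-magnitude estimates quoted above can be checked with a few lines of arithmetic; the coupling and quark-mass inputs below are standard ballpark values (roughly TeV-scale running masses), assumed here for illustration rather than taken from the text.

```python
# Quick numerical check of the two estimates in the text (assumed inputs).
# Eq. (136): Lambda_5D * R ~ 4/alpha, with alpha a strong-ish SM coupling.
alpha = 0.13                     # rough QCD coupling near the TeV scale (assumption)
print(f"Lambda_5D * R ~ 4/alpha ~ {4/alpha:.0f}")   # ~30, as quoted

# Split-fermion overlap suppression of O^4_K: O(m_d m_s / v^2)
m_d, m_s, v = 3e-3, 0.06, 246.0  # GeV; rough running light-quark masses, Higgs VEV
suppression = m_d * m_s / v**2
print(f"m_d m_s / v^2 ~ {suppression:.1e}")         # ~3e-9, i.e. the O(1e-9) quoted
```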
In the absence of large brane kinetic terms, however, fermion localization generates order one non-universal couplings to the gauge KK fields [158] (the case with large brane kinetic terms is similar to the warped scenario discussed below). The fact that the bulk masses are, generically, not aligned with the 5D Yukawa couplings implies that KK gluon exchange processes induce, among others, the following operator in the low energy theory: [(D L ) 2 12 /(6M 2 KK )] (s L d L ) 2 , where (D L ) 12 ∼ λ is the LH down quark rotation matrix from the 5D interaction basis to the mass basis. This structure provides only a mild suppression to the resulting operator. It implies that to satisfy the K constraint, the KK and the inverse compactification scales have to be above 10 3 TeV, beyond the direct reach of near future experiments, and too high to be linked to a solution of the hierarchy problem. This problem can be solved by tuning the 5D flavor parameters, and imposing appropriate 5D flavor symmetries to make the tuning stable. Once the 5D bulk masses are aligned with the 5D Yukawa matrices, the KK gauge contributions vanish, and the configuration becomes radiatively stable.
The warped extra dimension [Randall Sundrum (RS)] framework [159] provides a solution to the hierarchy problem. Moreover, with SM fermions propagating in the bulk, both the SM and the NP flavor puzzles can be addressed. The light fermions can be localized away from the TeV brane [160], where the Higgs is localized. Such a configuration can generate the observed Yukawa hierarchy, and at the same time ensure that higher dimensional operators are suppressed by a high cutoff scale, associated with the location of the light fermions in the extra dimension [161,162]. Furthermore, since the KK states are localized near the TeV brane, the couplings between the SM quarks and the gauge KK fields exhibit the hierarchical structure associated with SM masses and CKM mixings. This hierarchy in the couplings provides an extra protection against non-standard flavor violating effects [163], denoted as RS-GIM mechanism [66,67] (see also [164,165]). It is interesting to note that an analogous mechanism is at work in models with strong dynamics at the TeV scale, with large anomalous dimension and partial compositeness [166,167,168]. The link with strongly interacting models is indeed motivated by the AdS/CFT correspondence [169,170], which implies that the above 5D framework is a dual description of 4D composite Higgs models [99,171].
Concerning the quark zero modes, the flavor structure of the above models as well as the phenomenology can be captured by using the following simple rules [66,67,172,173]. In the 5D interaction basis, where the bulk masses $k\,C^x_{ij}$ are diagonal ($x = Q, U, D$; $i,j = 1,2,3$; $k$ is the AdS curvature), the value $f_{x_i}$ of the profile of the quark zero modes is given by
$$f^2_{x_i} = \frac{1 - 2c_{x_i}}{1 - \epsilon^{\,1 - 2c_{x_i}}}\,. \tag{139}$$
Here $c_{x_i}$ are the eigenvalues of the $C_x$ matrices, $\epsilon = \exp[-\xi]$ with $\xi = \log[M_{\rm Pl}/{\rm TeV}]$, and $M_{\rm Pl}$ is the reduced Planck mass. If $c_{x_i} > 1/2$, then $f_{x_i}$ is exponentially suppressed. Hence, order one variations in the 5D masses yield large hierarchies in the 4D flavor parameters. We consider the cases where the Higgs VEV either propagates in the bulk or is localized on the IR brane. For a bulk Higgs, the profile is given by $\tilde v(\beta, \tilde z) \simeq v\sqrt{k(1+\beta)}\,\tilde z^{\,2+\beta}$, where $\tilde z \in (\epsilon, 1)$ ($\tilde z = 1$ on the IR brane) and $\beta \ge 0$. The $\beta = 0$ case describes a Higgs maximally spread into the bulk (saturating the AdS stability bound [174]). The relevant part of the effective 4D Lagrangian, which involves the zero modes and the first KK gauge states ($G^1$), can be approximated by [66,67,173]
$$\mathcal{L}_{4D} \supset (Y^{5D}_{U,D})_{ij}\, H_{U,D}\, \bar Q_i f_{Q_i} (U,D)_j f_{U_j,D_j}\, r^\phi_{00}(\beta, c_{Q_i}, c_{U_j,D_j}) + g^*\, G^1\, x^\dagger_i x_i \left[ f^2_{x_i}\, r^g_{00}(c_{x_i}) - 1/\xi \right], \tag{140}$$
where $g^*$ stands for a generic effective gauge coupling and summation over $i,j$ is implied. The corrections to the couplings, relative to the case of a fully IR-localized Higgs and KK states, are given by the functions $r^\phi_{00}$ [173] and $r^g_{00}$ [175,176], respectively:
$$r^\phi_{00}(\beta, c_L, c_R) \approx \frac{\sqrt{2(1+\beta)}}{2 + \beta - c_L - c_R}\,, \qquad r^g_{00}(c) \approx \frac{\sqrt{2}}{J_1(x_1)}\,\frac{0.7}{6 - 4c}\left(1 + e^{c/2}\right), \tag{141}$$
where $r^\phi_{00}(\beta, c_L, c_R) = 1$ for a brane-localized Higgs, and $x_1 \approx 2.4$ is the first root of the Bessel function, $J_0(x_1) = 0$.
In Table 6 we present an example of a set of $f_{x_i}$ values that, starting from anarchical 5D Yukawa couplings, reproduce the correct hierarchy of the flavor parameters.

Table 6: Values of the $f_{x_i}$ parameters (Eq. (139)) which reproduce the observed quark masses and CKM mixing angles starting from anarchical 5D Yukawa couplings. We fix $f_{U_3} = \sqrt{2}$ and $y_{5D} = 2$ (see text).

| Flavor | $f_Q$ | $f_U$ | $f_D$ |
| 1 | $A\lambda^3 f_{Q_3} \sim 3\times 10^{-3}$ | $\frac{m_u}{m_t}\frac{f_{U_3}}{A\lambda^3} \sim 1\times 10^{-3}$ | $\frac{m_d}{m_b}\frac{f_{D_3}}{A\lambda^3} \sim 2\times 10^{-3}$ |
| 2 | $A\lambda^2 f_{Q_3} \sim 1\times 10^{-2}$ | $\frac{m_c}{m_t}\frac{f_{U_3}}{A\lambda^2} \sim 0.1$ | $\frac{m_s}{m_b}\frac{f_{D_3}}{A\lambda^2} \sim 1\times 10^{-2}$ |
| 3 | $\frac{m_t}{v\,y_{5D}\,f_{U_3}} \sim 0.3$ | $\sqrt{2}$ | $\frac{m_b}{m_t} f_{U_3} \sim 2\times 10^{-2}$ |

We assume for simplicity an IR-localized Higgs. The values depend on two input parameters: $f_{U_3}$, which has been determined assuming a maximally IR-localized $t_R$ ($c_{U_3} = -0.5$), and $y_{5D}$, the overall scale of the 5D Yukawa couplings in units of $k$, which has been fixed to its maximal value assuming three KK states. On general grounds, the value of $y_{5D}$ is bounded from above, as a function of the number of KK levels, $N_{KK}$, by the requirement that Yukawa interactions are perturbative below the cutoff of the theory, $\Lambda_{5D}$. In addition, it is bounded from below in order to account for the large top mass. Hence the following range for $y_{5D}$ is obtained (see e.g. [104,177]):
$$\frac{1}{2} \lesssim y_{5D} \lesssim \frac{2\pi}{N_{KK}} \quad \text{for brane Higgs}\,; \qquad \frac{1}{2} \lesssim y_{5D} \lesssim \frac{4\pi}{\sqrt{N_{KK}}} \quad \text{for bulk Higgs}\,, \tag{142}$$
where we use the rescaling $y_{5D} \to y_{5D}\sqrt{1+\beta}$, which produces the correct $\beta \to \infty$ limit [178] and avoids subtleties in the $\beta = 0$ case.
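The bookkeeping of Eq. (139) and Table 6 can be checked numerically. The sketch below evaluates the zero-mode profile $f(c)$, showing the exponential sensitivity to the bulk mass, and verifies that the Table 6 parametrization reproduces the CKM angles via the anarchic relation $V_{ij} \sim f_{Q_i}/f_{Q_j}$; the Wolfenstein inputs $A \approx 0.81$, $\lambda \approx 0.23$ are standard values assumed here.

```python
import math

# Sketch (assumed inputs) of the warped-flavor bookkeeping around Eq. (139)
# and Table 6: zero-mode profile f(c), and CKM angles from profile ratios.

xi = math.log(2.4e15)           # log(M_Pl/TeV), reduced M_Pl ~ 2.4e18 GeV
def f(c):
    """Zero-mode profile value, Eq. (139): f^2 = (1-2c)/(1-eps^(1-2c))."""
    eps = math.exp(-xi)
    return math.sqrt((1 - 2*c) / (1 - eps**(1 - 2*c)))

# O(1) variations in c span the full flavor hierarchy:
print(f"f(0.0) = {f(0.0):.2f}, f(0.45) = {f(0.45):.2f}, f(0.65) = {f(0.65):.1e}")
# f(0.65) ~ 3e-3: exponentially suppressed (UV-localized)

# CKM mixing from profile ratios, using the Table 6 parametrization (f_Q3 = 1 unit)
A, lam = 0.81, 0.23
fQ = {1: A*lam**3, 2: A*lam**2, 3: 1.0}
print(f"V_us ~ f_Q1/f_Q2 = {fQ[1]/fQ[2]:.2f} (lambda = {lam})")
print(f"V_cb ~ f_Q2/f_Q3 = {fQ[2]/fQ[3]:.3f} (A lambda^2 = {A*lam**2:.3f})")
```

By construction the profile ratios reproduce $V_{us} \sim \lambda$ and $V_{cb} \sim A\lambda^2$, which is the content of the "anarchic 5D Yukawa" ansatz.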
With anarchical 5D Yukawa matrices, a residual RS "little CP problem" remains [104]: too large contributions to the neutron EDM [66,67] and sizable chirally enhanced contributions to $\epsilon_K$ [7,65,175,179,180] are predicted. The leading RS contribution to $\epsilon_K$ is generated by tree level KK gluon exchange, which leads to an effective coupling for the chirality-flipping operator in Eq. (138) of the type [65,173,175,179,180]
$$C^4_K \approx \frac{g^2_{s*}}{M^2_{KK}}\, f_{Q_2} f_{Q_1} f_{d_2} f_{d_1}\, r^g_{00}(c_{Q_2})\, r^g_{00}(c_{d_2}) \sim \frac{g^2_{s*}}{M^2_{KK}}\, \frac{2 m_d m_s}{(v\, y_{5D})^2}\, \frac{r^g_{00}(c_{Q_2})\, r^g_{00}(c_{d_2})}{r^H_{00}(\beta, c_{Q_1}, c_{d_1})\, r^H_{00}(\beta, c_{Q_2}, c_{d_2})}\,. \tag{143}$$
The final expression is independent of the $f_{x_i}$, so the bound in Table 1 can be translated into constraints in the $y_{5D}$-$M_{KK}$ plane. The analogous effects in the $D$ and $B$ systems yield numerically weaker bounds. Another class of contributions, which involves only LH quarks, is also important to constrain the $f_Q$-$M_{KK}$ parameter space. In Table 7 we summarize the resulting constraints. For the purpose of a quantitative analysis we set $g_{s*} = 3$, as obtained by matching to the 4D coupling at one loop [177] (for the impact of a smaller RS volume see [181]). The constraints related to CPV correspond to maximal phases, and are subject to the requirement that the RS contributions are smaller than 30% (60%) of the SM contributions [4,5] in the $B_d$ ($K$) system. The analytical expressions in the table have roughly a 10% accuracy over the relevant range of parameters. Contributions from scalar exchange, either Higgs [178,182] or radion [183], are not included, since these are more model dependent and known to be weaker [184] in the IR-localized Higgs case.
Constraints from $\epsilon'/\epsilon_K$ have a different parameter dependence than the $\epsilon_K$ constraints. Explicitly, for $\beta = 0$, the $\epsilon'/\epsilon_K$ bound reads $M^{\rm min}_G = 1.2\, y_{5D}$ TeV. When combined with the $\epsilon_K$ constraint, we find $M^{\rm min}_G = 5.5$ TeV, with a corresponding $y^{\rm min}_{5D} = 4.5$ [173]. The constraints summarized in Table 7 thus demonstrate the RS little CP problem. The problem can be amended by various alignment mechanisms [101,103,104,176,185]. In this case, the bounds from the up sector, especially from CPV in the $D$ system [18,23], become important. Constraints from $\Delta F = 1$ processes (in either the down sector [66,67,186,187,188] or $t \to cZ$ [189]) are not included here, since they are weaker in general, and furthermore, these contributions can be suppressed (see [186,187,188]) due to incorporation of a custodial symmetry [190].

Table 7: Most significant flavor constraints in the RS framework (taken from [78]). The values of $y^{\rm min}_{5D}$ and $f^{\rm max}_{Q_3}$ correspond to $M_{KK} = 3$ TeV. The bounds are obtained assuming maximal CPV phases and $g_{s*} = 3$. Entries marked 'above (142)' imply that for $M_{KK} = 3$ TeV, $y_{5D}$ is outside the perturbative range.

| Observable | $M^{\rm min}_{KK}$ [TeV], $\beta = 0$ | brane Higgs | for $M_{KK} = 3$ TeV, $\beta = 0$ | brane Higgs |
| CPV-$B_d$, LLLL | $12 f^2_{Q_3}$ | $12 f^2_{Q_3}$ | $f^{\rm max}_{Q_3} = 0.5$ | $f^{\rm max}_{Q_3} = 0.5$ |
| CPV-$B_d$, LLRR | $4.2/y_{5D}$ | $2.4/y_{5D}$ | $y^{\rm min}_{5D} = 1.4$ | $y^{\rm min}_{5D} = 0.82$ |
| CPV-$D$, LLLL | $0.73 f^2_{Q_3}$ | $0.73 f^2_{Q_3}$ | no bound | no bound |
| CPV-$D$, LLRR | $4.9/y_{5D}$ | $2.4/y_{5D}$ | $y^{\rm min}_{5D} = 1.6$ | $y^{\rm min}_{5D} = 0.8$ |
| $\epsilon_K$, LLLL | $7.9 f^2_{Q_3}$ | $7.9 f^2_{Q_3}$ | $f^{\rm max}_{Q_3} = 0.62$ | $f^{\rm max}_{Q_3} = 0.62$ |
| $\epsilon_K$, LLRR | $49/y_{5D}$ | $24/y_{5D}$ | above (142) | $y^{\rm min}_{5D} = 8$ |
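The interplay between the $\epsilon'/\epsilon_K$ and $\epsilon_K$ bounds can be illustrated with a two-constraint cartoon: one bound grows with $y_{5D}$, the other falls with it, so the weakest combined bound sits at their crossing. The coefficient $a = 1.2$ is quoted in the text; taking $b \approx 24$ for the $\epsilon_K$ coefficient and reducing the combination to a simple crossing point are our illustrative simplifications, which land in the same few-TeV ballpark as the quoted $M^{\rm min}_G = 5.5$ TeV.

```python
import math

# Cartoon of the combined bound: M > a*y (eps'/eps_K grows with y_5D) versus
# M > b/y (eps_K falls with y_5D). The weakest combined point is at the crossing
# y* = sqrt(b/a), where M* = sqrt(a*b). Inputs: a = 1.2 from the text; b ~ 24
# is an assumed illustrative epsilon_K coefficient (in TeV).
a, b = 1.2, 24.0
y_star = math.sqrt(b / a)
M_star = math.sqrt(a * b)
print(f"y_5D* ~ {y_star:.1f}, M_KK^min ~ {M_star:.1f} TeV")
# Comparable to the quoted y_5D = 4.5, M_G^min = 5.5 TeV
```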
It is interesting to combine measurements from the down and the up sector in order to obtain general bounds (as done for supersymmetry above). Using K and D mixing, Eq. (86), the constraint on the RS framework is [23]
$$m_{KK} > 2.1\, f^2_{Q_3}\ {\rm TeV}\,, \tag{144}$$
for a maximal phase, where f Q 3 is typically in the range of 0.4-√ 2. We thus learn that the case where the third generation doublet is maximally localized on the IR brane (fully composite) is excluded, if we insist on m KK = 3 TeV, as allowed by electroweak precision tests (see e.g. [191]). The bounds derived from ∆F = 1 and ∆F = 2 processes involving the third generation are [58,59]
$$m_{KK} > 0.33\, f^2_{Q_3}\ {\rm TeV}\,, \qquad m_{KK} > 0.4\, f^2_{Q_3}\ {\rm TeV}\,, \tag{145}$$
respectively.
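The statement that a fully composite third generation doublet is excluded can be read off directly from Eq. (144); the sketch below scans the typical range of $f_{Q_3}$ quoted in the text against the reference value $m_{KK} = 3$ TeV.

```python
import math

# Numeric reading of Eq. (144), m_KK > 2.1 f_Q3^2 TeV (maximal phase): scan the
# typical range f_Q3 in [0.4, sqrt(2)] quoted in the text and compare with the
# m_KK = 3 TeV reference allowed by electroweak precision tests.

def mkk_bound(fQ3):
    """Lower bound on m_KK in TeV, Eq. (144)."""
    return 2.1 * fQ3**2

for fQ3 in (0.4, 1.0, math.sqrt(2)):
    bound = mkk_bound(fQ3)
    verdict = "allowed" if bound < 3.0 else "excluded"
    print(f"f_Q3 = {fQ3:.2f}: m_KK > {bound:.2f} TeV -> {verdict} at m_KK = 3 TeV")
```

For $f_{Q_3} = \sqrt{2}$ (full compositeness) the bound is 4.2 TeV, above the 3 TeV reference, reproducing the exclusion stated in the text.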
9 High $p_T$ Flavor Physics Beyond the SM

So far we have mostly focused on information that can be gathered from observables related to flavor conversion and in particular to low energy experiments, the exception being top flavor violation, which will be studied in great detail at the LHC. However, much insight can be obtained on short distance flavor dynamics, if one is to observe new degrees of freedom which couple to the SM flavor sector. This is why high $p_T$ collider analyses are also useful for flavor physics (see e.g. [155,192,193,194,195,196,197,198,199,200]). Below we discuss implications of measurements related to both flavor diagonal information and flavor conversion transitions.
Most of the analysis discussed in the following is rather challenging to carry out at the LHC for the quark sector, due to the difficulty in distinguishing between jets originating from first and second generation quarks. However, it is certainly possible to distinguish the third generation quarks from the other ones. Furthermore, even though not discussed in this review, the charged lepton sector, which possesses a similar approximate symmetry structure, allows for rather straightforward flavor tagging. Therefore, some of the analysis discussed below can be applied more directly to the lepton sector (see e.g. [201,202,203,204,205]). For the quark sector, future progress in the frontier of charm tagging may play a crucial role in extracting further information regarding the breaking of the SM approximate symmetries.
In general, one may say that not much work has been done on the issues discussed below, and that there are many issues, both theoretical and experimental, to study on how to improve the treatment related to high p T flavor physics at the LHC era. While we do not attempt here to give a complete or even in depth description of the subject of flavor at the LHC, we at least try to touch upon a few of the relevant ingredients which may help the reader to understand the potential richness and importance of this topic.
Flavor diagonal information
Naively, one might think that flavor physics is related to flavor converting amplitudes, say when the sum of the flavor charges of the incoming particles is different from that of the outgoing particles. However, this is not entirely true, since (as we have discussed in detail in Sec. 4) any form of nonuniversality, if not aligned with the quark mass basis, would induce some form of flavor conversion. Furthermore, non-universal terms involving new states, which transform non-trivially under the SU (2) L gauge group and are gauge invariant (such as LH squark square masses), unavoidably induce flavor conversion at some level, since these cannot be simultaneously diagonalized in the up and down mass bases (see discussion in Sec. 5.2).
The information that can be extracted is most usefully expressed in terms of the manner that the SM flavor symmetry, G SM , is broken by the NP flavor diagonal sources. Of particular importance is whether the approximate U (2) symmetry, which acts on the light quarks, is broken, since in this case the data implies that a strong mechanism of alignment must be at work. Even if the U (2) symmetry is respected by the new degrees of freedom, any non-universal information, related to the breaking of G SM , would be also extremely useful. In general, this kind of experimental insight is linked to the microscopic nature of the new dynamics. Such knowledge is invaluable, and is typically related to scales well beyond the direct reach of near future experiments. As an example of flavor diagonal information that can be extracted at the LHC era, we discuss the spectrum of new degrees of freedom which transform under the SM flavor group and the coupling of a flavor singlet state to the SM quarks.
Spectrum
Among the first parameters that can be extracted once new degrees of freedom are found are their masses. The phenomenology changes quantitatively based on the representation of the new particles under the flavor group. However, the interesting experimental information that one would wish to extract is similar in essence. Suppose, for instance, that we have the new states, discovered at the LHC, transforming as an irreducible representation of the U (3) U SM flavor group (this is a reasonable assumption, given that the top couplings yield the most severe hierarchy problem). If the masses of all new states are identical or universal, then not much flavor information could be extracted. Otherwise, it is useful to break the states according to their representation of the approximate U (2) U symmetry, obtained by setting the up and charm masses to zero. The simplest non-trivial case, which we now consider, is when the new states transform in the fundamental representation of the flavor group. The most celebrated example of this case is the up type squarks, but also the KK partner of the up type quarks in universal/warped extra dimension. Under the U (2) U approximate flavor group, the fundamental states would transform as a doublet and singlet. Thus, we can think of the following three possibilities listed by order of significance (regarding flavor physics):
(i) The spectrum is universal, and the U (2) U doublet and singlet are of identical masses. This implies a flavor blind underlying dynamics.
(ii) The spectrum exhibits an approximate 2 + 1 structure, i.e. the doublet and singlet differ in mass. This spectrum is expected in a wide class of models, where the NP flavor dynamics preserve the SM approximate symmetry structure. Examples of this class are the MFV and next-to-MFV [4] frameworks, which contain various classes of supersymmetry models, warped extra dimension models etc. There is highly non-trivial physical content in this case, since the U (3) U → U (2) U breaking of the new physics cannot be generic: New physics with such breaking, if not aligned with the SM up type Yukawa, induces top flavor violation (as we have discussed in Sec. 5.2 to be constrained at the LHC) and more importantly c → u transition contributing to D −D mixing. Furthermore, hints on the origin of the flavor puzzle and flavor mediation scale could be extracted.
(iii) The spectrum is anarchic, i.e. there is no approximate degeneracy between the new particles' masses. This case is the most exciting in terms of flavor physics, since it suggests that some form of alignment mechanism is at work, to prevent too large contributions to various flavor violating processes. Thus, there is a potential that when combining the spectrum information with high p T and low energy measurements, information on the origin of the flavor hierarchies and flavor mediation scale could be extracted.
Let us also consider another case: Suppose that the newly discovered particles are in the adjoint representation of the U (3) Q,U,D flavor group. An example of this case is the KK excitation of a flavor gauge boson of extra dimension models [99,101,102,104]. As discussed in Sec. 6.3, under the approximate U (2) Q,U,D flavor group, an adjoint consists of a doublet (which corresponds to the four broken generators), a triplet and a singlet (both correspond to the unbroken generators). Once again, there are three possible cases: A universal spectrum, an approximate 3 + 2 + 1 structure or an anarchic spectrum with alignment. The case of a bi-fundamental representation has been recently discussed in [207].
Couplings
Another source of precious flavor diagonal information, which has not been widely studied, is the coupling of a flavor singlet object. Celebrated examples would be in the form of non-oblique and non-universal corrections to the coupling of the $Z$ to the bottom due to the top Yukawa, or just the predicted Higgs branching ratio into quarks, which favors third generation final states. A more exotic example is the quark coupling of a new gauge boson, such as a $Z'$, supersymmetric gauginos (footnote 20) or KK gauge bosons in extra dimension models. In these cases, we can view the coupling as a spurion which either transforms under the fundamental representation of the flavor group (the Higgs case) or as an adjoint (the other cases). The approach would be therefore to characterize the flavor information according to the three items listed above. If the couplings are flavor universal, then there is not much to learn. If, however, the couplings obey the 2 + 1 rule, it already tells us that the new interactions do not only follow the SM approximate symmetry structure, but are also quasi-aligned with the SM third generation direction. The case where the couplings are anarchical is the most exciting one, as it requires a strong alignment mechanism, and may lead to a new insight on the SM flavor puzzle.
As an example of the case of a 2 + 1 structure, let us imagine that a color octet resonance (footnote 21) is discovered at the LHC in the $t\bar t$ channel [209,210]. One may suggest that this is an observation of a KK gluon state, yet other options are clearly possible as well (assuming that the particle's spin is consistent with one). It would be a particularly convincing argument in favor of the anarchic warped extra dimension framework if one is to prove experimentally that the decay channels into the light quarks are much smaller than the $t\bar t$ one. The challenge in this measurement would be to compete against the continuous di-jet background. The ability to have charm tagging is obviously a major advantage in such a scenario. Not only would it help to suppress the background, but also a bound on the deviation from universality could be translated into a bound on the warped extra dimension volume, and thus hint at the amount of hierarchy produced by the warping [181,211].
To conclude the subject of flavor diagonal information, we schematically show possible consequences in Figs. 12 and 13. The former presents different structures of the spectrum or coupling of newly discovered degrees of freedom, and the latter demonstrates how such a measurement at the LHC affects the NP parameter space, in addition to existing low energy bounds.
Flavor non-diagonal information
So far we have mostly considered flavor conversion at low energies. In the following we briefly mention possible signals in which new degrees of freedom are involved in flavor converting processes, hopefully to be discovered soon at the LHC. Clearly, more direct information regarding flavor physics would be obtained in case the new states induce some form of flavor breaking beyond non-universality. For concreteness, let us give a few examples for such a possibility:
• A sfermion, say squark, which decays to a gaugino and either of two different quark flavors, both with considerable rate [196].
• A gluino which decays to quark and squark of a different flavor with a sizable rate [198].
• A lifetime measurement of a long lived stop [195,199].
• A single stop production from the charm sea content due to large scharm-stop mixing.
• A $Z'$ state or a KK gauge boson which decays into two quarks of different flavors.
• A charged Higgs particle which decays to a top and a strange [193].

Footnote 20: In the case of softly broken supersymmetry, it is most likely that the gauginos' couplings will be characterized by a unitary matrix, a remnant of supersymmetric gauge invariance. In such a case, unless large flavor violation in the gauginos' couplings is present, they are expected to exhibit universality.

Footnote 21: A recent proposal to distinguish between a color octet resonance and a singlet one can be found in [208].

Figure 12: A schematic representation of some possible spectra or coupling structures of new degrees of freedom. The x axis symbolizes the difference in mass/coupling between the third generation and the first two, and the y axis the difference between the first two generations. The red solid arrow represents a 2 + 1 structure of the spectrum/coupling, the dashed green arrow stands for an anarchic structure (generally excluded) and the blue circle at the origin signifies complete degeneracy.
As in the above, we separate the discussion to the case where the approximate U (2) flavor symmetry is respected by the new dynamics and the one in which it is badly broken.
(i) U (2) preserving -flavor conversion occurs between the third generation and a light one.
The corresponding processes then contain an odd number of third generation quarks. Since ATLAS and CMS have top and bottom tagging capabilities, this class of processes can be observed with a reasonable efficiency. In the absence of charm tagging, there is no practical way to differentiate between the first two generations (thus, the information that can be extracted is well described by the covariant formalism presented in Sec. 4.2.1). Recall that in the exact massless $U(2)$ limit, the first two generations are divided into an active state and a sterile non-interacting one. In the absence of CP violating observables at the LHC, the measurement of flavor conversion is directly translated into a determination of the amount of the third-active transition strength, or the corresponding mediating generator denoted as $\hat J_u$ in Sec. 4.2.1.
(ii) In order to go beyond case (i), charm tagging is required, which would enable the observation of flavor violation that differentiates between the first two generations at high $p_T$. Almost no work has been performed on this case, but the corresponding measurement would be equivalent to probing the "small" CP conserving generators denoted by $\hat D_{1,4}$ in Sec. 4.2.2.
Fig. 14 demonstrates how detecting a clear signal of flavor violation at the LHC affects the NP parameter space, in addition to flavor diagonal information (Fig. 13).

Figure 13: A schematic representation of bounds on the new physics parameter space, given by the mixing between two generations $\theta_{ij}$ and the difference in mass/coupling. Left: a typical present constraint arising from not observing deviations from the SM predictions (the allowed region is colored). Right: adding a possible measurement of a mass/coupling difference at the LHC. This figure is inspired by a plot from [212].

Figure 14: A schematic representation of bounds on the new physics parameter space. Here we include, in addition to the low energy data and the mass/coupling difference measurement in Fig. 13, a positive signal of flavor violation at the LHC.
Conclusions
The field of flavor physics is now approaching a new era marked by the conclusion of the B-factories and the rise of the LHC experiments. In the last decade or so, huge progress has been achieved in precision flavor measurements. As of today, no evidence for deviation from the standard model (SM) predictions has been observed, and in particular it is established that the SM is the dominant source of CP violation phenomena in quark flavor conversion. Furthermore, strong bounds related to CP violation in the up sector were recently obtained, which provide another non-trivial test for the SM Kobayashi-Maskawa mechanism.
The unique way of the SM to induce flavor violation implies that the recent data is translated to stringent bounds on new microscopical dynamics. To put it differently, any new physics at the TeV scale, motivated by the hierarchy problem, cannot have a general flavor structure. As we have discussed in detail in these lectures, it is very likely that for a SM extension to be phenomenologically viable, it has to possess the SM approximate symmetry structure, characterized by the smallness of the first two generation masses and their mixing with third generation quarks.
In the LHC epoch, while continuous progress is expected on the low energy precision frontier, dramatic progress is foreseen in measurements related to top flavor changing neutral processes. Moreover, in the event of a new physics discovery, a new arena for flavor physics tests would open if the new degrees of freedom carry flavor quantum numbers. At the LHC, extraction of flavor information is somewhat limited by the hadronic environment; in particular, distinguishing between the first two generation quarks is extremely challenging. Nevertheless, the power of this information lies in probing physics at scales well beyond the direct reach of near future experiments. Thus, we expect flavor physics to continue playing an important role in our understanding of nature at short distances.
The standard parametrization of the CKM matrix reads

V_{\rm CKM} =
\begin{pmatrix}
c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta_{\rm KM}} \\
-s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta_{\rm KM}} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta_{\rm KM}} & s_{23} c_{13} \\
s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta_{\rm KM}} & -c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta_{\rm KM}} & c_{23} c_{13}
\end{pmatrix},
The numerical values of the flavor parameters are roughly:

m_u = (...)..3.3 MeV ,  m_d = 3.5..6.0 MeV ,  m_s = 150^{+30}_{-40} MeV ,
m_c = 1.3 GeV ,  m_b = 4.2 GeV ,  m_t = 170 GeV ,
V^CKM_ud = 0.97 ,  V^CKM_us = 0.23 ,  V^CKM_ub = 3.9 × 10^{-3} ,
V^CKM_cd = 0.23 ,  V^CKM_cs = 1.0 ,  V^CKM_cb = 41 × 10^{-3} ,
V^CKM_td = 8.1 × 10^{-3} ,  V^CKM_ts = 39 × 10^{-3} ,  V^CKM_tb = 1 ,
δ_KM = 77°.

6. It is easy to show that in this example, in fact, CP is not violated for any number of generations.
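As a quick numerical cross-check of these values, the sketch below (an illustration, not part of the lectures) assumes the standard CKM parametrization with s12 = |V_us|, s23 = |V_cb|, s13 = |V_ub| and δ_KM = 77°, verifies unitarity, and evaluates the Jarlskog invariant J = Im(V_us V_cb V*_ub V*_cs), which comes out at the familiar O(10^-5) level.

```python
import cmath
import math

# Mixing angles taken from the magnitudes quoted above (an approximation:
# s12 ~ |V_us|, s23 ~ |V_cb|, s13 ~ |V_ub|), and delta_KM = 77 degrees.
s12, s23, s13 = 0.23, 41e-3, 3.9e-3
c12, c23, c13 = (math.sqrt(1.0 - s * s) for s in (s12, s23, s13))
e = cmath.exp(1j * math.radians(77.0))

# Standard parametrization of the CKM matrix (exactly unitary by construction).
V = [
    [c12 * c13, s12 * c13, s13 / e],
    [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
]

# Unitarity check: the rows are orthonormal to numerical precision.
for i in range(3):
    for j in range(3):
        dot = sum(V[i][k] * V[j][k].conjugate() for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-10

# Jarlskog invariant J = Im(V_us V_cb V_ub* V_cs*), of order 1e-5.
J = (V[0][1] * V[1][2] * V[0][2].conjugate() * V[1][1].conjugate()).imag
print(f"J = {J:.1e}")
```

With these inputs J comes out positive and of order 10^-5, in line with the measure of CP violation discussed in the text.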
Figure 1: The SM flavor symmetry breaking by the Yukawa matrices.

Figure 2: Breaking of the U(3)_{U,D} groups by the Yukawa matrices, which form an appropriate LH (RH) flavor group singlet (adjoint + singlet).

Figure 3: U(3)_{Q^u,Q^d} breaking by A_{Q^u,Q^d} and g^±_2.

Figure 4: The schematic structure of the various ingredients that mediate flavor breaking within the SM.

Figure 5: The approximate flavor symmetry breaking pattern. Note that there is also a residual U(1)_Q symmetry, as explained in Sec. 4.2.

Figure 8: The three unit-length diagonal traceless matrices with an inherent SU(2) symmetry.
Fig. 9 shows (in grey) the allowed region in the x^NP_12/x vs. sin φ^NP_12 plane. Here x^NP_12 corresponds to the NP contribution, and x ≡ (m_2 − m_1)/Γ, with m_i the masses of the neutral D meson eigenstates and Γ their average width. The pink and yellow regions correspond to the ranges predicted by, respectively, the linear MFV and general MFV classes of models [18] (see Sec. 6 for details). We see that the absence of observed CP violation removes a sizable fraction of the possible NP parameter space, in spite of the fact that the magnitude of the SM contributions cannot be computed! An updated analysis of ∆F = 2 constraints has been presented in [7]. The main conclusions drawn from this analysis can be summarized as follows:
Figure 10: The weakest upper bound on L coming from flavor and CPV in the K and D systems, as a function of the CP violating parameter X_J, assuming Λ_NP = 1 TeV. The figure is taken from [23].
Figure 11: Upper bounds on L as a function of α, coming from the measurements of flavor violating decays of the bottom and the top quarks, assuming Λ_NP = 1 TeV. The figure is taken from [...].
(ii) Large effective top Yukawa: the effective down-type Yukawas are still small, but the top coupling is O(1).

(iii) Large third generation Yukawas: both the top and the bottom effective Yukawa couplings are large. This can happen for instance in two Higgs doublet models (see e.g. [105, 106, 107, 108]) with large tan β, but also in theories with only one Higgs doublet but a large x_D factor. However, in this case CP is only broken by the up and down Yukawa matrices, hence no extra sources of flavor diagonal phases are present in the microscopic theory.

(iv) Large effective Yukawas and flavor diagonal phases: this is the most general case, where both the top and bottom Yukawa couplings are large and new flavor diagonal CP violating phases are present. It is thus denoted as general MFV (GMFV) [40].
An enhancement of both B_s → ℓ+ℓ− and B_d → ℓ+ℓ− respecting the MFV relation Γ(B_s → ℓ+ℓ−)/Γ(B_d → ℓ+ℓ−) ≈ |V^CKM_ts/V^CKM_td|^2 would be an unambiguous signature of MFV at large tan β [109].
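The CKM factor in this relation can be estimated from the |V_ts| and |V_td| magnitudes quoted earlier in these lectures. The snippet below is a rough illustration only: the full ratio of branching fractions also involves the decay constants f_Bs/f_Bd and the meson masses, which are omitted here.

```python
# CKM magnitudes quoted earlier in these lectures
V_ts, V_td = 39e-3, 8.1e-3

# MFV expectation for the CKM part of Gamma(Bs -> l+l-)/Gamma(Bd -> l+l-)
# (decay constants and mass factors are neglected in this rough estimate)
ratio = (V_ts / V_td) ** 2
print(f"|Vts/Vtd|^2 ~ {ratio:.0f}")  # of order 20
```

Any observed enhancement of the two modes that preserves a ratio of this size would thus be consistent with the MFV expectation.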
To summarize: (i) extra CPV can only arise from flavor diagonal CPV sources in the UV theory; (ii) the extra CP phases in B_s − B̄_s mixing provide an upper bound on the amount of CPV in B_d − B̄_d mixing; (iii) if operators containing RH light quarks are subdominant, then the extra CPV is equal in the two systems, and is negligible in 2 → 1 transitions. Conversely, these operators can break the correlation between CPV in the B_s and B_d systems, and can induce significant new CPV in ε_K.
as discussed in Sec. 3. The two groups are broken down to U(2) × U(1) by the large third generation eigenvalues in A_{Q^u,Q^d}, so that the low energy theory is described by a [U(3)/U(2) × U(1)]^2 non-linear σ-model. Flavor violation arises due to the misalignment of Y_U and Y_D, given by V^CKM_td and V^CKM_ts
b_L → exp(iϕ_Q) b_L. A similar definition can be made for the up quarks. [...] Class-2 operators only contribute to B_s − B̄_s mixing, up to m_d/m_s corrections. Taking into account that SU(3)_F (the approximate u-d-s flavor symmetry of the strong interaction) breaking in the bag parameters of the B_s − B̄_s vs. B_d − B̄_d mixing matrix elements is only at the few percent level in lattice QCD
Table 4: The phenomenological upper bounds on (δ^q_ij)_MM and on δ^q_ij, where q = u, d and M = L, R. The constraints are given for m_q̃ = 1 TeV and x ≡ m²_g̃/m²_q̃ = 1. We assume that the phases could suppress the imaginary parts by a factor ∼ 0.3. The bound on (δ^d_23)_RR is about 3 times weaker than that on (δ^d_23)_LL (given in the table). The constraints on (δ^d_12,13)_MM, (δ^u_12)_MM and (δ^d_23)_MM are based on, respectively, Refs. [...]
Generically, the mass of the lightest Kaluza-Klein (KK) states, M_KK, is of O(R^{-1}). If the extra dimension theory is linked to the solution of the hierarchy problem and/or directly accessible to near-future experiments, then R^{-1} = O(TeV). This implies an upper bound on the 5D cutoff:
Table 2: Bounds on the NP scale (at 95% C.L.) for some representative ∆F = 1 [109] and ∆F = 2 [7] MFV operators (assuming effective coupling ±1/Λ²), and the corresponding observables used to set the bounds.

Observable                      Experiment              LMFV prediction    SM prediction
β_s from A_CP(B_s → ψφ)         [0.10, 1.44] @ 95% CL   0.04(5)            0.04(2)

Table 3: Some predictions derived in the LMFV framework, compared to the SM [78].

However, the results derived for the LMFV case are in principle still valid for a large effective top
Otherwise the U_i depend on the broken generators and ρ̃_i. They form a nonlinear realization of the full flavor group. In particular, Eq. (112) defines U_i(V_i, ρ̃_i) by requiring that ρ̃'_i is of the same form as ρ̃_i, Eq. (111). Consequently ρ̃_i is shifted under G_SM/H_SM, and can be set to a convenient value as discussed below. Under H_SM
Table 5: The phenomenological upper bounds on chirality-mixing (δ^q_ij)_LR, where q = u, d. The constraints are given for m_q̃ = 1 TeV and x ≡ m²_g̃/m²_q̃ = 1. The constraints on δ^d_12,13, δ^u_12, δ^d_23
and the contributions to the neutron EDM, which generically require M_KK > O(10 TeV) [66, 67], are a clear manifestation of the RS little CP problem.

[Table: Observable | M^min_G [TeV] | y^min_5D or f^max_Q3 | IR Higgs (β = 0) | IR Higgs]
This set of lectures discusses the quark sector only. Many of the concepts that are explained here can be directly applied to the lepton sector.

2. The SM contains an additional flavor diagonal CP violating parameter, namely the strong CP phase. However, experimental data constrain it to be smaller than O(10^{-10}), hence it is negligibly small.

Unlike, say, the case of the S electroweak parameter, where in general one cannot associate an approximate symmetry with the limit of small NP contributions to S.

At the quantum level, a linear combination of the diagonal U(1)'s inside the U(3)'s, which corresponds to the axial current, is anomalous.

5. More precisely, only the combination U(1)_{B-L} is non-anomalous.

To get to this limit formally, one can think of a model where the Higgs field is an adjoint of SU(2) and a singlet of color and hypercharge. In this case the Higgs vacuum expectation value (VEV) preserves a U(1) gauge symmetry, and the W^3 would therefore remain massless. However, the W^± would acquire masses of the order of the Higgs VEV, and therefore charged current interactions would be suppressed.

Note that the interaction basis is not unique, given that g^±_2 is invariant under a flavor transformation where Q^u and Q^d are rotated by the same amount; see more in the following.

For simplicity, we only consider cases with hard GIM, in which the dependence on mass differences is polynomial. There is a large class of amplitudes, for example processes that are mediated via penguin diagrams with gluon or photon lines, where the quark mass dependence is more complicated, and may involve logarithms. The suppression of the corresponding amplitudes goes under the name soft GIM [43].

This is definitely correct for CP violating processes, or any that involve the third generation quarks. It also generically holds for new physics MFV models. Within the SM, for CP conserving processes which involve only the first two generations, one can find exceptions, for instance when considering the kaon and D meson mass differences, ∆m_{D,K}.

11. The factor of −i/2 in the cross product is required in order to have the standard geometrical interpretation A × B = |A||B| sin θ_AB, with θ_AB defined through the scalar product as in Eq. (38).

This use of effective field theory to describe NP contributions will be explained in detail in the next section. Note also that we employ here a slightly different notation, more suitable for the current needs, than in the next section.

We denote the Gell-Mann matrices by Λ_i, where tr(Λ_i Λ_j) = 2δ_ij. Choosing this convention allows us to keep the definitions of Eq. (38).

(...)/2, λ³ y_t², −η λ³ y_t², −λ² y_t², −η λ⁴ y_t², −y_t²/√3,  (56)  neglecting the mass of the up quark.

Note that the operator Q_1 has actually already been defined in Eq. (39) in the previous section, using a slightly different notation.

16. When a bound is written in terms of an energy scale, the running should start from this scale, which is not known a priori. This is done in an iterative process, which converges quickly due to the very slow running of α_s at high scales.

Strictly speaking, this does not have to be the case, as these operators might be generated by different processes in the underlying theory.

Some progress has been recently achieved at the Tevatron in this direction [206], and one might expect that the LHC would perform at least as well, given that its detectors are better (we thank Gustaaf Brooijmans for bringing this point to our attention).
Acknowledgements

GP thanks the organizers of TASI09 for the successful school and great hospitality. GP is the Shlomo and Michla Tomarin career development chair. The work of GP is supported by the Israel Science Foundation (grant #1087/09), an EU-FP7 Marie Curie IRG fellowship, and the Peter & Patricia Gruber Award.
References

[1] N. Cabibbo, "Unitary Symmetry and Leptonic Decays," Phys. Rev. Lett. 10 (1963) 531-533.
[2] M. Kobayashi and T. Maskawa, "CP Violation in the Renormalizable Theory of Weak Interaction," Prog. Theor. Phys. 49 (1973) 652-657.
[3] Z. Ligeti, "The CKM matrix and CP violation," Int. J. Mod. Phys. A20 (2005) 5105-5118, hep-ph/0408267.
[4] K. Agashe, M. Papucci, G. Perez, and D. Pirjol, "Next to minimal flavor violation," hep-ph/0509117.
[5] Z. Ligeti, M. Papucci, and G. Perez, "Implications of the measurement of the B0s - anti-B0s mass difference," Phys. Rev. Lett. 97 (2006) 101801, hep-ph/0604112.
[6] A. J. Buras, "Testing the CKM Picture of Flavour and CP Violation in Rare K and B Decays and Particle-Antiparticle Mixing," Prog. Theor. Phys. 122 (2009) 145-168, 0904.4917.
[7] UTfit Collaboration, M. Bona et al., "Model-independent constraints on ∆F=2 operators and the scale of new physics," JHEP 03 (2008) 049, 0707.0636.
[8] J. Charles, "Status of the CKM matrix and a simple new physics scenario," Nucl. Phys. Proc. Suppl. 185 (2008) 17-21.
[9] G. Blaylock, A. Seiden, and Y. Nir, "The Role of CP violation in D0 anti-D0 mixing," Phys. Lett. B355 (1995) 555-560, hep-ph/9504306.
[10] S. Bergmann, Y. Grossman, Z. Ligeti, Y. Nir, and A. A. Petrov, "Lessons from CLEO and FOCUS Measurements of D0-anti-D0 Mixing Parameters," Phys. Lett. B486 (2000) 418-425, hep-ph/0005181.
[11] S. Bianco, F. L. Fabbri, D. Benson, and I. Bigi, "A Cicerone for the physics of charm," Riv. Nuovo Cim. 26N7 (2003) 1-200, hep-ex/0309021.
[12] E. Golowich, S. Pakvasa, and A. A. Petrov, "New physics contributions to the lifetime difference in D0 - anti-D0 mixing," Phys. Rev. Lett. 98 (2007) 181801, hep-ph/0610039.
[13] E. Golowich, J. Hewett, S. Pakvasa, and A. A. Petrov, "Implications of D0 - anti-D0 Mixing for New Physics," Phys. Rev. D76 (2007) 095009, 0705.3650.
[14] A. F. Falk, Y. Grossman, Z. Ligeti, and A. A. Petrov, "SU(3) breaking and D0 - anti-D0 mixing," Phys. Rev. D65 (2002) 054034, hep-ph/0110317.
[15] A. F. Falk, Y. Grossman, Z. Ligeti, Y. Nir, and A. A. Petrov, "The D0 - anti-D0 mass difference from a dispersion relation," Phys. Rev. D69 (2004) 114021, hep-ph/0402204.
[16] Y. Grossman, A. L. Kagan, and Y. Nir, "New physics and CP violation in singly Cabibbo suppressed D decays," Phys. Rev. D75 (2007) 036008, hep-ph/0609178.
[17] M. Ciuchini et al., "D - anti-D mixing and new physics: General considerations and constraints on the MSSM," Phys. Lett. B655 (2007) 162-166, hep-ph/0703204.
[18] O. Gedalia, Y. Grossman, Y. Nir, and G. Perez, "Lessons from Recent Measurements of D - anti-D Mixing," Phys. Rev. D80 (2009) 055024, 0906.1879.
[19] E. Golowich, J. Hewett, S. Pakvasa, and A. A. Petrov, "Relating D0-anti-D0 Mixing and D0 → l+l- with New Physics," Phys. Rev. D79 (2009) 114030, 0903.2830.
[20] I. I. Bigi, M. Blanke, A. J. Buras, and S. Recksiegel, "CP Violation in D0 - anti-D0 Oscillations: General Considerations and Applications to the Littlest Higgs Model with T-Parity," JHEP 07 (2009) 097, 0904.1545.
[21] I. I. Bigi, "No Pain, No Gain - On the Challenges and Promises of Charm Studies," 0907.2950.
[22] A. L. Kagan and M. D. Sokoloff, "On Indirect CP Violation and Implications for D0 - anti-D0 and Bs - anti-Bs mixing," Phys. Rev. D80 (2009) 076008, 0907.3917.
[23] K. Blum, Y. Grossman, Y. Nir, and G. Perez, "Combining K - anti-K mixing and D - anti-D mixing to constrain the flavor structure of new physics," Phys. Rev. Lett. 102 (2009) 211802, 0903.2118.
[24] Y. Grossman, Y. Nir, and G. Perez, "Testing New Indirect CP Violation," Phys. Rev. Lett. 103 (2009) 071602, 0904.0305.
[25] S. L. Glashow, J. Iliopoulos, and L. Maiani, "Weak Interactions with Lepton-Hadron Symmetry," Phys. Rev. D2 (1970) 1285-1292.
[26] M. K. Gaillard and B. W. Lee, "Rare Decay Modes of the K-Mesons in Gauge Theories," Phys. Rev. D10 (1974) 897.
[27] P. J. Franzini, "B anti-B Mixing: A Review of Recent Progress," Phys. Rept. 173 (1989) 1.
[28] R. Barbieri and A. Strumia, "What is the limit on the Higgs mass?," Phys. Lett. B462 (1999) 144-149, hep-ph/9905281.
[29] R. Barbieri and A. Strumia, "The 'LEP paradox'," hep-ph/0007265.
[30] Y. Nir, "CP violation: A New era," hep-ph/0109090.
[31] Y. Nir, "CP violation in meson decays," hep-ph/0510413.
[32] Y. Nir, "Probing new physics with flavor physics (and probing flavor physics with new physics)," 0708.1872.
[33] Particle Data Group Collaboration, C. Amsler et al., "Review of particle physics," Phys. Lett. B667 (2008) 1.
[34] C. Jarlskog, "Commutator of the Quark Mass Matrices in the Standard Electroweak Model and a Measure of Maximal CP Violation," Phys. Rev. Lett. 55 (1985) 1039.
[35] C. Jarlskog, "A Basis Independent Formulation of the Connection Between Quark Mass Matrices, CP Violation and Experiment," Z. Phys. C29 (1985) 491-497.
[36] L. Wolfenstein, "Parametrization of the Kobayashi-Maskawa Matrix," Phys. Rev. Lett. 51 (1983) 1945.
[37] G. D'Ambrosio, G. F. Giudice, G. Isidori, and A. Strumia, "Minimal flavour violation: An effective field theory approach," Nucl. Phys. B645 (2002) 155-187, hep-ph/0207036.
[38] G. Perez, "Brief Introduction to Flavor Physics," 0911.2092.
[39] R. Harnik, G. D. Kribs, and G. Perez, "A universe without weak interactions," Phys. Rev. D74 (2006) 035006, hep-ph/0604027.
[40] A. L. Kagan, G. Perez, T. Volansky, and J. Zupan, "General Minimal Flavor Violation," Phys. Rev. D80 (2009) 076002, 0903.1794.
[41] M. Schmaltz and D. Tucker-Smith, "Little Higgs Review," Ann. Rev. Nucl. Part. Sci. 55 (2005) 229-270, hep-ph/0502182.
[42] ARGUS Collaboration, H. Albrecht et al., "Observation of B0 - anti-B0 Mixing," Phys. Lett. B192 (1987) 245.
[43] G. Buchalla, A. J. Buras, and M. E. Lautenbacher, "Weak decays beyond leading logarithms," Rev. Mod. Phys. 68 (1996) 1125-1144, hep-ph/9512380.
[44] L. S. Littenberg, "The CP Violating Decay K0(L) → pi0 Neutrino anti-neutrino," Phys. Rev. D39 (1989) 3322-3324.
[45] G. Buchalla and A. J. Buras, "K → pi nu anti-nu and high precision determinations of the CKM matrix," Phys. Rev. D54 (1996) 6782-6789, hep-ph/9607447.
[46] Y. Grossman, Y. Nir, and R. Rattazzi, "CP violation beyond the standard model," Adv. Ser. Direct. High Energy Phys. 15 (1998) 755-794, hep-ph/9701231.
[47] Y. Grossman and Y. Nir, "K(L) → pi0 nu anti-nu beyond the standard model," Phys. Lett. B398 (1997) 163-168, hep-ph/9701313.
[48] Y. Nir and M. P. Worah, "Probing the flavor and CP structure of supersymmetric models with K → pi nu anti-nu decays," Phys. Lett. B423 (1998) 319-326, hep-ph/9711215.
[49] G. Buchalla and G. Isidori, "The CP conserving contribution to K(L) → pi0 nu anti-nu in the standard model," Phys. Lett. B440 (1998) 170-178, hep-ph/9806501.
[50] G. Perez, "Implications of neutrino masses on the K(L) → pi0 nu anti-nu decay," JHEP 09 (1999) 019, hep-ph/9907205.
[51] G. Perez, "The K(L) → pi0 nu anti-nu decay in models of extended scalar sector," JHEP 02 (2000) 043, hep-ph/0001037.
[52] Y. Grossman, G. Isidori, and H. Murayama, "Lepton flavor mixing and K → pi nu anti-nu decays," Phys. Lett. B588 (2004) 74-80, hep-ph/0311353.
[53] A. J. Buras, F. Schwab, and S. Uhlig, "Waiting for precise measurements of K+ → π+ νν and KL → π0 νν," Rev. Mod. Phys. 80 (2008) 965-1007, hep-ph/0405132.
[54] A. J. Buras, T. Ewerth, S. Jager, and J. Rosiek, "K+ → pi+ nu anti-nu and K(L) → pi0 nu anti-nu decays in the general MSSM," Nucl. Phys. B714 (2005) 103-136, hep-ph/0408142.
[55] A. J. Buras, "Minimal flavor violation," Acta Phys. Polon. B34 (2003) 5615-5668, hep-ph/0310208.
[56] A. J. Buras, "Flavour physics and CP violation," hep-ph/0505175.
[57] G. Isidori, "Effective Theories for Flavour Physics beyond the Standard Model," 0908.0404.
[58] O. Gedalia, L. Mannelli, and G. Perez, "Covariant Description of Flavor Violation at the LHC," 1002.0778.
[59] O. Gedalia, L. Mannelli, and G. Perez, "Covariant Description of Flavor Conversion at the LHC Era," 1003.3869.
[60] G. Colangelo, E. Nikolidakis, and C. Smith, "Supersymmetric models with minimal flavour violation and their running," Eur. Phys. J. C59 (2009) 75-98, 0807.0801.
[61] L. Mercolli and C. Smith, "EDM constraints on flavored CP-violating phases," Nucl. Phys. B817 (2009) 1-24, 0902.1949.
[62] J. Ellis, R. N. Hodgkinson, J. S. Lee, and A. Pilaftsis, "Flavour Geometry and Effective Yukawa Couplings in the MSSM," JHEP 02 (2010) 016, 0911.3611.
[63] D. Becirevic et al., "Bd - anti-Bd mixing and the Bd → J/ψ Ks asymmetry in general SUSY models," Nucl. Phys. B634 (2002) 105-119, hep-ph/0112303.
[64] M. Ciuchini et al., "Delta M(K) and epsilon(K) in SUSY at the next-to-leading order," JHEP 10 (1998) 008, hep-ph/9808328.
[65] S. Davidson, G. Isidori, and S. Uhlig, "Solving the flavour problem with hierarchical fermion wave functions," Phys. Lett. B663 (2008) 73-79, 0711.3376.
[66] K. Agashe, G. Perez, and A. Soni, "B-factory signals for a warped extra dimension," Phys. Rev. Lett. 93 (2004) 201804, hep-ph/0406101.
[67] K. Agashe, G. Perez, and A. Soni, "Flavor structure of warped extra dimension models," Phys. Rev. D71 (2005) 016002, hep-ph/0408134.
[68] J. P. Silva and L. Wolfenstein, "Detecting new physics from CP-violating phase measurements in B decays," Phys. Rev. D55 (1997) 5331-5333, hep-ph/9610208.
[69] Y. Grossman, Y. Nir, and M. P. Worah, "A model independent construction of the unitarity triangle," Phys. Lett. B407 (1997) 307-313, hep-ph/9704287.
[70] J. M. Soares and L. Wolfenstein, "CP violation in the decays B0 → Psi K(S) and B0 → pi+ pi-: A Probe for new physics," Phys. Rev. D47 (1993) 1021-1025.
[71] N. G. Deshpande, B. Dutta, and S. Oh, "SUSY GUTs contributions and model independent extractions of CP phases," Phys. Rev. Lett. 77 (1996) 4499-4502, hep-ph/9608231.
[72] A. G. Cohen, D. B. Kaplan, F. Lepeintre, and A. E. Nelson, "B factory physics from effective supersymmetry," Phys. Rev. Lett. 78 (1997) 2300-2303, hep-ph/9610252.
[73] G. Barenboim, G. Eyal, and Y. Nir, "Constraining new physics with the CDF measurement of CP violation in B → ψKs," Phys. Rev. Lett. 83 (1999) 4486-4489, hep-ph/9905397.
[74] G. Eyal, Y. Nir, and G. Perez, "Implications of a small CP asymmetry in B → ψKS," JHEP 08 (2000) 028, hep-ph/0008009.
[75] S. Laplace, Z. Ligeti, Y. Nir, and G. Perez, "Implications of the CP asymmetry in semileptonic B decay," Phys. Rev. D65 (2002) 094040, hep-ph/0202010.
[76] V. Tisserand, "CKM fits as of winter 2009 and sensitivity to New Physics," 0905.1572.
[77] M. Bona et al., "Status of the Unitarity Triangle Analysis," 0909.5065.
[78] G. Isidori, Y. Nir, and G. Perez, "Flavor Physics Constraints for Physics Beyond the Standard Model," 1002.0900.
[79] CDF Collaboration, T. Aaltonen et al., "First Flavor-Tagged Determination of Bounds on Mixing-Induced CP Violation in B0s → J/ψφ Decays," Phys. Rev. Lett. 100 (2008) 161802, 0712.2397.
[80] D0 Collaboration, V. M. Abazov et al., "Measurement of B0s mixing parameters from the flavor-tagged decay B0s → J/ψφ," Phys. Rev. Lett. 101 (2008) 241801, 0802.2255.
[81] CKMfitter Group Collaboration, J. Charles et al., "CP violation and the CKM matrix: Assessing the impact of the asymmetric B factories," Eur. Phys. J. C41 (2005) 1-131, hep-ph/0406184.
[82] D. Pirjol and J. Zupan, "Predictions for b → s s dbar, d d sbar decays in the SM and with new physics," JHEP 02 (2010) 028, 0908.3150.
[83] P. J. Fox, Z. Ligeti, M. Papucci, G. Perez, and M. D. Schwartz, "Deciphering top flavor violation at the LHC with B factories," Phys. Rev. D78 (2008) 054008, 0704.1482.
[84] BABAR Collaboration, B. Aubert et al., "Measurement of the B → Xs ℓ+ℓ− branching fraction with a sum over exclusive modes," Phys. Rev. Lett. 93 (2004) 081802, hep-ex/0404006.
[85] Belle Collaboration, M. Iwasaki et al., "Improved measurement of the electroweak penguin process B → Xs l+ l-," Phys. Rev. D72 (2005) 092005, hep-ex/0503044.
[86] ATLAS Collaboration, J. Carvalho et al., "Study of ATLAS sensitivity to FCNC top decays," Eur. Phys. J. C52 (2007) 999-1019, 0712.1127.
[87] R. S. Chivukula and H. Georgi, "Composite Technicolor Standard Model," Phys. Lett. B188 (1987) 99.
[88] L. J. Hall and L. Randall, "Weak scale effective supersymmetry," Phys. Rev. Lett. 65 (1990) 2939-2942.
[89] E. Gabrielli and G. F. Giudice, "Supersymmetric corrections to epsilon prime / epsilon at the leading order in QCD and QED," Nucl. Phys. B433 (1995) 3-25, hep-lat/9407029.
[90] A. Ali and D. London, "Profiles of the unitarity triangle and CP-violating phases in the standard model and supersymmetric theories," Eur. Phys. J. C9 (1999) 687-703, hep-ph/9903535.
Universal unitarity triangle and physics beyond the standard model. A J Buras, P Gambino, M Gorbahn, S Jager, L Silvestrini, hep-ph/0007085Phys. Lett. 500A. J. Buras, P. Gambino, M. Gorbahn, S. Jager, and L. Silvestrini, "Universal unitarity triangle and physics beyond the standard model," Phys. Lett. B500 (2001) 161-167, hep-ph/0007085.
Minimal flavor violation in the lepton sector. V Cirigliano, B Grinstein, G Isidori, M B Wise, hep-ph/0507001Nucl. Phys. 728V. Cirigliano, B. Grinstein, G. Isidori, and M. B. Wise, "Minimal flavor violation in the lepton sector," Nucl. Phys. B728 (2005) 121-134, hep-ph/0507001.
Various definitions of minimal flavour violation for leptons. S Davidson, F Palorini, hep-ph/0607329Phys. Lett. 642S. Davidson and F. Palorini, "Various definitions of minimal flavour violation for leptons," Phys. Lett. B642 (2006) 72-80, hep-ph/0607329.
Minimal Flavour Seesaw Models. M B Gavela, T Hambye, D Hernandez, P Hernandez, JHEP. 09M. B. Gavela, T. Hambye, D. Hernandez, and P. Hernandez, "Minimal Flavour Seesaw Models," JHEP 09 (2009) 038, 0906.1461.
Low-energy dynamical supersymmetry breaking simplified. M Dine, A E Nelson, Y Shirman, hep-ph/9408384Phys. Rev. 51M. Dine, A. E. Nelson, and Y. Shirman, "Low-energy dynamical supersymmetry breaking simplified," Phys. Rev. D51 (1995) 1362-1370, hep-ph/9408384.
New tools for low-energy dynamical supersymmetry breaking. M Dine, A E Nelson, Y Nir, Y Shirman, hep-ph/9507378Phys. Rev. 53M. Dine, A. E. Nelson, Y. Nir, and Y. Shirman, "New tools for low-energy dynamical supersymmetry breaking," Phys. Rev. D53 (1996) 2658-2669, hep-ph/9507378.
Out of this world supersymmetry breaking. L Randall, R Sundrum, hep-th/9810155Nucl. Phys. 557L. Randall and R. Sundrum, "Out of this world supersymmetry breaking," Nucl. Phys. B557 (1999) 79-118, hep-th/9810155.
Gaugino Mass without Singlets. G F Giudice, M A Luty, H Murayama, R Rattazzi, hep-ph/9810442JHEP. 1227G. F. Giudice, M. A. Luty, H. Murayama, and R. Rattazzi, "Gaugino Mass without Singlets," JHEP 12 (1998) 027, hep-ph/9810442.
Comments on the holographic picture of the Randall-Sundrum model. R Rattazzi, A Zaffaroni, hep-th/0012248JHEP. 0421R. Rattazzi and A. Zaffaroni, "Comments on the holographic picture of the Randall-Sundrum model," JHEP 04 (2001) 021, hep-th/0012248.
A GIM Mechanism from Extra Dimensions. G Cacciapaglia, JHEP. 04G. Cacciapaglia et al., "A GIM Mechanism from Extra Dimensions," JHEP 04 (2008) 006, 0709.1714.
Flavor from Minimal Flavor Violation & a Viable Randall-Sundrum Model. A L Fitzpatrick, G Perez, L Randall, 0710.1869A. L. Fitzpatrick, G. Perez, and L. Randall, "Flavor from Minimal Flavor Violation & a Viable Randall-Sundrum Model," 0710.1869.
Natural Neutrino Masses and Mixings from Warped Geometry. G Perez, L Randall, JHEP. 01G. Perez and L. Randall, "Natural Neutrino Masses and Mixings from Warped Geometry," JHEP 01 (2009) 077, 0805.4652.
A Simple Flavor Protection for RS. C Csaki, A Falkowski, A Weiler, Phys. Rev. 80C. Csaki, A. Falkowski, and A. Weiler, "A Simple Flavor Protection for RS," Phys. Rev. D80 (2009) 016001, 0806.3757.
Flavor Alignment via Shining in RS. C Csaki, G Perez, Z Surujon, A Weiler, 0907.0474C. Csaki, G. Perez, Z. Surujon, and A. Weiler, "Flavor Alignment via Shining in RS," 0907.0474.
Natural Conservation Laws for Neutral Currents. S L Glashow, S Weinberg, Phys. Rev. 151958S. L. Glashow and S. Weinberg, "Natural Conservation Laws for Neutral Currents," Phys. Rev. D15 (1977) 1958.
BOUNDS ON CHARGED HIGGS PROPERTIES FROM CP VIOLATION. G G Athanasiu, F J Gilman, Phys. Lett. 153274G. G. Athanasiu and F. J. Gilman, "BOUNDS ON CHARGED HIGGS PROPERTIES FROM CP VIOLATION," Phys. Lett. B153 (1985) 274.
Effects of Charged Higgs Bosons on the Processes b → s Gamma, b → s g* and b → s Lepton+ Lepton. W.-S Hou, R S Willey, Phys. Lett. 202591W.-S. Hou and R. S. Willey, "Effects of Charged Higgs Bosons on the Processes b → s Gamma, b → s g* and b → s Lepton+ Lepton-," Phys. Lett. B202 (1988) 591.
B → X(s) e+ e-in the Six Quark Model. B Grinstein, M J Savage, M B Wise, Nucl. Phys. 319B. Grinstein, M. J. Savage, and M. B. Wise, "B → X(s) e+ e-in the Six Quark Model," Nucl. Phys. B319 (1989) 271-290.
Constraints on New Physics in MFV models: A Model-independent analysis of ∆ F = 1 processes. T Hurth, G Isidori, J F Kamenik, F Mescia, 0807.5039Nucl. Phys. 808T. Hurth, G. Isidori, J. F. Kamenik, and F. Mescia, "Constraints on New Physics in MFV models: A Model-independent analysis of ∆ F = 1 processes," Nucl. Phys. B808 (2009) 326-346, 0807.5039.
On K beyond lowest order in the Operator Product Expansion. A J Buras, D Guadagnoli, G Isidori, 1002.3612A. J. Buras, D. Guadagnoli, and G. Isidori, "On K beyond lowest order in the Operator Product Expansion," 1002.3612.
The Top quark mass in supersymmetric SO(10) unification. L J Hall, R Rattazzi, U Sarid, hep-ph/9306309Phys. Rev. 50L. J. Hall, R. Rattazzi, and U. Sarid, "The Top quark mass in supersymmetric SO(10) unification," Phys. Rev. D50 (1994) 7048-7065, hep-ph/9306309.
Finite supersymmetric threshold corrections to CKM matrix elements in the large tan Beta regime. T Blazek, S Raby, S Pokorski, hep-ph/9504364Phys. Rev. 52T. Blazek, S. Raby, and S. Pokorski, "Finite supersymmetric threshold corrections to CKM matrix elements in the large tan Beta regime," Phys. Rev. D52 (1995) 4151-4158, hep-ph/9504364.
Scalar flavor changing neutral currents in the large tan beta limit. G Isidori, A Retico, hep-ph/0110121JHEP. 00111G. Isidori and A. Retico, "Scalar flavor changing neutral currents in the large tan beta limit," JHEP 11 (2001) 001, hep-ph/0110121.
Enhanced charged Higgs boson effects in B-→ tau anti-neutrino, mu anti-neutrino and b → tau anti-neutrino + X. W.-S Hou, Phys. Rev. 48W.-S. Hou, "Enhanced charged Higgs boson effects in B-→ tau anti-neutrino, mu anti-neutrino and b → tau anti-neutrino + X," Phys. Rev. D48 (1993) 2342-2344.
The effect of H+-on B+-→ tau+-nu/tau and B+-→ mu+-nu/mu. A G Akeroyd, S Recksiegel, hep-ph/0306037J. Phys. 29A. G. Akeroyd and S. Recksiegel, "The effect of H+-on B+-→ tau+-nu/tau and B+-→ mu+-nu/mu," J. Phys. G29 (2003) 2311-2317, hep-ph/0306037.
Hints of large tan(beta) in flavour physics. G Isidori, P Paradisi, hep-ph/0605012Phys. Lett. 639G. Isidori and P. Paradisi, "Hints of large tan(beta) in flavour physics," Phys. Lett. B639 (2006) 499-507, hep-ph/0605012.
b → s gamma and supersymmetry with large tan(beta). M S Carena, D Garcia, U Nierste, C E M Wagner, hep-ph/0010003Phys. Lett. 499M. S. Carena, D. Garcia, U. Nierste, and C. E. M. Wagner, "b → s gamma and supersymmetry with large tan(beta)," Phys. Lett. B499 (2001) 141-146, hep-ph/0010003.
B → X/s gamma in supersymmetry: Large contributions beyond the leading order. G Degrassi, P Gambino, G F Giudice, hep-ph/0009337JHEP. 12G. Degrassi, P. Gambino, and G. F. Giudice, "B → X/s gamma in supersymmetry: Large contributions beyond the leading order," JHEP 12 (2000) 009, hep-ph/0009337.
Effective Lagrangian for thē tbH + interaction in the MSSM and charged Higgs phenomenology. M S Carena, D Garcia, U Nierste, C E M Wagner, hep-ph/9912516Nucl. Phys. 577M. S. Carena, D. Garcia, U. Nierste, and C. E. M. Wagner, "Effective Lagrangian for thē tbH + interaction in the MSSM and charged Higgs phenomenology," Nucl. Phys. B577 (2000) 88-120, hep-ph/9912516.
∆M d,s , B 0 d, s → µ + µ − and B → X s γ in supersymmetry at large tan β. A J Buras, P H Chankowski, J Rosiek, L Slawianowska, hep-ph/0210145Nucl. Phys. 659A. J. Buras, P. H. Chankowski, J. Rosiek, and L. Slawianowska, "∆M d,s , B 0 d, s → µ + µ − and B → X s γ in supersymmetry at large tan β," Nucl. Phys. B659 (2003) 3, hep-ph/0210145.
∆ M(s) / ∆ M(d), sin 2 Beta and the angle γ in the presence of new ∆F = 2 operators. A J Buras, P H Chankowski, J Rosiek, L Slawianowska, hep-ph/0107048Nucl. Phys. 619A. J. Buras, P. H. Chankowski, J. Rosiek, and L. Slawianowska, "∆ M(s) / ∆ M(d), sin 2 Beta and the angle γ in the presence of new ∆F = 2 operators," Nucl. Phys. B619 (2001) 434-466, hep-ph/0107048.
Higgs-mediated FCNC in supersymmetric models with large tan(beta). C Hamzaoui, M Pospelov, M Toharia, hep-ph/9807350Phys. Rev. 5995005C. Hamzaoui, M. Pospelov, and M. Toharia, "Higgs-mediated FCNC in supersymmetric models with large tan(beta)," Phys. Rev. D59 (1999) 095005, hep-ph/9807350.
Dileptonic decay of B/s meson in SUSY models with large tan(beta). S R Choudhury, N Gaur, hep-ph/9810307Phys. Lett. 451S. R. Choudhury and N. Gaur, "Dileptonic decay of B/s meson in SUSY models with large tan(beta)," Phys. Lett. B451 (1999) 86-92, hep-ph/9810307.
Higgs mediated B 0 → µ + µ − in minimal supersymmetry. K S Babu, C F Kolda, hep-ph/9909476Phys. Rev. Lett. 84K. S. Babu and C. F. Kolda, "Higgs mediated B 0 → µ + µ − in minimal supersymmetry," Phys. Rev. Lett. 84 (2000) 228-231, hep-ph/9909476.
B-Meson Observables in the Maximally CP-Violating MSSM with Minimal Flavour Violation. J R Ellis, J S Lee, A Pilaftsis, Phys. Rev. 76J. R. Ellis, J. S. Lee, and A. Pilaftsis, "B-Meson Observables in the Maximally CP-Violating MSSM with Minimal Flavour Violation," Phys. Rev. D76 (2007) 115011, 0708.2079.
The supersymmetric Higgs sector and B − Bbar mixing for large tan β. M Gorbahn, S Jager, U Nierste, S Trine, 0901.2065M. Gorbahn, S. Jager, U. Nierste, and S. Trine, "The supersymmetric Higgs sector and B − Bbar mixing for large tan β," 0901.2065.
Revisiting CP-violation in Minimal Flavour Violation. L Mercolli, 0903.4633L. Mercolli, "Revisiting CP-violation in Minimal Flavour Violation," 0903.4633.
The SUSY CP Problem and the MFV Principle. P Paradisi, D M Straub, 0906.4551Phys. Lett. 684P. Paradisi and D. M. Straub, "The SUSY CP Problem and the MFV Principle," Phys. Lett. B684 (2010) 147-153, 0906.4551.
Large Top Mass and Non-Linear Representation of Flavour Symmetry. T Feldmann, T Mannel, Phys. Rev. Lett. 100T. Feldmann and T. Mannel, "Large Top Mass and Non-Linear Representation of Flavour Symmetry," Phys. Rev. Lett. 100 (2008) 171601, 0801.1802.
S Weinberg, The quantum theory of fields. Univ. Pr.Cambridge, UK2pModern applicationsS. Weinberg, "The quantum theory of fields. Vol. 2: Modern applications,". Cambridge, UK: Univ. Pr. (1996) 489 p.
Constraining models of new physics in light of recent experimental results on a(ψK S. S Bergmann, G Perez, hep-ph/0103299Phys. Rev. 64S. Bergmann and G. Perez, "Constraining models of new physics in light of recent experimental results on a(ψK S ," Phys. Rev. D64 (2001) 115009, hep-ph/0103299.
Upper bounds on rare K and B decays from minimal flavor violation. C Bobeth, hep-ph/0505110Nucl. Phys. 726C. Bobeth et al., "Upper bounds on rare K and B decays from minimal flavor violation," Nucl. Phys. B726 (2005) 252-274, hep-ph/0505110.
Enhancement of B(anti-B(d) → µ + µ −) / B(anti-B(s) → µ + µ −) in the MSSM with minimal flavor violation and large tan beta. C Bobeth, T Ewerth, F Kruger, J Urban, hep-ph/0204225Phys. Rev. 6674021C. Bobeth, T. Ewerth, F. Kruger, and J. Urban, "Enhancement of B(anti-B(d) → µ + µ −) / B(anti-B(s) → µ + µ −) in the MSSM with minimal flavor violation and large tan beta," Phys. Rev. D66 (2002) 074021, hep-ph/0204225.
CP violation in radiative b decays. J M Soares, Nucl. Phys. 367J. M. Soares, "CP violation in radiative b decays," Nucl. Phys. B367 (1991) 575-590.
Low Energy Probes of CP Violation in a Flavor Blind MSSM. W Altmannshofer, A J Buras, P Paradisi, 0808.0707Phys. Lett. 669W. Altmannshofer, A. J. Buras, and P. Paradisi, "Low Energy Probes of CP Violation in a Flavor Blind MSSM," Phys. Lett. B669 (2008) 239-245, 0808.0707.
Correlation between ∆ M(s) and B 0 (s, d ) → µ + µ − in supersymmetry at large tan beta. A J Buras, P H Chankowski, J Rosiek, L Slawianowska, hep-ph/0207241Phys. Lett. 546A. J. Buras, P. H. Chankowski, J. Rosiek, and L. Slawianowska, "Correlation between ∆ M(s) and B 0 (s, d ) → µ + µ − in supersymmetry at large tan beta," Phys. Lett. B546 (2002) 96-107, hep-ph/0207241.
Untagged B → X/s+d gamma CP asymmetry as a probe for new physics. T Hurth, E Lunghi, W Porod, hep-ph/0312260Nucl. Phys. 704T. Hurth, E. Lunghi, and W. Porod, "Untagged B → X/s+d gamma CP asymmetry as a probe for new physics," Nucl. Phys. B704 (2005) 56-74, hep-ph/0312260.
B-parameters of the complete set of matrix elements of Delta(B) = 2 operators from the lattice. D Becirevic, V Gimenez, G Martinelli, M Papinutto, J Reyes, hep-lat/0110091JHEP. 0425D. Becirevic, V. Gimenez, G. Martinelli, M. Papinutto, and J. Reyes, "B-parameters of the complete set of matrix elements of Delta(B) = 2 operators from the lattice," JHEP 04 (2002) 025, hep-lat/0110091.
Neutral B Meson Mixing in Unquenched Lattice QCD. E Gamiz, HPQCD CollaborationC T H Davies, HPQCD CollaborationG P Lepage, HPQCD CollaborationJ Shigemitsu, HPQCD CollaborationM Wingate, HPQCD CollaborationPhys. Rev. 80HPQCD Collaboration, E. Gamiz, C. T. H. Davies, G. P. Lepage, J. Shigemitsu, and M. Wingate, "Neutral B Meson Mixing in Unquenched Lattice QCD," Phys. Rev. D80 (2009) 014503, 0902.1815.
Flavor Changing Processes in Supersymmetric Models with Hybrid Gauge-and Gravity-Mediation. G Hiller, Y Hochberg, Y Nir, 0812.0511JHEP. 11503G. Hiller, Y. Hochberg, and Y. Nir, "Flavor Changing Processes in Supersymmetric Models with Hybrid Gauge-and Gravity-Mediation," JHEP 03 (2009) 115, 0812.0511.
G Hiller, Y Hochberg, Y Nir, 1001.1513Flavor in Supersymmetry: Anarchy versus Structure. 79G. Hiller, Y. Hochberg, and Y. Nir, "Flavor in Supersymmetry: Anarchy versus Structure," JHEP 03 (2010) 079, 1001.1513.
The mass insertion approximation without squark degeneracy. G Raz, hep-ph/0205310Phys. Rev. 6637701G. Raz, "The mass insertion approximation without squark degeneracy," Phys. Rev. D66 (2002) 037701, hep-ph/0205310.
Flavour physics and grand unification. A Masiero, S K Vempati, O Vives, 0711.2903A. Masiero, S. K. Vempati, and O. Vives, "Flavour physics and grand unification," 0711.2903.
B, D and K decays. M Artuso, 0801.1833Eur. Phys. J. 57M. Artuso et al., "B, D and K decays," Eur. Phys. J. C57 (2008) 309-492, 0801.1833.
G Isidori, A Retico, " B S,D → + −, K L → + − , hep-ph/0208159SUSY models with nonminimal sources of flavor mixing. 63G. Isidori and A. Retico, "B s,d → + − and K L → + − in SUSY models with nonminimal sources of flavor mixing," JHEP 09 (2002) 063, hep-ph/0208159.
New constraints on SUSY flavour mixing in light of recent measurements at the Tevatron. J Foster, K Okumura, L Roszkowski, hep-ph/0604121Phys. Lett. 641J. Foster, K.-i. Okumura, and L. Roszkowski, "New constraints on SUSY flavour mixing in light of recent measurements at the Tevatron," Phys. Lett. B641 (2006) 452-460, hep-ph/0604121.
Flavour physics of leptons and dipole moments. M , 0801.1826Eur. Phys. J. 57M. Raidal et al., "Flavour physics of leptons and dipole moments," Eur. Phys. J. C57 (2008) 13-182, 0801.1826.
A complete analysis of FCNC and CP constraints in general SUSY extensions of the standard model. F Gabbiani, E Gabrielli, A Masiero, L Silvestrini, hep-ph/9604387Nucl. Phys. 477F. Gabbiani, E. Gabrielli, A. Masiero, and L. Silvestrini, "A complete analysis of FCNC and CP constraints in general SUSY extensions of the standard model," Nucl. Phys. B477 (1996) 321-352, hep-ph/9604387.
Do squarks have to be degenerate? Constraining the mass splitting with Kaon and D mixing. A Crivellin, M Davidkov, 1002.2653Phys. Rev. 8195004A. Crivellin and M. Davidkov, "Do squarks have to be degenerate? Constraining the mass splitting with Kaon and D mixing," Phys. Rev. D81 (2010) 095004, 1002.2653.
Dynamical supersymmetry breaking. Y Shadmi, Y Shirman, hep-th/9907225Rev. Mod. Phys. 72Y. Shadmi and Y. Shirman, "Dynamical supersymmetry breaking," Rev. Mod. Phys. 72 (2000) 25-64, hep-th/9907225.
Dynamical supersymmetry breaking at low-energies. M Dine, A E Nelson, hep-ph/9303230Phys. Rev. 48M. Dine and A. E. Nelson, "Dynamical supersymmetry breaking at low-energies," Phys. Rev. D48 (1993) 1277-1287, hep-ph/9303230.
General Gauge Mediation. P Meade, N Seiberg, D Shih, 0801.3278Prog. Theor. Phys. Suppl. 177P. Meade, N. Seiberg, and D. Shih, "General Gauge Mediation," Prog. Theor. Phys. Suppl. 177 (2009) 143-158, 0801.3278.
Gaugino mediated supersymmetry breaking. Z Chacko, M A Luty, A E Nelson, E Ponton, hep-ph/9911323JHEP. 013Z. Chacko, M. A. Luty, A. E. Nelson, and E. Ponton, "Gaugino mediated supersymmetry breaking," JHEP 01 (2000) 003, hep-ph/9911323.
Running minimal flavor violation. P Paradisi, M Ratz, R Schieren, C Simonetto, 0805.3989Phys. Lett. 668P. Paradisi, M. Ratz, R. Schieren, and C. Simonetto, "Running minimal flavor violation," Phys. Lett. B668 (2008) 202-209, 0805.3989.
The Standard Model and Supersymmetric Flavor Puzzles at the Large Hadron Collider. J L Feng, C G Lester, Y Nir, Y Shadmi, 0712.0674Phys. Rev. 7776002J. L. Feng, C. G. Lester, Y. Nir, and Y. Shadmi, "The Standard Model and Supersymmetric Flavor Puzzles at the Large Hadron Collider," Phys. Rev. D77 (2008) 076002, 0712.0674.
Phenomenology of extra dimensions. G D Kribs, hep-ph/0605325G. D. Kribs, "Phenomenology of extra dimensions," hep-ph/0605325.
Hierarchies without symmetries from extra dimensions. N Arkani-Hamed, M Schmaltz, hep-ph/9903417Phys. Rev. 6133005N. Arkani-Hamed and M. Schmaltz, "Hierarchies without symmetries from extra dimensions," Phys. Rev. D61 (2000) 033005, hep-ph/9903417.
Electroweak and flavor physics in extensions of the standard model with large extra dimensions. A Delgado, A Pomarol, M Quiros, hep-ph/9911252JHEP. 0130A. Delgado, A. Pomarol, and M. Quiros, "Electroweak and flavor physics in extensions of the standard model with large extra dimensions," JHEP 01 (2000) 030, hep-ph/9911252.
A large mass hierarchy from a small extra dimension. L Randall, R Sundrum, hep-ph/9905221Phys. Rev. Lett. 83L. Randall and R. Sundrum, "A large mass hierarchy from a small extra dimension," Phys. Rev. Lett. 83 (1999) 3370-3373, hep-ph/9905221.
Neutrino masses and mixings in non-factorizable geometry. Y Grossman, M Neubert, hep-ph/9912408Phys. Lett. 474Y. Grossman and M. Neubert, "Neutrino masses and mixings in non-factorizable geometry," Phys. Lett. B474 (2000) 361-371, hep-ph/9912408.
Bulk fields and supersymmetry in a slice of AdS. T Gherghetta, A Pomarol, hep-ph/0003129Nucl. Phys. 586T. Gherghetta and A. Pomarol, "Bulk fields and supersymmetry in a slice of AdS," Nucl. Phys. B586 (2000) 141-162, hep-ph/0003129.
Fermion Masses, Mixings and Proton Decay in a Randall-Sundrum Model. S J Huber, Q Shafi, hep-ph/0010195Phys. Lett. 498S. J. Huber and Q. Shafi, "Fermion Masses, Mixings and Proton Decay in a Randall- Sundrum Model," Phys. Lett. B498 (2001) 256-262, hep-ph/0010195.
Flavor violation and warped geometry. S J Huber, hep-ph/0303183Nucl. Phys. 666S. J. Huber, "Flavor violation and warped geometry," Nucl. Phys. B666 (2003) 269-288, hep-ph/0303183.
Constraints on the bulk standard model in the Randall-Sundrum scenario. G Burdman, hep-ph/0205329Phys. Rev. 6676003G. Burdman, "Constraints on the bulk standard model in the Randall-Sundrum scenario," Phys. Rev. D66 (2002) 076003, hep-ph/0205329.
Flavor violation in warped extra dimensions and CP asymmetries in B decays. G Burdman, hep-ph/0310144Phys. Lett. 590G. Burdman, "Flavor violation in warped extra dimensions and CP asymmetries in B decays," Phys. Lett. B590 (2004) 86-94, hep-ph/0310144.
SU(2) x U(1) Breaking by Vacuum Misalignment. D B Kaplan, H Georgi, Phys. Lett. 136183D. B. Kaplan and H. Georgi, "SU(2) x U(1) Breaking by Vacuum Misalignment," Phys. Lett. B136 (1984) 183.
CALCULATION OF THE COMPOSITE HIGGS MASS. H Georgi, D B Kaplan, P Galison, Phys. Lett. 143152H. Georgi, D. B. Kaplan, and P. Galison, "CALCULATION OF THE COMPOSITE HIGGS MASS," Phys. Lett. B143 (1984) 152.
Composite Higgs and Custodial SU(2). H Georgi, D B Kaplan, Phys. Lett. 145216H. Georgi and D. B. Kaplan, "Composite Higgs and Custodial SU(2)," Phys. Lett. B145 (1984) 216.
The large N limit of superconformal field theories and supergravity. J M Maldacena, hep-th/9711200Adv. Theor. Math. Phys. 2J. M. Maldacena, "The large N limit of superconformal field theories and supergravity," Adv. Theor. Math. Phys. 2 (1998) 231-252, hep-th/9711200.
Anti-de Sitter space and holography. E Witten, hep-th/9802150Adv. Theor. Math. Phys. 2E. Witten, "Anti-de Sitter space and holography," Adv. Theor. Math. Phys. 2 (1998) 253-291, hep-th/9802150.
Holography and phenomenology. N Arkani-Hamed, M Porrati, L Randall, hep-th/0012148JHEP. 0817N. Arkani-Hamed, M. Porrati, and L. Randall, "Holography and phenomenology," JHEP 08 (2001) 017, hep-th/0012148.
Warped/Composite Phenomenology Simplified. R Contino, T Kramer, M Son, R Sundrum, hep-ph/0612180JHEP. 0574R. Contino, T. Kramer, M. Son, and R. Sundrum, "Warped/Composite Phenomenology Simplified," JHEP 05 (2007) 074, hep-ph/0612180.
Combining Direct & Indirect Kaon CP Violation to Constrain the Warped KK Scale. O Gedalia, G Isidori, G Perez, 0905.3264Phys. Lett. 682O. Gedalia, G. Isidori, and G. Perez, "Combining Direct & Indirect Kaon CP Violation to Constrain the Warped KK Scale," Phys. Lett. B682 (2009) 200-206, 0905.3264.
Positive Energy in anti-De Sitter Backgrounds and Gauged Extended Supergravity. P Breitenlohner, D Z Freedman, Phys. Lett. 115197P. Breitenlohner and D. Z. Freedman, "Positive Energy in anti-De Sitter Backgrounds and Gauged Extended Supergravity," Phys. Lett. B115 (1982) 197.
The Flavor of the Composite Pseudo-Goldstone Higgs. C Csaki, A Falkowski, A Weiler, JHEP. 09804C. Csaki, A. Falkowski, and A. Weiler, "The Flavor of the Composite Pseudo-Goldstone Higgs," JHEP 09 (2008) 008, 0804.1954.
A Flavor Protection for Warped Higgsless Models. C Csaki, D Curtin, Phys. Rev. 80C. Csaki and D. Curtin, "A Flavor Protection for Warped Higgsless Models," Phys. Rev. D80 (2009) 015027, 0904.2137.
Flavor Violation Tests of Warped/Composite SM in the Two-Site Approach. K Agashe, A Azatov, L Zhu, Phys. Rev. 79K. Agashe, A. Azatov, and L. Zhu, "Flavor Violation Tests of Warped/Composite SM in the Two-Site Approach," Phys. Rev. D79 (2009) 056006, 0810.1016.
Higgs Mediated FCNC's in Warped Extra Dimensions. A Azatov, M Toharia, L Zhu, Phys. Rev. 80906A. Azatov, M. Toharia, and L. Zhu, "Higgs Mediated FCNC's in Warped Extra Dimensions," Phys. Rev. D80 (2009) 035016, 0906.1990.
Flavor Physics in the Randall-Sundrum Model: I. Theoretical Setup and Electroweak Precision Tests. S Casagrande, F Goertz, U Haisch, M Neubert, T Pfoh, JHEP. 10S. Casagrande, F. Goertz, U. Haisch, M. Neubert, and T. Pfoh, "Flavor Physics in the Randall-Sundrum Model: I. Theoretical Setup and Electroweak Precision Tests," JHEP 10 (2008) 094, 0807.4937.
∆ F=2 Observables and Fine-Tuning in a Warped Extra Dimension with Custodial Protection. M Blanke, A J Buras, B Duling, S Gori, A Weiler, JHEP. 03M. Blanke, A. J. Buras, B. Duling, S. Gori, and A. Weiler, "∆ F=2 Observables and Fine-Tuning in a Warped Extra Dimension with Custodial Protection," JHEP 03 (2009) 001, 0809.1073.
The Little Randall-Sundrum Model at the Large Hadron Collider. H Davoudiasl, G Perez, A Soni, 0802.0203Phys. Lett. 665H. Davoudiasl, G. Perez, and A. Soni, "The Little Randall-Sundrum Model at the Large Hadron Collider," Phys. Lett. B665 (2008) 67-71, 0802.0203.
Composite Higgs-Mediated FCNC. K Agashe, R Contino, 0906.1542Phys. Rev. 8075016K. Agashe and R. Contino, "Composite Higgs-Mediated FCNC," Phys. Rev. D80 (2009) 075016, 0906.1542.
Radion Mediated Flavor Changing Neutral Currents. A Azatov, M Toharia, L Zhu, Phys. Rev. 80A. Azatov, M. Toharia, and L. Zhu, "Radion Mediated Flavor Changing Neutral Currents," Phys. Rev. D80 (2009) 031701, 0812.2489.
A Comparative Study of Contributions to K in the RS Model. B Duling, 0912.4208B. Duling, "A Comparative Study of Contributions to K in the RS Model," 0912.4208.
Minimal Flavor Protection: A New Flavor Paradigm in Warped Models. J Santiago, 0806.1230JHEP. 1246J. Santiago, "Minimal Flavor Protection: A New Flavor Paradigm in Warped Models," JHEP 12 (2008) 046, 0806.1230.
Rare K and B Decays in a Warped Extra Dimension with Custodial Protection. M Blanke, A J Buras, B Duling, K Gemmler, S Gori, JHEP. 10803M. Blanke, A. J. Buras, B. Duling, K. Gemmler, and S. Gori, "Rare K and B Decays in a Warped Extra Dimension with Custodial Protection," JHEP 03 (2009) 108, 0812.3803.
The Impact of Kaluza-Klein Fermions on Standard Model Fermion Couplings in a RS Model with Custodial Protection. A J Buras, B Duling, S Gori, JHEP. 09A. J. Buras, B. Duling, and S. Gori, "The Impact of Kaluza-Klein Fermions on Standard Model Fermion Couplings in a RS Model with Custodial Protection," JHEP 09 (2009) 076, 0905.2318.
M Bauer, S Casagrande, U Haisch, M Neubert, 0912.1625Flavor Physics in the Randall-Sundrum Model: II. Tree-Level Weak-Interaction Processes. M. Bauer, S. Casagrande, U. Haisch, and M. Neubert, "Flavor Physics in the Randall-Sundrum Model: II. Tree-Level Weak-Interaction Processes," 0912.1625.
Collider Signals of Top Quark Flavor Violation from a Warped Extra Dimension. K Agashe, G Perez, A Soni, hep-ph/0606293Phys. Rev. 7515002K. Agashe, G. Perez, and A. Soni, "Collider Signals of Top Quark Flavor Violation from a Warped Extra Dimension," Phys. Rev. D75 (2007) 015002, hep-ph/0606293.
A custodial symmetry for Z b anti-b. K Agashe, R Contino, L Da Rold, A Pomarol, hep-ph/0605341Phys. Lett. 641K. Agashe, R. Contino, L. Da Rold, and A. Pomarol, "A custodial symmetry for Z b anti-b," Phys. Lett. B641 (2006) 62-66, hep-ph/0605341.
Warped 5-Dimensional Models: Phenomenological Status and Experimental Prospects. H Davoudiasl, S Gopalakrishna, E Ponton, J Santiago, 908H. Davoudiasl, S. Gopalakrishna, E. Ponton, and J. Santiago, "Warped 5-Dimensional Models: Phenomenological Status and Experimental Prospects," 0908.1968.
Probing Minimal Flavor Violation at the LHC. Y Grossman, Y Nir, J Thaler, T Volansky, J Zupan, Phys. Rev. 76Y. Grossman, Y. Nir, J. Thaler, T. Volansky, and J. Zupan, "Probing Minimal Flavor Violation at the LHC," Phys. Rev. D76 (2007) 096006, 0706.1845.
Charged-Higgs Collider Signals with or without Flavor. S Dittmaier, G Hiller, T Plehn, M Spannowsky, 0708.0940Phys. Rev. 77S. Dittmaier, G. Hiller, T. Plehn, and M. Spannowsky, "Charged-Higgs Collider Signals with or without Flavor," Phys. Rev. D77 (2008) 115001, 0708.0940.
Collider aspects of flavour physics at high Q. F Del Aguila, 0801.1800Eur. Phys. J. 57F. del Aguila et al., "Collider aspects of flavour physics at high Q," Eur. Phys. J. C57 (2008) 183-308, 0801.1800.
Measuring Flavor Mixing with Minimal Flavor Violation at the LHC. G Hiller, Y Nir, 0802.0916JHEP. 0346G. Hiller and Y. Nir, "Measuring Flavor Mixing with Minimal Flavor Violation at the LHC," JHEP 03 (2008) 046, 0802.0916.
Squark Flavor Violation at the LHC. G D Kribs, A Martin, T S Roy, 0901.4105JHEP. 0642G. D. Kribs, A. Martin, and T. S. Roy, "Squark Flavor Violation at the LHC," JHEP 06 (2009) 042, 0901.4105.
Flavour violating squark and gluino decays. T Hurth, W Porod, 0904.457487T. Hurth and W. Porod, "Flavour violating squark and gluino decays," JHEP 08 (2009) 087, 0904.4574.
Impact of squark generation mixing on the search for gluinos at LHC. A Bartl, 0905.0132Phys. Lett. 679A. Bartl et al., "Impact of squark generation mixing on the search for gluinos at LHC," Phys. Lett. B679 (2009) 260-266, 0905.0132.
Collider Signatures of Minimal Flavor Mixing from Stop Decay Length Measurements. G Hiller, J S Kim, H Sedello, Phys. Rev. 80G. Hiller, J. S. Kim, and H. Sedello, "Collider Signatures of Minimal Flavor Mixing from Stop Decay Length Measurements," Phys. Rev. D80 (2009) 115016, 0910.2124.
Correlations between high-p T and flavour physics. T Hurth, W Porod, 0911.4868T. Hurth and W. Porod, "Correlations between high-p T and flavour physics," 0911.4868.
Test of lepton flavour violation at LHC. A Bartl, hep-ph/0510074Eur. Phys. J. 46A. Bartl et al., "Test of lepton flavour violation at LHC," Eur. Phys. J. C46 (2006) 783-789, hep-ph/0510074.
Impact of slepton generation mixing on the search for sneutrinos. A Bartl, 0709.1157Phys. Lett. 660A. Bartl et al., "Impact of slepton generation mixing on the search for sneutrinos," Phys. Lett. B660 (2008) 228-235, 0709.1157.
Measuring Slepton Masses and Mixings at the LHC. J L Feng, JHEP. 01J. L. Feng et al., "Measuring Slepton Masses and Mixings at the LHC," JHEP 01 (2010) 047, 0910.1618.
Slepton mass-splittings as a signal of LFV at the LHC. A J Buras, L Calibbi, P Paradisi, 0912.1309A. J. Buras, L. Calibbi, and P. Paradisi, "Slepton mass-splittings as a signal of LFV at the LHC," 0912.1309.
Testing minimal lepton flavor violation with extra vector-like leptons at the LHC. E Gross, D Grossman, Y Nir, O Vitells, Phys. Rev. 81E. Gross, D. Grossman, Y. Nir, and O. Vitells, "Testing minimal lepton flavor violation with extra vector-like leptons at the LHC," Phys. Rev. D81 (2010) 055013, 1001.2883.
Search for Scalar top decaying into Charm and Neutralino. M Vidal, CDF CollaborationO Gonzalez, CDF CollaborationCDF Collaboration, M. Vidal and O. Gonzalez, "Search for Scalar top decaying into Charm and Neutralino," www-cdf.fnal.gov/physics/exotic/r2a/20090709.stop charm/.
Goldstone Bosons in Effective Theories with Spontaneously Broken Flavour Symmetry. M E Albrecht, T Feldmann, T Mannel, 1002.4798M. E. Albrecht, T. Feldmann, and T. Mannel, "Goldstone Bosons in Effective Theories with Spontaneously Broken Flavour Symmetry," 1002.4798.
Probing the Gauge Content of Heavy Resonances with Soft Radiation. I Sung, 0908.3688Phys. Rev. 8094020I. Sung, "Probing the Gauge Content of Heavy Resonances with Soft Radiation," Phys. Rev. D80 (2009) 094020, 0908.3688.
LHC signals from warped extra dimensions. K Agashe, A Belyaev, T Krupovnickas, G Perez, J Virzi, hep-ph/0612015Phys. Rev. 7715003K. Agashe, A. Belyaev, T. Krupovnickas, G. Perez, and J. Virzi, "LHC signals from warped extra dimensions," Phys. Rev. D77 (2008) 015003, hep-ph/0612015.
The Bulk RS KK-gluon at the LHC. B Lillie, L Randall, L.-T Wang, hep-ph/0701166JHEP. 0974B. Lillie, L. Randall, and L.-T. Wang, "The Bulk RS KK-gluon at the LHC," JHEP 09 (2007) 074, hep-ph/0701166.
Big Signals of Little Randall-Sundrum Models. H Davoudiasl, S Gopalakrishna, A Soni, 0908.1131Phys. Lett. 686H. Davoudiasl, S. Gopalakrishna, and A. Soni, "Big Signals of Little Randall-Sundrum Models," Phys. Lett. B686 (2010) 239-243, 0908.1131.
Future prospects of B physics. Y Grossman, Z Ligeti, Y Nir, 0904.4262Prog. Theor. Phys. 122Y. Grossman, Z. Ligeti, and Y. Nir, "Future prospects of B physics," Prog. Theor. Phys. 122 (2009) 125-143, 0904.4262.
Challenges in Procedural Multimodal Machine Comprehension: A Novel Way To Benchmark

Pritish Sahu (SRI International; Rutgers University), Karan Sikka (SRI International), Ajay Divakaran (SRI International)

Abstract: We focus on Multimodal Machine Reading Comprehension (M3C), where a model is expected to answer questions based on a given passage (or context), and the context and the questions can be in different modalities. Previous works such as RecipeQA have proposed datasets and cloze-style tasks for evaluation. However, we identify three critical biases stemming from the question-answer generation process and the memorization capabilities of large deep models. These biases make it easier for a model to overfit by relying on spurious correlations or naive data patterns. We propose a systematic framework to address these biases through three Control-Knobs that enable us to generate a test bed of datasets of progressive difficulty levels. We believe that our benchmark (referred to as Meta-RecipeQA) will provide, for the first time, a fine-grained estimate of a model's generalization capabilities. We also propose a general M3C model that is used to realize several prior SOTA models and motivate a novel hierarchical transformer based reasoning network (HTRN). We perform a detailed evaluation of these models with different language and visual features on our benchmark. We observe a consistent improvement with HTRN over SOTA (~18% in the Visual Cloze task and ~13% on average over all the tasks). We also observe a drop in performance across all the models when testing on RecipeQA and the proposed Meta-RecipeQA (e.g. 83.6% versus 67.1% for HTRN), which shows that the proposed dataset is relatively less biased. We conclude by highlighting the impact of the control knobs with some quantitative results.

DOI: 10.1109/wacv51458.2022.00060 · arXiv: 2110.11899 · PDF: https://arxiv.org/pdf/2110.11899v1.pdf
SRI International
Ajay Divakaran [email protected]
SRI International
Challenges in Procedural Multimodal Machine Comprehension: A Novel Way To Benchmark
We focus on Multimodal Machine Reading Comprehension (M3C) where a model is expected to answer questions based on a given passage (or context), and the context and the questions can be in different modalities. Previous works such as RecipeQA have proposed datasets and cloze-style tasks for evaluation. However, we identify three critical biases stemming from the question-answer generation process and the memorization capabilities of large deep models. These biases make it easier for a model to overfit by relying on spurious correlations or naive data patterns. We propose a systematic framework to address these biases through three Control-Knobs that enable us to generate a test bed of datasets of progressive difficulty levels. We believe that our benchmark (referred to as Meta-RecipeQA) will provide, for the first time, a fine-grained estimate of a model's generalization capabilities. We also propose a general M 3 C model that is used to realize several prior SOTA models and motivates a novel hierarchical transformer based reasoning network (HTRN). We perform a detailed evaluation of these models with different language and visual features on our benchmark. We observe a consistent improvement with HTRN over SOTA (∼18% in the Visual Cloze task and ∼13% on average over all the tasks). We also observe a drop in performance across all the models when testing on RecipeQA versus the proposed Meta-RecipeQA (e.g. 83.6% versus 67.1% for HTRN), which shows that the proposed dataset is relatively less biased. We conclude by highlighting the impact of the control knobs with some quantitative results.
Introduction
Machine Reading Comprehension (MRC) has been used extensively to evaluate language understanding capabilities of Natural Language Processing (NLP) systems [34,8,27,25]. MRC is evaluated similar to how humans are evaluated for understanding a piece of text (referred to as context): by asking them to answer questions about the text. Recently, Multi-Modal Machine Comprehension (M 3 C) has extended MRC by introducing multimodality in the context or the question or both [29,15,33,2] (Figure 3). A strong M 3 C system is thus required to not only understand the (unimodal) context but also to reason across different modalities. Previous MRC studies have shown that it is often hard to verify whether the model is actually understanding the context or naively using spurious correlations to answer questions [8,14,29]. We first identify three key biases that plague M 3 C cloze-style benchmarks and then propose a novel procedure to create multiple datasets of different levels of difficulty from a single meta dataset. We then use these datasets to study the performance of different M 3 C models and understand how the datasets affect performance. We also propose a novel hierarchical transformer based approach and show consistent improvements over prior methods.

Figure 1. An illustration of the three biases present in the dataset. Bias-1 is caused when the overlap between the questions reveals the entire recipe process. The constraint is to upper bound the intersection of steps in multiple questions. Bias-2 exists when the distance between the incorrect choices and the correct choice (ϵ) is large in the latent space. The "m" mentioned above is some small value. Bias-3 occurs when the correct choice is closer to the question list (|d1 − d2| ≤ ϵ) as compared to the incorrect choices.

M 3 C can be evaluated in multiple ways [34,18,13,32,15]. For example, in VQA the context is an image and the question and answer are in textual modality. M 3 C datasets (e.g.
RecipeQA [33]) can also include multiple modalities in the context or the question. Multiple-choice cloze-style tasks are also used for evaluation, where the question is prepared as a sequence of steps with one of the steps replaced by a placeholder and the model is asked to find the correct answer from a set of choices. Cloze-style evaluation is quite common in MRC since such questions can be generated without any human intervention. This makes it easier to train, test and deploy a model for a new domain with sparsely labeled data. We focus on procedural M 3 C, where the context is a list of steps for preparing a recipe that are described in multiple modalities. We evaluate on the three visual cloze-style tasks defined in RecipeQA: Visual Cloze, Visual Coherence, and Visual Ordering. Although several works have used RecipeQA for evaluation, we observe three critical biases introduced by how the cloze-style question-answer (QA) pairs were created for these tasks. These biases make it easier for models to answer questions by relying on spurious correlations and surface level patterns, and thus cast doubt on prior evaluations. The first bias is related to overlap between the questions, which are sampled randomly from four locations in the context in one of the modalities. This bias results from multiple questions being sampled from the same context and makes it easier for the model to answer questions by using other questions in the dataset. The second bias results from the negative choices being far away from the correct choices, i.e. they are not hard negatives. Hard negatives refer to incorrect choices that are closer to the correct choice visually and might be difficult for a naive algorithm (e.g. one using the background) to discriminate (Figure 4). This bias also causes the model to overfit as it can rely on simple features to get the correct answer.
The third bias is induced by the correct choice being significantly closer to the question (in feature space) as compared to the incorrect choices. This causes the model to answer questions by only matching the choices with the question. We propose a systematic way to tackle these biases by grounding them in three Control-Knobs that are then used to sample multiple datasets from a single meta dataset of recipes (Figure 1). We refer to our benchmark as Meta-RecipeQA. We then use these knobs to generate datasets with lower bias and of progressively increasing difficulty levels. For example, the accuracy of our model on the simplest and the hardest sets of the Visual Cloze task varies from 56.3% to 68.4% on Meta-RecipeQA as compared to 70.5% on RecipeQA. We also used additional pre-processing steps to improve the quality of the meta-dataset, which is based on RecipeQA (Table 1). We recommend that a model should be evaluated on all these datasets instead of a single sampled dataset to get a better estimate of its capabilities to answer questions. This type of evaluation is similar to cross-validation in machine learning, which tends to provide improved estimates of performance.
In addition, we also study the effect of using better visual features for constructing these datasets and show that it has a large effect on performance. We also propose a general M 3 C model (GM 3 C) that is then used to realize several state-of-the-art (SOTA) models. GM 3 C is composed of two primary components (modality encoder and scoring function) and allows us to systematically study the performance variations with different component choices. Inspired by the general model, we also propose a Hierarchical Transformer based Reasoning Network (HTRN) that uses transformers for both the primary components. We show consistent improvement over SOTA methods with our approach (+18% in the visual cloze task and +13% on average over all tasks). We also undertake an extensive ablation study to show the impact of visual features, textual features, and the Control-Knobs on performance. We see a consistent drop in performance across all the models on Meta-RecipeQA as compared to RecipeQA, showing that our approach is able to create harder datasets and addresses the underlying biases. We finally provide qualitative analysis to show the effect of the Control-Knobs on question-answer pairs. Our contributions are summarized as follows:
• We identify and locate the origins of the three critical biases that bedevil the RecipeQA dataset. We propose a systematic framework (referred to as Meta-RecipeQA) to address these biases through three Control-Knobs that enable us to generate a test bed of datasets of progressive difficulty levels.
Related Works
QA tasks have been a popular method for evaluating a model's reasoning skills in NLP. One of the earliest forms of question-answering task is Machine Reading Comprehension (MRC) [12], which involves a textual passage and QA pairs. The answer format can be cloze-form (fill in the blanks), or can require finding the answer inside the passage or generating it [5,11,27,25]. A generalization of MRC is a newer task that employs multimodality in the context or the QA pairs and is referred to as MultiModal Machine Comprehension (M 3 C). Several datasets have been proposed to evaluate M 3 C, e.g. COMICSQA [13], TQA [15], MoviesQA [32] and RecipeQA [33]. Although our work is closely related to M 3 C, we tackle the task of procedural M 3 C, e.g. RecipeQA, where the context is a procedural description of an event. The RecipeQA [33] dataset provides "How-To" steps for cooking recipes written by internet users. Solving procedural M 3 C requires understanding the entire temporal process along with tracking the state changes. Procedural M 3 C is investigated in [1] on RecipeQA by keeping track of the state changes of entities over the course of the recipe. However, the method falls short in aligning the different modalities.
Bias in a dataset can be referred to as a hidden artifact that allows a model to perform well on it without learning the intended reasoning skills. These artifacts, in the form of spurious correlations or surface level patterns, boost model performance to well beyond chance. Sometimes the cause of these biases is partial input data [9,23] or high overlap in the inputs [20,6]. These biases influence various other tasks as well, such as argument reasoning [22], machine reading comprehension [14] and story cloze tests [30,4]. We have investigated three biases that plague the visual tasks in RecipeQA. One major bias occurs due to the high overlap of steps present in the questions; this differs from [20] as the latter involves unimodal data, with the overlap occurring in word embeddings. The next bias, shown in [30], occurs due to differences in writing style in the text modality. RecipeQA also suffers from a difference-in-style bias, but in the visual domain. This bias is introduced due to the lack of necessary constraints while preparing the QA tasks.
Approach
We describe our approach by using visual cloze-style tasks to demonstrate the efficacy of our benchmark. We describe the three critical biases present in these tasks that prevent a comprehensive assessment of M 3 C models. We simultaneously outline our proposed solution through Control-Knobs that are used to generate multiple datasets from a meta dataset, and contrast it with RecipeQA. Next, we describe the general M 3 C model, which we use to realize many prior models, and finally propose a novel method based on hierarchical transformers.
Visual Cloze-Style Tasks
We address the biases in the three visual cloze-style tasks from RecipeQA. In RecipeQA each instance (Figure 3) is a sequence of steps for preparing a recipe, where each step is described by images, text or both. In the visual tasks, the context is in the textual modality and the Question-Answer (QA) pairs are in the visual modality.
1. Visual Cloze: Determine the correct image that fits the placeholder in a question sequence of N Q images. The question is generated by selecting N Q images from a recipe and randomly replacing one of the images with a placeholder.
2. Visual Coherence: Determine the incoherent image in a list of N Q images. The coherent images are sampled in an ordered manner from one recipe.
3. Visual Ordering: Predict which sequence of images in the question is the correct sequence. The question is generated by sampling N Q images from N Q separate steps in a recipe and jumbling the order of the sequence for all except one.
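The visual cloze construction above can be sketched in a few lines of Python. This is an illustrative sketch, not the RecipeQA generation code: `make_visual_cloze_question` and the `@placeholder` token are hypothetical names, and step images are represented here by plain IDs.

```python
import random

def make_visual_cloze_question(recipe_images, n_q=4, rng=random):
    """Sample one visual cloze question from a recipe.

    `recipe_images` is the ordered list of step images (any IDs here).
    Returns the question sequence (with one placeholder) and the answer.
    """
    assert len(recipe_images) >= n_q
    # Pick N_Q distinct step indices and keep them in temporal order.
    idx = sorted(rng.sample(range(len(recipe_images)), n_q))
    question = [recipe_images[i] for i in idx]
    # Replace one randomly chosen position with a placeholder token.
    blank = rng.randrange(n_q)
    answer = question[blank]
    question[blank] = "@placeholder"
    return question, answer
```

A negative-choice list would then be sampled around `answer` in feature space, as described in the Control-Knob sections below.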
We set N Q to 4 as done in RecipeQA. In each of the tasks the model is expected to establish cross-modal correspondences between the textual steps and the visual QA pairs and then reason to find the correct answer. We selected these tasks as they cover a broad range of reasoning capabilities and also allow us to verify our approach for removing biases on multiple tasks. The cloze, coherence and ordering tasks assess the knowledge and understanding obtained from the context and the question [28].
Biases and Proposed Control-Knobs
M 3 C models are evaluated to measure whether they are able to understand the context and then answer questions. However, it has been shown before that it is common for MRC models to answer questions by using biases or surface level patterns in the data [29,8,5]. Such behavior stems both from the data creation process and from the large capacity of recent state-of-the-art (SOTA) models, which makes it easier for them to overfit [3]. Figure 2 depicts an example of data bias in RecipeQA by showing the distribution of distances between the (averaged) question and the correct and incorrect choices in the feature space. We see that the distributions are well separated in the original RecipeQA as compared to one of our datasets. This allows a model to answer questions without reading the context. Given these inherent biases, can prior evaluation results be trusted, and how can we do better? We formalize three key biases present in the previous evaluation setup and also propose a solution to counter them with Control-Knobs. We use the visual cloze task as an example to describe the Control-Knobs. We provide descriptions for the other two tasks (visual coherence and visual ordering) in Appendix B. We briefly describe the construction of QA pairs in visual cloze (from RecipeQA) to better understand our Control-Knobs. Questions are prepared by repeating the process of first sampling four random locations (in increasing order) in the recipe and then replacing one of the images randomly with the placeholder. The negative choices are randomly sampled, beyond a certain distance from the positive choice, in the feature space. We illustrate these biases visually in Figure 1.
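The distance diagnostic behind Figure 2 can be sketched in a few lines. This is an illustrative helper, not the authors' code: `choice_distance_gap` and the toy 2-D features are hypothetical, and in practice the features would be pre-trained embeddings (e.g. ResNet-50 or ViT).

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors (plain lists)."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def choice_distance_gap(question_feats, correct_feat, negative_feats):
    """Compare how far the correct vs. incorrect choices sit from the
    averaged question embedding.  A large positive gap means the correct
    answer can be found by nearest-neighbour matching alone, without
    ever reading the context."""
    n = len(question_feats)
    dim = len(question_feats[0])
    mean_q = [sum(f[i] for f in question_feats) / n for i in range(dim)]
    d_pos = euclidean(mean_q, correct_feat)
    d_negs = [euclidean(mean_q, f) for f in negative_feats]
    # gap > 0: every negative is farther from the question than the answer
    return min(d_negs) - d_pos
```

Plotting this gap over a dataset gives histograms like Figure 2; a well-debiased dataset should push the gap toward zero.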
1. Bias-1-High overlap between question sequences: This bias occurs since multiple question sequences are sampled from the same recipe. Here the model can learn the correct answer by relying on other oversampled questions and fail to actually understand the context.
Control-Knob-1: This knob controls the overlap between the questions as well as the maximum number of questions that can be generated from a recipe. It first imposes a constraint on the maximum number of questions that can be sampled from a recipe. We also sample questions from recipes with #Steps ≥ 5. Although we iteratively sample a question from a recipe as done in RecipeQA, we minimize the overlap between questions by removing the step corresponding to the correct choice before sampling the next question. This makes sure that the model cannot exploit commonalities between questions to know the correct answer. We use two settings for this knob where the first setting fixes the maximum number of questions to #steps/2. The second setting makes the dataset harder by fixing the maximum number of questions to #steps/3 and also removes a random choice along with the correct choice before sampling the next question.
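A minimal sketch of Control-Knob-1, under the assumption that steps are plain IDs; `sample_questions_knob1` is a hypothetical helper, not the authors' implementation.

```python
import random

def sample_questions_knob1(steps, max_q, remove_random_extra=False, rng=random):
    """Cap the number of questions per recipe and remove each
    correct-choice step before drawing the next question, so later
    questions cannot reveal earlier answers.  Setting
    `remove_random_extra=True` mimics the harder knob setting that also
    drops one extra random step per question."""
    pool, questions = list(steps), []
    while len(questions) < max_q and len(pool) >= 4:
        idx = sorted(rng.sample(range(len(pool)), 4))
        picked = [pool[i] for i in idx]
        blank = rng.randrange(4)
        answer = picked[blank]
        picked[blank] = "@placeholder"
        questions.append((picked, answer))
        pool.remove(answer)                    # correct choice leaves the pool
        if remove_random_extra and pool:
            pool.remove(rng.choice(pool))      # harder setting
    return questions
```

The two settings from the text correspond to `max_q = n_steps // 2` with `remove_random_extra=False`, and `max_q = n_steps // 3` with `remove_random_extra=True`.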
2. Bias-2-Incorrect choices are not hard negatives: This bias occurs when the incorrect choices lie far away (in feature space) from the correct choice, i.e. they are easy to tell apart from it. A model can thus exploit this artifact to answer questions without reading the context.
Control-Knob-2: We first compute the K nearest neighbors (KNNs) of the correct choice and select K C points in the feature space. To vary the difficulty level of the negative choices, we discretize the space of KNNs by computing the mean (m d ) and standard deviation (σ d ) of the distances of the K C points from the correct choice. We use two settings of this knob by sampling the negative choices either from the euclidean ball (0, m d − σ d ) or from the band (m d − σ d , m d + σ d ). The first setting will generate harder negatives since they will be closer to the correct choice. We use image features from pre-trained models for computing these distances and observe that the choice of features has a huge impact on the semantic similarity of the incorrect choices to the correct choice (ViT versus ResNet-50).
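Control-Knob-2 can be sketched as follows. The helper names and the toy feature vectors are hypothetical; in practice `candidate_feats` would hold pre-computed ViT or ResNet-50 embeddings.

```python
def _euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def sample_negatives_knob2(correct_feat, candidate_feats, k_c=5, hard=True):
    """Restrict incorrect choices to a discretised neighbourhood of the
    correct choice.  `candidate_feats` maps image IDs to feature
    vectors; returns the IDs eligible as negatives under the knob."""
    dists = {img: _euclidean(correct_feat, f)
             for img, f in candidate_feats.items()}
    knn = sorted(dists, key=dists.get)[:k_c]        # K_C nearest neighbours
    vals = [dists[i] for i in knn]
    m_d = sum(vals) / len(vals)                     # mean distance
    s_d = (sum((v - m_d) ** 2 for v in vals) / len(vals)) ** 0.5  # std dev
    if hard:
        # harder setting: negatives inside the ball (0, m_d - s_d)
        return [i for i in knn if dists[i] < m_d - s_d]
    # easier setting: negatives in the band (m_d - s_d, m_d + s_d)
    return [i for i in knn if m_d - s_d < dists[i] < m_d + s_d]
```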
3. Bias-3-Incorrect choices being far away from the question: This bias occurs since the correct choice is closer (in feature space) to the question features as compared to the incorrect choices. In such cases the model can simply answer questions by using the relative distances of the correct and incorrect choices to the question and can bypass the context. This is similar to using odd-one-out in standard comprehension.
Control-Knob-3: Generally all the images from one recipe exhibit underlying semantic similarity, such as the background. The incorrect choices should share some semantic similarities with the question images, similar to the correct choice, to make it harder for the model to discriminate based on such naive cues. This Control-Knob is designed to consider the distance between the question and the correct choice when sampling the incorrect choices. During the process of selecting negative choices in Control-Knob-2, we select one negative choice which is at least as close to the question as the correct choice. We use two values for this knob: when the knob is off, we do not enforce the constraint described above; when the knob is on, we randomly select one negative choice to satisfy the distance constraint.
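A sketch of the knob-3 constraint when the knob is on, again with hypothetical names and toy 2-D features; the real pipeline would operate on pre-trained image embeddings.

```python
def _dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def pick_knob3_negative(question_feats, correct_feat, negative_pool):
    """Among candidate negatives, pick one that is at least as close to
    the averaged question as the correct choice, so the model cannot
    answer by odd-one-out matching.  `negative_pool` is a list of
    (feature, image_id) pairs; returns None if nothing qualifies."""
    n = len(question_feats)
    dim = len(question_feats[0])
    mean_q = [sum(f[i] for f in question_feats) / n for i in range(dim)]
    d_pos = _dist(mean_q, correct_feat)
    for feat, img in negative_pool:
        if _dist(mean_q, feat) <= d_pos:   # closer to the question than the answer
            return img
    return None
```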
Meta Dataset for Meta-RecipeQA
We refer to our proposed benchmark, which consists of multiple datasets generated by varying the Control-Knobs, as Meta-RecipeQA. We create these datasets using a meta dataset which contains all the recipes from RecipeQA without any of the tasks.

[Figure 3: Overview of the general M 3 C model. Modality encoders (a textual encoder over the context steps Step-1 ... Step-N and a visual encoder over the question and answer images, with a placeholder for the blank) feed into the scoring function, which selects the correct answer.]

We found that several recipes in RecipeQA had missing/partial content and were noisy (e.g. multiple words were joined together). We used its source (instructables.com) to complete and clean the existing content. With our proposed Control-Knobs we are able to regulate the number of question-answer pairs generated, from 8K to 22K. We also notice that the RecipeQA dataset is plagued with out-of-vocabulary tokens (19% of all tokens). One main reason for out-of-vocabulary tokens is the fusion of multiple in-vocabulary tokens. In the Meta Dataset, our cleaning process results in 90% in-vocabulary tokens over all tokens. Please refer to the supplement for a detailed description of the process of creating the Meta Dataset.
Hierarchical Transformer based Reasoning Network (HTRN)
We now describe the general M 3 C model (GM 3 C), which is used to realize prior SOTA methods along with our proposed method. For the description of this model we limit ourselves to the visual cloze task and provide additional details in Appendix C. The context C = {c k } is the sequence of N C textual steps. The model is shown in Figure 3 and consists of two primary modules: the modality encoder and the scoring function. The modality encoder featurizes the textual context using a textual encoder, denoted as ϕ T (c k ), as well as the questions and the answers using a visual encoder, denoted as ϕ V (q i ). The output from both these modules is fed into the scoring function to compute a compatibility score for each answer. We use this model to implement prior SOTA models. For example, the popular "Impatient Reader" uses Doc2Vec with an LSTM as the textual encoder, ResNet-50 with an LSTM as the visual encoder, and attention layers for the scoring function.
Hierarchical Transformer based Reasoning Network (HTRN): is built upon the GM 3 C model. HTRN encodes each step in the context using a pre-trained transformer model to obtain embeddings for each token. We also use a bi-directional LSTM to encode the contextual features for each step. We obtain the feature vector for each step by averaging the features of all the tokens in that step. To model temporal dependencies across the steps, HTRN uses another bi-directional LSTM before feeding the inputs to the scoring function. For the visual encoder, we use a pre-trained transformer based visual encoder. We now have the encodings for each step, question and answer as ϕ C (c k ), ϕ V (q i ), and ϕ V (a j ) respectively. We also use a bi-directional LSTM to encode the temporal relationships between the images. These inputs are then passed to the scoring function.
The aim of the scoring function is to provide a score for each candidate answer a j . Since we need to score N A answers, we create N A query vectors (denoted as u), where each query vector is prepared by replacing the placeholder with the candidate choice at location j in the answer. HTRN uses a second shallow transformer (trained from scratch) for the scoring function. We use ideas from preparing BERT inputs for question-answering [7] by creating a representation for a context-query pair as R(C, a j ) = [CLS, ϕ C (c 1 ), . . . , ϕ C (c N ), SEP, ϕ V (q i ), u], where CLS and SEP are special tokens as used in NLP models and [] denotes concatenation. We pass this input through the transformer and use the contextual representation of the CLS token as the final representation of the j th query vector. We finally use an FC layer to obtain scores for all the query vectors. Our motivation for using a transformer as the scoring function is that its underlying self-attention mechanism enables us to model the complex relationships between context-context, context-QA, and QA-QA pairs. Such relationships are often modeled by multiple separate components in prior models, which may not suffice for the application at hand. Moreover, transformers bring additional advantages in terms of multi-head attention and skip-connections, leading to improved learning [7,24].
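The query construction described above can be sketched as follows. This is a simplified, hypothetical illustration: the stand-in string tokens would be dense vectors from the modality encoders in the real model, and `build_query_inputs` is not the authors' implementation.

```python
def build_query_inputs(context_feats, question_feats, answer_feats, blank_pos):
    """Form one transformer input R(C, a_j) per candidate answer: the
    placeholder at `blank_pos` in the question is filled with candidate
    j, and the result is concatenated with the encoded context behind
    special CLS/SEP markers."""
    inputs = []
    for cand in answer_feats:
        query = list(question_feats)
        query[blank_pos] = cand          # fill the placeholder with candidate j
        inputs.append(["CLS"] + list(context_feats) + ["SEP"] + query)
    return inputs
```

Each such input would be run through the shallow transformer, and the contextualized CLS representation scored by an FC layer.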
Experiments
In this section we empirically study (1) the impact of the Control-Knobs on the performance of different models, and (2) the performance improvement from our proposed HTRN model. We begin by stating the dataset creation process using the Control-Knobs as well as the metrics used for evaluation, the details of prior methods and the implementation details of our methods. Next, we report the quantitative results, where we first compare different models on RecipeQA and our proposed Meta-RecipeQA benchmark. We then study the impact of the Control-Knobs in more detail and compare the proposed HTRN model with SOTA methods. We finally provide qualitative results to highlight the effect of the Control-Knobs on some generated question-answer pairs.
Dataset and Metrics
We use the 3 Control-Knobs to create multiple datasets for evaluation. To keep the number of experiments under control we use two discrete settings for each of these Control-Knobs (see subsection 3.2). In the remainder of the text we refer to the dataset setting with the Control-Knobs set to values i, j, k as a tuple (i, j, k). Along with the Control-Knobs, we use two choices of language models (LM) (Word2Vec, BERT), two choices of visual models (VM) (ResNet-50, ViT) and three different scoring functions. For the visual cloze task, we trained a total of 108 models. Out of these, 96 models are trained on Meta-RecipeQA, covering all combinations of the 2^3 = 8 Control-Knob settings, 2 LMs, 2 VMs and 3 scoring functions.
We use classification accuracy as the metric that measures the percentage of questions that the model is able to answer correctly.
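As a concrete reference, the metric can be written as follows (a trivial sketch; `cloze_accuracy` is an illustrative name, not from the paper).

```python
def cloze_accuracy(predictions, answers):
    """Fraction of questions answered correctly.  Both arguments are
    equal-length lists of predicted and ground-truth choice indices."""
    assert len(predictions) == len(answers) and answers
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)
```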
Prior Methods and Implementation Details
Prior Methods: For comparison with SOTA we use Hasty Reader [33], "Impatient Reader" [10], PRN [1] and MLMM-Trans [17] on RecipeQA. We obtain their results from MLMM-Trans [17]. For the experiments involving Control-Knobs on the proposed benchmark, we adopted two popular MRC models as the scoring function, in addition to the transformer in HTRN: BiDAF [31] and "Impatient Reader" [10].
Comparison on RecipeQA and the Proposed Meta-RecipeQA
In Figure 4, we show the results of different algorithms with different visual and textual features on RecipeQA and Meta-RecipeQA. We report the mean performance across our proposed datasets, which were generated by sweeping through the eight combinations of the Control-Knobs.
We first observe a consistent drop in performance, for all algorithms, between the previous and the proposed splits. For example, with LM as Word2Vec and VM as ViT, the performance of HTRN on the old and proposed splits is 83.6% and 67.1% respectively. We observe a similar drop with prior methods, e.g. for BiDAF the performance on RecipeQA and Meta-RecipeQA is 70.4% and 59.8% respectively. We believe this drop occurs because the Control-Knobs remove some of the biases present in RecipeQA, creating a benchmark that makes it harder for the models to overfit. We also observe that the visual features have a large impact on performance, e.g. HTRN gives 67.1% and 62.1% with ViT and ResNet-50 respectively. We also believe that it is easier for improved image features to overfit on the visual cloze task, since they can easily compare two images using surface level patterns such as the background. Finally, the variance in performance across different splits highlights that our Control-Knobs provide flexibility in creating datasets with progressive difficulty levels (in terms of the skills required for solving these tasks).
Impact of Control Knobs on Performance
In Figure 5 we show the impact of the different Control-Knobs on performance. We measure performance for three models, with LM and VM as Word2Vec and ResNet-50 respectively, on datasets generated by sweeping across the two discrete values for each Control-Knob. All the models achieved their best performance when Control-Knob-1 (overlap bias) was set to 0, i.e. high overlap, and Control-Knob-2 (distance of incorrect choices from the correct choice) was set to 1, i.e. images sampled in the euclidean band between [m d − σ d , m d + σ d ].
However, if we choose the complement of these values we obtain lower performance across all the models. This is because reducing the overlap between questions and reducing the distance between the correct and incorrect choices makes the QA pairs harder, which highlights the ability of our Control-Knobs to create datasets with progressive difficulty levels. For Control-Knob-3, we see a small influence on model performance. We believe this results from our implementation of Control-Knob-3, where we randomly flip a coin for each sample and apply the constraint to only one incorrect choice. This Control-Knob could be made more impactful by applying it over multiple choices. The experimental details of HTRN are presented in Appendix A. Table 2 shows the comparison of our model HTRN with SOTA models on the three tasks from RecipeQA. We establish superiority by comparing our results against the previous RecipeQA benchmark. We start by stating the human performance on RecipeQA [33], followed by results from prior works. Our model outperforms MLMM-Trans (a multi-level transformer based model) on all tasks. For example, the performance of HTRN versus MLMM-Trans on the ordering task is 65.1 versus 63.8. We also believe that our algorithm is the first to outperform human performance on the ordering task. We compare with the prior SOTA models using LM: Word2Vec and VM: ResNet-50 (under the same input conditions), represented in Table 2 as HTRN-Transformer * , where we get a 5% and 2% improvement over MLMM-Trans in the visual cloze and ordering tasks respectively. In the last three rows, we report the performance of our proposed model HTRN (LM: BERT, VM: ViT), where we gain over MLMM-Trans by a margin of 12.4% on average (18% in visual cloze, 13% in coherence and 16.5% in ordering). We also compared our transformer scoring function against BiDAF and Impatient Reader as scoring functions.
We observe improvements of 7.5% (absolute), highlighting the performance advantage of the transformer based scoring function, which is able to better model the complex relationships between the multimodal context and the QA pairs.
Comparison with SOTA
Qualitative Analysis
In Figure 6, we show three questions from the same recipe with their choice lists. The QA pair in the first row is chosen from the original RecipeQA dataset. The QA pairs in the second and third rows are selected with Control-Knobs set to (1, 0, 1) and (0, 1, 1) to provide a medium and a hard QA pair. We can see that for the first QA pair it will be easy for a model to use the background to find the correct answer. This is slightly harder in the second row, where two of the images share a similar background. We believe it would be hardest for a model to find the correct answer by using surface level patterns in the third QA pair, where the negative choices are semantically closer (in foreground) to the correct answer. We believe that such hard negatives will generally make it harder for a model to utilize the biases and thus help in generalization.
Figure 6. Question-Answer (QA) pairs generated using our Control-Knobs for the recipe "Potato Skin Mini Quiches". The first row shows a QA pair from RecipeQA. Row 2 uses Control-Knob setting (1,0,1), the best-performing dataset on HTRN. Row 3 uses Control-Knob setting (0,1,1) (from the dataset with the lowest performance). Distinguishing the correct image is easiest in the first QA pair compared to the second and third rows, where two of the images share a similar background.
Related Works
QA tasks have been a popular method for evaluating a model's reasoning skills in NLP. One of the earliest forms of question-answering task is Machine Reading Comprehension (MRC) [12] that involves a textual passage and QA pairs. The answer format can be in a cloze-form (fill in the blanks), or could include finding the answer inside the passage or generation [5,11,27,25]. A generalization of the MRC is a new task that employs multimodality in the context or QA and is referred to as MultiModal Machine Comprehension (M 3 C). Several datasets have been pro-posed to evaluate M 3 C, e.g. COMICSQA [13], TQA [15], MoviesQA [32] and RecipeQA [33]. Although our work is closely related to M 3 C, we tackle the task of procedural M 3 C, e.g. RecipeQA where the context is procedural description of an event. RecipeQA [33] dataset provides "How-To" steps to cook a recipe written by internet users. Solving procedural-M 3 C requires understanding the entire temporal process along with tracking the state changes. Procedural M 3 C is investigated in [1] on RecipeQA by keeping track of state change of entities over the course. However, the method falls short in aligning the different modalities.
A bias in a dataset can be viewed as a hidden artifact that allows a model to perform well on the dataset without learning the intended reasoning skills. Such artifacts, in the form of spurious correlations or surface-level patterns, boost model performance well beyond chance. Sometimes these biases stem from partial input data [9,23] or from high overlap in the inputs [20,6]. Such biases affect various other tasks as well, such as argument reasoning [22], machine reading comprehension [14], and story cloze tests [30,4]. We have investigated three biases that plague the visual tasks in RecipeQA. One major bias arises from the high overlap between the steps present in the question; this differs from [20], where the data is unimodal and the overlap occurs in word embeddings. The next bias, shown in [30], arises from differences in writing style in the text modality. RecipeQA also suffers from a style-difference bias, but in the visual domain; this bias is introduced by the lack of a necessary constraint while preparing the QA task.
Conclusion
In this paper, we identified three key weaknesses in the M³C-based RecipeQA dataset, stemming from the distances between questions and choices as well as the overlap between questions. We propose a novel Meta-RecipeQA framework guided by three Control-Knobs to reduce the bias in the question-answering tasks. Using the defined Control-Knobs, we generate multiple datasets of progressive difficulty levels. We also propose a general M³C (GM³C) framework for realizing SOTA models, which we use to implement models such as BiDAF and the Impatient Reader. It also motivates us to propose a novel Hierarchical Transformer based Reasoning Network (HTRN) that uses transformers for both the modality encoders and the scoring function. HTRN significantly outperforms prior SOTA methods on RecipeQA. At the same time, we see a drop in performance on Meta-RecipeQA compared to RecipeQA, which suggests that we have successfully reduced the bias. We also gain deeper insights into each Control-Knob through quantitative analysis. We hope that our framework will provide a rich evaluation of multimodal comprehension systems by testing models on datasets generated by sweeping through the three proposed Control-Knobs. Such evaluation goes beyond overall accuracy to a fine-grained understanding of robustness to dataset bias.
A. Implementation Details for HTRN
In our experiments, we use Word2Vec [21] (trained on the step descriptions of the recipes) and the BERT model "bert-base-nli-mean-tokens" trained using Sentence-Transformers [26]. For the visual encoder, we use the outputs of the final activation layer of a ResNet-50 trained on the ImageNet dataset, and a ViT model pretrained with CLIP [24]. In our implementation, we used two single-layer bi-directional LSTMs (hidden-layer size 256) to model the temporal information within a recipe step and across steps. The transformer-based scoring function in HTRN was trained from scratch; this transformer uses 4 hidden layers of size 512 with 8 attention heads. We used a cross-entropy loss to train our model, and the Adam [16] optimizer with a fixed learning rate of 5×10⁻⁴, together with an early-stopping criterion with patience set to 20, for all of our experiments. We performed our experiments on a single Nvidia GTX 1080Ti; a single experiment takes about 8-10 hours to train. We did not perform any hyperparameter tuning and used the same hyperparameters for all the models trained.
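As an illustration of the training setup above, the early-stopping rule (stop when a validation metric has not improved for 20 consecutive epochs) can be sketched as below. This is a minimal stand-alone helper written by us for illustration, not the authors' training code; the class and method names are our own.

```python
class EarlyStopping:
    """Stop training when the validation metric has not improved for
    `patience` consecutive epochs (the paper uses patience = 20)."""

    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_metric):
        """Record one epoch's validation metric; return True to stop."""
        if val_metric > self.best:
            self.best = val_metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop, `step` would be called once per epoch after validation, alongside the Adam update with the fixed learning rate.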
B. Control-Knobs Design Process
As described in Section 2.2, question preparation begins by sampling four random locations (in increasing order) in the recipe. Below we give details of the Control-Knobs that are unique to each of the remaining visual tasks, i.e., visual coherence and visual ordering. Control-Knob-1 is applied to all the visual tasks in the same way.
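The first step above, sampling four distinct step locations and keeping them in increasing order, can be sketched as follows; the function name is our own.

```python
import random


def sample_question_locations(num_steps, k=4, seed=None):
    """Sample k distinct step indices from a recipe and return them
    in increasing order, as in the question-preparation stage."""
    rng = random.Random(seed)
    if num_steps < k:
        raise ValueError("recipe has fewer than k steps")
    return sorted(rng.sample(range(num_steps), k))
```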
B.1. Visual Coherence
Control-Knob-2: In contrast to visual cloze, where three negative choices are provided along with the correct answer in the choice list, visual coherence has only one incoherent image (the correct answer) among the four images. For visual cloze, a negative choice was selected based on its metric distance from the correct choice, whereas in coherence there are no multiple negative choices, i.e., the odd image in the set is the correct answer. Hence, we alter the Control-Knob-2 process to select the incoherent image from the union/intersection of the K nearest neighbors (KNNs) of the three coherent images. We use two Control-Knob settings on the union set of the KNNs, where the sampling space of the negative choice is the Euclidean ball (0, m_d − s_d) or the shell (m_d − s_d, m_d + s_d); here m_d and s_d are computed over the samples from the union set.

Control-Knob-3: As described in the main paper, the aim of this Control-Knob is to control the distance of the negative choice from the question. In the case of visual coherence, we use the three coherent images as the question and the incoherent image as the negative choice. We take the minimum over all pairwise distances between the coherent images. To make the negative choice closer, the distance of a randomly picked sample from the mean of the question must be smaller than this minimum pairwise distance. This forces the sampled negative choice to have features similar to the question.
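A rough sketch of the distance-band sampling for visual coherence is given below. It assumes precomputed image embeddings and uses the mean of the three coherent images as the question centroid; the function name, the fallback behavior when the band is empty, and the exact band boundaries are our own illustrative choices, not the paper's released code.

```python
import numpy as np


def sample_incoherent_image(coherent, candidates, band="near", rng=None):
    """Illustrative sketch of Control-Knob-2 for visual coherence.

    coherent   : (3, d) embeddings of the coherent images
    candidates : (n, d) embeddings from the union of the KNN sets
    band       : "near" -> distance in (0, m_d - s_d)
                 "mid"  -> distance in (m_d - s_d, m_d + s_d)
    where m_d, s_d are the mean/std of candidate-to-question distances.
    Returns the index of the sampled incoherent image.
    """
    rng = rng or np.random.default_rng(0)
    center = coherent.mean(axis=0)                  # question centroid
    d = np.linalg.norm(candidates - center, axis=1)
    m, s = d.mean(), d.std()
    if band == "near":
        mask = d < m - s
    else:
        mask = (d >= m - s) & (d < m + s)
    idx = np.flatnonzero(mask)
    if idx.size == 0:                               # fallback: closest image
        return int(np.argmin(d))
    return int(rng.choice(idx))
```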
B.2. Visual Ordering
Control-Knob-2: For a sequence of four images randomly selected (in increasing order), there are 23 other orderings besides the correct one. The altered Control-Knob-2 process devises a metric to control the sampling of three negative choices out of these 23 possible orderings. For each wrong sequence, we compute a score by summing the pairwise distances of consecutive images in the sequence, i.e.,

Σ_{i=1}^{3} dist(ϕ_V(x_i), ϕ_V(x_{i+1})),

where x_i is the image at the i-th index in the wrong sequence. We obtain 23 such scores, which we use as weights to define a probability distribution, associating each wrong sequence with its corresponding probability. We then sample three negative choices out of the 23 options according to these probabilities. We have two settings for this Control-Knob: case 1, where the probability distribution is uniform, and case 2, as discussed above. As the visual ordering task evaluates the memorization and recall ability of the model, we do not have a Control-Knob-3 for it.
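The score-weighted sampling above can be sketched as follows, assuming a visual encoder has already produced one embedding per image. The function names and the use of NumPy's generator API are our own choices; this is an illustration, not the paper's implementation.

```python
import itertools
import numpy as np


def sequence_score(images, order):
    """Sum of consecutive pairwise distances for one ordering
    (the weight assigned to a wrong sequence in Control-Knob-2)."""
    return sum(
        np.linalg.norm(images[order[i]] - images[order[i + 1]])
        for i in range(len(order) - 1)
    )


def sample_negative_orders(images, n_neg=3, rng=None):
    """Sample n_neg wrong orderings of 4 images, with probability
    proportional to their score (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    wrong = [p for p in itertools.permutations(range(4)) if p != (0, 1, 2, 3)]
    weights = np.array([sequence_score(images, p) for p in wrong])
    probs = weights / weights.sum()
    picks = rng.choice(len(wrong), size=n_neg, replace=False, p=probs)
    return [wrong[i] for i in picks]
```

Setting `probs` to a uniform vector instead recovers case 1 of this Control-Knob.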
C. Hierarchical Transformer based Reasoning Network (HTRN)
In this section, we briefly describe the process of preparing query vectors for visual coherence and ordering tasks.
C.1. Visual Coherence
For Visual Coherence, the question Q = {q_i}_{i=1}^{N_A} consists of N_A images, one of which is incoherent. The answer A is a scalar pointing to the location of the incoherent image. As we need N_A scores, one per candidate answer a_j, we create N_A query vectors, where query vector j is prepared by removing the j-th element from Q. We thus obtain N_A query vectors, each of size N_A − 1, and exactly one of them contains only the coherent images.
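The leave-one-out construction of query vectors can be written compactly; this is an illustrative sketch with our own function name.

```python
def make_coherence_queries(question):
    """Build N_A leave-one-out query vectors for visual coherence:
    query j drops the j-th image, so exactly one query contains
    only the coherent images."""
    n = len(question)
    return [[q for k, q in enumerate(question) if k != j] for j in range(n)]
```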
C.2. Visual Ordering
As the structure of the ordering task already provides N_A choice vectors, each of size N_A, we do not make any changes here.
D. Meta Dataset of Meta-RecipeQA
The preparation of the metadata consists of two parts: 1) a description of each recipe, compiled from [33] and instructables.com, and 2) control panels that moderate the scale of the questions and answers during the dataset-generation process. Below, we describe the process involved in the creation of the Meta Dataset.

Each recipe in the metadata stage contains the following information: a) the name of the recipe, and b) all steps required for completing the recipe, where each step includes multimodal (text, image) information describing that step. To prepare the meta recipe content, we begin by further cleaning RecipeQA [33], as we observe noise in both modalities, i.e., text and image. Missing texts and images from recipe steps were scraped again from instructables.com. Next, we remove the noise on the textual side by reprocessing the data. Examples of the persistent noise removed by our algorithm are "HTML/CSS" tags, Unicode characters, and a few data entries that were not food recipes ("nutrient-calculator", "cnc-nyancat-foodmold-nyancake"). After removing the above-mentioned aberrations, we used the NLTK toolkit [19] to process the out-of-dictionary vocabulary. Most of the out-of-dictionary vocabulary was found to be some composite of in-dictionary words, or words combined with numbers or measurement units. The next step of the preprocessing algorithm separates the in-dictionary words wrapped inside out-of-dictionary tokens. This process provided us with a much cleaner version of the data.
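A minimal version of the textual cleaning described above might look like the following. It covers only the tag removal, Unicode stripping, and whitespace normalization steps, and omits the NLTK-based splitting of composite out-of-dictionary tokens; it is our sketch rather than the released preprocessing code.

```python
import re


def clean_step_text(text):
    """Strip HTML/CSS tags and non-ASCII (Unicode) debris from a
    recipe-step description, then collapse runs of whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)             # drop HTML/CSS tags
    text = text.encode("ascii", "ignore").decode()   # drop Unicode debris
    return re.sub(r"\s+", " ", text).strip()         # normalize whitespace
```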
E. Prior Scoring Functions
The Impatient Reader [10] is an attention-based model that recurrently attends over the context for each question element except the location containing the "@placeholder". This attention allows the model to accumulate information recurrently from the context as it sees each question embedding, and it outputs a final joint embedding for answer prediction. This embedding is used to compute a compatibility score for each choice using a cosine-similarity function in the feature space. The attention over context and question is computed on the outputs of an LSTM; the answer choices are also encoded using an LSTM with a similar architecture. BiDAF [31] is an abbreviation of "Bi-Directional Attention Flow"; as the name suggests, it employs a bi-directional attention-flow mechanism between the context, the representations of the question images, and each candidate-choice representation to learn temporal matching. We base the prediction on the best-matched candidate. BiDAF was originally proposed as a span-selection model over the input context; here, we adapt it to work for the visual tasks in a multimodal setting.
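The cosine-similarity scoring head described for the Impatient Reader can be sketched as below. This is our own minimal version operating on already-computed embeddings, not the original model code.

```python
import numpy as np


def cosine_scores(joint_embedding, choice_embeddings):
    """Score each candidate choice by cosine similarity to the joint
    context/question embedding."""
    q = joint_embedding / np.linalg.norm(joint_embedding)
    C = choice_embeddings / np.linalg.norm(
        choice_embeddings, axis=1, keepdims=True
    )
    return C @ q


def predict(joint_embedding, choice_embeddings):
    """Return the index of the best-matched candidate choice."""
    return int(np.argmax(cosine_scores(joint_embedding, choice_embeddings)))
```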
F. Experimental Results
The additional visual cloze results on the remaining three pairwise combinations of LM and VM are consistent with our analysis in the main paper. The three tuples shown in the plots are (Word2Vec, ViT), (BERT, ResNet-50), and (BERT, ViT). We also see the impact of Control-Knob-3 in all three plots on the meta dataset, compared to its effect in the case of (Word2Vec, ResNet-50). For coherence, we study the impacts using Word2Vec and ResNet-50, where Control-Knob-1 and Control-Knob-3 clearly have a larger impact on model performance than Control-Knob-2.
Figure 2. Distribution of distances of the correct and incorrect choices from the question in the feature space (ViT) for RecipeQA (left) and one of our datasets generated with the Control-Knobs set to (0,1,1). Training a Support Vector Machine (SVM) with the distance of each choice from the question as the input feature results in an accuracy of 71.9% on RecipeQA and 31.7% on our dataset. This highlights the inherent bias in RecipeQA, which a naive model can exploit to achieve high performance.
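The bias probe described in the Figure 2 caption uses only the distance of each choice from the question as a feature. The sketch below computes that feature and a "blind" nearest-choice baseline built on it; the paper fits an SVM on the same feature, which we replace here with a simple argmin for a dependency-free illustration. Function names are ours.

```python
import numpy as np


def distance_features(question_emb, choice_embs):
    """Per-choice feature used by the bias probe: the Euclidean distance
    of each choice embedding from the mean question embedding."""
    center = question_emb.mean(axis=0)
    return np.linalg.norm(choice_embs - center, axis=1)


def nearest_choice_probe(question_emb, choice_embs):
    """A baseline that answers with the choice closest to the question.
    If this beats chance, the dataset leaks a distance bias."""
    return int(np.argmin(distance_features(question_emb, choice_embs)))
```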
Figure 3. An illustration of the General M³C (GM³C) model, which consists of two primary components: modality encoders and a scoring function. We use this model to implement prior SOTA models and also to propose our Hierarchical Transformer based Reasoning Network (HTRN), which uses transformers for both components.
The context C = {c_k}_{k=1}^{N_C} consists of N_C steps in the textual modality, where each step c_k = {w_s^k}_{s=1}^{K} contains K tokens. For the visual cloze task, the question Q = {q_i}_{i=1}^{N_Q} consists of N_Q images with one image replaced by a placeholder. The answer A = {a_j}_{j=1}^{N_A} is composed of one correct and N_A − 1 incorrect choices. The GM³C model is shown in Figure 3.
Figure 4. Evaluation on visual cloze tasks on both RecipeQA (blue bars) and the proposed Meta-RecipeQA (yellow bars). For Meta-RecipeQA we compute the mean and the standard deviation of performance across all the generated datasets. For each figure, we mention the LM and VM used for training the model.
Figure 5. Studying the impact of the Control-Knobs by comparing the performance of different scoring functions (adapted baselines and transformer) for each knob setting. LM is set to Word2Vec and VM to ResNet-50. Starting from the left, we plot the performance for Control-Knob-2 and Control-Knob-3 over all combinations of control settings while fixing Control-Knob-1. We do the same for Control-Knob-2 and Control-Knob-3 in the center and right figures, respectively.
Figure 9. Visual Cloze: Impact of the Control-Knobs, comparing the performance of different scoring functions (adapted baselines and transformer) for each knob setting. LM is set to BERT and VM to ViT. Starting from the left, we plot the performance for Control-Knob-2 and Control-Knob-3 over all combinations of control settings while fixing Control-Knob-1. We do the same for Control-Knob-2 and Control-Knob-3 in the center and right figures, respectively.
Figure 10. Visual Coherence: Impact of the Control-Knobs, comparing the performance of different scoring functions (adapted baselines and transformer) for each knob setting. LM is set to Word2Vec and VM to ResNet-50. Starting from the left, we plot the performance for Control-Knob-2 and Control-Knob-3 over all combinations of control settings while fixing Control-Knob-1. We do the same for Control-Knob-2 and Control-Knob-3 in the center and right figures, respectively.
Figure 11. Visual Ordering: Impact of the Control-Knobs, comparing the performance of different scoring functions (adapted baselines and transformer) for each knob setting. LM is set to Word2Vec and VM to ResNet-50. For each Control-Knob, we fix one and plot the other, as shown side by side in the figure.
Control-Knobs. The Control-Knobs make it possible for us to generate a test bed of multiple datasets of progressive difficulty levels.
• Propose a general M³C (GM³C) model that is used to implement several SOTA models and motivates a novel Hierarchical Transformer based Reasoning Network (HTRN). HTRN uses transformers for both the modality encoder and the scoring function.
• HTRN outperforms SOTA by ∼+18% on the Visual Cloze task and ∼+13% (absolute) on average over all the tasks.
• We observe a considerable drop in performance across all the models when testing on RecipeQA versus the proposed Meta-RecipeQA (e.g., 83.6% versus 67.1% for HTRN).
Table 1. Dataset statistics for RecipeQA and Meta-RecipeQA.

                                     RecipeQA    Meta-RecipeQA
#Recipes (train, valid)              9101        8639
#VisualCloze QA                      7986        (8K-22K)
#Recipes used in VisualCloze QA      5684        (6K-9K)
#in-vocab tokens / #vocab tokens     19.9%       90.2%
Table 2. Comparison of HTRN with SOTA on the three visual tasks of the RecipeQA dataset.

Model                      Cloze   Coherence   Ordering   Average
Human* [33]                77.6    81.6        64.0       74.4
Hasty Student [33]         27.3    65.8        40.9       44.7
Impatient Reader [10]      27.3    28.1        26.7       27.4
PRN [1]                    56.3    53.6        62.8       57.6
MLMM-Trans [17]            65.6    67.3        63.8       65.6
(Word2Vec, ResNet-50)
HTRN-Bidaf*                57.1    58.2        65.5       60.3
HTRN-Impatient*            58.8    57.9        64.2       60.3
HTRN-Transformer*          70.5    67.7        65.1       67.8
(BERT, ViT)
HTRN-Bidaf                 73.7    77.0        70.7       73.8
HTRN-Impatient             76.0    74.1        70.7       73.6
HTRN-Transformer           83.6    80.1        70.3       78.0
Here we refer to the sequence of steps with the placeholder as the question.
Acknowledgement: We would like to thank Michael Cogswell for the valuable suggestions.
References

[1] Mustafa Sercan Amac, Semih Yagcioglu, Aykut Erdem, and Erkut Erdem. Procedural reasoning networks for understanding multimodal procedures. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 441-451, 2019.
[2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433, 2015.
[3] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
[4] Zheng Cai, Lifu Tu, and Kevin Gimpel. Pay attention to the ending: Strong neural baselines for the ROC story cloze task. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 616-622, 2017.
[5] Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358-2367, 2016.
[6] Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, Samuel J. Gershman, and Noah D. Goodman. Evaluating compositionality in sentence embeddings. arXiv preprint arXiv:1802.04302, 2018.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, 2019.
[8] Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. On making reading comprehension more comprehensive. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 105-112, 2019.
[9] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324, 2018.
[10] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701, 2015.
[11] Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In International Conference on Learning Representations (ICLR), 2016.
[12] Lynette Hirschman, Marc Light, Eric Breck, and John D. Burger. Deep Read: A reading comprehension system. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 325-332, 1999.
[13] Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan Boyd-Graber, Hal Daume, and Larry S. Davis. The amazing mysteries of the gutter: Drawing inferences between panels in comic book narratives. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7186-7195, 2017.
[14] Divyansh Kaushik and Zachary C. Lipton. How much reading does reading comprehension require? A critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010-5015, 2018.
[15] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In European Conference on Computer Vision, pages 235-251. Springer, 2016.
[16] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[17] Ao Liu, Shuai Yuan, Chenbin Zhang, Congjian Luo, Yaqing Liao, Kun Bai, and Zenglin Xu. Multi-level multimodal transformer network for multimodal recipe comprehension. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1781-1784, 2020.
[18] Shanshan Liu, Xin Zhang, Sheng Zhang, Hui Wang, and Weiming Zhang. Neural machine reading comprehension: Methods and trends. Applied Sciences, 9(18):3698, 2019.
[19] Edward Loper and Steven Bird. NLTK: The Natural Language Toolkit. arXiv preprint cs/0205028, 2002.
[20] Tom McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, 2019.
[21] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[22] Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658-4664, 2019.
[23] Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191, 2018.
[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
[25] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, 2016.
[26] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019.
[27] Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193-203, 2013.
[28] Pritish Sahu, Michael Cogswell, Sara Rutherford-Quach, and Ajay Divakaran. Comprehension based question answering using Bloom's taxonomy. arXiv preprint arXiv:2106.04653, 2021.
[29] Pritish Sahu, Karan Sikka, and Ajay Divakaran. Towards solving multimodal comprehension. arXiv preprint arXiv:2104.10139, 2021.
[30] Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A. Smith. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 15-25, 2017.
[31] Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR), 2017.
[32] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler. MovieQA: Understanding stories in movies through question-answering. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4631-4640, 2016.
[33] Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Nazli Ikizler-Cinbis. RecipeQA: A challenge dataset for multimodal comprehension of cooking recipes. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1358-1368, 2018.
[34] Changchang Zeng, Shaobo Li, Qin Li, Jie Hu, and Jianjun Hu. A survey on machine reading comprehension: tasks, evaluation metrics and benchmark datasets. Applied Sciences, 10(21):7640, 2020.
Figure 7. Visual Cloze: Impact of the Control-Knobs, comparing the performance of different scoring functions (adapted baselines and transformer) for each knob setting. LM is set to Word2Vec and VM to ViT. Starting from the left, we plot the performance for Control-Knob-2 and Control-Knob-3 over all combinations of control settings while fixing Control-Knob-1. We do the same for Control-Knob-2 and Control-Knob-3 in the center and right figures, respectively.
Visual Cloze: Impact of Control-Knobs by comparing the performance of different scoring functions (adapted baseline and transformer) for each knob setting. LM is set to BERT and VM is set to ResNet-50. Starting from left we plot performance of Control-Knob-2 and Control-Knob-3 for all combination of control setting by fixing Control-Knob-1. We do the same for. 8Figure 8. Visual Cloze: Impact of Control-Knobs by comparing the performance of different scoring functions (adapted baseline and transformer) for each knob setting. LM is set to BERT and VM is set to ResNet-50. Starting from left we plot performance of Control- Knob-2 and Control-Knob-3 for all combination of control setting by fixing Control-Knob-1. We do the same for Control-Knob-2 and Control-Knob-
| [] |
[
"PARALLEL NETWORK FLOW ALLOCATION IN REPEATED ROUTING GAMES VIA LQR OPTIMAL CONTROL",
"PARALLEL NETWORK FLOW ALLOCATION IN REPEATED ROUTING GAMES VIA LQR OPTIMAL CONTROL"
] | [
"Marsalis Gibson [email protected] ",
"Yiling You [email protected] ",
"Alexandre M Bayen [email protected] ",
"\nUniversity of California\nBerkeley 652 Sutardja Dai Hall94720-1710BerkeleyCA\n",
"\nUniversity of California\nBerkeley 652 Sutardja Dai Hall94720-1710BerkeleyCA\n",
"\nBerkeley Institute of Transportation Studies\nUniversity of California\n109 McLaughlin Hall94720-1720BerkeleyCA\n"
] | [
"University of California\nBerkeley 652 Sutardja Dai Hall94720-1710BerkeleyCA",
"University of California\nBerkeley 652 Sutardja Dai Hall94720-1710BerkeleyCA",
"Berkeley Institute of Transportation Studies\nUniversity of California\n109 McLaughlin Hall94720-1720BerkeleyCA"
] | [] | In this article, we study the repeated routing game problem on a parallel network with affine latency functions on each edge. We cast the game setup in a LQR control theoretic framework, leveraging the Rosenthal potential formulation. We use control techniques to analyze the convergence of the game dynamics with specific cases that lend themselves to optimal control. We design proper dynamics parameters so that the conservation of flow is guaranteed. We provide an algorithmic solution for the general optimal control setup using a multiparametric quadratic programming approach (explicit MPC). Finally we illustrate with numerics the impact of varying system parameters on the solutions. | null | [
"https://arxiv.org/pdf/2112.14888v1.pdf"
] | 245,634,562 | 2112.14888 | e99f58fec7f1db8ab7f34057d599b87a66d0c97c |
PARALLEL NETWORK FLOW ALLOCATION IN REPEATED ROUTING GAMES VIA LQR OPTIMAL CONTROL
Submission Date: August 1, 2020 30 Dec 2021
Marsalis Gibson [email protected]
Yiling You [email protected]
Alexandre M Bayen [email protected]
University of California
Berkeley 652 Sutardja Dai Hall94720-1710BerkeleyCA
University of California
Berkeley 652 Sutardja Dai Hall94720-1710BerkeleyCA
Berkeley Institute of Transportation Studies
University of California
109 McLaughlin Hall94720-1720BerkeleyCA
PARALLEL NETWORK FLOW ALLOCATION IN REPEATED ROUTING GAMES VIA LQR OPTIMAL CONTROL
Submission Date: August 1, 2020 30 Dec 2021Word Count: 6677 words + 9 figures× 0 + 2 tables× 250 = 7177 wordsRepeated routing gameAlgorithmic Game TheoryNash EquilibriumLinear-quadratic RegulatorMultiparametric quadratic programmingExplicit MPC
In this article, we study the repeated routing game problem on a parallel network with affine latency functions on each edge. We cast the game setup in a LQR control theoretic framework, leveraging the Rosenthal potential formulation. We use control techniques to analyze the convergence of the game dynamics with specific cases that lend themselves to optimal control. We design proper dynamics parameters so that the conservation of flow is guaranteed. We provide an algorithmic solution for the general optimal control setup using a multiparametric quadratic programming approach (explicit MPC). Finally we illustrate with numerics the impact of varying system parameters on the solutions.
INTRODUCTION
The routing problem, a problem in which trip-makers route traffic between an origin and a destination, is a central and essential component in the study of urban transportation planning. Emerging as an enabler to this problem over the last decade, navigational apps have used solutions to the routing problem to provide shortest paths from origin to destination, or to identify the paths with the shortest travel time (1). Due to ubiquitous adoption of navigational apps by motorists (commuters, travelers), TNCs (Lyft/Uber), delivery fleets (2), and many others, these apps are playing ever bigger roles in transportation, and the significance of those roles is growing (3). With increased penetration of routing-app users, navigational apps are starting to "act" as flow allocation mechanisms, having positive effects on travel in some cities and negative effects in others when users take local neighborhood routes to avoid traffic (4). Thus, it is becoming ever more important to understand the routing game problem, its solutions, and how flow allocation mechanisms (and traffic assignment) are being used in transportation.
The traffic assignment problem is a model framework for aggregated flows of motorists routed through a network from their origins to their destinations. To address the traffic assignment problem, this process has initially been modeled in a static way using notions such as static traffic assignment, for which specific solutions such as user equilibrium and system optimum exist (5) (6). In this static model, we assume the network being studied is close to "steady state", and that the current network capacity is the capacity over the entire analysis period. However, in hopes of moving away from this assumption and better modeling traffic in urban areas, researchers have moved toward dynamic traffic assignment models, which aim to model time-varying networks and demand interactions and changes (7) (8) (9) (5). Additionally, within the field of traffic control, researchers have used simulation-based approaches (10), and some are starting to use big data to further analyze travel behavior (11).
The traffic assignment problem has been modeled using game theory among many other techniques. Specifically, it has been posed in a game theoretic framework called the routing game. A routing game is a game in which the players (non-cooperatively) route traffic between an origin and a destination of a proposed network, where the network models a system of roads. After the players choose routing strategies, each player incurs a cost on the route they chose (12). One specific kind of game is called "the one-shot" game. The one-shot routing game is a static game, where the players make their decisions simultaneously and therefore no time history is involved. Within the one-shot game framework, multiple equilibrium models and the corresponding equilibria, for example the user equilibrium or Nash equilibrium (14), have played an important role in understanding the inefficiencies of networks (e.g. the price of anarchy (13)). Properties and characterizations of the equilibrium solutions are also well studied; for example, the Nash equilibria are known to be the solution set of the convex Rosenthal potential function (15).
Within routing games, one can also study a dynamic game, where the environment is time varying. The analysis of such games is more difficult from a theoretic standpoint: the notion of dynamic equilibria has to be introduced, and extra ingredients like information acquisition and the time ordering of the execution of actions need to be specified (16). Therefore, moving towards more dynamical games, we consider repeated games, i.e. a static game iterated over time, as a model of the evolution of the traffic situation. The iteration here can be naturally interpreted as time discretization (e.g. settings where traffic conditions are steady over time intervals). In the case of the present work, we follow previous approaches in which the repeated nature of the game encompasses the process of a routing entity (Google, Waze, etc.) learning from day to day from its previous performance. Oftentimes (and it is the case in our formulation), the repeated version is essentially the sequential appreciation of a static game. A natural question to study is the behavior of the dynamics when the players repeat a static game using past information. Dynamics in a repeated routing game might not converge, and even if they do, convergence might not be quantifiable (e.g. convergence almost everywhere, convergence in the sense of Cesaro means or of time averages, etc.). Even when the dynamics sequence itself converges, the equilibrium to which it converges (Nash equilibrium or social optimum) might not be unique (17). Approaching this question, Krichene et al. estimate the learning dynamics within an online learning model of player dynamics (18) and study the traffic dynamics under partial control (19).
To better understand the routing game in these systems and provide bigger impacts in transportation, we might also want to leverage concepts from other areas of engineering. Modeling and analysis of conflict, the study of systems, and the study of their equilibria are not unique to the routing game. In fact, very similar concepts and analyses show up in control theory, which, similarly to game theory, often models interactions between a system of controllers (the "decision-makers") and their environment (20). Unsurprisingly, many influences from game theory show up in many areas of controls (differential games, team games, and distributed control) (21), and many connections can be made between the two. Even though many similarities and influences exist between game theory and controls, there are many useful control design techniques that exist separately from game theory (22) (23) (24) but are conceptually similar and applicable to designing decision-makers within routing games, especially for routing systems like Google's.
Therefore, in this article, we contribute to the literature of routing games and draw from other areas of engineering to make deeper connections and make use of control techniques for routing. We specifically introduce a new game design framework for routing games using the linear-quadratic regulator (LQR).
Key Contributions
The present article focuses on transforming the repeated game problem into a control theoretic problem, and studying the convergence of the game dynamics in this framework. The key contributions of the article include the following:
• We provide a game design scheme for repeated routing games through the Linear Quadratic Regulator (LQR).
• We design control dynamics with specific choices of system parameters to respect the conservation of flow and achieve specific convergence results.
• We provide a method for producing piecewise affine optimal routing strategies using explicit model predictive control (MPC) techniques.
• We derive new theoretical results for specific cases of the new design scheme.
• We illustrate a geometric framework for visualizing optimal routing solutions for every feasible state in the routing game.
• We extend the comparison literature between game theory and controls.
Organization of the article

This article first explains the new control theoretic game design scheme in LQR Repeated Game Framework for Routing in Parallel Networks, under which we also build up the intuition and understanding of the framework by illustrating an evolution of the game from a one-shot framing (see One-shot Game Framework) to a repeated game framing (see Repeated Game Framework), and finally to the new control framing (see LQR Framework). We further present analogies between the new framework and a classical algorithmic game theory model of routing games within this section (see LQR Framework); we then discuss how to design the game (see Designing Games with LQR) and its interpretation (see Interpretation of the Framework). Next, we run through the system analysis and its properties in System Analysis and System Properties, where we additionally describe a few requirements for the system. In Solutions via Explicit MPC: A Multiparametric Quadratic Programming Approach, we describe an algorithmic approach to solving the control routing problem using explicit MPC techniques. Finally, in Illustration on Simple Numerics we run through a few numerical simulations and present their results with associated discussions.
PRELIMINARY

Notation

G = (V, E): a parallel network for the routing game, made of a set of edges E (roads) and a set of vertices V (one origin and one destination); a directed graph
e ∈ E: an edge in the edge set, i.e. an element of E
l_e(·): affine latency function on edge e, l_e(f_e) = a_e f_e + b_e with a_e, b_e ≥ 0, mapping R_{≥0} → R_{≥0}
n_e: number of parallel edges (a natural number)
T: time horizon, i.e. the total number of rounds in the game (discrete time)
∆: probability simplex, ∆ := {f : Σ_{e∈E} f_e = 1, f_e ≥ 0}
f: flow allocation vector on the network, f = (f_e)_{e∈E} ∈ ∆
l: latency vector incurred on the network, l = (l_e(f_e))_{e∈E} ∈ R^{n_e}_{≥0}
x^t: system state vector at time t, x^t = (x^t_1, ..., x^t_{n_e}) ∈ ∆
u^t: system input vector at time t, u^t = (u^t_1, ..., u^t_{n_e}) ∈ ∆
x*: steady state of the system, x* ∈ ∆
x^0: initial state of the system, x^0 ∈ ∆
Q_f: terminal cost matrix, positive semi-definite, in R^{n_e × n_e}
R: running cost matrix for inputs, positive semi-definite, in R^{n_e × n_e}
Q: running cost matrix for states, positive semi-definite, in R^{n_e × n_e}
γ: weight parameter in the dynamics, γ ∈ [0, 1]
A: mapping from the memory to the decision, a left stochastic matrix in R^{n_e × n_e}
B: mapping from the suggested routing to the decision, a left stochastic matrix in R^{n_e × n_e}
LQR REPEATED GAME FRAMEWORK FOR ROUTING IN PARALLEL NETWORKS
We consider a specific routing game design problem where the routing game is posed on a parallel network, G. G is modeled as a directed graph G = (V, E), where V is a set containing two nodes (an origin o and a destination d), and E is an edge set containing n_e parallel edges connecting the origin to the destination.
One-shot Game Framework
We start by introducing the flow allocation problem, which we will use later as a potential equilibrium point for our repeated game. We will also refer to it as a one-shot game (i.e. a repeated game converging in one iteration).
Definition 4.1 (Edge flows).
For each edge e ∈ E , the edge flow, f e ∈ R ≥0 , represents a large collection of non-atomic drivers using edge e on the network. Each edge flow, f e , represents a portion of traffic in the game (and thus has "mass"), while individual drivers themselves do not. They are negligible with respect to the total number of drivers present (i.e. the non-atomic framework).
In the flow allocation problem, all drivers are allocated along the network's edges, and each allocation incurs a latency (i.e., loss), l (this latency is usually labeled as the average travel time each driver took). Each edge's latency l is modeled in the present work as an affine function associated with each edge. Normally, in the allocation problem, drivers at the origin, o, are allocated strategically to reduce the total travel time the system will incur. The resulting allocation for all of the drivers of total mass 1 in G , is represented as a stochastic vector f = ( f e ) e∈E ∈ R n e ≥0 , where each f e tells us the proportion of flow being allocated on edge e. Specifically,
f ∈ ∆, where ∆ := { f : Σ_{e∈E} f_e = 1, f_e ≥ 0 }.   (4.1)
Nash Equilibrium
If we assume that all drivers act rationally and has access to perfect information, we can describe an equilibrium of the game, known as the Nash equilibrium.
Definition 4.3 (Nash equilibrium).
A flow allocation f ∈ ∆ is a Nash equilibrium (user equilibrium (14), non-atomic equilibrium flow (12), or Wardrop equilibrium (25)) if and only if:
∀e ∈ E such that f_e > 0, l_e(f_e) = min_{e′∈E} l_{e′}(f_{e′}).   (4.2)
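To make the equal-latency condition concrete, here is a minimal sketch (not from the paper) that computes the Wardrop/Nash split of unit demand on a two-edge parallel network with affine latencies; the coefficients a_i, b_i below are hypothetical illustrative values:

```python
def nash_two_edges(a1, b1, a2, b2):
    """Nash (Wardrop) split of unit demand over two parallel edges with
    latencies l1(f) = a1*f + b1 and l2(f) = a2*f + b2.

    At an interior equilibrium both used edges see equal latency:
        a1*f1 + b1 = a2*(1 - f1) + b2  =>  f1 = (a2 + b2 - b1) / (a1 + a2).
    Clipping to [0, 1] covers the corner cases where one edge is unused.
    """
    f1 = (a2 + b2 - b1) / (a1 + a2)
    f1 = min(max(f1, 0.0), 1.0)
    return f1, 1.0 - f1

# Hypothetical coefficients: l1(f) = f, l2(f) = 2f + 0.5.
f1, f2 = nash_two_edges(1.0, 0.0, 2.0, 0.5)
l1, l2 = 1.0 * f1, 2.0 * f2 + 0.5   # latencies are equal at equilibrium
```

With these numbers, 5/6 of the flow rides the cheaper edge and both used edges incur the same latency, as the definition requires.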
Repeated Game Framework
In order to encompass time-varying changes in the system, one can use "repeated play" or learning models (26). In this framework, a flow allocator (e.g. Google Maps) makes a decision iteratively, as opposed to one time in the one-shot game, and uses the outcome of each iteration to adjust its next decision. Adding a t dependency for time to the aforementioned notation, f^t ∈ ∆ becomes the flow allocation at time t, l^t ∈ R^{n_e}_{≥0} becomes a vector of latencies produced from the functions l_e(·) at time t, and throughout the game the flow allocator iteratively chooses f^t and then observes l^t. In this framework, designing learning models of player decisions and determining whether the resulting dynamics asymptotically converge to equilibrium is one of the objectives of our article. Furthermore, we are also interested in situations in which convergence happens in one shot. Previous research has characterized classes of learning algorithms that converge to different sets of equilibria (27) (21) (28). Specifically, different models of learning algorithms and their associated convergence guarantees have been studied for the routing game (18), (19), (29).
LQR Framework
The following table sets the framework for the proposed work. The right column defines the repeated game framework previously described (using example designs from previous work (30)), while the left column defines the classical control framework used later for LQR control. Both frameworks present obvious mathematical analogies (21) (31).
We re-frame the routing problem under the repeated game framework into the LQR framework and solve for/analyze player strategies using existing control techniques. We will essentially aim to do the same game theoretical analysis within linear/nonlinear controls. In the right column of the table, we list the setup under the repeated game framework. The goal of the routing problem there is to design algorithms that dictate strategies for players (the flow allocator). At each time step t, the algorithm updates the flow allocation f t+1 based on the players' memory f t and the loss on the edge l t induced by the traffic (29) (19) (32) (33). The goal is to guarantee a sub-linear regret and a convergence to the set of Nash equilibria.
Control problem vs. repeated game (online learning):

Notation. Control problem: x^t state, u^t control at time t. Repeated game: f^t flow allocation, l^t loss at time t.

Entities involved. Control problem: central decision maker: flow allocator; players: drivers. Repeated game: central decision maker: flow allocator; players: flow allocator.

Design. Control problem: u^t(·) such that x^{t+1} = g(x^t, u^t) is stable, where g(·, ·) is the recurrence relation defining the dynamics. Repeated game: h(·, ·) such that f^{t+1} = h(f^t, l^t) defines the dynamics for the routing game.

Example designs. Control problem: u^t(x^t) = K x^t and g(x^t, u^t) = A x^t + B u^t, so that x^{t+1} = (A + BK) x^t, where (A, B) is stabilizable and A + BK is asymptotically stable. Repeated game: h(f^t, l^t) = argmin_{f ∈ ∆} ⟨l^t, f − f^t⟩ + (1/η_t) D_Ψ(f, f^t), where η_t is the learning rate at time t and D_Ψ(·, ·) is the Bregman divergence induced by a strongly convex function Ψ, defined as D_Ψ(x, y) = Ψ(x) − Ψ(y) − ∇Ψ(y)⊤(x − y). This is called the mirror descent algorithm with Bregman divergence D_Ψ.

Objective function (to minimize). Control problem: cumulative cost, Σ_{t=0}^{T−1} c_1(x^t, u^t) + c_{1,T}(x^T). Repeated game: cumulative regret, Σ_{t=0}^{T} c_2(f^t, l^t) − min_{f ∈ ∆} Σ_{t=0}^{T} c_2(f, l^t).

Example objective functions. Control problem: c_1(x^t, u^t) = x^t⊤ Q x^t + u^t⊤ R u^t, c_{1,T}(x^t) = x^t⊤ Q_f x^t, with Q, Q_f ⪰ 0, R ≻ 0. Repeated game: c_2(f^t, l^t) = f^t⊤ l^t.

We start from the repeated games setup, and cast the repeated game problem in a control theoretic framework. We keep the authoritative central decision maker u as the flow allocator, but we differentiate between the "player" and the "decision-maker" as coined in the original game theoretic setup. Here, the "players" will refer to the drivers in the network flow being routed. In our framework, the "players" do not play; they are only considered by the routing system and the model. The central decision maker is what we want to design in this framework. It has full access to the flow allocation x on the network, and aims to achieve a certain routing goal (e.g., steer the network flow to a target flow distribution, or minimize the average travel time). The central decision maker embeds the routing goal in the cumulative cost function Σ_{t=0}^{T−1} c_1(x^t, u^t) + c_{1,T}(x^T).
The players now update their routing decision x t+1 by leveraging their memory x t and the central decision maker's suggested routing distribution u t at each time step. The target flow allocations x * are characterized as the steady states of the controlled system, and the central decision maker chooses the optimal controls {u t } to minimize the cumulative cost function and stabilizes the system to its steady states.
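For comparison with the control column, the mirror descent update quoted in the repeated-game column of the table can be sketched with the entropy Bregman divergence, for which the argmin reduces to a multiplicative-weights update. The two-edge latencies and the step size below are illustrative assumptions, not values from the paper:

```python
import math

def entropic_mirror_descent_step(f, losses, eta):
    """One mirror descent step on the simplex with the entropy Bregman
    divergence: f_{t+1,e} is proportional to f_{t,e} * exp(-eta * l_{t,e})."""
    w = [fe * math.exp(-eta * le) for fe, le in zip(f, losses)]
    total = sum(w)
    return [we / total for we in w]

# Hypothetical two-edge routing game: l1(f) = f, l2(f) = 2f + 0.5, unit demand.
f = [0.5, 0.5]
for _ in range(2000):
    losses = [1.0 * f[0], 2.0 * f[1] + 0.5]
    f = entropic_mirror_descent_step(f, losses, eta=0.1)
```

On this instance the iterates approach the split (5/6, 1/6), at which both edges have equal latency, so the multiplicative update leaves the allocation unchanged.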
The choice of LQR is motivated by the observation that a potential game with affine latency and a parallel network can be naturally framed as a quadratic cost and linear dynamic problem, leveraging the convex formulation of the Rosenthal potential:
minimize over u^t, t = 0, ..., T−1:   Σ_{t=0}^{T−1} (x^t⊤ Q x^t + u^t⊤ R u^t) + x^T⊤ Q_f x^T   (4.3)
subject to   x^{t+1} = A x^t + B u^t,  t = 0, 1, ..., T − 1   (4.4)
             x^0 = x(0)   (4.5)

where x^t ∈ R^{n_x} is the state of the system at time t and u^t ∈ R^{n_u} is the action of the decision-maker at time t, with appropriate matrices A, B, Q, R, Q_f. Within this framework, we can leverage known optimal control techniques to formulate optimal strategies and inherit known properties of the system. Furthermore, Definition 4.3 for the context here becomes:

Nash Equilibrium

Definition 4.4 (Nash equilibrium). A state of the system x ∈ ∆ is a Nash equilibrium if and only if:
∀e ∈ E such that x_e > 0, l_e(x_e) = min_{e′∈E} l_{e′}(x_{e′}).   (4.6)
We reiterate the key differences between the repeated game framework and our control theoretic framework:
1. In repeated games for parallel networks, the two entities involved, the player and the central decision maker, are the same (a flow allocator). However, in our control theoretic framework, the two entities are different: the players are the drivers (in the present case aggregated as non-atomic flows along the edges) and the central decision maker is the flow allocator.
2. In repeated games, the players update their flow allocation based on their memory f^t and the incurred loss l^t on the network. In our control theoretic framework, the players update their flow allocation based on their memory x^t and the central decision maker's suggested routing decision u^t.
3. In repeated games, one possible goal of the non-atomic player, the flow allocator (see for example (32) (34)), is to update their flow allocation in order to converge to the set of Nash equilibria and to achieve sub-linear regret. In our control theoretic framework, the goal of the central decision maker is to steer the flow allocation to a target flow distribution, by inputting suggestions to the system's players (drivers).
Designing Games with LQR
We now describe our control routing model.
1. States and controls: the state of the LQR at time t is the flow vector x t , consisting of the flow on each edge at time t, and the control at time t is the flow allocation u t given by the decision maker, i.e.,
x^t = (x^t_1, ..., x^t_{n_e})⊤,  t = 0, ..., T   (4.7)
u^t = (u^t_1, ..., u^t_{n_e})⊤,  t = 0, ..., T − 1.   (4.8)

We model both x^t, u^t ∈ ∆ at each time step t.
2. Equation for the dynamics: we assume the dynamics of the flow update is described using a linear time invariant difference equation
x^{t+1} = γ A x^t + (1 − γ) B u^t,  t = 0, 1, ..., T − 1   (4.9)

where γ ∈ [0, 1] is a design parameter that weights the contribution of x^t and u^t in the update rule, and A, B ∈ R^{n_e × n_e} are two design matrices deciding how the flow vectors x^t and u^t will affect the drivers' routing decision in the subsequent time step, respectively. 3. Cost function: the cost function of our control problem is a summation of quadratic functions
Σ_{t=0}^{T−1} (x^t⊤ Q x^t + u^t⊤ R u^t) + x^T⊤ Q_f x^T   (4.10)

where Q, Q_f, R ⪰ 0.
The resulting control problem is

minimize over u^t, t = 0, ..., T−1:   Σ_{t=0}^{T−1} (x^t⊤ Q x^t + u^t⊤ R u^t) + x^T⊤ Q_f x^T   (4.11)
subject to   x^{t+1} = γ A x^t + (1 − γ) B u^t,  t = 0, 1, ..., T − 1   (4.12)
             x^t ∈ ∆,  t = 0, ..., T   (4.13)
             u^t ∈ ∆,  t = 0, ..., T − 1   (4.14)
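As a quick numerical sketch of the dynamics (4.12) under the simplex constraints (4.13)-(4.14), the snippet below draws random left stochastic matrices A, B and iterates the update; the seed, γ, and the fixed suggested routing u are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def random_left_stochastic(n, rng):
    """Random left stochastic matrix: non-negative entries, columns sum to 1."""
    M = rng.random((n, n))
    return M / M.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
n_e = 3
A = random_left_stochastic(n_e, rng)
B = random_left_stochastic(n_e, rng)
gamma = 0.6

x = np.full(n_e, 1.0 / n_e)        # start from the uniform allocation
u = np.array([0.5, 0.3, 0.2])      # a fixed suggested routing in the simplex
for _ in range(50):
    x = gamma * A @ x + (1 - gamma) * B @ u   # dynamics (4.12)
```

Because A and B are left stochastic, every iterate stays on the probability simplex, i.e. total flow is conserved throughout the horizon.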
Interpretation of the Framework

We now provide interpretations for the design of the control system.

1. States x^t: the interpretation of the state x^t is twofold: (1) one could interpret it as the routing decision made by the drivers from one day to another, and (2) it is also the actual flow distribution on the network, incurred by the drivers. We assume all drivers comply with a routing decision which depends at the same time on the past state and the control.
2. Controls u^t: one could interpret the control u^t as the recommended routing decision from the central decision maker.
3. The A x^t term in the dynamics represents the mapping from the drivers' memory (past state inherited from the day before) to their flow allocation decision.
4. The B u^t term in the dynamics represents the mapping from the central decision maker's suggested routing distribution to the players' flow allocation decision.
5. When updating their flow allocation decision, the drivers leverage their memory x^t and the suggested routing distribution u^t from the decision maker. γ ∈ [0, 1] is the design parameter that characterizes the trade-off between the two contributions.
6. (Quadratic) cost function: the design of the quadratic cost function x^t⊤ Q x^t + u^t⊤ R u^t is motivated by the observation that the Rosenthal potential function and the cost function associated to social welfare in a potential game are both quadratic for linear latency functions (15).

Definition 4.6 (Social Welfare). The cost function associated to social welfare is defined as
J_2(x) = Σ_{e∈E} x_e · l_e(x_e),
The importance of the Rosenthal potential and the social welfare is that the Nash equilibrium and the social optimum of the routing game can be characterized as the local minimizers of these functions, respectively, subject to the conservation of flow. With linear latency parallel networks,
J_1(x) = Σ_{i=1}^{n_e} ∫_0^{x_i} (a_i y + b_i) dy = Σ_{i=1}^{n_e} (a_i x_i²/2 + b_i x_i) = x⊤ Q̃ x + b⊤ x,   (4.16)

where Q̃ = diag(a_1/2, ..., a_{n_e}/2), b = (b_1, ..., b_{n_e})⊤, and

J_2(x) = Σ_{i=1}^{n_e} x_i (a_i x_i + b_i) = Σ_{i=1}^{n_e} (a_i x_i² + b_i x_i) = 2 x⊤ Q̃ x + b⊤ x,   (4.17)

with Q̃, b defined the same as above. Note that Q̃ and b are used to construct Q and Q_f of the cost function.
7. Design of the matrix A in the dynamics: since A x^t is the mapping from the memory to the flow allocation decision, the matrix A should reflect how the drivers take the memory into account when updating their routing decisions.
8. Design of the matrix B in the dynamics: since B u^t is the mapping from the suggested routing distribution to the flow allocation decision, the matrix B should reflect how the drivers take the central decision maker's suggested flow into account when updating their routing decisions.
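The quadratic forms (4.16)-(4.17) can be sanity-checked numerically; the sketch below uses a hypothetical three-edge instance and compares the direct edgewise sums against the matrix forms built from Q̃ and b:

```python
import numpy as np

# Hypothetical three-edge instance: l_i(x) = a_i * x + b_i.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.1, 0.2, 0.3])
Qt = np.diag(a / 2.0)              # Q-tilde = diag(a_i / 2)

x = np.array([0.5, 0.3, 0.2])
J1_direct = np.sum(a * x**2 / 2.0 + b * x)   # Rosenthal potential, summed edgewise
J2_direct = np.sum(x * (a * x + b))          # social welfare, summed edgewise
J1_quad = x @ Qt @ x + b @ x                 # matrix form (4.16)
J2_quad = 2.0 * x @ Qt @ x + b @ x           # matrix form (4.17)
```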
SYSTEM ANALYSIS AND SYSTEM PROPERTIES
One of the contributions of the work is about how to design specific A and B matrices to produce various convergence results.

1. Construction of A, B: when designing the matrices A, B in the dynamics, we need to take the conservation of flow into account. More specifically, A, B have to be chosen such that the following property holds:

x^t, u^t ∈ ∆  =⇒  x^{t+1} = γ A x^t + (1 − γ) B u^t ∈ ∆.   (5.1)

Proof. We will show that if x^t, u^t ∈ ∆, then γ A x^t + (1 − γ) B u^t = x^{t+1} ∈ ∆. The proof is then complete by induction on t. First observe that x^{t+1} has non-negative entries due to the non-negativity of the entries in A, B, x^t, u^t. Furthermore, since A and B are left stochastic (so that I⊤A = I⊤B = I⊤ for the all-ones vector I),

I⊤ x^{t+1} = γ I⊤ A x^t + (1 − γ) I⊤ B u^t   (5.2)
           = γ I⊤ x^t + (1 − γ) I⊤ u^t   (5.3)
           = γ + (1 − γ) = 1.   (5.4)

2.
Examples: the analysis of the LQR problem with probability simplex constraints is in general not tractable. In other words, we are not aware of a way to solve the discrete-time algebraic Riccati equation in a way that preserves (5.1). We therefore provide the important special cases where either (1) exact calculations of the steady state and optimal controls are allowed, or (2) an interesting behavioral interpretation of the players can be given. Consider first the case where every entry of B equals 1/n_e, so that B u = (1/n_e) I for any u ∈ ∆ (equation (5.6)). This choice of B matrix in the dynamics can be interpreted as follows: when the players (drivers) leverage the central decision maker's suggested routing distribution u^t to update their flow allocation on each edge, they simply average the suggested flow at each time step t, and take only the average (1/n_e) Σ_{i=1}^{n_e} u^t_i into account. We show that when the players (drivers) update their flow allocation in this way, the controller u^t is essentially "muted", and therefore a target state may not be controllable. The state x^t will converge to the same equilibrium, independent of the choice of the cost function. Under equation (5.6), the unique steady state of the system is given by

x* = ((1 − γ)/n_e) (I_{n_e} − γA)^{−1} I,   (5.7)

where I_{n_e} is the n_e-dimensional identity matrix, and I = (1, 1, ..., 1)⊤ ∈ R^{n_e} is the all-ones vector.
Proof. The key observation is that, with equation (5.6), since u^t ∈ ∆ for t = 0, ..., T − 1,

B u^t = (1/n_e) I,   (5.8)

therefore

x^{t+1} = γ A x^t + (1 − γ) (1/n_e) I,   (5.9)

and the steady state should satisfy

x* = γ A x* + (1 − γ) (1/n_e) I  =⇒  x* = ((1 − γ)/n_e) (I_{n_e} − γA)^{−1} I,   (5.10)

with the invertibility of I_{n_e} − γA guaranteed by the fact that the spectral radius of any left stochastic matrix is at most 1, and γ < 1.
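The closed-form steady state (5.7) is easy to verify numerically; in the sketch below A is a random left stochastic matrix (an arbitrary illustrative choice), and the computed x* is checked to be a fixed point that lies on the simplex:

```python
import numpy as np

n_e = 3
gamma = 0.7
rng = np.random.default_rng(1)
A = rng.random((n_e, n_e))
A /= A.sum(axis=0, keepdims=True)          # make A left stochastic
ones = np.ones(n_e)

# Closed-form steady state (5.7): x* = ((1 - gamma)/n_e) (I - gamma*A)^{-1} 1.
x_star = (1.0 - gamma) / n_e * np.linalg.solve(np.eye(n_e) - gamma * A, ones)

# One step of the dynamics (5.9) applied at x* should return x* itself.
x_next = gamma * A @ x_star + (1.0 - gamma) / n_e * ones
```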
The foregoing property suggests that, in the case of equation (5.6), the flow allocation on the network does converge to an equilibrium, but the equilibrium depends only on the design parameters A, γ and the number of edges n_e, and can be different from the Nash equilibrium. Therefore, the optimal control of problem (4.11)-(4.14) can be explicitly solved by the optimization problem

minimize over u^t, t = 0, ..., T − 1:
x^0⊤ Q x^0 + Σ_{t=0}^{T−2} [ ((γ/n_e) I + (1 − γ) B u^t)⊤ Q ((γ/n_e) I + (1 − γ) B u^t) + u^t⊤ R u^t ]
+ ((γ/n_e) I + (1 − γ) B u^{T−1})⊤ Q_f ((γ/n_e) I + (1 − γ) B u^{T−1}) + u^{T−1}⊤ R u^{T−1}

Note that in the very specific case where R = 0 and Q, Q_f are defined by the Rosenthal potential function, the minimizer of the above optimization problem and the steady state of the system x* is the Nash equilibrium. If the linear equation system x* = (γ/n_e) I + (1 − γ) B u has at least one solution u* (e.g., when B is invertible and γ ≠ 1), then the optimal controllers are u^t = u*, t = 0, ..., T − 1.
SOLUTIONS VIA EXPLICIT MPC: A MULTIPARAMETRIC QUADRATIC PROGRAMMING APPROACH
At first glance, one might consider using the discrete-time algebraic Riccati equation to obtain solutions for system (4.11)-(4.14).

Definition 6.1 (Discrete-time Algebraic Riccati Equation). The discrete-time algebraic Riccati equation is a nonlinear equation that gives a solution to the LQR problem presented in equation (4.3), described as P_t evolving backwards in time from P_T = Q_f according to

P_{t−1} = Q + A⊤ P_t A − A⊤ P_t B (R + B⊤ P_t B)^{−1} B⊤ P_t A.   (6.1)
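For reference, the backward recursion of Definition 6.1 can be implemented directly for the unconstrained finite-horizon LQR; the scalar system at the bottom is an arbitrary illustrative choice, and the returned gains give the optimal feedback u_t = -K_t x_t only when no constraints are present:

```python
import numpy as np

def riccati_recursion(A, B, Q, R, Qf, T):
    """Backward Riccati recursion with P_T = Qf:
        P_{t-1} = Q + A'P_t A - A'P_t B (R + B'P_t B)^{-1} B'P_t A.
    Returns the feedback gains K_0, ..., K_{T-1} and the cost matrix P_0."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)      # algebraically equal to the recursion above
        gains.append(K)
    return gains[::-1], P

# Illustrative scalar system: A = B = Q = R = Qf = 1, horizon T = 1.
one = np.array([[1.0]])
gains, P0 = riccati_recursion(one, one, one, one, one, T=1)
```

For this scalar instance the single gain is K_0 = 1/2 and the cost-to-go matrix is P_0 = 3/2.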
However, using the algebraic Riccati equation is not possible in the presence of our flow conservation constraints, as specified in equation (5.1). According to Lemma (5.1), in order to design a flow preserving scheme and guarantee flow conservation, we must be able to guarantee that each x^t and u^t vector given by a chosen controller remains on the probability simplex ∆, while the optimal controller given by the algebraic Riccati equation drives the system to the steady state x* = 0. Therefore, we turn to multiparametric quadratic programming (mpQP), a technique used in explicit MPC to generate explicit piecewise affine controllers. To generate these explicit strategies, we will formulate the LQR problem into a mpQP problem, utilize an mpQP solver proposed in (35) to generate piecewise affine functions for u^t (the player needs to execute different affine controllers depending on the state), and use (within the solver) a geometric algorithm to plot regions, known as critical regions, on the state space to visualize when players should use which specific optimal strategies. Optimal strategies produced by this technique will be piecewise affine with respect to the state of the game; hence, optimal strategies in this context are space-varying (see Figure 1 for an example).

FIGURE 1: Here, the optimal strategy for our player is space-varying and we visualize solutions with respect to the feasible state space of the routing game. Each colored region of the graph represents a unique affine strategy that is optimal for those points in space. For example, region 1 (in green) has the solution

u*_t(x^t) = [ −2/3  1/3  1/3 ; 1/3  −2/3  1/3 ; 1/3  1/3  −2/3 ] x^t + (1/3, 1/3, 1/3)⊤.
Algorithm 2 defines the process by which we use mpQP to generate solutions for routing games. Specifically, we consider the optimal control formulation in problem (4.11)-(4.14). We take the linear flow and input constraints in (5.1) and write them in the form

A_u u_t ≤ b_u,  A_x x_t ≤ b_x.   (6.2)

To solve for the optimal controls u_0, ..., u_{T−1}, we first condense the problem into a strictly convex quadratic programming problem

V*(x) := minimize_z  (1/2) z^⊤ P z + (F x + c)^⊤ z + (1/2) x^⊤ Y x   (6.3)
         subject to  G z ≤ W + S x,   (6.4)
and then use the mpQP algorithm from reference (35) to solve for z, where z := (u_0^⊤, ..., u_{T−1}^⊤)^⊤ ∈ R^{n_e T} is the decision variable. P = P^⊤ ∈ R^{n_e T × n_e T}, F ∈ R^{n_e T × n_e}, c ∈ R^{n_e T}, and Y ∈ R^{n_e × n_e} are defined by the cost matrices, while G ∈ R^{q × n_e T}, W ∈ R^q, and S ∈ R^{q × n_e} encode the constraints imposed by (4.11)-(4.14) in compact form. The following property guarantees the existence of a (piecewise affine) solution of the condensed problem (6.3)-(6.4).
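To make the condensing step concrete, here is a minimal hand-worked sketch for a scalar system with horizon T = 2. The matrices P, F, Y below are obtained by expanding the predictions x_1 = a x_0 + b u_0 and x_2 = a^2 x_0 + a b u_0 + b u_1 by hand; they are an illustration of the structure of (6.3), not the output of the paper's Condense(·) routine.

```python
# Sketch of "condensing" an LQR cost into the quadratic form (6.3)
# for a scalar system and T = 2, with z = (u0, u1) and parameter x0.
def cost(a, b, q, r, qf, x0, u0, u1):
    """Direct simulation of the stage + terminal cost."""
    x1 = a * x0 + b * u0
    x2 = a * x1 + b * u1
    return q * x1 ** 2 + r * (u0 ** 2 + u1 ** 2) + qf * x2 ** 2

def condense(a, b, q, r, qf):
    """Return P, F, Y with cost = 1/2 z'Pz + (F x0)'z + 1/2 Y x0^2."""
    P = [[2 * (q * b * b + qf * (a * b) ** 2 + r), 2 * qf * a * b * b],
         [2 * qf * a * b * b, 2 * (qf * b * b + r)]]
    F = [2 * (q * a * b + qf * a ** 3 * b), 2 * qf * a * a * b]
    Y = 2 * (q * a * a + qf * a ** 4)
    return P, F, Y

# The condensed quadratic reproduces the simulated cost exactly.
P, F, Y = condense(0.9, 0.5, 1.0, 0.1, 2.0)
```

Since the condensed objective depends on the state only through the linear term (F x + c)^⊤ z and the constant (1/2) x^⊤ Y x, the state enters the QP purely as a parameter, which is exactly what makes the multiparametric treatment possible.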
Property 6.1 (Existence of a piecewise affine solution (35)). Consider (6.3)-(6.4) with P = P^⊤ ≻ 0 and let x ∈ X = R^n.
1. The set X_f of states for which the problem is feasible is a polyhedron.
2. The optimizer function z* : X_f → R^n is piecewise affine and continuous over X_f.
3. If the symmetric matrix [ P  F ; F^⊤  Y ] ⪰ 0, then the value function V* is continuous, convex, and piecewise quadratic over X_f.

Definition 6.2 (Critical Region (36)). A critical region is a polyhedron in X for which there exists an optimal solution z*(·) that is affine and optimal for the entire region. Each critical region is defined by a unique set of active constraints A that is common to all points in the region.

As explained in (35), MPQPSolver(·) uses a geometrical algorithm (see (36)) to find the critical regions of the feasible state space by detecting where the set of active constraints changes from point to point in the state space. Once it has identified a critical region, the algorithm uncovers the optimal solution defined within the region by using the KKT conditions (or similar methods) to obtain the piecewise affine solution z*. Algorithm 2 produces (1) the critical regions R, (2) the matrices H = {H_r} and K = {K_r} used to check whether a state lies in a specific critical region, and (3) the matrices F = {F_r} and G = {G_r} that characterize the affine solution. Therefore, during the routing game, to recover the optimal strategies, the central decision maker executes

if H_r x_t ≤ K_r, then u*_t = F_r x_t + G_r,   (6.5)

where x_t belongs to critical region r.
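The online lookup in (6.5) can be sketched as a simple table scan. The data layout below (lists of per-region matrices H_r, K_r, F_r, G_r) is a hypothetical container format for illustration, not the solver's actual output structure.

```python
# Sketch of the online evaluation of the explicit controller (6.5):
# find the critical region containing x, then apply its affine law.
def explicit_control(x, H, K, F, G):
    for Hr, Kr, Fr, Gr in zip(H, K, F, G):
        # region membership test: H_r x <= K_r (row by row)
        if all(sum(h * xi for h, xi in zip(row, x)) <= k + 1e-9
               for row, k in zip(Hr, Kr)):
            # affine law for this region: u = F_r x + G_r
            return [sum(f * xi for f, xi in zip(row, x)) + g
                    for row, g in zip(Fr, Gr)]
    raise ValueError("x is outside the feasible set X_f")

# Toy 1-D example with two regions: x <= 0 -> u = -x, and x >= 0 -> u = 2x + 1.
H = [[[1.0]], [[-1.0]]]
K = [[0.0], [0.0]]
F = [[[-1.0]], [[2.0]]]
G = [[0.0], [1.0]]
print(explicit_control([-2.0], H, K, F, G))  # [2.0]
print(explicit_control([3.0], H, K, F, G))   # [7.0]
```

A linear scan is the simplest lookup; practical explicit-MPC implementations often organize the regions in a search tree so the online cost is logarithmic in the number of regions.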
ILLUSTRATION ON SIMPLE NUMERICS
In this section, we provide numerical results and discuss: 1. differences in varying the A matrix in the dynamics, and their interpretations; 2. changes in γ and their effect on the convergence rate; 3. different initial states and their effect on the convergence rate; 4. different cost functions that drive the system towards different stable equilibria. More specifically, we compare (1) the averaging matrix A = (1/3) (all entries equal to 1/3) against A = I_3, (2) γ = 0.5 against γ = 0.7, (3) the initial states x_0 = (0.3, 0.5, 0.2)^⊤ and x_0 = (0, 1, 0)^⊤, and (4) the cost matrices Q = Q_f = I_3 and Q = Q_f = diag(1, 2, 4); the detailed parameter settings for each comparison are listed below. We also discuss the reliability of the solver by showing one example where the solver outputs the correct steady-state solution only with a large enough time horizon T. This example serves as a warning that, in order to get an accurate numerical solution with MPQPSolver(·), one needs to set a large time horizon T, at the price of longer computation time.
Comparisons and Results
For ease of reference, we list below the detailed choice of parameters in each figure. In all experiments we assume no penalty on the controller, i.e., R = 0 (the central decision maker has the freedom to pose any suggested routing decision).

Discussion: In each figure, we present on the left the probability simplex in R^3, the critical regions in it (recall that within each partition the optimal controller takes a different affine expression), and the convergence trajectory of the state x_t (with the initial state denoted by *). On the right we present (1) the value of the flow allocation x_t on each edge at each time step t, (2) the value of the optimal control (suggested routing decision) u_t on each edge at each time step t, (3) the incurred latency l_t on each edge at each time step t, and (4) the incurred stage cost x_t^⊤ Q x_t + u_t^⊤ R u_t, t = 0, ..., T − 1, together with the terminal cost x_T^⊤ Q_f x_T.
1. Varying the A matrix in the dynamics: To see the differences in changing A, we compare figure (2) and figure (3).
• It can be observed that in both scenarios, the controlled system converges to its steady state x* = (1/3, 1/3, 1/3)^⊤ in one step. The numerical results suggest that with the choice of cost matrices Q = Q_f = I_3, R = 0 and the specific setup of the other parameters in the numerics, the steady state x* coincides with the Nash equilibrium in the parallel network with latency functions l_1(y) = l_2(y) = l_3(y) = y.
• Specifically, the fact that the state x_t converges in one step to the Nash equilibrium with the averaging matrix A = (1/3) is consistent with our analysis in Property 5.2.
• The polyhedral partitions of the feasible state space ∆ are different. In the case of A = (1/3), there is no partition of the space, meaning that the optimal controller takes the same affine function over ∆. In the case of A = I_3, the feasible state space is partitioned into four polyhedral regions, within each of which the optimal controller takes a different affine expression.
2. Varying γ in the dynamics: To see the differences in changing γ, we compare figure (5) and figure (7).
• Both controlled systems converge to the steady state x* = (4/7, 2/7, 1/7)^⊤. The numerical results suggest that with the choice of cost matrices Q = Q_f = diag(1, 2, 4), R = 0 and the specific setup of the other parameters in the numerics, the steady state x* coincides with the Nash equilibrium in the parallel network with latency functions l_1(y) = y, l_2(y) = 2y, l_3(y) = 4y.
• The polyhedral partitions of the feasible state space ∆ are different in the two cases.
We observe that with larger γ = 0.7 (i.e., the players update their flow allocation considering their memory more than the central decision maker's suggested routing decision), the feasible state space has many more polyhedral partitions. This intuitively means that if the players give less weight to the suggestions from the central decision maker, then in order for the central decision maker to drive the traffic to a target flow allocation, it has to design a more complex routing strategy based on the actual flow on the network.
• The convergence rate with larger γ is slower: with γ = 0.5 the state stabilizes within around 5 time steps, while with γ = 0.7 the state stabilizes within around 10 time steps. This can be intuitively interpreted as follows: if the players give less weight to the suggestions from the central decision maker, the steady state of the system is less controllable, and therefore it takes longer for the central decision maker to stabilize the system.
3. Varying initial states x_0: To see the differences in changing x_0, we compare figure (5) and figure (6).
• Both controlled systems converge to the steady state x* = (4/7, 2/7, 1/7)^⊤, which again coincides with the Nash equilibrium in the parallel network with latency functions l_1(y) = y, l_2(y) = 2y, l_3(y) = 4y.
• We would like to point out that with x_0 = (0, 1, 0)^⊤, the state of the system starts further away from its steady state, and starts within a different polyhedron from the one where the steady state is located. However, with the specific control dynamics in the numerical experiment, the central decision maker still manages to steer the system to the target flow allocation.
4. Varying cost functions: To see the differences in changing the cost matrices, we compare figure (3) and figure (5).
• Both controlled systems converge, but to different steady states. More specifically, with Q = Q_f = I_3 the steady state is x* = (1/3, 1/3, 1/3)^⊤, which coincides with the Nash equilibrium in the parallel network with latency functions l_1(y) = l_2(y) = l_3(y) = y, while with Q = Q_f = diag(1, 2, 4) the steady state is x* = (4/7, 2/7, 1/7)^⊤, which coincides with the Nash equilibrium in the parallel network with latency functions l_1(y) = y, l_2(y) = 2y, l_3(y) = 4y.
• This demonstrates the capability of the central decision maker to design different objective functions and steer the controlled system to different target flow allocations.

Reliability of the solver: With the following setting, we show that it is important to set the time horizon T large enough so that the algorithm gives the steady-state solution. If we set the time horizon T = 15, we observe in figure (8) that the control system can be oscillating and non-stable, and does not converge to a steady state. However, if we set a larger T = 35, we see in figure (9) that the controlled system is stabilizing: the trajectory of the state converges to a steady state.
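The one-step convergence reported for the first comparison can be checked by hand: with γ = 0.5 and A = B = I_3, applying the region-1 affine law from Figure 1 sends any simplex state to (1/3, 1/3, 1/3) in a single step. The following sketch verifies this under those (stated) parameter assumptions.

```python
# Sketch: the region-1 law from Figure 1, u = M x + (1/3) 1 with
# M = (1/3) 11^T - I, composed with x_{t+1} = γ x_t + (1-γ) u_t
# (A = B = I_3, γ = 0.5) reaches (1/3, 1/3, 1/3) in one step.
def region1_control(x):
    s = sum(x)
    return [s / 3.0 - xi + 1.0 / 3.0 for xi in x]

def step(x, gamma=0.5):
    u = region1_control(x)
    return [gamma * xi + (1.0 - gamma) * ui for xi, ui in zip(x, u)]

x1 = step([0.3, 0.5, 0.2])
print(x1)  # one step to the steady state (1/3, 1/3, 1/3)
```

On the simplex (sum x = 1) the law reduces to u_i = 2/3 − x_i, so the update 0.5 x_i + 0.5(2/3 − x_i) = 1/3 is exact for every component, matching the one-step convergence seen in the figures.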
CONCLUSION
This article transformed the dynamic traffic assignment problem into a control-theoretic problem, under a simple scenario using a parallel network with affine latency functions on each edge. We started with the routing problem under the repeated game framework, proposed a game design scheme through LQR within the control-theoretic framework, and then leveraged control techniques to analyze the convergence/stability of the system in special examples. Within our system analysis, we discovered that A and B in the equation of the dynamics can be chosen to be left stochastic matrices so that the conservation of flow is guaranteed. Additionally, the suggested routing allocation u has to be chosen such that the resulting vector is stochastic, disqualifying certain control techniques that could otherwise be used for this system (e.g., the algebraic Riccati equation) and requiring the use of others. Using this fact, we designed an algorithmic solution for the optimal control using a multiparametric quadratic programming approach (explicit MPC), generating explicit strategies and geometric plots to visualize the optimal solutions the central decision maker should take at the feasible states of the game. We discussed the impact of each system parameter on the solution and explained that various structures of A result in different solution requirements for the central decision maker, while the cost function changes the system's steady state, and γ and x_0 influence the rate of convergence. Future work includes studying the region of attraction of the LQR routing system via reachability analysis, extending the present work to more complex network structures (beyond the parallel network), generalizing the analysis of the convergence behavior of the LQR routing system beyond the special cases considered in the present work, and leveraging reinforcement learning techniques to approach general routing problems.
Definition 4.2 (Latency function). Each edge e ∈ E has an affine latency function l_e(·) : R_{≥0} → R_{≥0}, l_e(f_e) = a_e f_e + b_e with a_e, b_e ≥ 0, that converts the edge flow f_e into the edge latency. The latency functions are nonnegative, nondecreasing and Lipschitz-continuous functions.
Definition 4.5 (Rosenthal potential). The Rosenthal potential function of a potential game, with a flow vector x, is defined as the sum over the edges of the integrated latencies, Φ(x) = Σ_{e∈E} ∫_0^{x_e} l_e(s) ds.
One way to ensure the conservation of flow on the network is to construct left stochastic A, B.

Definition 5.1. A matrix P = (p_1, ..., p_{n_e}) ∈ R^{n_e × n_e} is left stochastic if p_i ∈ ∆, i = 1, ..., n_e.

Lemma 5.1. If A, B ∈ R^{n_e × n_e} are left stochastic, and x_0, u_t ∈ ∆, t = 0, ..., T − 1, then x_{t+1} = γ A x_t + (1 − γ) B u_t ∈ ∆, t = 0, ..., T − 1.
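Lemma 5.1 is easy to check numerically. The sketch below iterates the dynamics with an arbitrary column-stochastic A (illustrative values, not from the paper) and verifies that the state stays on the simplex at every step.

```python
# Numerical check of Lemma 5.1 (sketch): left (column-)stochastic A, B
# keep x_{t+1} = γ A x_t + (1-γ) B u_t on the probability simplex ∆.
def on_simplex(v, tol=1e-9):
    return all(vi >= -tol for vi in v) and abs(sum(v) - 1.0) < tol

def flow_step(A, B, gamma, x, u):
    n = len(x)
    return [gamma * sum(A[i][j] * x[j] for j in range(n))
            + (1.0 - gamma) * sum(B[i][j] * u[j] for j in range(n))
            for i in range(n)]

A = [[0.5, 0.2, 0.3],
     [0.3, 0.5, 0.3],
     [0.2, 0.3, 0.4]]          # each column sums to 1 (left stochastic)
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

x = [0.3, 0.5, 0.2]
for t in range(20):
    x = flow_step(A, I3, 0.5, x, [1.0, 0.0, 0.0])
    assert on_simplex(x)       # total flow is conserved at every step
```

The check works for any u_t ∈ ∆ because column-stochastic matrices preserve both nonnegativity and the coordinate sum, and a convex combination of two simplex points stays on the simplex.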
This choice of the A matrix in the dynamics can be interpreted as follows: when the players (drivers) leverage their memory x_t to update their flow allocation on each edge, they simply average the past flow at each time step t, and take only this average into account. We show that when the players (drivers) update their flow allocation in this way, their memory has no impact on the update rule.

Property 5.2. Consider the control problem (4.11)-(4.14). The optimal controller u_t, t = 0, ..., T − 2, is the solution set of the optimization problem

minimize_u  ((γ/n_e)𝟙 + (1 − γ)Bu)^⊤ Q ((γ/n_e)𝟙 + (1 − γ)Bu) + u^⊤ R u   (5.13)
subject to  u ∈ ∆,   (5.14)

and the optimal u_{T−1} is the solution set of the optimization problem

minimize_u  ((γ/n_e)𝟙 + (1 − γ)Bu)^⊤ Q_f ((γ/n_e)𝟙 + (1 − γ)Bu) + u^⊤ R u   (5.15)
subject to  u ∈ ∆.   (5.16)

Proof (sketch). The key observation is that, by equation (5.12), since x_t ∈ ∆ for t = 0, ..., T − 1, the problem reduces to […] (5.19) subject to u_t ∈ ∆, t = 0, ..., T − 1 (5.20), which is further equivalent to solving the constrained problems above over u ∈ ∆ (5.24). If Q_f = Q and the solution of the above optimization problem exists, the minimizer u* characterizes the steady state x* = (γ/n_e)𝟙 + (1 − γ)Bu*.
Example 5.3. The third special case we would like to analyze is A = B = I_{n_e}. The players update their flow allocation with x_{t+1} = γ x_t + (1 − γ) u_t at each time step t, which is the same as the element-wise update x_{t+1,i} = γ x_{t,i} + (1 − γ) u_{t,i}, i = 1, ..., n_e, t = 0, ..., T − 1. The interpretation of this choice of A, B is that the players update the flow on each edge independently: to update the flow on edge i, the players leverage their memory and the suggested routing decision on edge i only.
Algorithm 2: mpQP algorithm
Input: Q, R, Q_f, N_c, A, B, A_u, b_u, A_x, b_x
Output: optimal strategy solutions F, G; critical regions in which to apply them R, H, K
1: P, F, c, Y, G, W, S ← Condense(Q, R, Q_f, N_c, A, B, A_u, b_u, A_x, b_x)
2: return MPQPSolver(P, F, c, Y, G, W, S)
1. A = (1/3) (the averaging matrix with all entries equal to 1/3) and A = I_3, with all the other parameters fixed as B = I_3, Q = Q_f = I_3, R = 0, γ = 0.5, x_0 = (0.3, 0.5, 0.2)^⊤. This means we investigate the differences between the players' routing decision process when they "mute" their memory, and when they leverage their memory on each edge independently (i.e., when x_{t+1,i} depends only on x_{t,i} in the past state x_t for i = 1, ..., n_e).
2. γ = 0.5 and γ = 0.7, with all the other parameters fixed as A = B = I_3, Q = Q_f = diag(1, 2, 4), R = 0, x_0 = (0.3, 0.5, 0.2)^⊤. This means we investigate the differences between the players' routing decision process when they weight equally their memory and the suggested flow, and when they weight their memory more.
3. x_0 = (0.3, 0.5, 0.2)^⊤ and x_0 = (0, 1, 0)^⊤, with all the other parameters fixed as A = B = I_3, Q = Q_f = diag(1, 2, 4), R = 0, γ = 0.5. This means we investigate the differences between the players' routing decision process when they start with putting partial flow on each edge, and when they start with putting all flow on the second edge.
4. Q = Q_f = I_3 and Q = Q_f = diag(1, 2, 4), with all the other parameters fixed as A = B = I_3, R = 0, γ = 0.5, x_0 = (0.3, 0.5, 0.2)^⊤. This means we investigate the differences between the players' routing decision process when the central decision maker penalizes equally the flow on the three edges, and when it gradually doubles the penalty on the edges.

Unless stated otherwise, the time horizon is T = 15.

FIGURE 2: Identity Q, Q_f, B; A = (1/3); γ = 0.5. "*" denotes the initial state.
FIGURE 3: Identity Q, Q_f, A, B; γ = 0.5. "*" denotes the initial state.
FIGURE 4: Identity B; A = (1/3); Q = Q_f = diag(1, 2, 4); γ = 0.5. "*" denotes the initial state.
FIGURE 5: Identity A, B; Q = Q_f = diag(1, 2, 4); γ = 0.5. "*" denotes the initial state.
FIGURE 6: Identity A, B; Q = Q_f = diag(1, 2, 4); γ = 0.5; initial state (0, 1, 0). "*" denotes the initial state.
FIGURE 7: Identity A, B; Q = Q_f = diag(1, 2, 4); γ = 0.7. "*" denotes the initial state.
FIGURE 8: Identity A, B; Q = Q_f = diag(1, 2, 4); γ = 0.6; T = 15. "*" denotes the initial state.
FIGURE 9: Identity A, B; Q = Q_f = diag(1, 2, 4); γ = 0.6; T = 35. "*" denotes the initial state.
A static game is played only once with one iteration.
REFERENCES
(1) Cabannes, T., M. A. S. Vincentelli, A. Sundt, H. Signargout, E. Porter, V. Fighiera, J. Ugirumurera, and A. M. Bayen, The Impact of GPS-Enabled Shortest Path Routing on Mobility: A Game Theoretic Approach, 2018.
(2) Hwang, T. and Y. Ouyang, Urban freight truck routing under stochastic congestion and emission considerations. Sustainability, Vol. 7, No. 6, 2015, pp. 6610-6625.
(3) Alonso-Mora, J., S. Samaranayake, A. Wallar, E. Frazzoli, and D. Rus, On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment. Proceedings of the National Academy of Sciences, Vol. 114, No. 3, 2017, pp. 462-467.
(4) Thai, J., N. Laurent-Brouty, and A. M. Bayen, Negative externalities of GPS-enabled routing applications: A game theoretical approach. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2016, pp. 595-601.
(5) Boyles, S. D. and S. T. Waller, Traffic network analysis and design. Wiley Encyclopedia of Operations Research and Management Science, 2010.
(6) Patriksson, M., The traffic assignment problem: models and methods. Courier Dover Publications, 2015.
(7) Chiu, Y.-C., J. Bottom, M. Mahut, A. Paz, R. Balakrishna, T. Waller, and J. Hicks, Dynamic traffic assignment: A primer. Dynamic Traffic Assignment: A Primer, 2011.
(8) Shafiei, S., Z. Gu, and M. Saberi, Calibration and validation of a simulation-based dynamic traffic assignment model for a large-scale congested network. Simulation Modelling Practice and Theory, Vol. 86, 2018, pp. 169-186.
(9) Mahmassani, H. S., Dynamic network traffic assignment and simulation methodology for advanced system management applications. Networks and Spatial Economics, Vol. 1, No. 3-4, 2001, pp. 267-292.
(10) Osorio, C. and M. Bierlaire, A simulation-based optimization framework for urban traffic control, 2010.
(11) Chen, C., J. Ma, Y. Susilo, Y. Liu, and M. Wang, The promises of big data and small data for travel behavior (aka human mobility) analysis. Transportation Research Part C: Emerging Technologies, Vol. 68, 2016, pp. 285-299.
(12) Nisam, N., T. Roughgarden, E. Tardos, and V. Vazirani, Algorithmic game theory. Cambridge University Press, 2007.
(13) Roughgarden, T. and E. Tardos, How Bad is Selfish Routing? Journal of the ACM, Vol. 49, No. 2, 2002, pp. 236-259.
(14) Patriksson, M., The traffic assignment problem: models and methods. Courier Dover Publications, 2015.
(15) Rosenthal, R. W., A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, Vol. 2, 1973, pp. 65-67.
(16) Başar, T. and G. J. Olsder, Dynamic Noncooperative Game Theory, 2nd Edition. Society for Industrial and Applied Mathematics, 1998.
(17) Lam, K., W. Krichene, and A. Bayen, On Learning How Players Learn: Estimation of Learning Dynamics in the Routing Game. In 2016 ACM/IEEE 7th International Conference on Cyber-Physical Systems (ICCPS), 2016, pp. 1-10.
(18) Krichene, W., M. C. Bourguiba, K. Tlam, and A. Bayen, On Learning How Players Learn: Estimation of Learning Dynamics in the Routing Game. ACM Transactions on Cyber-Physical Systems, Vol. 2, No. 1, 2018, pp. 6:1-6:23.
(19) Krichene, W., M. S. Castillo, and A. Bayen, On Social Optimal Routing Under Selfish Learning. IEEE Transactions on Control of Network Systems, Vol. PP, No. 99, 2016, pp. 1-1.
(20) Recht, B., A tour of reinforcement learning: The view from continuous control. Annual Review of Control, Robotics, and Autonomous Systems, Vol. 2, 2019, pp. 253-279.
(21) Marden, J. R. and J. S. Shamma, Game theory and control. Annual Review of Control, Robotics, and Autonomous Systems, Vol. 1, 2018, pp. 105-134.
(22) Goodwin, G. C., S. F. Graebe, M. E. Salgado, et al., Control System Design. Prentice Hall, Upper Saddle River, NJ, 2001.
(23) Alessio, A. and A. Bemporad, A survey on explicit model predictive control. In Nonlinear Model Predictive Control, Springer, 2009, pp. 345-369.
(24) Bansal, S., M. Chen, S. Herbert, and C. J. Tomlin, Hamilton-Jacobi reachability: A brief overview and recent advances. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), IEEE, 2017, pp. 2242-2253.
(25) Wardrop, J. G. and J. I. Whitehead, Correspondence. Some theoretical aspects of road traffic research. ICE Proceedings: Engineering Divisions 1, 1952.
(26) Cesa-Bianchi, N. and G. Lugosi, Prediction, Learning, and Games. Cambridge University Press, 2006.
(27) Blum, A., E. Even-Dar, and K. Ligett, Routing without regret: On convergence to Nash equilibria of regret-minimizing algorithms in routing games. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Principles of Distributed Computing, 2006, pp. 45-52.
(28) Hart, S. and A. Mas-Colell, A general class of adaptive strategies. Journal of Economic Theory, Vol. 98, No. 1, 2001, pp. 26-54.
(29) Krichene, W., B. Drighès, and A. M. Bayen, Online Learning of Nash Equilibria in Congestion Games. SIAM Journal on Control and Optimization, Vol. 53, No. 2, 2015, pp. 1056-1081.
(30) Krichene, W., A. Bayen, and P. L. Bartlett, Accelerated Mirror Descent in Continuous and Discrete Time. In Advances in Neural Information Processing Systems 28 (C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds.), Curran Associates, Inc., 2015, pp. 2845-2853.
(31) Marden, J. R. and J. S. Shamma, Game theory and distributed control. In Handbook of Game Theory with Economic Applications, Elsevier, Vol. 4, 2015, pp. 861-899.
(32) Krichene, W., M. Balandat, C. Tomlin, and A. Bayen, The Hedge Algorithm on a Continuum. In Proceedings of the 32nd International Conference on Machine Learning (F. Bach and D. Blei, eds.), PMLR, Lille, France, 2015, Vol. 37 of Proceedings of Machine Learning Research, pp. 824-832.
(33) Roughgarden, T., Algorithmic game theory. Communications of the ACM, Vol. 53, No. 7, 2010, pp. 78-86.
(34) Krichene, S., W. Krichene, R. Dong, and A. Bayen, Convergence of heterogeneous distributed learning in stochastic routing games. In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2015, pp. 480-487.
(35) Bemporad, A., A multiparametric quadratic programming algorithm with polyhedral computations based on nonnegative least squares. IEEE Transactions on Automatic Control, Vol. 60, No. 11, 2015, pp. 2892-2903.
(36) Spjøtvold, J., E. C. Kerrigan, C. N. Jones, P. Tøndel, and T. A. Johansen, On the facet-to-facet property of solutions to convex parametric quadratic programs. Automatica, Vol. 42, No. 12, 2006, pp. 2209-2214.
arXiv:2103.01792 · DOI: 10.4310/cms.2022.v20.n3.a10
ENERGY CONSERVATION FOR 2D EULER WITH VORTICITY IN L(log L)^α

GENNARO CIAMPA

2 Mar 2021

Key words and phrases: 2D Euler equations, vanishing viscosity, vortex methods, conservation of energy.
Abstract. In these notes we discuss the conservation of the energy for weak solutions of the two-dimensional incompressible Euler equations. Weak solutions with vorticity in L^∞_t L^p_x with p ≥ 3/2 are always conservative, while for less integrable vorticity the conservation of the energy may depend on the approximation method used to construct the solution. Here we prove that the canonical approximations introduced by DiPerna and Majda provide conservative solutions when the initial vorticity is in the class L(log L)^α with α > 1/2.

2010 Mathematics Subject Classification. Primary: 35Q35; Secondary: 35Q31.
Introduction
The motion of an incompressible, homogeneous, planar fluid is described by the system of the 2D Euler equations

∂_t u + (u · ∇)u + ∇p = 0,  div u = 0,  u(0, ·) = u_0,   (1.1)
where u : [0, T] × R² → R² is the velocity of the fluid, p : [0, T] × R² → R is the pressure, and u_0 : R² → R² is a given initial configuration. The first set of equations derives from Newton's second law, while the divergence-free condition expresses the conservation of mass. A peculiar fact of the 2D case is that the vorticity ω, defined as
ω = ∂_{x_1} u_2 − ∂_{x_2} u_1,
is a scalar quantity which is advected by the velocity u. In fact, the equations (1.1) can be rewritten in the vorticity formulation

∂_t ω + u · ∇ω = 0,  u = K ∗ ω,  ω(0, ·) = ω_0,   (1.2)
where K(x) = x^⊥/(2π|x|²) is the 2D Biot-Savart kernel. Note that equation (1.2) is a non-linear and non-local transport equation.
The well-posedness of (1.1) is an old and outstanding problem. For smooth initial data, the existence and uniqueness of classical solutions was proved in [17, 28]. The existence of weak solutions has been proved by DiPerna and Majda in [15] by assuming that the initial datum ω_0 ∈ L¹ ∩ L^p(R²) with 1 < p ≤ ∞. Besides this result, the goal of [15] was to develop a rigorous framework for the study of approximate solution sequences of the two-dimensional Euler equations. In particular, the authors proved a general compactness theorem towards measure-valued solutions by assuming that ω_0 is a vortex-sheet, i.e. ω_0 ∈ M ∩ H^{−1}_{loc}(R²). They described three different methods to construct approximate solution sequences:
(ES) approximation by exact smooth solutions of (1.1);
(VV) vanishing viscosity from the two-dimensional Navier-Stokes equations;
(VB) vortex-blob approximation.
In [15] DiPerna and Majda showed the existence of weak solutions via a compactness argument based on the methods (ES) and (VV). The counterpart for the vortex-blob method was proved by Beale in [2]. In these results, the L^p-integrability with 1 < p ≤ ∞ of ω_0 is crucial in order to use Sobolev embeddings which guarantee the strong compactness in L² of an approximate solution sequence. This is enough to deal with the non-linear term in the equations. However, in the case where ω_0 is just L¹ or a measure with distinguished sign, it turns out that the limit vector field is a solution of (1.1) even though concentrations may occur in the non-linearity. This is a purely 2D phenomenon known as concentration-cancellation, and it was studied in [14, 25]. The uniqueness of weak solutions in the class considered in [15] is still an open problem, contrary to the case p = ∞ which has been proved by Yudovich [29]. There exist several partial results towards non-uniqueness in the case of unbounded initial vorticity, see [4, 5, 21, 26, 27].
Smooth solutions of (1.1) are known to be conservative, which means that ‖u(t)‖_{L²} = ‖u_0‖_{L²} for all times, while this property is not trivial when we consider weak solutions. The problem of the energy conservation, assuming only integrability conditions on the vorticity, has been addressed in [8]: the authors consider the 2D Euler equations on the two-dimensional flat torus T² and prove that all weak solutions conserve the energy if the vorticity ω ∈ L^∞((0, T); L^p(T²)) with p ≥ 3/2. The proof is based on a mollification argument, and the exponent p = 3/2 is required in order to have weak continuity of a commutator term in the energy balance. The authors also give an example showing the sharpness of the exponent p = 3/2 in their argument, but this still leaves open the question of the existence of non-conservative solutions below this integrability threshold. Moreover, they show that if ω ∈ L^∞((0, T); L^p(T²)), with 1 < p < 3/2, solutions constructed via (ES) and (VV) conserve the kinetic energy.
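For smooth solutions decaying at infinity, the conservation mentioned above can be checked directly by testing (1.1) with u and integrating by parts (a standard computation, sketched here):

```latex
\frac{d}{dt}\int_{\mathbb{R}^2}\frac{|u|^2}{2}\,dx
 = -\int_{\mathbb{R}^2} u\cdot\big[(u\cdot\nabla)u\big]\,dx
   -\int_{\mathbb{R}^2} u\cdot\nabla p\,dx
 = -\int_{\mathbb{R}^2} (u\cdot\nabla)\frac{|u|^2}{2}\,dx
   +\int_{\mathbb{R}^2} p\,\operatorname{div}u\,dx = 0,
```

since both remaining integrals vanish after integration by parts using div u = 0. For weak solutions, the regularity is insufficient to justify these manipulations, which is precisely where the integrability of the vorticity enters.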
Here we discuss the conservation of the energy for solutions of the 2D Euler equations when the initial vorticity is slightly more than integrable, namely ω 0 ∈ L 1 ∩ L(log L) α (R 2 ) with α > 1/2. Existence of weak solutions of (1.1) in this setting was proved by Chae, first in [6] in the case ω 0 ∈ L 1 ∩ L log L(R 2 ), and then extended to the case ω 0 ∈ L 1 ∩ L(log L) 1/2 (R 2 ) in [7]. In these results, the strategy of the proof is based on the properties of Calderón-Zygmund singular integral operators and compact embeddings of Orlicz-Sobolev spaces into L 2 loc (R 2 ). In a similar fashion to the framework of DiPerna and Majda, in [18] the authors introduce the definition of H −1 loc -stability for a sequence of approximating vorticity ω ε , showing that it is a sharp criterion for the strong L 2 loc -convergence of an approximate solution sequence u ε . With their approach they are able to recover previous existence results, expanding the set of possible initial data to much more general rearrangement invariant spaces, such as the Orlicz spaces L(log L) α , with α ≥ 1/2, and the Lorentz spaces L (1,q) with 1 < q ≤ 2.
Finally, in [16] it has been proven that the strong L 2 -compactness of a sequence of velocity fields constructed via (VV) is equivalent to the energy conservation property. In virtue of this result, by posing the problem in the two-dimensional torus, the authors obtained as a corollary that the vanishing viscosity limit produces conservative weak solutions for initial vorticity in the rearrangement invariant spaces considered in [18], including L(log L) α with α > 1/2.
The contribution of these notes to the theory of conservative weak solutions of (1.1) is the following: we consider an initial datum u 0 ∈ L 2 (R 2 ) such that ω 0 ∈ L(log L) α (R 2 ) with compact support and we prove that the canonical approximations introduced in [15] produce approximate solution sequences such that the velocity converges globally in L 2 if α > 1/2. This allows us to prove that the vortex-blob method yields conservative weak solutions and, in this setting, we extend the results of [8,16] concerning (ES) and (VV) to the case in which the domain is the whole plane R 2 . In order to get the strong convergence in C([0, T ]; L 2 (R 2 )) of the approximating velocity, we will exploit the techniques of [10,24] by adapting the Serfati identity to this less integrable setting. In particular, it will be crucial that the approximating vorticities converge strongly in C([0, T ]; L 1 (R 2 )), as shown recently in [9,10].
The two-dimensional Euler equations
The goal of this section is to provide some preliminary results on weak solutions of the 2D Euler equations. First, we introduce the notation used in the paper. Then, we will pay particular attention to the theory developed by DiPerna and Majda in [15]. Finally, we will summarize some more recent results concerning conservative weak solutions.
2.1.
Notations. We will denote by L p (R d ) the standard Lebesgue spaces and with · L p their norm. Moreover, L p c (R d ) denotes the space of L p functions defined on R d with compact support. The Sobolev space of L p functions with distributional derivatives of first order in L p is denoted by W 1,p (R d ). The spaces L p loc (R d ), W 1,p loc (R d ) denote the space of functions which are locally in L p (R d ), W 1,p (R d ) respectively. We will denote by H 1 (R d ) the space W 1,2 (R d ) and by H −1 (R d ) its dual space. Moreover, we will say that a function u is in
H −1 loc (R d ) if ρu ∈ H −1 (R d ) for every function ρ ∈ C ∞ c (R d ). We denote with L(log L) α (R d ) the space of functions f such that R d |f (x)|(log + (|f (x)|)) α dx < ∞,
endowed with the Luxemburg norm
f L(log L) α = inf k > 0 : R d |f | k log + |f | k α dx ≤ 1 ,(2.1)
where the function log + is defined as
log + (t) = log(t) if t ≥ 1, 0 otherwise,
and L(log L) α c (R d ) will be the space of functions in L(log L) α (R d ) with compact support. We denote by L p ((0, T ); L q (R d )) the space of all measurable functions u defined on [0, T ] × R d such that
u L p ((0,T );L q (R d )) := T 0 u(t, ·) p L q dt 1 p < ∞,
for all 1 ≤ p < ∞, and
u L ∞ ((0,T );L q (R d )) := ess sup t∈[0,T ] u(t, ·) L q < ∞,
and analogously for the spaces L p ((0, T ); W 1,q (R d )). We denote by B R the ball of radius R > 0 centered in the origin of R d . In the estimates we will denote with C a positive constant which may change from line to line. Finally, it is useful to denote with ⋆ the following variant of the convolution
v ⋆ w = 2 i=1 v i * w i if v, w are vector fields in R 2 , A ⋆ B = 2 i,j=1 A ij * B ij if A, B are matrix-valued functions in R 2 .
With the notations above it is easy to check that if f : R 2 → R is a scalar function and v :
R 2 → R 2 is a vector field, then f * curl v = ∇ ⊥ f ⋆ v, ∇ ⊥ f ⋆ div(v ⊗ v) = ∇∇ ⊥ f ⋆ (v ⊗ v).
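Purely as an illustration of the Luxemburg norm (2.1) (a sketch of ours, not part of the paper): for a simple function, the norm can be computed by bisection in k, since k ↦ ∫ (|f|/k)(log⁺(|f|/k))^α dx is non-increasing in k. The function name, the representation of f by value/measure pairs, and the tolerance are all hypothetical choices.

```python
import math

def luxemburg_norm(values, weights, alpha, tol=1e-10):
    """Approximate the Luxemburg norm (2.1) of a simple function.

    `values[i]` is the value |f| takes on a set of measure `weights[i]`;
    `alpha` is the exponent in L(log L)^alpha.  Names are ours.
    """
    def log_plus(t):
        return math.log(t) if t >= 1.0 else 0.0

    def integral(k):
        # \int (|f|/k) (log^+(|f|/k))^alpha dx for the simple function
        return sum(w * (v / k) * log_plus(v / k) ** alpha
                   for v, w in zip(values, weights))

    # integral(k) is non-increasing in k; bisect for the smallest k
    # with integral(k) <= 1
    lo, hi = tol, 1.0
    while integral(hi) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integral(mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi
```

For f ≡ e on a set of measure 1 and α = 1, the norm solves t log t = 1 with t = e/k, so it is about e/1.763 ≈ 1.542; the computed norm is also exactly homogeneous, as any Luxemburg norm must be.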
Weak solutions.
We recall the definition of weak solution of the Euler equations as in [15].
Definition 2.1. A vector valued function u ∈ L ∞ ((0, T ); L 2 loc (R 2 )) is a weak solution of (1.1) if it satisfies:
(1) for all test functions Φ ∈ C ∞ 0 ((0, T ) × R 2 ) with div Φ = 0,
T 0 R 2 (∂ t Φ · u + ∇Φ : u ⊗ u) dx dt = 0; (2.2)
(2) div u = 0 in the sense of distributions;
(3) u ∈ Lip([0, T ); H −L loc (R 2 )) for some L > 0 and u(0, x) = u 0 (x). Remark 2.2. The choice of divergence-free test functions allows us to avoid introducing the pressure in the weak formulation (2.2). The pressure can be formally recovered from the formula −∆p = div div(u ⊗ u), which is obtained by taking the divergence of the momentum equation in (1.1).
In [15], DiPerna and Majda introduced the following definition of an approximate solution sequence of the 2D Euler equations. Definition 2.3. A sequence of smooth velocity fields u n with vorticity curl u n = ω n ∈ C([0, T ]; L 1 (R 2 )) is an approximate solution sequence for the 2D Euler equations provided that (i) u n has uniformly bounded local kinetic energy and u n is incompressible, i.e., for each R > 0 and T > 0, there exists C(R) > 0 such that
max t∈[0,T ] B R |u n (t, x)| 2 dx ≤ C(R), div u n = 0;
(ii) the vorticity ω n is uniformly bounded in L 1 , i.e., for every T > 0,
max t∈[0,T ] R 2 |ω n (t, x)| dx ≤ C; (iii) for some L > 0, the sequence u n is uniformly bounded in Lip([0, T ]; H −L loc (R 2 )); (iv) u n is weakly consistent with the 2D Euler equations, i.e. lim n→∞ T 0 R 2 (∂ t Φ · u n + ∇Φ : u n ⊗ u n ) dx dt = 0, (2.3) for every Φ ∈ C ∞ c ((0, T ) × R 2 ) with div Φ = 0.
Besides this very general definition, in [15] the authors give three different examples of approximate solution sequences, which are important for physical or numerical reasons. They are the following.
(ES) Approximation by exact smooth solutions of (1.1). We consider a smooth approximation of the initial datum u δ 0 such that u δ 0 → u 0 in L 2 loc and we define u δ as the unique solution of the approximating problem
∂ t u δ + (u δ · ∇)u δ + ∇p δ = 0, div u δ = 0, u δ (0, ·) = u δ 0 . (2.4)
Then, a solution u of (1.1) is constructed analyzing the limit of the sequence u δ as δ → 0.
(VV) Vanishing viscosity from the two-dimensional Navier-Stokes equations. We consider the two-dimensional incompressible Navier-Stokes equations
∂ t u ν + (u ν · ∇)u ν + ∇p ν = ν∆u ν , div u ν = 0, u ν (0, ·) = u ν 0 , (2.5)
where ν > 0 is the viscosity of the fluid and u ν 0 is smooth and converges in L 2 loc towards u 0 as ν → 0. Then, a solution u of (1.1) is constructed analyzing the vanishing viscosity limit of the sequence u ν .
(VB) Vortex-blob approximation. It is a numerical method which is the prototype of several important numerical schemes. It is based on the idea of approximating the vorticity with a finite number of cores which evolve according to the velocity of the fluid. Without going into details, the approximating velocity u ε solves the system
∂ t u ε + (u ε · ∇) u ε + ∇p ε = K * E ε , div u ε = 0, u ε (0, ·) = u ε 0 , (2.6)
where u ε 0 is a suitable smooth approximation of the initial datum and E ε is an error term which comes from the fact that, roughly speaking, each blob is rigidly translated by the flow. We give the precise construction together with its main properties in the Appendix.
By assuming only integrability hypotheses on the initial vorticity ω 0 , the existence of weak solutions constructed with the methods above has been proved in [2,15]. For simplicity of exposition, for the remainder of this subsection we will use n as an approximation parameter for all three methods.
Theorem 2.4. Let u 0 ∈ L 2 loc (R 2 )
be a divergence-free vector field vanishing uniformly as |x| → ∞ and let ω 0 = curl u 0 ∈ L p c (R 2 ) for some p > 1. Let u n be an approximate solution sequence constructed via one of the methods (ES), (VV), (VB), where the associated initial datum u n 0 → u 0 in L 2 loc (R 2 ). Then, there exists a subsequence of u n and a vector field
u ∈ L ∞ ((0, T ); L 2 loc (R 2 )) ∩ Lip([0, T ]; H −L loc (R 2 )
) which vanishes uniformly as |x| → ∞ with the following properties:
• u(0, ·) = u 0 , • u n → u in L 2 ((0, T ); L 2 loc (R 2 )), • ω n * ⇀ ω in L ∞ ((0, T ); L p (R 2 )), • ω n → ω in C([0, T ]; H −L−1 loc (R 2 )).
Remark 2.5. Note that the setting of the previous theorem is for a regime where the uniqueness of solutions of (1.1) is not known. Therefore, the three methods could have multiple limit points which may also change depending on the approximation.
As already mentioned in the introduction, the previous theorem has been generalized by Chae [6,7]:
Theorem 2.6. Let u 0 ∈ L 2 loc (R 2 ) be a divergence-free vector field such that curl u 0 = ω 0 ∈ L(log L) α c (R 2 ) with α ≥ 1/2. Then, there exists a weak solution u of (1.1) with initial datum u 0 satisfying u ∈ C([0, T ]; L 2 loc (R 2 )). (2.7)
The proof of Theorem 2.6 strongly relies on the fact that the operator
T : f ∈ L(log L) α c (R 2 ) → K * f ∈ L 2 loc (R 2 ), (2.8)
is compact for α > 1/2, where K is the two-dimensional Biot-Savart kernel. It is worth noting that the solutions are constructed by analyzing the vanishing viscosity limit of the corresponding Navier-Stokes equations with the same initial data. Moreover, we remark that in [24] it is shown that one can construct a function f belonging to L(log L) α c (R 2 ) with α < 1/2 such that K * f is not locally square integrable.
We conclude this subsection by summarizing some known results about the strong convergence in C(L p ) of the approximating vorticity ω n . This problem has been addressed by several authors in different settings, especially with regard to the inviscid limit of the Navier-Stokes equations, see for example [9,11,22]. We collect the results we need in the following theorem, see [3,9,10].
Theorem 2.7. Let ω 0 ∈ L p c (R 2 ) with p ≥ 1 and let ω n be a sequence of approximating vorticities constructed via one of the three methods (ES), (VV), or (VB). Then, there exists ω ∈ C([0, T ]; L 1 ∩ L p (R 2 )) such that ω n → ω in C([0, T ]; L 1 ∩ L p (R 2 )). (2.9)
Remark 2.8. Since L(log L) α c ⊂ L 1 c , if ω 0 ∈ L(log L) α c and ω n is a sequence constructed via one of the aforementioned methods, then by Theorem 2.7 there exists
ω ∈ C([0, T ]; L 1 (R 2 )) such that ω n → ω in C([0, T ]; L 1 (R 2 )). (2.10)

2.3. Conservative solutions.
In this subsection we discuss the conservation of the energy for the 2D Euler equations. We recall the following definition.
Definition 2.9. Let u ∈ C([0, T ]; L 2 (R 2 )) be a weak solution of (1.1) with initial datum u 0 ∈ L 2 (R 2 ). We say that u is a conservative weak solution if u(t, ·) L 2 = u 0 L 2 ∀t ∈ [0, T ].
It is well-known that in the two-dimensional case, even if we assume that the vorticity is bounded, the velocity field is in general not square integrable. In order to define the kinetic energy, we need to require that the vorticity has zero mean value, see [20].
Proposition 2.10. An incompressible velocity field in R 2 with vorticity of compact support has finite kinetic energy if and only if the vorticity has zero mean value, that is
R 2 |u(x)| 2 dx < ∞ ⇐⇒ R 2 ω(x) dx = 0.
(2.11)
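A quick numerical illustration of Proposition 2.10 (our sketch, not part of the paper): for an axisymmetric vorticity ω = ω(r), the Biot-Savart law reduces to the swirl velocity u_θ(r) = (1/r) ∫₀^r ω(s) s ds, so the kinetic energy over B_R can be computed by a one-dimensional quadrature. With nonzero mean vorticity the far field decays like 1/r and the energy grows like log R; after adding a compensating negative ring that makes the mean zero, the far field vanishes and the energy stays bounded. All names and the discretization are ours.

```python
import math

def radial_velocity_energy(omega, R, n=200000):
    """Kinetic energy over the ball B_R for an axisymmetric vorticity omega(r).

    The Biot-Savart law reduces to the swirl velocity
        u_theta(r) = (1/r) * integral_0^r omega(s) s ds,
    and E(R) = 2*pi * integral_0^R u_theta(r)^2 r dr.
    Plain midpoint rule; the discretization is ours.
    """
    dr = R / n
    circ = 0.0     # running value of integral_0^r omega(s) s ds
    energy = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        circ += omega(r) * r * dr
        u_theta = circ / r
        energy += 2.0 * math.pi * u_theta * u_theta * r * dr
    return energy

# nonzero mean vorticity: far field ~ 1/r, energy grows like (pi/2) log R
omega_signed_mean = lambda r: 1.0 if r < 1.0 else 0.0

# zero mean vorticity: a compensating ring kills the far field
omega_zero_mean = lambda r: 1.0 if r < 1.0 else (-1.0 / 3.0 if r < 2.0 else 0.0)
```

For the first profile, E(100) − E(10) ≈ (π/2) log 10 ≈ 3.62; for the zero-mean profile, the same difference is numerically zero.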
As already explained in the introduction, the problem of the conservation of the energy in this low regularity setting has been addressed in [8]: they showed that every weak solution is conservative if curl u ∈ L ∞ ((0, T ); L p (T 2 )) with p ≥ 3/2, while for less integrable vorticities the conservation of the energy may depend on the approximation procedure. In particular, by collecting the results of [8,9,10] we have the following theorem.
Theorem 2.11. Let u ∈ C([0, T ]; L 2 (R 2 )) be a weak solution of (1.1) with ω = curl u satisfying (2.11). Then,
• if ω ∈ L ∞ ((0, T ); L 1 ∩ L 3 2 (R 2 )), then u is conservative;
• if ω ∈ L ∞ ((0, T ); L 1 ∩ L p (R 2 )) with p > 1, and u is constructed as limit of one of the approximations (ES), (VV), or (VB), then u is conservative.
We finish this subsection by recalling a theorem that has been proved in [16]. It characterizes the compactness of (VV) and the energy conservation in terms of the classical structure function
S T 2 (u; r) := T 0 T 2 − Br |u(t, x + h) − u(t, x)| 2 dh dx dt 1/2 .
The main statement from [16] is the following.
Theorem 2.12. Let u ν be the unique solution of (2.5) with a smooth initial datum u ν 0 such that
u ν 0 → u 0 in L 2 (T 2 ). Let u ∈ L ∞ ((0, T ); L 2 (T 2 )
) be a solution of (1.1) with initial datum u 0 such that, up to a subsequence, u ν ⇀ u in L 2 (T 2 ). Then the following are equivalent:
(i) u ν → u strongly in L p ((0, T ); L 2 (T 2 )) for some 1 ≤ p < ∞, (ii) there exists a bounded modulus of continuity φ(r) such that, uniformly in ν,
S T 2 (u ν ; r) ≤ φ(r) ∀r ≥ 0, (iii) u is a conservative weak solution.
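The structure function in Theorem 2.12 can be evaluated explicitly on simple fields. The sketch below (our illustration, with our own discretization choices) computes S(u; r) for the steady shear u(x, y) = (sin y, 0) on [0, 2π]² with T = 1; for small r one has S(u; r) ≈ (π/√2) r, i.e. a Lipschitz modulus of continuity φ(r) = Cr as in condition (ii).

```python
import math

def structure_function(r, n_h=400):
    """S(u; r) from Theorem 2.12 for the steady shear u(x, y) = (sin y, 0)
    on the torus [0, 2pi]^2, taking T = 1 (u is time-independent).

    For this field |u(x+h) - u(x)|^2 = (sin(y + h2) - sin y)^2, whose
    average in y over [0, 2pi] is exactly 1 - cos(h2); we then average
    over h in the disk B_r using a sunflower point set.  Ours.
    """
    golden_angle = 2.399963229728653
    acc = 0.0
    for i in range(n_h):
        rho = r * math.sqrt((i + 0.5) / n_h)   # equal-area radii
        h2 = rho * math.sin(golden_angle * i)
        acc += 1.0 - math.cos(h2)
    mean_sq = acc / n_h                        # average over h (and over y)
    area = (2.0 * math.pi) ** 2                # measure of the torus
    return math.sqrt(area * mean_sq)
```

Averaging 1 − cos(h₂) ≈ h₂²/2 over the disk gives r²/8, hence S ≈ 2π·(r²/8)^{1/2} = (π/√2) r, and S shrinks with r as a modulus of continuity should.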
It is important to note that, by using the result in [18], the previous theorem implies that solutions constructed via (VV) are conservative if ω 0 ∈ L(log L) α (T 2 ). Our Theorem 4.2 extends this result to the class ω 0 ∈ L(log L) α c (R 2 ).
Remark 2.13. Theorem 2.12 holds even if we replace the two-dimensional torus T 2 with the whole plane R 2 , up to the appropriate technical modifications.
A priori estimates
In this section we summarize some a priori estimates for the approximating vorticity constructed via the approximation methods introduced in Section 2.2. We will always assume that ω 0 ∈ L(log L) α c (R 2 ) with α > 1/2. As already stressed in the introduction, these estimates will be crucial in order to address the strong convergence of the velocity field in C([0, T ]; L 2 (R 2 )), which will be the topic of the next section.
3.1. Limit of exact smooth solutions. Let ρ δ be a standard smooth mollifier and consider the following Cauchy problem
∂ t ω δ + v δ · ∇ω δ = 0, v δ = K * ω δ , ω δ (0, ·) = ω δ 0 ,(3.1)
where ω δ 0 = ω 0 * ρ δ . We have the following. Lemma 3.1. Let ω δ be the unique smooth solution of (3.1). Then,
sup t∈[0,T ] R 2 |ω δ (t, x)|(log(e + |ω δ (t, x)|)) α dx ≤ C, (3.2)
where C is a positive constant which does not depend on δ.
Proof. Define β(s) = s(log(e + s)) α and multiply the equation in (3.1) by β ′ (|ω δ |). Then, by integrating in space we get that
d/dt R 2 β(|ω δ (t, x)|) dx = 0. (3.3)
By using the convexity of β and Jensen's inequality, it follows that
R 2 β(|ω δ 0 |) dx ≤ R 2 β(|ω 0 |) dx ≤ C ω 0 L 1 + R 2 |ω 0 |(log + (|ω 0 |)) α dx < ∞,
and then, by integrating in time in (3.3) we have the result.
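The Jensen step in the proof above can be checked numerically: mollification cannot increase ∫ β(|ω|) when β is convex. In the sketch below (ours, not the paper's), a discrete periodic moving average stands in for ω 0 * ρ δ; each original entry enters the averages with total weight one, so convexity of β gives the inequality exactly.

```python
import math
import random

def beta(s, alpha):
    """The convex weight from the proof: beta(s) = s * log(e + s)^alpha."""
    return s * math.log(math.e + s) ** alpha

def mollify(values, radius):
    """Periodic moving average of width 2*radius + 1: a discrete unit-mass
    mollifier standing in for omega_0 * rho_delta (our simplification)."""
    n = len(values)
    width = 2 * radius + 1
    return [sum(values[(i + j) % n] for j in range(-radius, radius + 1)) / width
            for i in range(n)]

random.seed(0)
alpha = 0.75
omega0 = [random.expovariate(1.0) for _ in range(512)]   # rough L^1 sample
lhs = sum(beta(abs(v), alpha) for v in mollify(omega0, 5))
rhs = sum(beta(abs(v), alpha) for v in omega0)
# Jensen's inequality for the convex beta gives lhs <= rhs, the discrete
# analogue of the bound on the mollified initial vorticity.
```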
3.2. The vanishing viscosity limit. We now deal with the vanishing viscosity limit of the Navier-Stokes equations. Let ρ ν be a standard smooth mollifier and let ω ν be the solution of
∂ t ω ν + v ν · ∇ω ν = ν∆ω ν , v ν = K * ω ν , ω ν (0, ·) = ω ν 0 ,(3.4)
where ω ν 0 = ω 0 * ρ ν . We have the following.
Lemma 3.2. Let ω ν be the unique smooth solution of (3.4). Then,
sup t∈[0,T ] R 2 |ω ν (t, x)|(log(e + |ω ν (t, x)|)) α dx ≤ C, (3.5)
where C is a positive constant which does not depend on ν.
Proof. We just sketch the proof since it is very similar to the one of Lemma 3.1. Define β(s) = s(log(e + s)) α and multiply the equations in (3.4) by β ′ (|ω ν |). Then, by integrating in space and time we get that
d dt R 2 β(|ω ν (t, x)|) dx = −ν T 0 R 2 |∇ω ν (t, x)| 2 β ′′ (|ω ν (t, x)|) dx dt ≤ 0,(3.6)
since β is convex. Then, integrating in time (3.6) we have the result.
3.3. The vortex-blob method. We finally deal with the vortex-blob method. The reader can find the precise definition of the vortex-blob method and some of its properties in the Appendix at the end of this note.
Lemma 3.3. Let ω ε be the approximating vorticity constructed via the vortex-blob method. Then,
sup t∈[0,T ] R 2 |ω ε (t, x)|(log(e + |ω ε (t, x)|)) α dx ≤ C, (3.7)
where C is a positive constant which does not depend on ε.
Proof. We start by proving that ω ε (0, ·) satisfies the bound (3.7), see (A.5). We define β(s) = s(log(e+s)) α and let j δ(ε) be the standard mollifier defined as in (A.1). Then, defining ω ε 0 = ω 0 * j δ(ε) , by Jensen's inequality we have that
R 2 β(|ω 0 * j δ(ε) |) dx ≤ R 2 β(|ω 0 |) dx.
We consider now ω ε 0 * ϕ ε , where ϕ ε is defined as in (A.2). Again by Jensen's inequality we have
R 2 β(|ω ε 0 * ϕ ε |) dx ≤ R 2 β(|ω 0 |) dx.
Then, by the properties of the function β we have that
R 2 β(|ω ε (0, x)|) dx ≤ C R 2 β(|ω ε (0, x) − ω ε 0 * ϕ ε (x)|) dx + C R 2 β(|ω ε 0 * ϕ ε |) dx.
We know that the second term on the right hand side is uniformly bounded in ε, while for the first term we have that
R 2 |ω ε (0, x) − ω ε 0 * ϕ ε (x)| log(e + |ω ε (0, x) − ω ε 0 * ϕ ε (x)|) dx ≤ R 2 |ω ε (0, x) − ω ε 0 * ϕ ε (x)| log(e + Cε) dx ≤ log(e + Cε) ω ε (0, ·) − ω ε 0 * ϕ ε L 1 ≤ Cε 3 log(e + Cε) ≤ C,
where we have used Lemma A.1 with p = ∞ in the second line and with p = 1 in the fourth line. As a consequence of the previous estimate we obtain R 2 |ω ε (0, x)|(log(e + |ω ε (0, x)|)) α dx ≤ C.
Let v ε be the velocity field constructed with the vortex-blob method and consider the linear problem ∂ tω ε + v ε · ∇ω ε = 0, ω ε (0, ·) = ω ε (0, ·).
(3.8)
By arguing as in the proof of Lemma 3.1 we have that
R 2 |ω ε |(log(e + |ω ε |)) α dx = R 2 |ω ε (0, x)|(log(e + |ω ε (0, x)|)) α dx ≤ C,
from which it follows that
R 2 β(|ω ε * ϕ ε |(t, x)) dx ≤ C.
So, in the end we get that
R 2 β(|ω ε |(t, x)) dx ≤ C R 2 β(|ω ε * ϕ ε |(t, x)) dx + C R 2 β(|ω ε −ω ε * ϕ ε |(t, x)) dx ≤ C + Cε 3 (log(e + Cε)) α ≤ C,
which concludes the proof.
Strong convergence of the velocity field
In this section we will prove that solutions constructed with the three approximation methods described above are conservative. In particular, the uniform bounds proved in Section 3 will be crucial in order to prove the global strong convergence in C([0, T ]; L 2 (R 2 )). We start by proving the result for (ES); then, with the appropriate modifications, we will describe how to prove the analogous result for (VV) and (VB).
Theorem 4.1. Let ω 0 ∈ L(log L) α c (R 2 )
, with α > 1/2, which verifies (2.11). Let u be a weak solution of (1.1), with curl u 0 = ω 0 , that can be obtained as a limit of a sequence u δ constructed via (ES). Then, u δ satisfies the following convergence
u δ → u in C([0, T ]; L 2 (R 2 )), (4.1)
and u is conservative.
Proof. In order to prove the convergence stated in (4.1), we will prove that u δ is a Cauchy sequence in C([0, T ]; L 2 (R 2 )). We recall that the parameter δ is always supposed to vary over a countable set; therefore, given a sequence δ n → 0, we denote by u n and ω n the sequences u δn and ω δn . We divide the proof into several steps.
Step 1 A Serfati identity with fixed vorticity.
In this step we derive a formula for the approximate velocity u n .
Let a ∈ C ∞ c (R 2 ) be a smooth function such that a(x) = 1 if |x| < 1 and a(x) = 0 for |x| > 2. Differentiating in time the Biot-Savart formula we obtain that for i = 1, 2
∂ s u n i (s, x) = K i * (∂ s ω n )(s, x) = (aK i ) * (∂ s ω n )(s, x) + [(1 − a)K i ] * (∂ s ω n )(s, x). (4.2)
Now we use the equation for ω n obtaining ∂ s ω n = −u n · ∇ω n , and substituting in (4.2) we get
∂ s u n i = (aK i ) * (∂ s ω n ) − [(1 − a)K i ] * (u n · ∇ω n ). (4.3)
By using the identity u n · ∇ω n = curl(u n · ∇u n ) = curl div(u n ⊗ u n ) we obtain that
[(1 − a)K i ] * (u n · ∇ω n ) = ∇∇ ⊥ [(1 − a)K i ] ⋆ (u n ⊗ u n ). (4.4)
Substituting the expressions (4.4) in (4.2) and integrating in time we have that u n satisfies the following formula, known as Serfati identity:
u n i (t, x) = u n i (0, x) + (aK i ) * (ω n (t, ·) − ω n (0, ·)) (x) − t 0 ∇∇ ⊥ [(1 − a)K i ] ⋆ (u n (s, ·) ⊗ u n (s, ·))(x) ds. (4.5)
We modify the Serfati identity (4.5) by introducing a new cut-off function a ε : let ε ∈ (0, 1) and define a ε to be equal to 1 on B ε and 0 outside B 2ε . In this way we rewrite the identity (4.5) as
u n i (t, x) = u n i (0, x) + (a ε K i ) * (ω n (t, ·) − ω n (0, ·))(x) + ((a − a ε )K i ) * (ω n (t, ·) − ω n (0, ·))(x)
− t 0 ∇∇ ⊥ [(1 − a)K i ] ⋆ (u n (s, ·) ⊗ u n (s, ·))(x) ds.
We can prove that u n is a Cauchy sequence using the previous formula. We consider u n , u m with n, m ∈ N. By linearity of the convolution we have that u n − u m satisfies the following
u n i (t, x) − u m i (t, x) = u n i (0, x) − u m i (0, x) (I) + (a ε K i ) * (ω n (t, ·) − ω m (t, ·))(x) (II) + (a ε K i ) * (ω m 0 − ω n 0 )(x) (III) + ((a − a ε )K i ) * (ω n (t, ·) − ω m (t, ·))(x) (IV ) + ((a − a ε )K i ) * (ω m 0 − ω n 0 )(x) (V ) − t 0 ∇∇ ⊥ [(1 − a)K i ] ⋆ (u n (s, ·) ⊗ u n (s, ·) − u m (s, ·) ⊗ u m (s, ·))(x) (V I)
ds.
(4.6)
In order to prove that u n is a Cauchy sequence, we fix a parameter η > 0 and we estimate all the terms in (4.6). First of all, since the initial datum u n 0 converges strongly in L 2 , there exists N 1 such that for all n, m > N 1
u n i (0, ·) − u m i (0, ·) L 2 < η.
Step 2 Estimate on (V I).
By Young's convolution inequality we have that
∇∇ ⊥ [(1 − a)K] ⋆ (u n (s, ·) ⊗ u n (s, ·) − u m (s, ·) ⊗ u m (s, ·)) L 2 ≤ ∇∇ ⊥ [(1 − a)K] L 2 u n (s, ·) ⊗ u n (s, ·) − u m (s, ·) ⊗ u m (s, ·) L 1 (V I * ) . (4.7)
We add and subtract u n (s, ·) ⊗ u m (s, ·) in (V I * ) and by Hölder's inequality we have u n (s, ·) ⊗ u n (s, ·) − u m (s, ·) ⊗ u m (s, ·) L 1 ≤ ( u n (s, ·) L 2 + u m (s, ·) L 2 ) u n (s, ·) − u m (s, ·) L 2 .
For the first factor in (4.7) we have that
∇∇ ⊥ [(1 − a)K i ] = −(∇∇ ⊥ a)K i − ∇ ⊥ a∇K i − ∇a∇ ⊥ K i + (1 − a)∇∇ ⊥ K i ,
and it is easy to see that each term on the right hand side has uniformly bounded L 2 norm. Then we have that
t 0 ∇∇ ⊥ [(1 − a)K] ⋆ (u n (s, ·) ⊗ u n (s, ·) − u m (s, ·) ⊗ u m (s, ·)) L 2 ds ≤ C u 0 L 2 t 0 u n (s, ·) − u m (s, ·) L 2 ds.
(4.8)
Step 3 Estimate on (II) and (III).
For simplicity we will estimate only (III), but it will be clear from the proof that by using the uniform estimates proved in Section 3 the same estimate holds true for (II). We compute
(a ε K i ) * (ω n 0 − ω m 0 ) 2 L 2 = R 2 B 2ε (x) a ε (x − y)K i (x − y) (ω n 0 (y) − ω m 0 (y)) dy 2 dx ≤ R 2 B 2ε (x) 1 |x − y| |ω n 0 (y) − ω m 0 (y)| dy 2 dx = R 2 B 2ε (x) 1 |x − y|(log(1/|x − y|)) α |ω n 0 (y) − ω m 0 (y)|(log(e + |ω n 0 (y) − ω m 0 (y)|)) α × log 1 |x − y| α |ω n 0 (y) − ω m 0 (y)| (log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy 2 dx ≤ R 2 B 2ε (x) 1 |x − y| 2 (log(1/|x − y|)) 2α |ω n 0 (y) − ω m 0 (y)|(log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy × B 2ε (x) log 1 |x − y| 2α |ω n 0 (y) − ω m 0 (y)| (log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy dx ≤ sup x B 2ε (x) log 1 |x − y| 2α |ω n 0 (y) − ω m 0 (y)| (log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy (I * ) × R 2 B 2ε (x) 1 |x − y| 2 (log(1/|x − y|)) 2α |ω n 0 (y) − ω m 0 (y)|(log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy dx (II * )
.
We estimate (I * ) and (II * ) separately. By defining β(t) = t(log(e + t)) α and
g ε (x) = χ B 2ε (x) 1 |x| 2 (log(1/|x|) 2α , we have that (II * ) = g ε * β(|ω n 0 − ω m 0 |) L 1 ≤ g ε L 1 β(|ω n 0 − ω m 0 |) L 1 .
(4.9) By using the convexity of β and Lemma 3.1, we have that β(|ω n 0 − ω m 0 |) L 1 ≤ C, where C is independent of n and m, while for α > 1/2
g ε L 1 = C (log(1/ε)) 2α−1 ,(4.10)
which can be made as small as we want by choosing ε properly. For (I * ) we use the following facts on the Legendre transform. Let Φ(t) = t(log(e + t)) 2α ; the supremum of st − Φ(t) is attained at a point t where s ≥ (log(e + t)) 2α , that is, where t * (s) ≤ e^{s^{1/(2α)}} , so that Φ * (s) ≤ s e^{s^{1/(2α)}} . We apply Young's inequality st ≤ Φ(t) + Φ * (s) ≤ s e^{s^{1/(2α)}} + t(log(e + t)) 2α to s = (log(1/|x − y|)) 2α and t = |ω n 0 − ω m 0 |/(log(e + |ω n 0 − ω m 0 |)) α , and we find that (I * ) is bounded by
(I * ) ≤ sup x B 2ε (log(1/|z|)) 2α |z| dz + B 2ε (x)
|ω n 0 (y) − ω m 0 (y)| (log(e + |ω n 0 (y) − ω m 0 (y)|)) α log 2 e + |ω n 0 (y) − ω m 0 (y)| (log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy
(I * * ) ,
and we can estimate (I * * ) by (I * * ) ≤ B 2ε (x) |ω n 0 (y) − ω m 0 (y)|(log(e + |ω n 0 (y) − ω m 0 (y)|)) α dy ≤ C, so that (I * ) is finite, using the properties of the function t → t(log(e + t)) α together with Lemma 3.1. So, by fixing ε properly we get that (II) + (III) ≤ Cη.
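For completeness, the computation behind (4.10), which is exactly where the hypothesis α > 1/2 enters, follows from the substitution u = log(1/r):

```latex
\|g_\varepsilon\|_{L^1}
= \int_{B_{2\varepsilon}} \frac{dz}{|z|^{2}\bigl(\log(1/|z|)\bigr)^{2\alpha}}
= 2\pi \int_{0}^{2\varepsilon} \frac{dr}{r\bigl(\log(1/r)\bigr)^{2\alpha}}
= 2\pi \int_{\log(1/(2\varepsilon))}^{\infty} u^{-2\alpha}\, du
= \frac{2\pi}{2\alpha-1}\bigl(\log(1/(2\varepsilon))\bigr)^{1-2\alpha}.
```

The last integral is finite precisely when 2α > 1, i.e. α > 1/2, and the resulting quantity tends to 0 as ε → 0, in agreement with (4.10) (the constants differ only through log(1/(2ε)) versus log(1/ε)).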
Step 4 Estimates on (IV) and (V).
In the previous step we have fixed the constant ε, so by applying Young's inequality we obtain
((a − a ε )K i ) * (ω n 0 − ω m 0 ) L 2 ≤ (a − a ε )K i L 2 ω n 0 − ω m 0 L 1 ≤ C(ε) ω n 0 − ω m 0 L 1 ,
((a − a ε )K i ) * (ω n − ω m ) L 2 ≤ (a − a ε )K i L 2 ω n − ω m L 1 ≤ C(ε) ω n − ω m L 1 ,
where C(ε) blows up as ε → 0. Now, ε = ε(η) has been fixed in the previous step and by Remark 2.8 the vorticity converges strongly in C([0, T ], L 1 (R 2 )). Then, we have that there exists N 2 such that ∀n, m > N 2
ω n 0 − ω m 0 L 1 , ω n − ω m C(L 1 ) < η/C(ε).
Step 5 u n is a Cauchy sequence in C([0, T ]; L 2 (R 2 )).
By collecting all the estimates obtained in the previous steps we get that for all n, m > N :=
max{N 1 , N 2 } u n (t, ·) − u m (t, ·) L 2 ≤ C η + t 0 u n (s, ·) − u m (s, ·) L 2 ds ,(4.11)
and by Gronwall's lemma u n (t, ·) − u m (t, ·) L 2 ≤ C(T )η. (4.12) Taking the supremum in time in (4.12) we have the result.
Step 6 Conservation of the energy.
Since u n is an exact smooth solution and smooth solutions are conservative, we have that
u n (t, ·) L 2 = u n 0 L 2 . (4.13)
Then, since u n converges strongly to u in C([0, T ]; L 2 (R 2 )), by letting n → ∞ in (4.13) we have the result.
Now we deal with the vanishing viscosity method.
Theorem 4.2.
Let ω 0 ∈ L(log L) α c (R 2 ), with α > 1/2, which verifies (2.11). Let u be a weak solution of (1.1), with curl u 0 = ω 0 , that can be obtained as a limit of a sequence u ν constructed via (VV). Then, u ν satisfies the following convergence
u ν → u in C([0, T ]; L 2 (R 2 )), (4.14)
and u is conservative.
Proof. Since the parameter ν is supposed to vary over a countable set, given a sequence ν n → 0, we denote by u n and ω n the sequences u νn and ω νn . Thanks to Remark 2.13, it is enough to prove that u n is a Cauchy sequence in C([0, T ]; L 2 (R 2 )). We proceed as in the proof of Theorem 4.1. The only difference is that an error term appears in the Serfati identity, namely
t 0 (∆[(1 − a)K i ]) * (ν n ω n (s, ·) − ν m ω m (s, ·)) ds. (4.15)
By Young's inequality we have that
(∆[(1 − a)K i ]) * (ν n ω n (s, ·) − ν m ω m (s, ·)) L 2 ≤ ν n ∆[(1 − a)K i ] L 2 ω n (s, ·) − ω m (s, ·) L 1 + |ν m − ν n | ∆[(1 − a)K i ] L 2 ω m (s, ·) L 1 ,
Since ∆K i is in L 2 (B c 1 ), a straightforward computation shows that ∆[(1 − a)K] is bounded in L 2 . So, because of Remark 2.8, there exists N 3 such that for all n, m > N 3 we have that
(∆[(1 − a)K]) * (ν n ω n (s) − ν m ω m (s)) L 2 ≤ Cη,
and this concludes the proof.
Finally we deal with the vortex-blob method. The theorem is the following.
Theorem 4.3. Let ω 0 ∈ L(log L) α c (R 2 )
, with α > 1/2, which verifies (2.11). Let u be a weak solution of (1.1), with curl u 0 = ω 0 , that can be obtained as the limit of a sequence u ε constructed via (VB). Then, u ε satisfies the following convergence
u ε → u in C([0, T ]; L 2 (R 2 )), (4.16)
and u is conservative.
Proof. Since the parameter ε is supposed to vary over a countable set, given a sequence ε n → 0, we denote by u n and ω n the sequences u εn and ω εn . We divide the proof into several steps.
Step 1 u ε is a Cauchy sequence in C([0, T ]; L 2 (R 2 )).
We proceed as in the proof of Theorem 4.1. The only difference is that an error term appears in the Serfati identity, which is
t 0 ((∇[(1 − a)K i ]) ⋆ (F n (s, ·) − F m (s, ·)) (x) ds. (4.17)
Since ∇[(1 − a)K i ] ∈ L 2 (R 2 ), by using Young's inequality we get that
((∇[(1 − a)K i ]) ⋆ (F n (s, ·) − F m (s, ·)) L 2 ≤ ∇[(1 − a)K i ] L 2 F n (s, ·) − F m (s, ·) L 1 ,
which can be made as small as we want because of Lemma A.2.
Step 2 Conservation of the energy.
We prove now that u is a conservative weak solution. With our notations, multiplying (A.10) by u n and integrating in space and time we have that
R 2 |u n | 2 (t, x) dx = R 2 |u n | 2 (0, x) dx − t 0 R 2 (∇K ⋆ F n ) · u n dx. (4.18)
For the second term on the right hand side, by Lemma A.2 we have that
t 0 R 2 (∇K ⋆ F n ) · u n dx ≤ ∇K ⋆ F n (s, ·) L 2 u n (s, ·) L 2 ≤ F n (s, ·) L 2 u n (s, ·) L 2 ≤ C (δ n ) −7/3 (ε n ) 1/3 ,
which goes to 0 as ε n → 0. Then, by the convergence (4.16) letting ε n → 0 in (4.18) we have that
R 2 |u| 2 (t, x) dx = R 2 |u 0 | 2 (x) dx,
which gives the result.
Appendix A. The vortex-blob method

In this appendix we describe the vortex-blob approximation and some of its properties. Let us consider an initial vorticity ω 0 ∈ L p c (R 2 ) with 1 ≤ p ≤ ∞. Let ε ∈ (0, 1); we consider two small parameters in (0, 1), which later will be chosen as functions of ε, denoted by δ(ε) and h(ε). First of all, we consider the lattice Λ h of mesh size h and define R i as the square with sides of length h parallel to the coordinate axes and centered at α i ∈ Λ h . Let j δ be a standard mollifier and define

ω ε 0 := ω 0 * j δ(ε) . (A.1)
For any δ ∈ (0, 1) the support of ω ε 0 is contained in a fixed compact set in R 2 , then it can be tiled by a finite number N (ε) of squares R i . Define the quantities
Γ ε i = R i ω ε 0 (x) dx, for i = 1, ..., N (ε).
Let ϕ ε be another mollifier, we define the approximate vorticity to be
ω ε (t, x) = N (ε) i=1 Γ ε i ϕ ε (x − X ε i (t)), (A.2) where {X ε i (t)} N (ε) i=1 is a solution of the O.D.E. system Ẋ ε i (t) = u ε (t, X ε i (t)), X ε i (0) = α i , (A.3)
with u ε defined as
u ε (t, x) = K * ω ε (t, x) = N (ε) i=1 Γ ε i K ε (x − X ε i (t)), (A.4)
where K ε = K * ϕ ε . Note that, since δ and h are ε-dependent, we only use the superscript, or subscript, ε. The ordinary differential equations (A.3) are known as the vortex-blob approximation.
In particular, the approximation of the initial vorticity and the initial velocity are given by
ω ε (0, x) = N (ε) i=1 Γ ε i ϕ ε (x − α i ), u ε (0, x) = N (ε) i=1 Γ ε i K ε (x − α i ). (A.5)
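The dynamics (A.3)-(A.4) can be illustrated with a two-blob toy computation (our sketch; in particular the specific blob kernel K_ε(z) = z^⊥/(2π(|z|² + ε²)) is a common regularization choice of ours and need not coincide with the mollification used here). Two equal blobs rotate about their center of vorticity, which the symmetric update conserves exactly.

```python
import math

def k_blob(dx, dy, eps):
    """Mollified Biot-Savart kernel K_eps = K * phi_eps.  We use the common
    Krasny-type blob K_eps(z) = z^perp / (2 pi (|z|^2 + eps^2)); this choice
    of phi_eps is ours."""
    denom = 2.0 * math.pi * (dx * dx + dy * dy + eps * eps)
    return -dy / denom, dx / denom

def step(blobs, eps, dt):
    """One explicit Euler step of the ODE system (A.3): every blob is
    advected by the velocity (A.4) induced by all the blobs."""
    new = []
    for (x, y, gamma) in blobs:
        ux = uy = 0.0
        for (xj, yj, gj) in blobs:
            kx, ky = k_blob(x - xj, y - yj, eps)
            ux += gj * kx
            uy += gj * ky
        new.append((x + dt * ux, y + dt * uy, gamma))
    return new

# two equal blobs rotate (counterclockwise) about their center of vorticity
blobs = [(-0.5, 0.0, 1.0), (0.5, 0.0, 1.0)]
eps, dt = 0.1, 1e-3
for _ in range(2000):
    blobs = step(blobs, eps, dt)
```

Since both velocities are computed from the same state, the center of vorticity Σᵢ Γᵢ Xᵢ(t) is preserved to machine precision, while the inter-blob distance is preserved only up to the O(dt) error of the explicit Euler step.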
It is not difficult to show the bound (see [15])
sup t∈[0,T ] ( u ε (t, ·) L ∞ + ∇ u ε (t, ·) L ∞ ) ≤ C ε 2 . (A.6)
From (A.6) it follows that, for every fixed ε > 0, there exists a unique smooth solution {X ε i (t)} N (ε) i=1 of the O.D.E. system (A.3), which implies that u ε and ω ε are well-defined smooth functions. Note that u ε and ω ε are not exact solutions of the Euler equations. Precisely, the approximate vorticity ω ε satisfies the following equation
∂ t ω ε + u ε · ∇ω ε = E ε , (A.7)
where by a direct computation the error term is given by
E ε (t, x) := N (ε) i=1 [u ε (t, x) − u ε (t, X ε i (t))] · ∇ϕ ε (x − X ε i (t)) Γ ε i . (A.8)
Concerning the approximate velocity u ε , consider the quantity w ε = ∂ t u ε + (u ε · ∇) u ε .
Since w ε satisfies the system curl w ε = E ε , div w ε = div div (u ε ⊗ u ε ) , (A.9) we derive that there exists a function p ε such that −∆p ε = div div (u ε ⊗ u ε ) , and w ε = −∇p ε + K * E ε . Then, the velocity given by the vortex-blob approximation verifies the following equations ∂ t u ε + (u ε · ∇) u ε + ∇p ε = K * E ε , div u ε = 0.
(A.10)
Since u ε is divergence-free, E ε can be rewritten as E ε (t, x) = div F ε (t, x) where
F ε (t, x) := N (ε) i=1 [u ε (t, x) − u ε (t, X ε i (t))] ϕ ε (x − X ε i (t)) Γ ε i . (A.11)
Letω ε be the solution of the linear transport equation with vector field u ε , that is ∂ tω ε + u ε · ∇ω ε = 0, ω ε (0, ·) = ω ε 0 .
(A.12)
Since u ε satisfies (A.6), there exists a unique smooth solutionω ε , which is given by the formulā
ω ε (t, x) = ω ε 0 ((X ε ) −1 (t, ·)(x)), (A.13)
where X ε is the flow of u ε , that is, Ẋ ε (t, x) = u ε (t, X ε (t, x)), X ε (0, x) = x. (A.14)
Moreover, since div u ε = 0, we have ω ε (t, ·) L p = ω ε 0 L p ≤ ω 0 L p . The following estimates between the L p norms of ω ε andω ε hold true, see [2,10].
Lemma A.1. Let ω 0 ∈ L 1 (R 2 ) and let h = h(ε) be chosen as
h(ε) = ε 4 exp (C 1 ε −2 ω 0 L 1 T ) , (A.15)
where C 1 > 0 is a positive constant. Then, the estimate
sup 0≤t≤T ω ε − ϕ ε * ω ε L p ≤ Cε (A.16)
holds for all 1 ≤ p ≤ ∞, where C > 0 is a positive constant which does not depend on ε.

Moreover, with a suitable choice of the parameters in the definition of the vortex-blob method we also have that the error term F ε goes to 0 in the limit, see [2].

Lemma A.2. Let ω 0 ∈ L p c (R 2 ) with p ≥ 1. Then the quantity F ε defined in (A.11) satisfies
sup t∈[0,T ] F ε (t, ·) L 1 → 0, as ε → 0. (A.17)

Moreover, choosing h(ε) = C 1 ε 6 exp(−C 0 ε −2 ), where C 1 , C 0 are positive constants, F ε satisfies the additional bound F ε (t, ·) L 2 ≤ C δ −β ε 7/3 ω 0 L 1 , which goes to 0 choosing δ as above and 0 < σ < 1/7.
Finally, by showing the equi-integrability of the sequence ω ε one of the main results in [10] is the following. Theorem A.3. Let ω 0 ∈ L 1 c (R 2 ) and ω ε 0 defined as (A.1). Then the sequence ω ε as in (A.2) is equi-integrable in L 1 ((0, T ) × R 2 ). Moreover, there exists a function ω ∈ C([0, T ]; L 1 (R 2 )) such that, along a sub-sequence, ω ε → ω in C([0, T ]; L 1 (R 2 )), where ω is a renormalized and Lagrangian solution of the two-dimensional Euler equations.
Theorem 2 . 7 .
27Let ω 0 ∈ L p c (R 2 ) with p ≥ 1 and let ω n be a sequence of approximating vorticity constructed via one of the three methods (ES), (VV), or (VB). Then, there exists ω ∈ C([0, T ];
t 0 (
0∆[(1 − a)K i ]) * (ν n ω n (s, ·) − ν m ω m (s, ·)) ds.(4.15)
for all 1 ≤ p ≤ ∞, where C > 0 is a positive constant which does not depend on ε.
Lemma A. 2 .
2Let ω 0 ∈ L p c (R 2 ) with p ≥ 1, then the quantity F ε defined in (A.11) satisfies sup t∈[0,T ]
F
ε (t, ·) L 1 → 0, as ε → 0. (A.17)
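The mollification mechanism behind Lemma A.1 can be sensed in a toy one-dimensional computation (an illustrative sketch, not the paper's setting): for a smooth profile and a symmetric Gaussian mollifier, ‖ω − ϕ_ε * ω‖_∞ decreases like ε^2, so halving ε should reduce the sup-norm error by roughly a factor 4.

```python
import math

def mollify_error(eps, h=0.01, span=6.0):
    # sup-norm distance between w(x) = exp(-x^2) and its mollification with a
    # Gaussian kernel of width eps, computed by simple quadrature on a grid
    n = int(round(2 * span / h))
    xs = [-span + i * h for i in range(n + 1)]
    w = lambda x: math.exp(-x * x)
    half = int(round(5 * eps / h))                    # truncate kernel at 5*eps
    ker = [math.exp(-((k * h) ** 2) / eps ** 2) for k in range(-half, half + 1)]
    z = sum(ker) * h                                  # normalize discrete kernel
    err = 0.0
    for x in xs:
        conv = sum(kv * w(x - (j - half) * h) for j, kv in enumerate(ker)) * h / z
        err = max(err, abs(w(x) - conv))
    return err

e1, e2 = mollify_error(0.2), mollify_error(0.1)       # error at eps and eps/2
ratio = e1 / e2                                       # ~4 for a second-order kernel
```

The observed ratio close to 4 is the discrete counterpart of the O(ε^2) accuracy of symmetric mollifiers on smooth data.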
Acknowledgments. The author gratefully acknowledges useful discussions with Gianluca Crippa and Stefano Spirito. This work was started while the author was a PostDoc at the Departement Mathematik und Informatik of the Universität Basel. This research has been partially supported by the ERC Starting Grant 676675 FLIRT.

Appendix A. The vortex-blob method

In this appendix we describe the vortex-blob approximation and some of its properties. Let us consider an initial vorticity ω_0 ∈ L^p_c(R^2) with 1 ≤ p ≤ ∞. Let ε ∈ (0, 1); we consider two small parameters in (0, 1), which later will be chosen as functions of ε, denoted by δ(ε) and h(ε). First of all, we consider the lattice Λ_h and define R_i as the square with sides of length h parallel to the coordinate axes and centered at α_i ∈ Λ_h. Let j_δ be a standard mollifier and define
References

[1] D. M. Ambrose, J. P. Kelliher, M. C. Lopes Filho, H. J. Nussenzveig Lopes: Serfati solutions to the 2D Euler equations on exterior domains. J. Differential Equations 259 (2015), 4509-4560.
[2] J. T. Beale: The approximation of weak solutions to the 2-D Euler equations by vortex elements. In: Multidimensional Hyperbolic Problems and Computations, edited by J. Glimm and A. Majda, IMA Vol. Math. Appl. 29 (1991), 23-37.
[3] A. Bohun, F. Bouchut, G. Crippa: Lagrangian solutions to the 2D Euler system with L^1 vorticity and infinite energy. Nonlinear Analysis: Theory, Methods & Applications 132 (2016), 160-172.
[4] A. Bressan, R. Murray: On self-similar solutions to the incompressible Euler equations. J. Differential Equations 269 (2020), 5142-5203.
[5] A. Bressan, W. Shen: A posteriori error estimates for self-similar solutions to the Euler equations. Discrete & Continuous Dynamical Systems - A 41 (2021), 113-130.
[6] D. Chae: Weak solutions of the 2-D Euler equations with initial vorticity in L(log L). J. Differential Equations 103 (1993), 323-337.
[7] D. Chae: Weak solutions of the 2-D incompressible Euler equations. Nonlinear Analysis: Theory, Methods & Applications 23 (1994), 629-638.
[8] A. Cheskidov, M. C. Lopes Filho, H. J. Nussenzveig Lopes, R. Shvydkoy: Energy conservation in two-dimensional incompressible ideal fluids. Comm. Math. Phys. 348 (2016), 129-143.
[9] G. Ciampa, G. Crippa, S. Spirito: Strong convergence of the vorticity for the 2D Euler equations in the inviscid limit. Arch. Rational Mech. Anal. (2021). https://doi.org/10.1007/s00205-021-01612-z
[10] G. Ciampa, G. Crippa, S. Spirito: Weak solutions obtained by the vortex method for the 2D Euler equations are Lagrangian and conserve the energy. J. Nonlinear Sci. 30 (2020), 2787-2820.
[11] P. Constantin, T. D. Drivas, T. M. Elgindi: Inviscid limit of vorticity distributions in the Yudovich class. Comm. Pure Appl. Math. (2020). https://doi.org/10.1002/cpa.21940
[12] G. Crippa, C. Nobili, C. Seis, S. Spirito: Eulerian and Lagrangian solutions to the continuity and Euler equations with L^1 vorticity. SIAM J. Math. Anal. 49 (2017), no. 5, 3973-3998.
[13] G. Crippa, S. Spirito: Renormalized solutions of the 2D Euler equations. Comm. Math. Phys. 339 (2015), 191-198.
[14] J.-M. Delort: Existence de nappes de tourbillon en dimension deux. J. Amer. Math. Soc. 4 (1991), 553-586.
[15] R. J. DiPerna, A. Majda: Concentrations in regularizations for 2-D incompressible flow. Comm. Pure Appl. Math. 40 (1987), 301-345.
[16] S. Lanthaler, S. Mishra, C. Parés-Pulido: On the conservation of energy in two-dimensional incompressible flows. Nonlinearity 34 (2021), 1084-1135.
[17] L. Lichtenstein: Über einige Existenzprobleme der Hydrodynamik homogener, unzusammendrückbarer, reibungsloser Flüssigkeiten und die Helmholtzschen Wirbelsätze. Math. Z. 23 (1925), 89-154.
[18] M. C. Lopes Filho, H. J. Nussenzveig Lopes, E. Tadmor: Approximate solutions of the incompressible Euler equations with no concentrations. Ann. Inst. H. Poincaré Anal. Non Linéaire 17 (2000), no. 3, 317-412.
[19] A. J. Majda: Remarks on weak solutions for vortex sheets with a distinguished sign. Indiana Univ. Math. J. 42 (1993), 921-939.
[20] A. J. Majda, A. L. Bertozzi: Vorticity and Incompressible Flow. Vol. 27 of Cambridge Texts in Applied Mathematics. Cambridge Univ. Press, London/New York, 2002.
[21] F. Mengual, L. Székelyhidi Jr.: Dissipative Euler flows for vortex sheet initial data without distinguished sign. https://arxiv.org/abs/2005.08333
[22] H. J. Nussenzveig Lopes, C. Seis, E. Wiedemann: On the vanishing viscosity limit for 2D incompressible flows with unbounded vorticity. https://arxiv.org/abs/2007.01091
[23] P. Serfati: Solutions C^∞ en temps, n-log Lipschitz bornées en espace et équation d'Euler. C. R. Acad. Sci. Paris Sér. I Math. 320 (1995), 555-558.
[24] S. Schochet: The point-vortex method for periodic weak solutions of the 2-D Euler equations. Comm. Pure Appl. Math. 49 (1996), 911-965.
[25] I. Vecchi, S. J. Wu: On L^1-vorticity for 2-D incompressible flow. Manuscripta Math. 78 (1993), no. 4, 403-412.
[26] M. Vishik: Instability and non-uniqueness in the Cauchy problem for the Euler equations of an ideal incompressible fluid. Part I. https://arxiv.org/abs/1805.09426
[27] M. Vishik: Instability and non-uniqueness in the Cauchy problem for the Euler equations of an ideal incompressible fluid. Part II. https://arxiv.org/abs/1805.09440
[28] W. Wolibner: Un théorème sur l'existence du mouvement plan d'un fluide parfait, homogène, incompressible, pendant un temps infiniment long. Math. Z. 37 (1933), 698-726.
[29] V. I. Yudovič: Non-stationary flows of an ideal incompressible fluid. Ž. Vyčisl. Mat. i Mat. Fiz. 3 (1963), 1032-1066.
[30] A. Zygmund: Trigonometric Series, 3rd ed. Cambridge Mathematical Library. Cambridge Univ. Press, London/New York, 2003.

(G. Ciampa) Dipartimento di Matematica "Tullio Levi Civita", Università degli Studi di Padova, Via Trieste 63, 35131 Padova, Italy. Email address: [email protected]
Formulation of Liouville's Theorem for Grand Ensemble Molecular Simulations

Luigi Delle Site
Institute for Mathematics, Freie Universität Berlin, Germany

29 Mar 2016
arXiv:1602.02031v3 [cond-mat.stat-mech]
DOI: 10.1103/PhysRevE.93.022130
PACS: 05.20.Gg, 02.70.-c, 05.20.Jj
* [email protected]
Liouville's theorem in a grand ensemble, that is for situations where a system is in equilibrium with a reservoir of energy and particles, is a subject that, so far, has not been explicitly treated in literature related to molecular simulation. Instead Liouville's theorem, a central concept for the correct employment of molecular simulation techniques, is implicitly considered only within the framework of systems where the total number of particles is fixed. However the pressing demand of applied science in treating open systems leads to the question of the existence and possible exact formulation of Liouville's theorem when the number of particles changes during the dynamical evolution of the system. The intention of this note is to stimulate a debate about this crucial issue for molecular simulation.
I. INTRODUCTION
We propose a problem that, to our knowledge and at least in the field of molecular simulation, has not been explicitly treated before, namely whether a rigorous formulation of Liouville's theorem (and of the corresponding operator) is possible when a system is characterized by a varying number of particles. In the following discussion we restrict the treatment to classical systems, due to the direct implications for classical molecular simulations. Actually, as will be discussed later, there exists a rich literature on quantum open systems whose formal results can be employed to define, in a rather precise way, the specific concepts needed in classical molecular simulation. We will start from the very general concept of the Lindblad operator [1] for open quantum systems and consider the (simpler) subcase of a classical system. The main result which emerges from this analysis is the central role played by the (formal) definition of the reservoir, implicitly encrypted in the definition of the Lindblad operator. We will consider one physically well-founded definition of the reservoir, the so-called Bergmann-Lebowitz model [2-4], and analyze the consequences when its formal concepts are translated into practical definitions for numerical calculations in a molecular simulation framework. It is important to notice that the Bergmann-Lebowitz model has already been applied in molecular dynamics studies and led to satisfactory results [5,6]; its positive application thus raises the need for a mathematical and physical analysis. In particular, we will discuss the existence and meaning of Liouville's operator and Liouville's theorem for systems with varying N. These two concepts, in the case of fixed N, are directly used in the calculation of key physical quantities; thus it is of interest to understand what happens when N is variable (for a basic theoretical formulation of the different aspects of the problem see also the summary reported in Ref.
[7] and references therein). In fact, Liouville's theorem is central for the correct physical definition of quantities calculated via ensemble averaging (which is the main aim of molecular simulation) [8]; statistical time correlation functions are relevant examples where, in particular, the definition of Liouville's operator is explicitly needed (see the discussion later). In fact, we will see that when such quantities are calculated in an open system, they require a technical redefinition (directly linked to the definition of the Lindblad operator) and a careful reinterpretation of their physical meaning in terms of a relation between locality in space and locality in time [5]. Other models in Molecular Dynamics are based on the unphysical assumption that the number of particles N is a continuous variable; also in this case numerical results are satisfactory [9], but conceptually the idea is not consistent with the first principles of open systems as derived in Ref. [10]. The relevant aspect of this discussion is that in molecular dynamics the existence of a first principle behind the definition of certain physical quantities is often not necessary (from the technical point of view) for their numerical calculation.

The consequence is that practical definitions (and the corresponding calculation procedures) are empirical; however, their transferability to other situations requires the existence of a well-posed physical principle. For example, consistently with the discussion above, in Refs. [11,12] equilibrium time correlation functions for open-boundary systems are calculated on the basis of physical intuition, but without explicitly specifying what the Liouville operator of the considered atomistic region is. It is assumed that such an operator exists and that it is a straightforward extension of the case with fixed N (see also note [13] for more details). Of course, for the systems considered in Refs. [11,12] the results can be usefully employed for the specific purposes of the study. However, in general, the question is not so trivial; in fact, for the calculation of time correlation functions it is crucial to know how to unambiguously define the correlation function when a molecule leaves the system and enters the reservoir.

In Ref. [5] it is discussed how a precise definition of the correlation function may be a natural consequence of the (first-principle) definition of the Liouville operator for open systems, given some well-defined properties of the reservoir; in this paper we develop the formalism of Ref. [5] further. In general, all modern multiscale techniques dealing with open boundaries (see e.g. [15-20]) need a clear formulation (extension) of Liouville's theorem (and of the related operator) for varying N in order to justify any statistical sampling/averaging performed over the produced trajectories. In this perspective, the aim of this paper is not to provide a final solution to the problem, but rather to lay the basis of a discussion, starting from an analysis of what can be concluded according to the research available today.
II. OPEN QUANTUM SYSTEMS AND REDUCTION TO THE CLASSICAL CASE
The study of open quantum systems is a subject of high interest in modern physics, and the associated physical concepts and related formalism have been extensively treated, so that the formal backbone of the theory is very solid [21]. In particular, the paper of G. Lindblad [1] is of central importance; in this paper, about generators of quantum dynamical semigroups, a general form of a certain class of Markovian quantum mechanical master equations is derived. This work can be used to describe classical systems in a grand ensemble as well. The starting point is the equation for the time evolution of the density matrix ρ(t):
dρ(t)/dt = L(ρ) = −i[H, ρ] + (1/2) Σ_j ([L_j ρ, L_j^+] + [L_j, ρ L_j^+]),    (1)
where H is the Hamiltonian and L_j, L_j^+ are operators which describe the interaction of the system with a reservoir; they are called Lindblad operators, while equation 1 is also called the Kossakowski-Lindblad equation [22].
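As a concrete toy illustration of Eq. (1) (the two-level system, the choice H = (ω/2)σ_z with a single jump operator L = √γ σ⁻, and all numerical parameters are illustrative assumptions, not content of the paper), the sketch below integrates the Lindblad equation with an explicit Euler step: the dissipator preserves Tr ρ while the excited-state population decays as e^(−γt).

```python
import math

def mul(A, B):      # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(*Ms):       # entrywise sum of 2x2 matrices
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def scale(s, M):
    return [[s * M[i][j] for j in range(2)] for i in range(2)]

def dag(M):         # Hermitian conjugate
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def comm(A, B):     # commutator [A, B]
    return add(mul(A, B), scale(-1, mul(B, A)))

def lindblad_rhs(rho, H, L):
    # Eq. (1): drho/dt = -i[H, rho] + (1/2)([L rho, L+] + [L, rho L+])
    return add(scale(-1j, comm(H, rho)),
               scale(0.5, add(comm(mul(L, rho), dag(L)),
                              comm(L, mul(rho, dag(L))))))

def evolve(rho, H, L, t, dt=1e-3):
    # explicit Euler integration of the master equation
    for _ in range(int(round(t / dt))):
        rho = add(rho, scale(dt, lindblad_rhs(rho, H, L)))
    return rho

gamma, omega = 1.0, 0.0
H = [[omega / 2, 0], [0, -omega / 2]]      # (omega/2) sigma_z, basis (|e>, |g>)
L = [[0, 0], [math.sqrt(gamma), 0]]        # sqrt(gamma) sigma_minus
rho0 = [[1 + 0j, 0j], [0j, 0j]]            # start in the excited state
rho1 = evolve(rho0, H, L, t=1.0)
trace = (rho1[0][0] + rho1[1][1]).real
```

Note how the jump term is trace-free by construction, so probability is conserved step by step even in this crude integrator.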
A. Bergmann-Lebowitz model of reservoir
Bergmann and Lebowitz (BL) [2,3] (and subsequently Lebowitz and Shimony [4]) proposed a generalization of Liouville's equation to systems that can exchange matter with a reservoir. This work appeared well before the publication of Lindblad's paper; however, the BL model can be seen, from the formal point of view, as a specific case of the general approach of Ref. [1]. The model is based on the physical principle that each interaction between the system and the reservoir is characterized by a discontinuous transition of the system from a state with N particles (X'_N) to one with M particles (X_M). Importantly, the macroscopic state of the reservoir is not changed by the interaction with the system, and thus its microscopic degrees of freedom are not considered (see also note
[23]). The transitions from one state to another are governed by a contingent probability
K_{NM}(X'_N, X_M) dX' dt. The kernel K_{NM}(X'_N, X_M) is a stochastic function, independent of time, and K_{NM}(X'_N, X_M) is defined as the probability per unit time that the system at X_M has a transition to X'_N as a result of the interaction with the reservoir. The term
Σ_{N=0}^{∞} ∫ dX'_N [K_{MN}(X_M, X'_N) ρ(X'_N, N, t) − K_{NM}(X'_N, X_M) ρ(X_M, M, t)]

expresses the total interaction between the system and the reservoir, and its action is the equivalent of the action of the Lindblad operators; thus the general equation of time evolution of the probability is:
∂ρ(X_M, M, t)/∂t = −{ρ(X_M, M, t), H(X_M)} + Σ_{N=0}^{∞} ∫ dX'_N [K_{MN}(X_M, X'_N) ρ(X'_N, N, t) − K_{NM}(X'_N, X_M) ρ(X_M, M, t)],    (2)

which, in terms of the total time derivative along the Hamiltonian flow, can be written in the compact form

dρ(X_M, M, t)/dt = f(X_M, t) − Q̂ρ(X_M, M, t),    (3)

where f(X_M, t) = Σ_{N=0}^{∞} ∫ dX'_N K_{MN}(X_M, X'_N) ρ(X'_N, N, t) and Q̂(*) = Σ_{N=0}^{∞} ∫ dX'_N K_{NM}(X'_N, X_M) (*).
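A minimal sketch of the jump dynamics in Eqs. (2)-(3), under the drastic (purely illustrative) simplification that the kernel K acts on the particle number only: insertions occur at a constant rate λ and deletions at rate N. For these rates the balance of insertion and deletion fluxes fixes the stationary distribution to a Poisson law with mean λ, the number-only analogue of a grand canonical stationary state. All parameters are assumptions made for the demo.

```python
import random

def bl_number_process(lam, t_end, seed=0):
    # Gillespie simulation of a birth-death chain: rate lam for N -> N+1,
    # rate N for N -> N-1 (a BL-type kernel restricted to the particle number).
    rng = random.Random(seed)
    t, n = 0.0, 0
    t_in_state = {}                      # total time spent at each value of N
    while t < t_end:
        r_tot = lam + n                  # total jump rate out of the current state
        dt = rng.expovariate(r_tot)
        t_in_state[n] = t_in_state.get(n, 0.0) + min(dt, t_end - t)
        t += dt
        if rng.random() < lam / r_tot:   # insertion with probability lam / r_tot
            n += 1
        else:                            # deletion otherwise (rate n)
            n -= 1
    z = sum(t_in_state.values())
    return {k: v / z for k, v in t_in_state.items()}

p = bl_number_process(lam=3.0, t_end=20000.0)   # time-averaged occupation P(N)
mean_n = sum(k * v for k, v in p.items())       # should approach lam = 3
```

The time average over a long trajectory reproduces the Poisson mean (and variance) λ, which is the discrete fingerprint of the stationary grand canonical distribution discussed next.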
If the condition of detailed balance is satisfied:
Σ_{N=0}^{∞} ∫ [e^{−βH(X'_N) + βμN} K_{MN}(X_M, X'_N) − K_{NM}(X'_N, X_M) e^{−βH(X_M) + βμM}] dX'_N = 0,    (4)
it follows that the stationary Grand Ensemble is the Grand Canonical ensemble with density

ρ_M(X_M, M) = (1/Q) e^{−βH_M(X_M) + βμM},

where β = 1/kT and μ is the chemical potential. This is a necessary and sufficient condition for stationarity with respect to the Grand Canonical distribution [3,4]. In the next section we explore the consequences of such results for the formulation of Liouville's theorem and the definition of Liouville's operator. Let us analyze the concept of conservation of the Lebesgue measure, that is, let us consider an equivalent formulation of Liouville's theorem:
dρ(q, p, t)/dt = ∂ρ(q, p, t)/∂t + Σ_{i=1}^{3N} (∂ρ/∂q_i q̇_i + ∂ρ/∂p_i ṗ_i) = 0.    (5)

Integrated along a trajectory, this expresses the invariance of the probability mass carried by a comoving phase-space element:

ρ(q_0, p_0, 0) dq_0 dp_0 = ρ(q_τ, p_τ, τ) dq_τ dp_τ.    (6)
Here q_0 = q(0), that is, q(t) at t = 0; p_0 is defined analogously for the momenta, and the same applies to q(t) and p(t) with t = τ; moreover, we have ρ(q_0, p_0, 0) = ρ(q_τ, p_τ, τ). We end up with a compact formulation of Liouville's theorem:

dq_0 dp_0 = dq_τ dp_τ,    ∀τ.    (7)

Eq. 7 is a simple consequence of the fact that Hamiltonian dynamics is a canonical transformation; in Ref. [8] it is explained how this relation can be adapted to non-Hamiltonian dynamics. In standard textbooks of statistical mechanics and molecular simulation, it is stated that the formalization of Eq. 7 is crucial for justifying the fact that ensemble averages can be performed at any point (see e.g. Ref. [8]); this is a key concept in molecular simulation.

However, the derivation of Eq. 7 is based on the fact that q_0, p_0 are related to q_τ, p_τ by a coordinate transformation regulated by a Jacobian:
J(q_τ, p_τ ; q_0, p_0) = det(Q),    (8)

where Q is a 6N × 6N matrix for a system of N particles, defined as

Q_ij = ∂x_τ^i / ∂x_0^j,    (9)
where x_0 = (q_1(0), ..., q_N(0), p_1(0), ..., p_N(0)) and, equivalently, x_τ = (q_1(τ), ..., q_N(τ), p_1(τ), ..., p_N(τ)). The indices i, j label each of the 6N coordinates of x_0 and x_τ, that is, x^i = x^1, ..., x^{6N} (equivalently for x^j), with, for example, (x^1, x^2, x^3) = (q_1^x, q_1^y, q_1^z) and (x^{3N+1}, x^{3N+2}, x^{3N+3}) = (p_1^x, p_1^y, p_1^z). The implication of the above statement would be that Eq. 7 may now be extended as:

d^N q_0 d^N p_0 = d^M q_τ d^M p_τ,    ∀τ, where M ≠ N.    (12)
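The fixed-N statement behind Eq. (7) can be checked numerically for a single degree of freedom (a sketch with illustrative parameters): a symplectic (velocity-Verlet) integration of a harmonic oscillator is a canonical map, so the Jacobian determinant of Q_ij = ∂x_τ^i/∂x_0^j, estimated here by centred finite differences, equals 1 to machine precision.

```python
def vv_step(q, p, dt, k=1.0, m=1.0):
    # one velocity-Verlet step for H = p^2/(2m) + k q^2/2 (a canonical map)
    p -= 0.5 * dt * k * q
    q += dt * p / m
    p -= 0.5 * dt * k * q
    return q, p

def flow(q0, p0, t=3.0, dt=1e-3):
    # phase-space flow map (q0, p0) -> (q_t, p_t)
    q, p = q0, p0
    for _ in range(int(round(t / dt))):
        q, p = vv_step(q, p, dt)
    return q, p

def jacobian_det(q0, p0, h=1e-6):
    # centred finite differences for the 2x2 matrix Q_ij = d x_tau^i / d x_0^j
    qa, pa = flow(q0 + h, p0)
    qb, pb = flow(q0 - h, p0)
    qc, pc = flow(q0, p0 + h)
    qd, pd = flow(q0, p0 - h)
    dq_dq0 = (qa - qb) / (2 * h)
    dp_dq0 = (pa - pb) / (2 * h)
    dq_dp0 = (qc - qd) / (2 * h)
    dp_dp0 = (pc - pd) / (2 * h)
    return dq_dq0 * dp_dp0 - dq_dp0 * dp_dq0

det = jacobian_det(1.0, 0.5)   # volume conservation: det(Q) = 1
```

A non-symplectic integrator (e.g. explicit Euler) would instead give det(Q) drifting away from 1, which is exactly the property that breaks down when N is allowed to vary.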
The time correlation function C_AB(t) between A and B (physical observables) is [8]:
C_AB(t) = ⟨a(0) b(t)⟩ = ∫ dp dq f(p, q) a(p, q) e^{iLt} b(p, q) = ∫ dp dq f(p, q) a(p, q) b(p_t(p, q), q_t(p, q)),    (13)

where a(p, q) and b(p, q) are functions in phase space which correspond to A and B (respectively), f(p, q) is the equilibrium distribution function and iL is the Liouville operator. The general notation is the same as used in the guiding reference, Ref. [8], and p_t(p, q), q_t(p, q) indicate the time evolution at time t of the momenta and positions with initial condition p, q. For a system at fixed N (canonical ensemble), Eq. 13 is written as:
C_AB(t) = (1/Q_N) ∫ dp dq e^{−H_N(p,q)/kT} a(p, q) b(p_t(p, q), q_t(p, q)),    (14)
where Q N is the Canonical partition function and H N (p, q) the Hamiltonian of a system with N (constant) molecules. It follows that the numerical calculation of C AB (t) is done by calculating a(p, q) and b(p t (p, q), q t (p, q)) along MD trajectories and then taking the average. The dynamics generated by Liouville's operator is well defined, since the Hamiltonian of N molecules is well defined at any time t:
iL = Σ_{j=1}^{N} (∂H/∂p_j ∂/∂q_j − ∂H/∂q_j ∂/∂p_j) = {*, H}.    (15)
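Evaluating Eq. (14) "along trajectories and then taking the average" can be sketched with an ensemble of independent harmonic oscillators (an illustrative toy model, not the paper's system), for which the normalized velocity autocorrelation is exactly cos(ωt); initial conditions are drawn from the canonical (Gaussian) distribution. All parameter values below are assumptions for the demo.

```python
import math, random

def vacf_harmonic(omega, t, nsamples=20000, beta=1.0, m=1.0, seed=1):
    # C_vv(t) = <v(0) v(t)> / <v(0) v(0)> for H = p^2/(2m) + m omega^2 q^2 / 2,
    # sampling canonical initial conditions and evolving each trajectory exactly
    rng = random.Random(seed)
    sq = 1.0 / (omega * math.sqrt(beta * m))   # canonical width of q
    sv = 1.0 / math.sqrt(beta * m)             # canonical width of v
    num = den = 0.0
    for _ in range(nsamples):
        q0 = rng.gauss(0.0, sq)
        v0 = rng.gauss(0.0, sv)
        vt = v0 * math.cos(omega * t) - omega * q0 * math.sin(omega * t)
        num += v0 * vt
        den += v0 * v0
    return num / den

c = vacf_harmonic(omega=2.0, t=0.5)            # analytic value: cos(omega*t)
```

Since ⟨q_0 v_0⟩ = 0 in the canonical ensemble, the trajectory average converges to cos(ωt) as the number of sampled initial conditions grows; this is the fixed-N baseline against which the open-system recipe below must be compared.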
What happens in the case of a Grand Canonical (μ, V, T) ensemble?

Let us generalize the formalism of Eq. 14:
C_AB(t) = (1/Q_GC) Σ_N ∫ dp^N dq^N e^{−[H_N(p^N, q^N) − μN]/kT} a(p^N, q^N) b(p_t(p^N, q^N), q_t(p^N, q^N)),    (16)
where Q_GC is the Grand Canonical partition function, μ the chemical potential and N the number of particles (now varying in time) of the system. The question now is how to interpret the quantity b(p_t(p^N, q^N), q_t(p^N, q^N)): at a given time t the system has evolved from its initial condition and may have a number of particles N' different from that of the initial state. The BL model (see the previous discussion and Refs. [3-5]) allows to make sense of b(p_t(p^N, q^N), q_t(p^N, q^N)):

e^{i(L + L_Lindblad)t} b(p, q) = b(p_t(p^N, q^N), q_t(p^N, q^N)).

In order to emphasize the problem we want to discuss, let us consider time correlation functions based on single-molecule properties, such as, for example, molecular velocity-velocity time autocorrelation functions or the molecular dipole-dipole autocorrelation function. The definition of the velocity autocorrelation function is:

C_VV(t) = (1/N) Σ_{i=1}^{N} ⟨v_i(t) · v_i(0)⟩ / ⟨v_i(0) · v_i(0)⟩,    (17)
where ⟨·⟩ denotes the ensemble average and ⟨v_i(t) · v_i(0)⟩ computes the correlation between the velocities of the i-th molecule at the initial time 0 and at a time t. In the same way one may define the dipole autocorrelation function:

C_μμ(t) = (1/N) Σ_{i=1}^{N} ⟨μ_i(t) · μ_i(0)⟩ / ⟨μ_i(0) · μ_i(0)⟩.

For the case (i) we have that the Lindblad operator applied to a molecule i annihilates the microscopic identity of molecule i once it enters into the reservoir. As a consequence it removes its contribution to the correlation function by destroying the quantity b_i(p_i, q_i) corresponding to the specific molecule, since the index i of such molecule does not exist anymore (see also the discussion in the Appendix). It must be noticed that the fact that the correlation does not exist, because the molecule, once it enters into the reservoir, does not exist anymore, is not equivalent to saying that the correlation becomes zero. This is not true from the physical point of view; instead, the molecule simply does not contribute to the average of the correlation function because, by definition, it does not possess a correlation. For a molecule i that instead remains in the system, the Lindblad part acts trivially on its phase-space functions, e^{i(L_Lindblad)t} b_i(p_i, q_i) = b_i(p_i, q_i). Finally, the case (iii) is trivial because a_j(p_j, q_j) of a molecule j not present in the system before is not defined; that is, a molecule entering from the reservoir may possess an instantaneous microscopic memory once it enters the system (i.e. b_j(p_j, q_j) is defined), but does not have memory of preceding times (i.e. a_j(p_j, q_j) is not defined); thus, by definition, the integral in the calculation of C_AB(t) is not done over particles that are not present in the system at the initial time. The important point of this discussion is that once the action of the reservoir is specified, an unambiguous "numerical" recipe follows on how to count molecules for C_AB(t) in a molecular dynamics study.

For the case of the Bergmann-Lebowitz model, according to the discussion above, the following definition arises: "When, within the observation time window, a molecule crosses the border of the system and enters in the reservoir, its contribution to the correlation function must be neglected because, given the specific definition of the reservoir, the microscopic identity is deleted".
Both definitions have a physical sense, but such a physical sense becomes unambiguous once the formal action of the reservoir (and thus its corresponding Hamiltonian term between the system and the reservoir (equivalent to the integral term of the Bergmann-Lebowitz model). They define such term as:
H I = Ω R dx Ω S dyV (x, y)J R (x)J S (y)(18)
where Ω R and Ω S are the phase space of the reservoir and of the system respectively, V (x, y)
is the interaction potential between the reservoir and the system and J R (x), J S (y) are operators acting respectively on the x space of the reservoir and on the y space of the system.
Next the authors state: "More generally J R and J S might be function of the creation and annihilation operators for particles in the reservoir and in the system". One shall consider [23] The Bergman-Lebowitz model considers the most general case of a system in contact with several reservoirs each acting independently of the other; the total action is the sum of each reservoir's action on the system. The case of a single reservoir, as it is done in this work, is then a sub-case of the more general model. The interaction with this reservoir is impulsive/stochastic in the sense that the system jumps in a discontinuous way regarding the number of particles and the new particles entering into the system have a velocity consistent with the temperature of the reservoir and thus, in our case, being in thermal equilibrium, of the system. However, an explicit back-reaction from the system to the reservoir that changes the macroscopic (thermodynamic) state of the reservoir is not considered.
[24] An explicit expression of the Lindblad operator may have the form: 1 2 j ([L j ρ, L + j ] + [L j , ρL + j ]) = N n=1 M m=1 [l mn (a n ρa + m − 1 2 (ρa + m a n + a + m a n ρ))], where l mn are transition probabilities from state n to state m and a m,n is a creation annihilation operator which creates or destroy a state m (n) in the density matrix. How it is discussed in this paper, the Bergmann-Lebowitz model for classical system can be seen as a realization of such an operator for classical cases where the system is allowed with some rate/probability to go from a number of molecules N to a number of molecules M with the implicit creation (annihilation) of molecules.
[25] M. H. Peters, An extended Liouville equation for variable particle number systems, arXiv:physics/9809039 (1998).
[26] From this work it can be concluded that calculated time correlation functions will depend upon both the simulated ensemble and also the dynamical equations of motion. In particular, even if two dynamics generate the same equilibrium ensemble, they may in principle generate different time-correlation functions. This aspect has been treated in Ref. [27], where the influence of the thermostat on dynamical properties of a system is extensively treated.
Kossakowski-Lindblad equation [22]. The Kossakowski-Lindblad equation describes the most general case of quantum (non-linear) evolution of a system embedded in a certain environment. Due to the term Σ_j ([L_j ρ, L_j†] + [L_j, ρ L_j†]), Eq. 1 has the form of a rate equation (quantum jumps in the state of the system due to the action of the external environment), where [L_j ρ, L_j†] and [L_j, ρ L_j†] can be interpreted as transition rates between two events (see also note [24]). Under the condition of flux balance, Σ_j ([L_j ρ, L_j†] + [L_j, ρ L_j†]) = 0, the stationary solution for ρ(t), in case of a thermal bath (heat reservoir), is the density matrix of a canonical ensemble. The mathematical analysis of such concepts has been done extensively in Ref. [1], but, for our current focus, there is one important concept that we can transfer to the treatment of classical systems in a grand ensemble: the Liouville operator in presence of a reservoir takes the form given by Lindblad, and the specific action of the reservoir must be well defined through the definition of the Lindblad operators. For classical systems ρ is the probability distribution (the equivalent of the density matrix of quantum systems) and it is defined as ρ(X_N, N, t), where X_N is a point in the phase space and N is the total number of particles; moreover, the commutator [·, ·] becomes the Poisson bracket {·, ·}. The classical equivalent of Eq. 1 is the standard Liouville equation plus the corresponding classical term of the Lindblad operators. The latter depends on the specific definition of the reservoir and is thus model-dependent. Below we treat one specific model of reservoir which is general enough to be of relevance in molecular simulation studies in a grand ensemble.
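As a minimal numerical caricature of such a stochastic particle reservoir (the rates lam and mu below are illustrative and are not taken from the Bergmann-Lebowitz paper), the sketch tracks only the particle number N: the reservoir inserts particles at rate lam and removes each of the N present particles at rate mu. Flux balance then yields a Poisson, i.e. grand-canonical-like, stationary distribution with mean lam/mu:

```python
import math

# Pure jump master equation for the particle number N alone:
# dP(N)/dt = lam*P(N-1) + mu*(N+1)*P(N+1) - (lam + mu*N)*P(N),
# integrated with a simple Euler scheme on a truncated state space.
lam, mu = 3.0, 1.0
Nmax, dt, steps = 40, 0.001, 20000   # truncation, Euler step, total time t = 20

P = [0.0] * Nmax
P[0] = 1.0
for _ in range(steps):
    dP = [0.0] * Nmax
    for n in range(Nmax):
        dP[n] -= mu * n * P[n]                # removal jump N -> N-1
        if n + 1 < Nmax:
            dP[n] -= lam * P[n]               # insertion jump N -> N+1 (kept in box)
            dP[n] += mu * (n + 1) * P[n + 1]  # removal gain from N+1
        if n >= 1:
            dP[n] += lam * P[n - 1]           # insertion gain from N-1
    for n in range(Nmax):
        P[n] += dt * dP[n]

mean = sum(n * p for n, p in enumerate(P))
poisson = [math.exp(-lam / mu) * (lam / mu) ** n / math.factorial(n)
           for n in range(Nmax)]
err = max(abs(a - b) for a, b in zip(P, poisson))
print(mean, err)
```

The truncation Nmax only needs to be large enough that the Poisson tail is negligible; the jump rates are written so that probability is conserved exactly on the truncated state space.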
Here H(X_M) is the Hamiltonian of the system corresponding to the point X_M and {·, ·} are the standard Poisson brackets. According to Eq. 3, if one considers the number of particles as a stochastic variable not explicitly depending on time, the corresponding (classical) Kossakowski-Lindblad equation, or the generalized Liouville equation, can be expressed as:

∂ρ(X_N, N, t)/∂t = −{ρ(X_N, N, t), H(X_N)} + Σ_M ∫ dX_M [K_{NM}(X_N, X_M) ρ(X_M, M, t) − K_{MN}(X_M, X_N) ρ(X_N, N, t)]
III. EXTENSION OF LIOUVILLE'S THEOREM AND LIOUVILLE'S EQUATION TO THE CASE OF VARYING N

A. Liouville's Theorem

Liouville's theorem and the corresponding equation are key notions of statistical mechanics. The theorem expresses the concept that a dynamical system composed of N particles conserves its distribution ρ of positions and momenta (q, p) along the trajectory. This concept leads to the equation:

dρ/dt = ∂ρ/∂t + {ρ, H} = 0.   (5)
One possible (but not unique) solution of Eq. 5 is the canonical distribution, ρ = e^{−H/kT}/Z with Z = ∫ e^{−H/kT} dΓ, where H is the Hamiltonian of the system and Γ the available phase space. Often, in mathematical language, Liouville's theorem is formulated as follows: the Lebesgue measure is preserved under the dynamics. So far we have considered closed systems where N is constant; however, what happens in systems (in equilibrium) which exchange particles with external sources?
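The fixed-N statement can be illustrated numerically (a sketch, not from the paper): one velocity-Verlet step for a 1D harmonic oscillator is a linear symplectic map (q, p) → (q', p'), so its Jacobian determinant equals 1, i.e. phase-space area — the Lebesgue measure — is preserved step by step:

```python
# One velocity-Verlet step for H = p**2/(2m) + m*omega**2*q**2/2;
# parameters are illustrative.
m, omega, dt = 1.0, 2.0, 0.05

def step(q, p):
    """Advance (q, p) by one symplectic velocity-Verlet step."""
    p_half = p - 0.5 * dt * m * omega**2 * q
    q_new = q + dt * p_half / m
    p_new = p_half - 0.5 * dt * m * omega**2 * q_new
    return q_new, p_new

# The map is linear, so its Jacobian follows from its action on basis vectors.
q1, p1 = step(1.0, 0.0)   # image of (1, 0)
q2, p2 = step(0.0, 1.0)   # image of (0, 1)
det = q1 * p2 - q2 * p1   # should equal 1 up to floating-point rounding
print(det)
```

For a system with variable N no such single Jacobian can even be written down, which is exactly the obstacle discussed next.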
However, in a system where N is variable det(Q) cannot be calculated, since as the system evolves in time the sets x_0 and x_τ do not necessarily have the same dimension. At this point a natural question arises: on the basis of the results discussed in the first part of the paper, is there a generalized principle, similar to Liouville's equation for fixed N, which extends the concept of Eq. 7 to the case of variable N? As discussed before, if Eq. 4 is satisfied one would have:

∂ρ(X_M, M, t)/∂t = −{ρ(X_M, M, t), H(X_M)}   (11)

This equation is formally equivalent to the standard Liouville equation with a fixed number of particles M; however, this time ρ(X_M, M, t) and H(X_M) are instantaneously defined (w.r.t. M); similar considerations can be made regarding the definition of the Liouville operator (as will be discussed later). On the basis of such considerations, a generalized Liouville theorem extending the formulation for fixed N to the case of a system in contact with a reservoir of particles may be written in the following form: For systems in contact with a reservoir of particles, under the condition of statistical flux balance, the Lebesgue measure is conserved for each individual M. Here we implicitly make the conjecture that the Lebesgue measure cannot be defined globally, but only for single subsets of the phase space, each characterized by a specific number of particles M. This hypothesis is based on the fact that a straightforward definition of a global Lebesgue measure is not obvious. The fact that N is discrete and a change in N implies a discrete change of the phase-space dimensionality represents a major obstacle. However, I do not exclude that it may be possible to define a generalized space where somehow an invariant measure may be defined; this could represent an interesting research program.
Here we have instead proposed a simpler approach, a local definition, where "local" means a definition of a Lebesgue measure at a given M, that is, on canonical hyperplanes.
This approach would be equivalent to the formulation of the problem in terms of canonical hyperplanes as suggested by Peters [25]; this means that the condition applies when, after some time τ along a trajectory, one returns to the same number of molecules N from which the observation has started. In other terms, in Molecular Dynamics one should sort out instantaneous configurations of a trajectory characterized by the same number of molecules N and for each N then apply the standard Liouville theorem.

IV. PRACTICAL IMPLICATIONS FOR MOLECULAR SIMULATION: CALCULATION OF EQUILIBRIUM TIME CORRELATION FUNCTIONS

In this section we analyze the consequences of the results of the previous section for quantities of key importance in the physical description of any system: equilibrium time correlation functions. The general definition of the equilibrium time correlation function (e.g. in an NVT ensemble), C_AB(t), must be handled with care when used
in numerical simulations. In fact, in Eq. 13, when extended to the case of an open system, the propagator e^{iLt} must be substituted by the propagator characterized by the extended Liouville operator, which includes the action of the reservoir via the (additional) Lindblad operator. Let us write the extended Liouville operator as L_ext = L + L_Lindblad; thus its action on b(p, q) can be written as
An example is ⟨v(t)v(0)⟩, or other time correlation functions based on the properties of single molecules. In molecular dynamics the correlation functions introduced above require the calculation, in a time window ∆t, of the value of C_AB(t) for each molecule. Such a procedure, in an open system, requires an unambiguous and precise definition of "each molecule". In fact, differently from the case at fixed N, where "each molecule" is trivially defined, for open systems we would have to consider three cases:
• (i) molecules which remain in the system within the time window considered;
• (ii) molecules which initially are in the system and then enter the reservoir within the time window considered;
• (iii) molecules which entered, within the time window considered, from the reservoir into the system.
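The bookkeeping implied by the three cases can be sketched as follows; the function and the data below are hypothetical, and in this minimal version only molecules of case (i) contribute, while molecules of cases (ii) and (iii) are simply excluded from the average:

```python
def vacf(velocities, inside, nt):
    """Velocity autocorrelation over an open region.

    velocities[i][t]: velocity of molecule i at frame t;
    inside[i][t]: True if molecule i is in the open system at frame t.
    Returns C(t) = <v(0) v(t)> averaged over molecules that stay inside
    for the whole window of nt frames (case (i) only).
    """
    stay = [i for i in range(len(inside)) if all(inside[i][:nt])]
    if not stay:
        return [0.0] * nt
    return [sum(velocities[i][0] * velocities[i][t] for i in stay) / len(stay)
            for t in range(nt)]

# Synthetic data: two molecules stay for all three frames; the third
# leaves after frame 0 (case (ii)) and is therefore dropped.
v = [[1.0, 0.5, 0.25], [2.0, 1.0, 0.5], [3.0, 3.0, 3.0]]
ins = [[True, True, True], [True, True, True], [True, False, False]]
print(vacf(v, ins, 3))
```

A full treatment of cases (ii) and (iii) would require the action of the Lindblad part of the propagator on the departing/arriving molecules, as discussed in the text.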
For molecules of case (i) the standard treatment applies, because in this case the Lindblad operator (by definition of its action through K_MN and K_NM in the Bergmann-Lebowitz model, as discussed before) does not act directly on molecule i; that is, in terms of physics, the reservoir does not directly modify the microscopic status of molecule i, since the molecule remained in the system. As a consequence, in the definition of C_AB(t) only the Liouville operator (as for the case at fixed N) is involved. For case (ii) instead we have that the action e^{i L_Lindblad t} b_i(p_i, q_i), according to the specific definition of the reservoir of Bergmann and Lebowitz (i.e. according to the action of K_MN and K_NM), is similar to the action of an annihilation operator which
generalized Liouville operator or its Lindblad operator) is specified. It must be noticed that the derivation of an extended Liouville operator does not apply only to the calculation of equilibrium time correlation functions but also to the calculation, in open systems, of observables characterized by a response in time. An example is the Onsager-Kubo method developed by Ciccotti and coworkers [30-32], which, if applied to open systems, requires, similarly to the case of equilibrium time correlation functions, the definition of the extended Liouville operator as discussed here; in this case one shall consider an additional perturbation to the unperturbed propagator, e^{i(L + L_Lindblad)t}, which then gives rise to the situations (i), (ii) and (iii) discussed before.

V. DISCUSSION AND CONCLUSIONS

We have discussed the problem of the extension of Liouville's theorem for systems with a varying number of particles. Starting from the general formalism for open quantum systems we have proposed, through the work of Bergmann and Lebowitz, a generalization of Liouville's equation and Liouville's theorem for classical systems. Finally, we have discussed its relevance for molecular simulation studies in a grand ensemble, in particular for the calculation of equilibrium time correlation functions. A clear formulation of Liouville's theorem, Liouville's operator and Liouville's equation is mandatory for justifying the validity of numerical calculations. In any case, as the premise of the paper states, the intention of this work is to provide only the basis of a discussion about this issue by discussing (some of) its relevant aspects. Regarding concrete examples, we have discussed only one specific theoretical model of reservoir, chosen because in its current formulation it is already of practical utility for molecular simulation.
However, as reported in the appendix, there are other models which in their current form are not practical for simulation but may represent the basis for designing new reservoirs; our hope is that this paper may inspire further development in the field.

ACKNOWLEDGMENT

I would like to thank Giovanni Ciccotti and Carsten Hartmann for countless discussions about the problem of Liouville's theorem in open systems, and Matej Praprotnik for a critical reading of the manuscript and useful suggestions. This research has been funded by the Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114.

VI. APPENDIX

In the discussion above we have qualitatively described the action of the Lindblad operator in the Bergmann-Lebowitz model as similar to the action of an annihilation or creation operator. The question of whether or not one can explicitly write the action of the reservoir of Bergmann and Lebowitz in terms of annihilation or creation operators may be, within the framework of molecular simulation, a question of practical convenience. For example, in another seminal paper about open systems, Emch and Sewell [33] derive a formally elegant Liouville-like equation based on the explicit use of well defined operators. The interesting part, related to the annihilation or creation operator, is in the definition of the interaction Hamiltonian, Eq. (18).
[22] A. Kossakowski, Rep. Math. Phys. 3, 247 (1972).
In such a specific case the authors are thinking of a quantum system; however, formally, the problem is equivalent to that of a classical system (as considered by us). As stated before, for molecular simulation the formulation of Emch and Sewell, as it is currently done, would not be practical for one main reason, that is, the requirement of an explicit definition of V(x, y). In fact, the model of Emch and Sewell is based on the idea of projector operators, where the underlying microscopic character of the reservoir is projected out; however, it implies that the reservoir has an explicit microscopic evolution.
G. Lindblad, Comm. Math. Phys. 48, 119 (1976).
P. G. Bergmann and J. L. Lebowitz, Phys. Rev. 99, 578 (1955).
J. L. Lebowitz and P. G. Bergmann, Annals of Physics 1, 1 (1957).
J. L. Lebowitz and A. Shimony, Phys. Rev. 128, 1945 (1962).
A. Agarwal, J. Zhu, C. Hartmann, H. Wang and L. Delle Site, New J. Phys. 17, 083042 (2015).
A. Agarwal and L. Delle Site, J. Chem. Phys. 143, 094102 (2015).
O. Penrose, Rep. Prog. Phys. 42, 1938 (1979).
M. Tuckerman, Statistical Mechanics: Theory and Simulation, Oxford University Press, New York, 2010.
H. Eslami and F. Müller-Plathe, J. Comput. Chem. 28, 1763 (2007).
A. Rivas and S. F. Huelga, Open Quantum Systems: An Introduction, SpringerBriefs in Physics, Springer, 2012.
J. Zavadlav, R. Podgornik and M. Praprotnik, J. Chem. Theory Comput. 11, 5035 (2015).
A. C. Fogarty, R. Potestio and K. Kremer, J. Chem. Phys. 142, 195101 (2015).
In the AdResS method used in Refs. [11, 12] the regions characterized by different molecular resolution are coupled by a space-dependent interpolation between the atomistic and coarse-grained forces acting on the molecules. I have shown that for such a scheme a global Hamiltonian cannot exist and thus the very definition of a global Liouville operator is not possible (see Ref. [14]). In such a case it is mandatory to define the Liouville operator for the atomistic region where quantities are calculated.
L. Delle Site, Phys. Rev. E 76, 047701 (2007).
R. Delgado-Buscalioni, J. Sablic and M. Praprotnik, Eur. Phys. J. Spec. Top. 224, 2321 (2015).
H. Wang, C. Hartmann, C. Schütte and L. Delle Site, Phys. Rev. X 3, 011018 (2013).
R. Potestio, S. Fritsch, P. Espanol, R. Delgado-Buscalioni, K. Kremer, R. Everaers and D. Donadio, Phys. Rev. Lett. 110, 108301 (2013).
J. A. Wagoner and V. Pande, J. Chem. Phys. 139, 234114 (2013).
B. Ensing, S. O. Nielsen, P. B. Moore, K. L. Klein and M. Parrinello, J. Chem. Theory Comput. 3, 1100 (2007).
A. Heyden and D. G. Truhlar, J. Chem. Theory Comput. 4, 217 (2008).
H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems, Oxford University Press, Oxford, 2002.
B. Leimkuhler, E. Noorizadeh and F. Theil, J. Stat. Phys. 135, 261 (2009); see also: Molecular Dynamics: with Deterministic and Stochastic Numerical Methods, B. Leimkuhler and C. Matthews, Springer, 2015.
R. Delgado-Buscalioni and P. V. Coveney, Phys. Rev. E 67, 046704 (2003).
R. Delgado-Buscalioni, K. Kremer and M. Praprotnik, J. Chem. Phys. 128, 114110 (2008); ibid. J. Chem. Phys. 131, 244107 (2009).
G. Ciccotti and G. Jacucci, Phys. Rev. Lett. 35, 789 (1975).
S. Orlandini, S. Meloni and G. Ciccotti, Phys. Chem. Chem. Phys. 13, 13177 (2011).
H. Wang, Ch. Schütte, G. Ciccotti and L. Delle Site, J. Chem. Theory Comput. 10, 1376 (2014).
G. G. Emch and G. L. Sewell, J. Math. Phys. 9, 946 (1968).
arXiv:cond-mat/0305464; DOI: 10.1063/1.1943539
Energy pumping in a quantum nanoelectromechanical system
20 May 2003
T. Nord and L. Y. Gorelik
Department of Applied Physics, Chalmers University of Technology and Göteborg University, SE-412 96 Göteborg, Sweden
Fully quantized mechanical motion of a single-level quantum dot coupled to two voltage biased electronic leads is studied. It is found that there are two different regimes depending on the applied voltage. If the bias voltage is below a certain threshold (which depends on the energy of the vibrational quanta) the mechanical subsystem is characterized by a low level of excitation. Above a threshold the energy accumulated in the mechanical degree of freedom dramatically increases. The distribution function for the energy level population and the current through the system in this regime is calculated.
During the past few years experimental methods of physics have seen an advancing capability to manufacture smaller and smaller structures and devices. This has led to many new interesting investigations of nanoscale physics. Examples include, for instance, observation of the Kondo effect in single-atom junctions [1], manufacturing of single-molecule transistors [2], and so on. There has also been a great interest in the promising field of molecular electronics [3]. One of the main features of conducting nanoscale composite systems is their susceptibility to significant mechanical deformations. This results from the fact that on the nanoscale the mechanical forces controlling the structure of the system are of the same order of magnitude as the capacitive electrostatic forces governed by charge distributions. This circumstance is of the utmost importance in the so-called electromechanical single-electron transistor (EM-SET), which has been in the focus of recent research. The EM-SET is basically a double-junction system where an additional (mechanical) degree of freedom, describing the relative position of the central island, significantly influences the electronic transport. Experimental work in relation to EM-SET structures ranges from the macroscopic [5] to the micrometer scale [6,7,8] and down to the nanometer scale [9]. Various aspects of electronic transport in such systems have been theoretically investigated in a series of articles [11,12,13,14,15,16,17,18,19,20].
In Refs. 4 and 15 it was, among other things, shown that coupling the mechanical degree of freedom of an EM-SET to the nonequilibrium bath of electrons constituted by the biased leads can lead to dynamical self-excitations of the mechanical subsystem and, as a result, bring the EM-SET to the shuttle regime of charge transfer. This phenomenon is usually referred to as a shuttle instability. In these papers the grain dynamics is treated classically, and the key issue is that the charge of the grain, q(t), is correlated with its velocity, ẋ(t), in such a way that the time average q(t)ẋ(t) ≠ 0.
Decreasing the size of the central island in the EM-SET structure to the nanoscale results in the quantization of its mechanical motion. Charge transfer in this regime was studied theoretically in Ref. 10. However, the strong additional dissipation in the mechanical subsystem assumed in that paper keeps the mechanical subsystem in the vicinity of its ground state and prevents the development of the mechanical instability. The aim of our paper is to investigate the behavior of the EM-SET system in the quantum regime, when its interaction with the external thermodynamic environment, which generates additional dissipation processes, can be partly ignored in such a manner that the mechanical instability becomes possible. We will show that in this case, at relatively low bias voltages, intrinsic dissipation processes bring the mechanical subsystem to the vicinity of the ground state. But if the bias voltage exceeds some threshold value, the energy of the mechanical subsystem, initially located in the vicinity of the ground state, starts to increase exponentially. We have found that intrinsic processes alone saturate this energy growth at some level of excitation. The distribution function for the energy level population and the current through the system in this regime are calculated.
We will consider a model EM-SET structure consisting of a one-level quantum dot situated between two leads (see Fig. 1). To describe such a system we use the Hamiltonian

H = Σ_{k,α} E_{k,α} a†_{k,α} a_{k,α} + (E_0 − D X̂/x_0) c†c + P̂²/(2m) + (1/2) mω_0² X̂² + Σ_{k,α} T_{k,α}(X̂) [a†_{k,α} c + c† a_{k,α}].   (1)
The first term describes electronic states with energies E_{k,α}, where a†_{k,α} (a_{k,α}) are creation (annihilation) operators for these noninteracting electrons with momentum k in the left (α = L) or right (α = R) lead. The second term describes the interaction of the electronic level on the dot with the electric field, so that c† (c) is the creation (annihilation) operator for the dot-level electrons and E_0 is the dot energy level. The scalar D represents the strength of the Coulomb force acting on a charged grain, X̂ is the position operator, and x_0 = √(ħ/mω_0) is the harmonic oscillator length scale for an oscillator with mass m and angular frequency ω_0. The third and fourth terms describe the center-of-mass motion of the dot in a harmonic oscillator potential, so that P̂ is the center-of-mass momentum operator. The last term is the tunneling interaction between the lead states and the dot level, and T_{k,α}(X̂) is the tunneling coupling strength. We will consider the case when the tunneling coupling depends exponentially on the position operator X̂, i.e. T_{k,R}(X̂) = T_R exp{X̂/Λ} and T_{k,L}(X̂) = T_L exp{−X̂/Λ}, where T_R and T_L are constants and Λ is the tunneling length.
To introduce a connection to the quantized vibrational states of the oscillator we perform a unitary transformation of the Hamiltonian (1), U H U† = H̃, where U = exp[(i/ħ) P̂ d_0 c†c]. In this paper we consider the situation when H̃ has the most symmetric form:
H̃ = Σ_{k,α} E_{k,α} a†_{k,α} a_{k,α} + Ẽ_0 c†c + ħω_0 (b†b + 1/2) + T_0 Σ_k [a†_{k,R} c e^{x_− b + x_+ b†} + a†_{k,L} c e^{−x_+ b − x_− b†} + h.c.].   (2)
Here b† (b) is a bosonic creation (annihilation) operator for the vibronic degree of freedom, and the dimensionless parameters x_± = (1/√2)(x_0/Λ ± d_0/x_0), where d_0 = D/(x_0 mω_0²), characterize the strength of the electromechanical coupling. Furthermore, T_0 is the renormalized tunneling coupling constant, and Ẽ_0 = E_0 − Dd_0/x_0 + mω_0²d_0²/2 = E_0 − Dd_0/(2x_0) is the shifted dot level. For simplicity, but without loss of generality, we choose Ẽ_0 equal to the chemical potential of the leads at zero bias voltage.
First let us study the situation when the mechanical subsystem is characterized by a low level of excitation. We will consider the case of small electromechanical coupling. This means that the dimensionless parameters x_± ≪ 1 and that only elastic electronic transitions and transitions accompanied by emission or absorption of a single vibronic quantum (single-vibronic processes) are important. If the applied voltage is smaller than 2ħω_0/e and the temperature is equal to zero, the six allowed transitions of this type are the ones described in Fig. 1a.
Here we see that only elastic tunneling processes and tunneling processes in which one vibronic quantum is absorbed by the tunneling electron are allowed, and as a result the rate equation for the distribution function of the energy level population P(n, t) has the form:

Γ⁻¹ ∂_t P(n, t) = P(n, t) + (x_+² + x_−²)(n + 1) P(n + 1, t) − [1 + n(x_+² + x_−²)] P(n, t),

where Γ = 2πT_0²ν/ħ and ν is the density of states in the leads.
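This rate equation can be integrated numerically; a sketch with illustrative parameter values (not taken from the paper) shows the population collapsing onto the ground state, since below threshold only absorption of vibronic quanta is allowed:

```python
g = 0.05                          # x_+^2 + x_-^2, illustrative small coupling
N, dt, steps = 30, 0.01, 60000    # time in units of 1/Gamma; total tau = 600

P = [0.0] * N
P[5] = 1.0                        # start in the fifth excited vibronic state
for _ in range(steps):
    dP = [0.0] * N
    for n in range(N):
        dP[n] -= g * n * P[n]                  # absorption out of level n
        if n + 1 < N:
            dP[n] += g * (n + 1) * P[n + 1]    # absorption gain from level n+1
    for n in range(N):
        P[n] += dt * dP[n]

E = sum(n * p for n, p in enumerate(P))
print(P[0], E)   # ground-state weight and residual mean excitation
```

The elastic in- and out-scattering terms of the rate equation cancel and are therefore omitted; the mean excitation decays as E(t) = E(0) e^{−gΓt}.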
It is straightforward to solve these equations and find that the solution exponentially fast approaches the stable solution P(0) = 1 and P(n) = 0 for all n > 0. As a result, the dimensionless average extra energy excited in the vibronic subsystem,

E(t) = Σ_{n=0}^∞ n P(n, t),   (3)
goes to 0. If the applied bias voltage is increased above the threshold value V_c = 2ħω_0/e we instead get the allowed transitions described in Fig. 1b, i.e. two absorption processes have changed into emission processes where the energy quantum ħω_0 is transferred to the vibronic degree of freedom. These transitions lead to the following equation for P(n):

Γ⁻¹ ∂_t P(n, t) = −[1 + n x_−² + (n + 1) x_+²] P(n, t) + x_−² (n + 1) P(n + 1, t) + P(n, t) + x_+² n P(n − 1, t).   (4)
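As a sanity check (with illustrative values of x_±² that we choose, and with the mutually cancelling elastic terms dropped), the master equation above can be integrated by a simple Euler scheme; its mean excitation then follows the closed-form growth law E(t) = x_+²/(x_+² − x_−²) [e^{Γ(x_+² − x_−²)t} − 1]:

```python
import math

# Euler integration of the above-threshold master equation with absorption
# rate n*xm2 and emission rate (n+1)*xp2 (time measured in units of 1/Gamma).
xp2, xm2 = 0.04, 0.01               # x_+^2 and x_-^2; xp2 > xm2 gives pumping
N, dt, steps = 120, 0.002, 10000    # truncation, step, total time tau = 20

P = [0.0] * N
P[0] = 1.0
for _ in range(steps):
    dP = [0.0] * N
    for n in range(N):
        dP[n] -= n * xm2 * P[n]                   # absorption loss
        if n + 1 < N:
            dP[n] -= (n + 1) * xp2 * P[n]         # emission loss (kept in box)
            dP[n] += (n + 1) * xm2 * P[n + 1]     # absorption gain from n+1
        if n >= 1:
            dP[n] += n * xp2 * P[n - 1]           # emission gain from n-1
    for n in range(N):
        P[n] += dt * dP[n]

tau = dt * steps
E_num = sum(n * p for n, p in enumerate(P))
E_exact = xp2 / (xp2 - xm2) * (math.exp((xp2 - xm2) * tau) - 1.0)
print(E_num, E_exact)
```

Taking the first moment of the master equation gives dE/dτ = x_+²(E + 1) − x_−²E, whose solution is exactly the exponential law being checked.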
One can find from this equation that the time evolution of the excited energy is given by the formula:

E(t) = x_+²/(x_+² − x_−²) [e^{Γ(x_+² − x_−²)t} − 1],   (5)

i.e., energy is continuously pumped into the mechanical subsystem, which is strong evidence that the low-excitation regime is unstable if the bias voltage exceeds the critical value V_c. Furthermore, it is necessary to remark here that for this case we thus have a (linear) increase in the energy as a function of time even when x_+² approaches x_−². As the excitation of the vibronic subsystem increases, multi-vibronic processes become important. They give rise to an additional dissipation which saturates the energy growth induced by the single-vibronic processes. As a result the system comes to a stationary regime which is characterized by a significant level of excitation of the vibronic subsystem. To demonstrate this we will now expand our analysis by taking into account electronic transitions accompanied by the emission or absorption of two vibronic quanta (two-vibronic processes). To describe such transitions one has to take into account second-order terms in b† and b in the tunneling part of the Hamiltonian (2). As illustrated in Fig. 2 these terms will generate four processes in which two vibrational quanta are absorbed by the electron during the tunneling event. There is also a renormalization of the elastic channel coming from the inclusion of these terms. Now the equation for the distribution function of the energy level population has the form:

Γ⁻¹ ∂_t P(n, t) = n P(n − 1, t) − [ǫn² + (α − ǫ + 1)n + 1] P(n, t) + α(n + 1) P(n + 1, t) + ǫ(n + 1)(n + 2) P(n + 2, t),   (6)

where we have introduced the constants ǫ = (x_+⁴ + x_−⁴)/(4x_+²) and α = x_−²/x_+². To find the stationary solution of this equation we introduce the generating function:
P(z) = Σ_{n=0}^∞ z^n P(n),
where z is a complex number inside the unit circle. Rewriting Eq. 6 we find the equation for P(z):

ǫ(z + 1) ∂_z² P(z) + (α − z) ∂_z P(z) − P(z) = 0.
The solution to this equation is
P(z) = e − z 1 dz ′ α−ǫ−z ′ ǫ(z ′ +1) * z z0 dz ′ e z ′ 1 dz ′′ α−ǫ−z ′′ ǫ(z ′′ +1) C 1 ǫ(z ′ + 1) ,
where C_1 and z_0 are constants. Since the probabilities P(n) are positive and normalized, the sum Σ_{n=0}^∞ (−1)^n P(n) = P(z = −1) converges absolutely. This is true only for z_0 = −1. The second constant C_1 can be determined from the normalization condition P(z = 1) = 1 to be

C_1 = ǫ 2^γ / ∫_{−1}^1 dx e^{(1−x)/ǫ} (x + 1)^{γ−1},
where we have introduced the constant γ = (α − ǫ + 1)/ǫ. Therefore the final expression for P(z) is

P(z) = (C_1/ǫ) e^{(z−1)/ǫ} (z + 1)^{−γ} ∫_{−1}^z dz′ e^{(1−z′)/ǫ} (z′ + 1)^{γ−1}.   (7)
We can now calculate the average energy excited in the harmonic oscillator, which is just ∂_z P(z) calculated at z = 1:

E = (1/2ǫ)(2 + C_1 − ǫγ).   (8)
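As a rough numerical cross-check (with illustrative values of ǫ and α that we pick directly, not derived from physical x_±), the sketch below solves the truncated stationary master equation (6) as a linear system and compares the mean excitation with the closed form (8), evaluating C_1 by Simpson quadrature:

```python
import math

eps, alpha = 0.2, 0.5
gamma = (alpha - eps + 1.0) / eps
N = 120  # truncation of the level ladder

# Generator A with dP/dt = A P, built from the jump rates read off Eq. (6):
# n -> n+1 at rate (n+1), n -> n-1 at rate alpha*n, n -> n-2 at rate eps*n*(n-1).
A = [[0.0] * N for _ in range(N)]
for n in range(N):
    if n + 1 < N:
        A[n + 1][n] += n + 1
        A[n][n] -= n + 1
    if n >= 1:
        A[n - 1][n] += alpha * n
        A[n][n] -= alpha * n
    if n >= 2:
        A[n - 2][n] += eps * n * (n - 1)
        A[n][n] -= eps * n * (n - 1)

# Replace the last (redundant) equation by the normalization sum_n P(n) = 1.
A[N - 1] = [1.0] * N
b = [0.0] * (N - 1) + [1.0]

# Plain Gaussian elimination with partial pivoting.
for col in range(N):
    piv = max(range(col, N), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, N):
        f = A[r][col] / A[col][col]
        for c in range(col, N):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
P = [0.0] * N
for r in range(N - 1, -1, -1):
    P[r] = (b[r] - sum(A[r][c] * P[c] for c in range(r + 1, N))) / A[r][r]

E_num = sum(n * p for n, p in enumerate(P))

# C_1 = eps * 2**gamma / integral_{-1}^{1} e^{(1-x)/eps} (x+1)^{gamma-1} dx.
def f(x):
    return math.exp((1.0 - x) / eps) * (x + 1.0) ** (gamma - 1.0)

M = 20000                      # even number of Simpson intervals
h = 2.0 / M
S = f(-1.0) + f(1.0)
for i in range(1, M):
    S += (4.0 if i % 2 == 1 else 2.0) * f(-1.0 + i * h)
integral = S * h / 3.0
C1 = eps * 2.0 ** gamma / integral
E_formula = (2.0 + C1 - eps * gamma) / (2.0 * eps)
print(E_num, E_formula)
```

The agreement of the two numbers confirms that the generating-function solution (7) and the energy (8) describe the stationary state of the master equation (6).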
To see how the energy pumped into the harmonic oscillator affects the charge transport, we calculate the current I through the system in units of eΓ. For voltages below V_c the current is mediated only by the elastic channel and is thus I = 1/2.
For voltages in the range 2ħω_0/e < V < 4ħω_0/e the current can be calculated to leading order in ǫ as

I = (1/4)(x_−⁴ − x_+⁴) + x_+² x_−² P″(z = 1) + (x_+ − x_−)² P′(z = 1) + 1.   (9)

In Fig. 3 we have chosen a set of numerical values and plotted (solid line) the calculated current as a function of the bias voltage. For comparison we have also plotted (dashed line) the current as given in the high dissipation limit where the harmonic oscillator goes to the ground state between tunneling events. It is clear that the current in the regime characterized by a high level of excitation is much greater than the one in the regime of low level of excitation.
In conclusion, we have studied fully quantized mechanical motion of a single-level quantum dot coupled to two voltage biased electronic leads. We have shown that above a certain threshold voltage the energy accumulated in the mechanical subsystem increases dramatically. We have also shown that second order inelastic tunneling events are enough to stabilize this pumping of energy. Finally the current through the system was calculated and it was found that the development of the mechanical instability is accompanied by a dramatic increase in the current.
The authors would like to thank Robert Shekhter, Jari Kinaret and Anatoli Kadigrobov for valuable discussions related to this manuscript.

* [email protected]
FIG. 1: (a) Model system consisting of a one-level quantum dot placed between two leads. The level of the dot equals the chemical potential of the leads and a bias voltage V is applied between the leads. The center-of-mass motion of the dot takes place in a harmonic oscillator potential with vibrational quantum ħω₀. The applied bias voltage is such that eV/2 < ħω₀. (b) Same as (a) but with the applied bias voltage larger than 2ħω₀/e.

FIG. 2: Illustration of the second-order case where elastic tunneling and inelastic tunneling exchanging two or fewer vibrational quanta are included. The level of the dot is equal to the chemical potential and the bias voltage is set so that 2ħω₀/e < V < 4ħω₀/e.

FIG. 3: The current plotted as a function of the bias voltage (solid line).
Quantile-based Bias Correction and Uncertainty Quantification of Extreme Event Attribution Statements

Soyoung Jeon ([email protected]), Christopher J. Paciorek ([email protected]), Michael F. Wehner ([email protected])

Earth Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California, USA
Department of Statistics, University of California, Berkeley, California, USA
Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California, USA

12 Feb 2016 · arXiv:1602.04139 · doi:10.1016/j.wace.2016.02.001

Abstract. Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
Introduction
The summer of 2011 was extremely hot in Texas and Oklahoma, producing a record of 30.26°C for the average June-July-August (JJA) temperature (3.24°C above the 1961-1990 mean) as measured in the CRU observational dataset (CRU TS 3.21, Harris et al. (2014)). In a previous study of the 2011 Texas heat wave by Hoerling et al. (2013), a major factor contributing to the magnitude of the 2011 heat wave was the severe drought over Texas resulting from the La Niña phase of the ocean state. However, the analysis found a substantial anthropogenic increase in the chance of an event of this magnitude. As in most mid-latitude land regions, the probability of extreme summer heat in this region has increased due to human-induced climate change (Min et al. 2013). However, as Stone et al. (2013) note, depending on the spatial extent of the region analyzed, observed summer warming is low in Texas in 2011 and traceable to the so-called "warming hole" (Meehl et al. 2012).
Extreme event attribution analyses attempt to characterize whether and how the probability of an extreme event has changed because of external forcing, usually anthropogenic, of the climate system. As with traditional detection and attribution of trends in climate variables (Bindoff et al. 2013), climate models must play an important role in the methodology due to the absence of extremely long observational records. The fraction of attributable risk (FAR) and the risk ratio (RR) are commonly used measures that quantify this potential human influence (Palmer 1999; Allen 2003; Stott et al. 2004; Jaeger et al. 2008; Pall et al. 2011; Wolski et al. 2014). Following the notation used in Stott et al. (2004), let p_A be the probability in a simulation using all external (anthropogenic plus natural) forcings of an event of similar magnitude, location and season to the actual event, and p_C be the probability of such an event under natural forcings. The FAR is defined as FAR = 1 − p_C/p_A, while the RR is defined as RR = p_A/p_C, with each quantity a simple mathematical transformation of the other. We note that the commonly used term "risk ratio" is more precisely a "probability ratio" (Fischer and Knutti 2015), but we will stick to the RR nomenclature in this study, in part because RR is well-established terminology.
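The algebraic relationship between the two measures can be made concrete in a few lines of Python. The probabilities below are hypothetical, chosen only to illustrate the definitions, not values estimated in this paper:

```python
# Minimal sketch of the two attribution measures; p_A and p_C are
# hypothetical event probabilities, not estimates from this study.

def risk_ratio(p_A, p_C):
    """RR = p_A / p_C; infinite if the event cannot occur under natural forcings."""
    return float('inf') if p_C == 0 else p_A / p_C

def far(p_A, p_C):
    """FAR = 1 - p_C / p_A, equivalently 1 - 1/RR."""
    return 1.0 - p_C / p_A

p_A, p_C = 0.10, 0.02
rr = risk_ratio(p_A, p_C)
# FAR and RR are simple transforms of one another:
assert abs(far(p_A, p_C) - (1.0 - 1.0 / rr)) < 1e-12
```

Note that RR is unbounded above, which is why later sections need special care when the estimated p_C is zero.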
In the seminal study of the 2003 European heat wave by Stott et al. (2004), their climate model did remarkably well in simulating both European mean summer temperature and its interannual standard deviation. However, this is not generally the case for the entirety of available climate model outputs nor for the wide range of extreme events of current interest (Peterson et al. 2012, 2013; Herring et al. 2014). Hence there is a need to correct model output, particularly in the tail of its distribution, to more realistically estimate both p_A and p_C. Quantile-based mapping is often used to reduce such climate model biases in statistical downscaling studies of future climate change projections. Such methods match quantiles of climate model outputs to observed data for monthly GCM temperature and precipitation (Wood et al. 2004). For instance, quantile-based corrections to the transfer function between the coarse mesh of the global models and the finer downscaled mesh have been obtained by using cumulative distribution functions (CDFs) to match percentiles between the model outputs and observations over a specified base period (Maurer and Hidalgo 2008). Li et al. (2010) proposed an adjustment of the traditional quantile matching method (Panofsky and Brier 1968) to account for time-dependent changes in the distribution of the future climate and suggested that the quantile-matching method is a simple and straightforward method for reducing the scale differences between simulations and observations, for the tails of the distribution as well. The quantile mapping approach of Li et al. (2010) has been previously used to empirically estimate annual and decadal maximum daily precipitation in an attribution study of an early season blizzard in western South Dakota (Edwards et al. 2014).
This paper is concerned with developing a formal statistical methodology using extreme value analysis combined with quantile mapping to adjust for model biases in event attribution analyses. We apply the methodology to the 2011 central US heatwave as a case study, using an ensemble of climate model simulations. In Section 2, we describe the observed and simulated data for the central US heatwave analysis. Section 3 presents our statistical methodology, describing the use of extreme value methods combined with the quantile bias correction to estimate the risk ratio. We describe several approaches for estimating uncertainty in the risk ratio, focusing on the use of a likelihood ratio-based confidence interval that provides a one-sided interval even when the estimated risk ratio is infinity. In Section 4 we present results from using the methodology for event attribution for the central US heatwave, showing strong evidence of anthropogenic influence.
Case Study: Summer 2011 Central USA Heatwave
For a representative case study of extreme temperature attribution, we define a central United States region bordered by 90°W to 105°W in longitude and 25°N to 45°N in latitude, chosen to encompass the Texas and Oklahoma heatwave that occurred in summer 2011 (see Figure 1). For this region, we calculated summer (June, July, August [JJA]) average temperature anomalies for the time period 1901-2012 by averaging daily maximum temperatures for grid cells falling within the study region. Anomalies are computed using 1961-1990 as the reference period.
The observational data in this study are obtained from the gridded data product (CRU TS 3.21, Climatic Research Unit Time Series) available on a 0.5° × 0.5° grid provided by the Climatic Research Unit (Harris et al. 2014). This dataset provides monthly average daily maximum surface air temperature anomalies. Similarly, monthly averaged daily maximum surface air temperatures were obtained from the CMIP5 database through the Earth System Grid Federation (ESGF) archive. For both the observations and model output, spatial averages over the cells covering the land surface of the region were calculated, resulting in simple 1-dimensional time series. In this study, we use a single climate model, the fourth version of the Community Climate System Model (CCSM4), with a resolution of 1.25° × 0.94°. To more fully explore the structural uncertainty in event attribution statements, additional models would need to be included in the analysis. While that topic is outside the scope of this paper, our methodology is also relevant for analyses that use multiple models that will each have their own biases.
The CCSM4 ensemble consists of multiple simulations, each initialized from different times of a control run; we treat the ensemble members as independent realizations of the model's possible climate state. For the actual scenario with all forcings included, we use an ensemble of five members, constructed by concatenating the period 1901-2005 from the CMIP5 "historical" forcings experiment and the period 2006-2012 from the matching RCP8.5 emissions scenario experiment. As a representation of a world without human interference on the climate system, we construct a counterfactual scenario by producing an ensemble of twelve 100-year segments drawn from the preindustrial control run. In this scenario, greenhouse gases, aerosols and stratospheric ozone concentrations are set at pre-industrial levels, but other external natural forcings such as solar variability and volcanoes are not included. We use this counterfactual scenario as a proxy for the natural climate system without any external forcing factors.
An important consideration in event attribution analyses is whether the climate model(s) reasonably represent the magnitudes and frequencies of the event of interest (Christidis et al. 2013). Figure 2 shows that summer temperatures vary more in the CCSM4 output than in the observations. The record observed extreme value in our central US region in 2011 was 2.467°C above the 1961-1990 average (represented by the large black dot); even this extreme is somewhat lower than the observed values over just the states of Texas and Oklahoma. However, this value is not particularly rare in either model scenario dataset. Due to this scale mismatch in temperature variability, the climate model incorrectly estimates the probabilities of extreme events of this magnitude in both scenarios. In light of this model bias, a quantile mapping procedure to scale the extreme values of either the model or the observations to the other is warranted to more consistently relate the model's risk ratio to the real world. More precisely, we define the event according to observations, even in the presence of observational error, and calibrate the model to the observations with the quantile-based method described in this paper. The methodology presented in Section 3 implements such a scaling by first estimating the probability, p̂_O, of reaching or exceeding the actual event magnitude from the observations. Then, the magnitude, ẑ_A, of an event with the same probability, p̂_O (= p̂_A), is estimated from the actual scenario of the model. The risk ratio can then be estimated from the probability, p̂_C, of an event of magnitude ẑ_A from the counterfactual scenario of the model as R̂R = p̂_A/p̂_C. Implicit in this estimation of RR is an assumption that the asymptotic behavior of the all-forcings model ensemble is similar to the observations.
Indeed, it is not clear how to validate that assumption given the limited observational data availability and the rarity of the events of interest in attribution studies. However, it is clear that errors from estimating RR directly from the model without a quantile mapping correction would be larger, because probability estimates would be drawn from a different part of the distribution. In this case study, such probabilities would not be representative of the tail of the distribution. Furthermore, in other cases, the model may underestimate variability, and the probability in the model of an event of the actual magnitude may be zero due to the boundedness of the distribution function. We return to the implications of bounded distributions for uncertainty estimates in Section 3. There is a risk that bias correction could mask serious model errors in simulating the processes responsible for the extreme event in question. This risk is also present in more commonly-used bias correction techniques such as the use of anomalies based on subtracting off or dividing by a reference value. In the present example, a complete assessment of the robustness of the results would also include analysis of CCSM4's ability to reproduce the type of large-scale meteorological patterns leading to central US heatwaves as well as its simulation of ENSO.
Methodology
a. Quantile Bias Correction
Here we describe a quantile mapping methodology to adjust for the difference in scales between observations and model outputs; we call this methodology quantile bias correction. The methodology seeks to estimate adjusted probabilities p_A and p_C and the corresponding RR. From this point forward, since we will work exclusively with the adjusted probabilities, we will simply use p_A and p_C to refer to the adjusted probabilities rather than introduce additional notation to distinguish adjusted and unadjusted probabilities. The steps of the method are as follows:
(1) observe some extreme event, e.g., the extreme value of 2.467°C for the 2011 central US heatwave, and estimate the probability, p̂_O, of the observed event using appropriate extreme value statistical methods,
(2) use extreme value methods applied to the model output under the actual scenario to estimate the magnitude, ẑ_A, associated with the probability p̂_O, thereby defining p̂_A = p̂_O,

(3) use extreme value methods applied to the model output under the counterfactual scenario to estimate the probability p̂_C of exceeding the value ẑ_A, and

(4) calculate the estimated risk ratio R̂R = p̂_A/p̂_C.
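The four steps above can be sketched numerically. For brevity, each dataset here is summarized by a hypothetical GEV fit with invented parameters (the paper instead fits a point-process model by maximum likelihood); `gev_exceed_prob` and `gev_quantile` are the standard closed-form GEV tail formulas for ξ ≠ 0:

```python
import math

def gev_exceed_prob(z, mu, sigma, xi):
    """P(Z > z) under a GEV(mu, sigma, xi) distribution (xi != 0 assumed)."""
    t = 1.0 + xi * (z - mu) / sigma
    if t <= 0.0:                      # z lies outside the support
        return 0.0 if xi < 0.0 else 1.0
    return 1.0 - math.exp(-t ** (-1.0 / xi))

def gev_quantile(p_exceed, mu, sigma, xi):
    """Magnitude exceeded with probability p_exceed (inverts the CDF)."""
    return mu + (sigma / xi) * ((-math.log(1.0 - p_exceed)) ** (-xi) - 1.0)

# Invented parameter values: the model runs warmer and more variable than
# the observations, mimicking the mismatch shown in Figure 2.
obs      = dict(mu=0.0,  sigma=0.6, xi=-0.1)   # observations
actual   = dict(mu=0.2,  sigma=1.0, xi=-0.1)   # all-forcings scenario
counterf = dict(mu=-0.3, sigma=1.0, xi=-0.1)   # counterfactual scenario

z_obs = 2.467                                  # observed 2011 anomaly (deg C)
p_O = gev_exceed_prob(z_obs, **obs)            # step 1: rarity in observations
z_A = gev_quantile(p_O, **actual)              # step 2: equally rare model magnitude
p_C = gev_exceed_prob(z_A, **counterf)         # step 3: natural-world probability
RR = p_O / p_C if p_C > 0.0 else float('inf')  # step 4
```

With these invented parameters, the wider spread of the model pushes the equal-rarity magnitude z_A well above the observed 2.467°C, which is exactly the adjustment the quantile mapping is meant to provide.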
Step 2 is the critical bias adjustment, where the method adjusts the magnitude of the extreme event considered in the model output to be of the same rarity in the model under the all forcings scenario as the actual extreme event is in the observations. This correction in the tail of the distribution is likely to be very different than a simple adjustment of the model mean and/or variance and more appropriate to event attribution studies. Figure 3 illustrates the quantile bias correction method and demonstrates the steps with cumulative distribution functions for the 2011 central US heatwave analysis.
b. Using Extreme Value Statistics to Estimate Event Probabilities
The probabilities, p_O and p_C, can be estimated using a variety of techniques. For instance, in studies using ensembles with tens of thousands of model realizations (Pall et al. 2011), probabilities of very rare events can often be estimated simply using the proportion of realizations in which the event was observed. However, in our case study, as will be the case in many other analyses, there are only a few simulations and the tail of the distribution is not well sampled. Extreme value statistical methods involve fitting a three-parameter extreme value distribution function to a subset of the available sample and are well suited to estimating such probabilities. After estimating the distribution's parameters, step 2 can be accomplished by inverting the distribution to estimate the magnitude ẑ_A in the form of a return value for the return period 1/p̂_O.
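The likelihood underlying such threshold-exceedance fits can be written down directly. The sketch below evaluates the negative log-likelihood of the generalized Pareto distribution (GPD), the conditional tail model associated with the point-process approach, for excesses over a threshold (ξ ≠ 0); an actual fit would minimize this over (σ, ξ). The excess values are synthetic:

```python
import math

def gpd_nll(excesses, sigma, xi):
    """Negative log-likelihood of excesses y_i = x_i - u > 0 under GPD(sigma, xi).

    Density: (1/sigma) * (1 + xi*y/sigma)^(-1/xi - 1), valid for xi != 0.
    """
    if sigma <= 0.0:
        return float('inf')
    nll = 0.0
    for y in excesses:
        t = 1.0 + xi * y / sigma
        if t <= 0.0:                  # excess outside the GPD support
            return float('inf')
        nll += math.log(sigma) + (1.0 + 1.0 / xi) * math.log(t)
    return nll

excesses = [0.12, 0.30, 0.05, 0.51, 0.22]      # synthetic threshold excesses
nll = gpd_nll(excesses, sigma=0.25, xi=-0.1)
```

A maximizer (e.g., Nelder-Mead or the routines in standard extreme-value packages) would search over (σ, ξ) for the smallest such value.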
In the current study, we use a point process (PP) approach to extreme value analysis (Smith 1989; Coles 2001; Katz et al. 2002; Furrer et al. 2010). This approach involves modeling exceedances over a high threshold and is described in detail in the Appendix. The simplest formulations of extreme value models assume that the distribution of the extremes does not change over time, an assumption of stationarity. The PP approach can be extended to non-stationary cases in which the parameters of the model, μ, σ, and ξ, are allowed to be (arbitrary but often linear) functions of covariates. Covariates are chosen to incorporate additional physical insight into the statistical model. A common practice is to represent non-stationarity through only the location parameter, μ, and take σ and ξ to be constant (Coles 2001; Kharin and Zwiers 2005). For example, one could represent the location of the extreme value distribution μ_t to depend on time t as a function of time-varying covariates x_{kt}:
μ_t = β_0 + Σ_{k=1}^{K} β_k x_{kt}.  (1)
The model under the actual scenario, as seen in Figure 2, is non-stationary due to the effects of anthropogenic climate change. Rather than try to directly develop a covariate as an explicitly nonlinear function of time, it is simpler to use a more physically-based "covariate" as a linear source of non-stationarity. A simple choice is a temporally-smoothed global mean temperature anomaly, x_t. A 13-point filter (Solomon et al. 2007) removes some of the natural modes of variability that may affect central US summer temperature but retains the anthropogenic warming signal. This function is then a non-linear proxy for time that we can use as a covariate in a linear representation of the location parameter, μ_t = β_0 + β_1 x_t. We note that adding additional covariates to account for other known physical dependencies, such as an El Niño/La Niña index, may improve the quality of the fitted distribution but is outside the scope of this study. Finally, as the model under the counterfactual scenario is presumed to be stationary, we do not use a covariate in fitting that dataset. In this study, we computed the Akaike Information Criterion (AIC) to compare stationary and non-stationary models for the observations and actual scenario output, where the model with the smaller AIC value is preferred. For the actual scenario, the non-stationary model was strongly preferred based on AIC. However, we found that the AIC for the stationary model for observations (152.93) was slightly smaller than the AIC for the non-stationary model for the observations (154.14). This is a consequence of the very small observed warming trend in the selected region. Despite this preference for omitting the covariate, we use the non-stationary model for the observational data to be consistent with the statistical representation for the actual scenario output.
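The covariate construction can be sketched as follows. As a stand-in for the specific 13-point filter of Solomon et al. (2007), this sketch uses a plain 13-point moving average with reflected ends; the coefficients beta0 and beta1 and the anomaly series are invented for illustration:

```python
# Hypothetical sketch: smoothed global-mean anomaly as covariate x_t,
# then the non-stationary location mu_t = beta0 + beta1 * x_t.

def moving_average(series, window=13):
    half = window // 2
    # Reflect the series at both ends so every point has a full window.
    padded = series[half:0:-1] + series + series[-2:-half - 2:-1]
    return [sum(padded[i:i + window]) / window for i in range(len(series))]

def location_param(x_t, beta0, beta1):
    """Non-stationary location parameter: mu_t = beta0 + beta1 * x_t."""
    return beta0 + beta1 * x_t

global_anom = [0.01 * t for t in range(112)]   # toy anomaly series, 1901-2012
x = moving_average(global_anom)                # smoothed covariate x_t
mu = [location_param(v, beta0=0.1, beta1=1.5) for v in x]
```

On the interior of a linear series the moving average leaves values unchanged, so the smoother only removes short-period wiggles, which is the intended role of the filter here.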
The PP model requires the choice of an arbitrary threshold, with only data above the threshold used to fit the model, as described in the Appendix. There are few rigid guidelines for how high the threshold should be. It must be high enough to be in the 'asymptotic' regime, i.e., that the assumptions of the extreme value statistical theory are satisfied, but low enough that enough points from the original sample are retained to reduce the uncertainty in estimating the parameters of the statistical model. Here we use the 80th percentile of the values in each dataset. Standard diagnostics (Coles 2001; Scarrott and MacDonald 2012), including mean residual life plots shown in Figure 4, suggest this is a reasonable choice.
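The mean residual life diagnostic is easy to compute: for each candidate threshold u, record the mean excess of the data above u; approximate linearity in u above some level suggests the asymptotic regime has been reached. The data below are synthetic Gaussian draws standing in for the temperature anomalies:

```python
import random

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]   # synthetic stand-in data

def mean_excess(values, u):
    """Mean of (x - u) over observations exceeding u (the MRL plot ordinate)."""
    exc = [x - u for x in values if x > u]
    return sum(exc) / len(exc) if exc else float('nan')

thresholds = [i * 0.25 for i in range(9)]              # candidate thresholds 0..2
mrl = [(u, mean_excess(data, u)) for u in thresholds]  # points of an MRL plot

u80 = sorted(data)[int(0.8 * len(data))]               # 80th-percentile threshold
```

Plotting the (u, mean excess) pairs reproduces the diagnostic behind Figure 4; the last line computes the 80th-percentile threshold used in the paper.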
Given the choice of a threshold and covariates, the PP-based extreme value distribution is straightforward to fit using maximum likelihood methods, providing estimates of μ_t (i.e., β_0 and β_1), σ, and ξ. To fit the model, we use the fevd routine of the R package, extRemes (Gilleland and Katz 2011). Note that for seasonal data such as for this case study, the time.units argument should be specified to be "m/year", where m is the number of observations in each block of data. It is useful to treat a 'block' as a year so that return levels can be considered to be the value exceeded once in 1/p years. When using an ensemble of model runs, we have multiple replicates for each year, so m is the number of ensemble members (e.g., m = 5 for the all forcings ensemble). To implement steps 2 and 3 of the quantile bias correction method, we need to be able to calculate both a return level given a specified probability, ẑ_A(p̂_O), and a probability given a specified return level, p̂_C(ẑ_A). Both of these values are obtained from the estimated parameter values as shown in the Appendix, equations (A2) and (A3).
c. Uncertainty Quantification of the Risk Ratio
We have presented an approach to estimating the RR using the quantile bias correction method. We turn now to accounting for the various sources of uncertainties in the estimate of RR produced by this method. Here we focus on uncertainty from statistical estimation of the various probabilities; structural uncertainty that arises from using model simulations in place of the real climate system is of course important but is beyond the scope of our work. More precisely, the uncertainties in estimating the risk ratio can be separated into three sources: uncertainty in estimating p_O using the observations (step 1), uncertainty in estimating z_A using the actual scenario model output (step 2), and uncertainty in estimating p_C using the counterfactual scenario output (step 3). In this section we quantify the uncertainty in the risk ratio considering the second and third sources of uncertainty. With regard to the first source, for now we consider the magnitude of the extreme event to be a given, as a precise estimate of p_O will be shown to not be absolutely necessary to make a confident attribution statement. Rather, we believe the sensitivity of the estimate of RR to a defensible range of z_O values (and p_O) is critical to confident extreme event attribution.
In our uncertainty analysis below, we condense our notation for the fitted extreme value distributions to θ_A = (β_{0A}, β_{1A}, σ_A, ξ_A) and θ_C = (μ_C, σ_C, ξ_C), where A again indicates the model under the actual scenario and C the model under the counterfactual scenario. We consider several approaches to deriving a confidence interval for the RR. Given that the RR is non-negative and its sampling distribution is likely to be skewed, we work on the base-2 logarithmic scale.
A standard approach to estimating the standard error of a non-linear functional of parameters in a statistical model is to use the delta method and then derive a confidence interval using a normal approximation (Sections 5.5.4 & 10.4.1, Casella and Berger (2002)). Another possibility is to use the bootstrap to either estimate the standard error or directly estimate a confidence interval (Section 10.1.4, Casella and Berger (2002)). However, both of these methods fail when the estimated RR is infinity. The bootstrap uncertainty estimate will also pose difficulties if some of the bootstrap datasets produce estimated risk ratios that are infinity. This outcome is quite likely if the extreme value distribution of the model output under the counterfactual scenario is bounded and the magnitude ẑ_A is close to that bound. Therefore, after a brief discussion of the delta method and the bootstrap, we develop an alternative confidence interval by inverting a likelihood ratio test (LRT) and propose this as a general approach to estimating a lower bound of RR.
(i) Delta Method
In this uncertainty analysis, we estimate the log risk ratio, log R̂R = f(θ̂), as a function of the parameter vector θ = (θ_A, θ_C). The delta method uses an analytic approximation by a first-order Taylor series expansion: f(θ) ≈ f(θ̂) + ∇f(θ̂)^T (θ − θ̂), where ∇f is the vector of partial derivatives of f and θ̂ is the maximum likelihood estimate of θ. Taking the variance of both sides of the Taylor approximation above, the delta method gives that
Var(log R̂R) = Var[f(θ̂)] ≈ ∇f(θ̂)^T Cov(θ̂) ∇f(θ̂).  (2)
The variance-covariance matrix of θ̂, Cov(θ̂), can be estimated using the inverse of the observed information matrix from the maximum likelihood fits.
The delta method relies on the approximate linearity represented by the Taylor approximation and approximate normality of the distribution of the maximum likelihood estimates. In particular, the delta method will not perform well when the sampling distribution for log R̂R is skewed, which will be a particular concern for large values of RR, as the sampling distribution of p̂_C is bounded below by zero.
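The delta-method calculation in equation (2) reduces to a gradient and a quadratic form. In the sketch below, the function f (a log2 risk ratio of two independently estimated probabilities) and the covariance matrix are invented stand-ins, and the gradient is taken by central finite differences rather than analytically:

```python
import math

def num_grad(f, theta, h=1e-6):
    """Central-difference gradient of f at theta."""
    g = []
    for i in range(len(theta)):
        tp, tm = list(theta), list(theta)
        tp[i] += h
        tm[i] -= h
        g.append((f(tp) - f(tm)) / (2.0 * h))
    return g

def delta_var(f, theta, cov):
    """Delta-method variance: grad(f)^T Cov grad(f)."""
    g = num_grad(f, theta)
    n = len(g)
    return sum(g[i] * cov[i][j] * g[j] for i in range(n) for j in range(n))

# Toy stand-in: log2 RR as a function of (p_A, p_C), with an invented
# diagonal covariance for the two estimates.
f = lambda th: math.log2(th[0] / th[1])
theta_hat = [0.05, 0.01]
cov = [[1e-5, 0.0], [0.0, 4e-6]]
var_log2_rr = delta_var(f, theta_hat, cov)
se_log2_rr = math.sqrt(var_log2_rr)
```

For this toy f the gradient is available in closed form (1/(p ln 2) terms), which makes the numerical result easy to check.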
(ii) Bootstrap Method
Our bootstrap procedure attempts to reflect the structure of the climate model outputs in the resampling procedure that produces bootstrapped datasets. To generate a bootstrap dataset, we first resample with replacement from the set of ensemble members, as the ensemble members are independent realizations of the climate state. In addition, for each resampled ensemble member, we resample years with replacement from the years represented in the dataset. This second type of resampling is a block bootstrap that is justified by the low correlation in seasonal climate from year to year. Note that by resampling both ensemble members and years, we reduce the discreteness in approximating the sampling distribution that would occur from only resampling from the small number of ensemble members. However, note that in our example, results were similar when either excluding or including the resampling of years.
By repeating the resampling procedure, we produce bootstrap datasets, D_1, ..., D_B, where B is the bootstrap sample size, e.g., B = 500. For example, for the actual scenario, we resample with replacement from the five ensemble members and with replacement from the 112 years and the associated smoothed global temperature values. We obtain bootstrap samples with analogous resampling for the counterfactual scenario. The return levels, ẑ_A^{(1)}, ẑ_A^{(2)}, ..., ẑ_A^{(B)}, are computed from the bootstrapped samples for the actual scenario for the fixed probability p̂_O. Pairing each bootstrapped return level estimate from the actual scenario with a bootstrapped dataset from the counterfactual scenario, we obtain bootstrapped probabilities p̂_C^{(1)}(ẑ_A^{(1)}), p̂_C^{(2)}(ẑ_A^{(2)}), ..., p̂_C^{(B)}(ẑ_A^{(B)}) of exceeding the bootstrapped return levels. We can then calculate log R̂R^{(1)}, log R̂R^{(2)}, ..., log R̂R^{(B)}, which allows us to estimate the sampling distribution of log R̂R. From this, one can obtain a bootstrap standard error or confidence interval for the log RR via standard methods. For the basic bootstrap confidence interval of log RR, we use the 2.5th and 97.5th percentiles of the bootstrapped values log R̂R^{(b)}, b = 1, ..., B, to compute the 95% confidence interval:
$$\left( \log \widehat{RR} - \left(\log \widehat{RR}^{(b)}_{.975} - \log \widehat{RR}\right),\ \log \widehat{RR} - \left(\log \widehat{RR}^{(b)}_{.025} - \log \widehat{RR}\right) \right). \tag{4}$$
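A minimal implementation of the basic bootstrap interval in equation (4), assuming the bootstrapped $\log \widehat{RR}$ values have already been computed and are finite:

```python
import numpy as np

def basic_bootstrap_ci(log_rr_hat, log_rr_boot, alpha=0.05):
    """Basic (reflected-percentile) bootstrap CI of equation (4).

    log_rr_hat: point estimate from the original data.
    log_rr_boot: array of B bootstrapped log RR values (assumed finite).
    """
    lo_q, hi_q = np.quantile(log_rr_boot, [alpha / 2.0, 1.0 - alpha / 2.0])
    return (log_rr_hat - (hi_q - log_rr_hat),
            log_rr_hat - (lo_q - log_rr_hat))

# Toy example: an evenly spread bootstrap sample around the estimate 2.0.
ci = basic_bootstrap_ci(2.0, np.linspace(1.0, 3.0, 101))
```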
(iii) Method of Inverting a Likelihood Ratio Test
The delta method fails when $\hat{p}_C = 0$ ($\widehat{RR} = \infty$) as it relies on asymptotic normality, and the bootstrap method fails for $\hat{p}_C = 0$ and can fail to varying degrees when $\hat{p}_C$ is very small and one obtains $\log \widehat{RR}^{(b)} = \infty$ for one or more bootstrap samples. Hansen et al. (2014) discussed the case of $\hat{p}_C = 0$ under the counterfactual scenario in the context of event attribution and suggested a one-sided confidence interval for attributable risk using stationary Poisson processes in the setting where probabilities are estimated simply by empirical proportions. Here we propose a likelihood ratio test-based method to find a lower bound for $RR$ that can be employed when extreme value statistics are used. We note that a lower bound is actually more relevant for making an attribution statement than a point estimate of $RR$, as it encapsulates both the potential magnitude of the risk ratio and our uncertainty in estimating it.
A standard approach to finding a confidence interval is to invert a test statistic (Casella and Berger 2002). The basic intuition is that for a hypothesized parameter value, θ 0 , if we cannot reject the null hypothesis that θ = θ 0 based on the data, then that θ 0 is a plausible estimate for the true value of θ and should be included in a confidence interval for θ. A confidence interval is then constructed by taking all values of θ 0 such that a null hypothesis test of θ = θ 0 is not rejected.
The likelihood ratio test (Sections 9.2.1 & 10.3.1, Casella and Berger (2002)) compares the likelihood of the data based on the MLE (i.e., the maximized likelihood) to the likelihood of the data when restricting the parameter space (which in the notation above can be expressed as setting $\theta = \theta_0$). If the null hypothesis is true, then as the sample size goes to infinity, twice the log of the ratio of these two likelihoods has a chi-square distribution with $\nu$ degrees of freedom, where $\nu$ is the difference in the number of parameters between the original parameter space and the restricted space. The hypothesis test of $\theta = \theta_0$ is rejected when twice the log of the likelihood ratio exceeds the $1 - \alpha$ quantile of the chi-square distribution, which is the 95th percentile (i.e., $\alpha = 0.05$) for a 95% confidence interval.
Specifically, we are interested in the plausibility of $RR = p_A/p_C = r_0$ versus the alternative that $RR = p_A/p_C > r_0$, where $r_0$ is a non-negative constant, so it is natural to derive a one-sided confidence interval, $RR \in (RR_L, \infty)$, that gives a lower bound, $RR_L$, on the risk ratio. The likelihood ratio test we use here is one where the restricted parameter space sets $RR = r_0$. Under this null hypothesis, which is equivalent to $p_C = p_A/r_0$, we construct the constrained likelihood function by letting $\beta_{0A}$, $\beta_{1A}$, $\sigma_A$, $\xi_A$, $\sigma_C$, and $\xi_C$ be free parameters and setting
$$\mu_C = z_A(\beta_{0A}, \beta_{1A}, \sigma_A, \xi_A) + \frac{\sigma_C}{\xi_C}\left[1 - \left(-\log\left(1 - p_A/r_0\right)\right)^{-\xi_C}\right],$$
where $z_A$ is the return level corresponding to the probability of exceedance under the actual scenario and $p_A$ is based on $\hat{p}_O$ or chosen in advance without directly making use of the observations. This likelihood ratio test has one degree of freedom, corresponding to the restriction on $\mu_C$ in the constrained likelihood. The joint likelihood for the model output from both the actual scenario and counterfactual scenario can be expressed as
$$L(\theta_A, \theta_C) \propto \exp\left\{-\frac{1}{n_{yA}} \sum_{i=1}^{n_A} \left[1 + \xi_A\left(\frac{u - \mu_{t_i A}}{\sigma_A}\right)\right]_+^{-1/\xi_A}\right\} \prod_{i=1}^{m_A} \sigma_A^{-1}\left[1 + \xi_A\left(\frac{x_i - \mu_{t_i A}}{\sigma_A}\right)\right]_+^{-1/\xi_A - 1} \times \exp\left\{-\frac{n_C}{n_{yC}} \left[1 + \xi_C\left(\frac{u - \mu_C}{\sigma_C}\right)\right]_+^{-1/\xi_C}\right\} \prod_{j=1}^{m_C} \sigma_C^{-1}\left[1 + \xi_C\left(\frac{x_j - \mu_C}{\sigma_C}\right)\right]_+^{-1/\xi_C - 1},$$
where $m_A$ is the number of exceedances (out of the total of $n_A$ observations) for the actual scenario and $m_C$ is the analogous quantity for the counterfactual scenario. Thus, the lower bound $RR_L = \min RR$ is found by searching for the smallest value $r_0$ such that
$$2\left[\log L(\hat{\beta}_{0A}, \hat{\beta}_{1A}, \hat{\sigma}_A, \hat{\xi}_A, \hat{\mu}_C, \hat{\sigma}_C, \hat{\xi}_C; x) - \log L(\hat{\beta}_{0A}, \hat{\beta}_{1A}, \hat{\sigma}_A, \hat{\xi}_A, \hat{\sigma}_C, \hat{\xi}_C; x, RR = r_0)\right] < 3.841, \tag{5}$$
where 3.841 is the 95th percentile of a chi-square distribution with one degree of freedom.
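The constrained counterfactual location parameter $\mu_C$ defined above can be computed directly; the sketch below also verifies, via the GEV exceedance formula (A2), that the implied counterfactual probability equals $p_A/r_0$ (the parameter values are illustrative, not fitted values):

```python
import math

def mu_c_constrained(z_a, sigma_c, xi_c, p_a, r0):
    """Counterfactual GEV location implied by the constraint RR = p_a/p_c = r0."""
    p_c = p_a / r0
    return z_a + (sigma_c / xi_c) * (1.0 - (-math.log(1.0 - p_c)) ** (-xi_c))

# Illustrative values roughly in the range of the fitted parameters.
z_a, sigma_c, xi_c, p_a, r0 = 4.8, 0.6, -0.18, 0.032, 16.0
mu_c = mu_c_constrained(z_a, sigma_c, xi_c, p_a, r0)

# Plugging mu_c back into the GEV exceedance formula (A2) should
# recover p_a / r0 exactly.
t = 1.0 + xi_c * (z_a - mu_c) / sigma_c
p_check = 1.0 - math.exp(-t ** (-1.0 / xi_c))
```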
Numerically, this can be solved by one-dimensional minimization subject to the constraint given by condition (5). The simplest way to do this is to move the constraint into the objective function and minimize an unconstrained problem. The new unconstrained objective function is
$$r_0 + c \cdot I(\lambda(r_0) > 3.841),$$
where $c$ is set to be a large number (mathematically $c = \infty$), $\lambda(\cdot)$ is twice the log of the likelihood ratio, and $I(\cdot)$ is an indicator function that evaluates to one if the inequality is satisfied and zero if not. The resulting objective function is not continuous, hence many standard optimization techniques are not applicable. One that can be used here is golden section search (particularly if the objective function is modified slightly to be unimodal, albeit still discontinuous). In R, we use the optimize function. This function is designed for continuous objective functions, as it combines golden section search with parabolic interpolation, but it seems to work reasonably well in our analyses.
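As a simple alternative to the penalized golden-section search, when $\lambda(r_0)$ is monotone decreasing below the MLE one can bisect directly on the constraint $\lambda(r_0) = 3.841$. The sketch below uses a toy quadratic $\lambda$ in place of a real profiled likelihood ratio:

```python
import math

CHI2_95_DF1 = 3.841  # 95th percentile of chi-square with 1 degree of freedom

def rr_lower_bound(lam, rr_hat, tol=1e-8):
    """Smallest r0 with lam(r0) <= CHI2_95_DF1, found by bisection.

    lam(r0) is twice the log likelihood ratio under the constraint
    RR = r0, assumed monotone decreasing on (0, rr_hat] with
    lam(rr_hat) = 0.  (A simple alternative to the paper's penalized
    golden-section search when monotonicity holds.)
    """
    lo, hi = 1e-12, rr_hat
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lam(mid) > CHI2_95_DF1:
            lo = mid  # hypothesis RR = mid rejected: bound is above mid
        else:
            hi = mid  # not rejected: bound is at or below mid
    return hi

# Toy lambda: quadratic in log r0 around a hypothetical MLE of 20.
lam = lambda r0: (2.0 * (math.log(r0) - math.log(20.0))) ** 2
lower = rr_lower_bound(lam, 20.0)  # analytic answer: 20 * exp(-sqrt(3.841)/2)
```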
Results
In this section we apply our proposed methodology to the central US heatwave event. The analysis relies on estimation of the probabilities $p_O$ and $p_C$ and the adjusted event magnitude $z_A$. As described in the previous section, we use the smoothed global mean temperature anomaly as a covariate to account for non-stationarity in temperature extremes in both the observations and the model output under the all forcings scenarios. The smoothed global mean temperature anomalies are plotted in Figure 2. Table 1 gives the parameter estimates from fitting the PP model to the observations and to the model output from both scenarios. Note that the estimated shape parameters ($\xi$) are all negative, indicating that the fitted distributions are bounded.
As shown in Table 1, the estimated probability, $\hat{p}_O$, of exceeding the observed extreme value of 2.467 is 0.032. Following the proposed quantile bias correction method, we set $p_A = 0.032$ and, based on the fitted PP model for the actual scenario, estimate the return level as $\hat{z}_A = 4.842$. Then, using the fitted PP model for the counterfactual scenario, the estimated probability, $\hat{p}_C$, of an event as or more extreme than $z_C = 4.842$ is 1.5e-08. The corresponding estimated logarithm (base 2) of the risk ratio is 21.0 (or $\widehat{RR} \approx 2{,}100{,}000$), indicating a very large increase in the probability of a heatwave due to human influence. Figure 3 graphically illustrates the quantile bias correction methodology for this particular case study. Without the bias correction, one would obtain $\hat{p}_A = 0.657$ and $\hat{p}_C = 0.132$, giving an estimated $\widehat{RR}$ of approximately 5, which is quite different from the estimate with the bias correction. Note that the observed event is not extreme in the model simulations under the actual scenario, which suggests that without bias correction we would inappropriately be estimating a risk ratio from a different part of the distribution than is of interest based on the observations. The uncertainty in estimating $\widehat{RR}$ with the quantile bias correction is quantified using three methods: the delta method, the bootstrap, and our suggested likelihood ratio test-based interval; Table 2 shows 95% confidence intervals for $\log \widehat{RR}$ from each method. As discussed in Section 3c, both the delta method and the bootstrap face difficulties when the estimated probability under the counterfactual scenario is near zero, as it is here. In this example, the bootstrap resamples often produce estimates of large return levels under the actual scenario that correspond to estimated probabilities of zero under the counterfactual scenario.
The result is that many of the bootstrap datasets (246 of the 500) have estimates of $\log \widehat{RR}$ that are infinite, and these bootstrap estimates cannot be sensibly included in the estimate of the bootstrap confidence interval. Hence, the confidence interval in Table 2 is calculated based only on the finite values, but we cannot expect this to provide a reliable estimate of the uncertainty.
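The point estimate itself is simple arithmetic on the fitted probabilities from Table 1, using base-2 logarithms (consistent with the $\log_2 RR$ column of Table 3):

```python
import math

# Fitted probabilities from Table 1: p_A after quantile bias correction
# and p_C under the counterfactual scenario.
p_a = 0.032
p_c = 1.503e-08

rr = p_a / p_c           # roughly 2.1 million
log2_rr = math.log2(rr)  # roughly 21, matching the reported value
```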
Instead, we focus on the likelihood ratio-based interval described in the previous section. We apply our method of inverting a LRT in two ways: first, we ignore uncertainty in $\hat{z}_A$ and consider only uncertainty in $\hat{p}_C$; second, we consider uncertainty in both $\hat{z}_A$ and $\hat{p}_C$ (note that when we consider only uncertainty in $\hat{p}_C$, one can derive a LRT-based interval analogously to that derived in Section 3c).
The estimated lower bound, when considering both sources of uncertainty, is 4.0 (i.e., 16.1 on the original scale of the risk ratio), which indicates strong evidence that the true risk ratio is substantially elevated under the actual scenario compared to the counterfactual scenario. As expected, the lower bound is lower (4.0) when considering both sources of uncertainty than when considering only uncertainty in $\hat{p}_C$ (4.3).
In Section 3c, we argued that a precise event magnitude and corresponding $p_O$ are not necessary for making confident event attribution statements. Rather, what matters is the sensitivity of the risk ratio to a plausible range of extreme event definitions. Table 3 shows the sensitivity of the risk ratio and its lower bound to various values of $p_O = p_A$. Critically, while the estimate of the risk ratio varies dramatically as one varies the event definition, with the estimated risk ratio as large as infinity, the lower bound from the one-sided confidence interval is quite stable for a wide range of event definitions. This stability is a critical component of the confident event attribution statement: "For the summer 2011 central US heat wave, anthropogenic changes to the atmospheric composition caused the chance of the observed temperature anomaly to be increased by at least a factor of 16.1." Of course, this statement is conditional on the climate model accurately representing relative changes in probabilities of extreme events under the different scenarios after the quantile-based correction.
Conclusion
We present an approach to extreme event attribution that addresses differences in the scales of variability between observations and model output using quantile-based bias correction within a formal statistical treatment of uncertainty. The correction rescales matching quantiles between the observations and the models to obtain an event in realistically-forced climate model simulations of corresponding rarity to the actual extreme weather or climate event of interest. We develop a procedure for estimating and quantifying uncertainty in the risk ratio, a measure of the anthropogenic effect on the change in the chances of an extreme event. In particular, we calculate a lower bound on the risk ratio by inverting a likelihood ratio test statistic, which can be used even when the estimated probability of the event is zero or near-zero in climate model simulations of a hypothetical world without anthropogenic climate change. This lower bound provides the key element in constructing confident attribution statements about the human influence on individual extreme weather and climate events.
We caution that bias correction can mask serious errors and is not a replacement for expert judgment and physical insight into the source of the bias between model and observation. For instance in our case study, it is well known that extreme temperatures in Texas and Oklahoma are associated with the La Niña phase of ocean surface temperatures. The statistical methods presented here could account for this source of bias by including an El Niño/La Niña index as a covariate in the statistical model for event probabilities in the model dataset (see Section 3b) and bias correct the index rather than directly bias correcting the distribution for the variable of interest. Pursuing such ideas is beyond the scope of our work here but could lead to an approach that offers more insight into the source of bias and provide a physically-based justification for the bias correction.
The lower bound on the risk ratio estimated using our proposed method implies a substantial increase in the probability of reaching or exceeding the observed extreme temperature of 2011 central US heatwave event under human-influenced climate change. However the precise probability and magnitude of the observed extreme event is not a key component in extreme event attribution analyses. We explored the sensitivity of the lower bound of the risk ratio to various definitions of the event (i.e., probabilities corresponding to different magnitudes of extreme events) and found that the lower bound of the risk ratio confidence interval is more stable than point estimates of the risk ratio. As a result, confident attribution statements about the minimum amount of anthropogenic influence on extreme events are more readily constructed than statements about the most likely amount of anthropogenic influence. We also maintain that such more conservative statements are more consistent with the vast literature of attribution statements about the human influence on trends in the average state of the climate.
DE-AC02-05CH11231. This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor the Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or the Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or the Regents of the University of California.
APPENDIX
Background for Modeling of Extreme Values
Extreme value theory (EVT) provides a statistical framework for modeling the tail of a probability distribution. Univariate extreme value theory for so-called block maxima (e.g., annual or seasonal maxima of daily data) is well developed. The theory shows that, after suitable normalization, the distribution of the block maxima converges to a distribution function $G$,
$$G(x; \mu, \sigma, \xi) = \exp\left\{-\left[1 + \xi\left(\frac{x - \mu}{\sigma}\right)\right]_+^{-1/\xi}\right\}, \qquad (x_+ = \max(0, x)), \tag{A1}$$
that is known as the generalized extreme value (GEV) distribution. The parameters $\mu$, $\sigma$, and $\xi$ are known as the location, scale, and shape parameters, respectively. The shape parameter, $\xi$, determines the type of tail behavior: heavy ($\xi > 0$), light ($\xi \to 0$), or bounded ($\xi < 0$), the last implying a short-tailed distribution. For example, analysts usually obtain a negative estimated shape parameter for temperature data and a non-negative estimated shape parameter for precipitation data. Return levels are quantiles: a return level $z$ such that $P(Z > z) = p$ is expected to be exceeded once every $1/p$ years on average. The probability $p$ of exceeding $z$ is easily obtained in closed form, given $\mu$, $\sigma$, and $\xi$, from the distribution function (A1),
$$p = 1 - P(Z \le z) = 1 - \exp\left\{-\left[1 + \xi\left(\frac{z - \mu}{\sigma}\right)\right]_+^{-1/\xi}\right\}. \tag{A2}$$
As a counterpart to this, given p, the return level is obtained by solving the equation P (Z > z) = p, which gives
$$z = \mu - \frac{\sigma}{\xi}\left[1 - \left(-\log(1 - p)\right)^{-\xi}\right] \qquad (\xi \neq 0). \tag{A3}$$
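Equations (A2) and (A3) are inverses of one another. A small sketch verifying the round trip, with illustrative parameter values in the range estimated in this study:

```python
import math

def gev_exceedance_prob(z, mu, sigma, xi):
    """P(Z > z) from equation (A2), for xi != 0."""
    t = max(0.0, 1.0 + xi * (z - mu) / sigma)
    if t == 0.0:
        # z is beyond the finite upper bound (xi < 0) or below the
        # lower bound (xi > 0) of the distribution's support.
        return 0.0 if xi < 0 else 1.0
    return 1.0 - math.exp(-t ** (-1.0 / xi))

def gev_return_level(p, mu, sigma, xi):
    """Return level z with P(Z > z) = p, from equation (A3), for xi != 0."""
    return mu - (sigma / xi) * (1.0 - (-math.log(1.0 - p)) ** (-xi))

# Round trip with illustrative parameters (negative shape, as estimated
# for the temperature data in this study): the 100-year return level.
z100 = gev_return_level(0.01, mu=1.4, sigma=0.64, xi=-0.18)
p_back = gev_exceedance_prob(z100, mu=1.4, sigma=0.64, xi=-0.18)
```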
However, the block maxima approach only uses the maximum (or analogously the minimum when analyzing extreme low values) of blocks in time series data. An alternative that can make use of more of the data is the peaks over threshold (POT) approach (Coles 2001;Katz et al. 2002). POT modeling is based on the observations above a high threshold, u. The distribution of exceedances over the threshold is approximated by a generalized Pareto distribution (GPD) as u becomes sufficiently large. In this approach, the limiting distribution of threshold exceedances is characterized by the following: for x > u,
$$P(X \le x \mid X > u) = 1 - \left[1 + \xi\left(\frac{x - u}{\sigma_u}\right)\right]_+^{-1/\xi}. \tag{A4}$$
The scale parameter $\sigma_u > 0$ depends on the threshold. As with the GEV distribution, the shape parameter, $\xi$, determines the tail behavior.
The point process (PP) approach provides a closely related alternative peaks-over-threshold formulation to the GPD that is convenient because the PP parameters can be related directly to the GEV parameters, so the GEV equations above can be used to calculate return levels and return probabilities. The corresponding likelihood of the threshold excesses can be approximated by a Poisson distribution with an intensity measure depending on $\mu$, $\sigma$, and $\xi$, where $\mu$, $\sigma$, and $\xi$ are location, scale, and shape parameters equivalent to those in the GEV distribution (A1). More precisely, for a vector of $n$ observations $X_1, X_2, \cdots, X_n$ standardized under the conditions of the GEV limit, the point process on regions of $(0, 1) \times [u, \infty)$ converges to a Poisson process with the intensity measure given by
$$\Lambda\left([t_1, t_2] \times (x, \infty)\right) = (t_2 - t_1)\left[1 + \xi\left(\frac{x - \mu}{\sigma}\right)\right]_+^{-1/\xi}. \tag{A5}$$
Taking m to be the number of observations above the threshold u (out of the total of n observations), the likelihood function is
$$L(\theta; x_1, x_2, \cdots, x_n) \propto \exp\left\{-\frac{n}{n_y}\left[1 + \xi\left(\frac{u - \mu}{\sigma}\right)\right]_+^{-1/\xi}\right\} \prod_{i=1}^{m} \sigma^{-1}\left[1 + \xi\left(\frac{x_i - \mu}{\sigma}\right)\right]_+^{-1/\xi - 1}, \tag{A6}$$
where $n_y$ is the number of observations per year (e.g., $n_y = 5$ for the all forcings ensemble and $n_y = 12$ for the counterfactual ensemble).
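A direct transcription of the stationary likelihood (A6) as a negative log-likelihood, suitable for numerical minimization (a sketch only; the paper's analysis uses R, e.g., the tools described in Gilleland and Katz (2011)):

```python
import numpy as np

def pp_negloglik(params, x, u, n, n_y):
    """Negative log-likelihood for the stationary point-process model (A6).

    params = (mu, sigma, xi); x holds the m exceedances of threshold u
    out of n total observations; n_y is the number of observations per
    year.  Returns np.inf outside the parameter support.
    """
    mu, sigma, xi = params
    if sigma <= 0:
        return np.inf
    t_u = 1.0 + xi * (u - mu) / sigma
    t_x = 1.0 + xi * (x - mu) / sigma
    if t_u <= 0 or np.any(t_x <= 0):
        return np.inf
    rate = (n / n_y) * t_u ** (-1.0 / xi)  # Poisson intensity term
    return rate + np.sum(np.log(sigma) + (1.0 / xi + 1.0) * np.log(t_x))

# Toy data: three exceedances of u = 1.5 out of n = 100 observations.
exceedances = np.array([2.0, 2.5, 3.0])
nll = pp_negloglik((1.0, 1.0, -0.1), exceedances, u=1.5, n=100, n_y=1.0)
```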
Fig. 1. Central United States region, 90°W to 105°W in longitude and 25°N to 45°N in latitude (bold rectangular area), covering the states of Texas and Oklahoma.

Fig. 2. Illustration of the mismatch in scales between observations and model output for central US summer temperatures. Observed values for 1901-2012 (blue), model output under the actual scenario for 1901-2012 (red), and model output under the counterfactual scenario for a 100-year time period (green). The vertical lines show the 5-95% range of values for the different datasets. The larger black dot represents the observed value of 2.467 for 2011. The blue and red lines represent smoothed global mean temperature anomalies used as observational and actual scenario model output covariates, respectively.

Fig. 3. Demonstration of the quantile bias correction applied to the central US heatwave example, showing estimated cumulative distribution functions of the observed (blue) and modeled datasets under the actual scenario (red) and counterfactual scenario (green). The blue dashed line shows the observed event, with the horizontal red dashed line translating the observed event magnitude to the equivalent magnitude under the actual scenario, holding $\hat{p}_O = \hat{p}_A$. For the event magnitude indicated by the vertical red dashed line, the green dashed line indicates the probability under the counterfactual scenario. The three colored dots represent the upper bounds of each distribution function, which arise because, with a negative shape parameter (as is estimated in these cases), the extreme value distribution has a finite upper bound.

Fig. 4. Mean residual life plot for each dataset: (a) observations, (b) model output under the actual scenario, and (c) model output under the counterfactual scenario. Red dashed lines represent the 75th, 80th, and 85th percentiles, respectively, as possible choices of thresholds. We chose the 80th percentile as a reasonable threshold beyond which there are relatively linear trends.
Table 1. Parameter estimates from the point process model fitted to observations (top, 1901-2012), actual scenario model output (middle, 1901-2012), and counterfactual scenario model output (bottom, 100 years). The right column gives the estimated return levels and/or probabilities calculated in the steps of the quantile bias correction method. The threshold, $u$, is the 80th percentile of values for each given dataset.

Observations ($u = 0.856$):
  location intercept $\hat{\beta}_0 = -0.802$; trend in smoothed global mean temperature $\hat{\beta}_1 = 0.404$; scale $\hat{\sigma} = 1.250$; shape $\hat{\xi} = -0.239$; $\hat{p}_O = P(Z > 2.467) = 0.032$

Model, actual scenario ($u = 1.405$):
  location intercept $\hat{\beta}_{0A} = 1.263$; trend in smoothed global mean temperature $\hat{\beta}_{1A} = 1.382$; scale $\hat{\sigma}_A = 0.926$; shape $\hat{\xi}_A = -0.197$; $\hat{z}_A = 4.842$

Model, counterfactual scenario ($u = 0.811$; no trend, $K = 0$):
  location $\hat{\mu}_C = 1.415$; scale $\hat{\sigma}_C = 0.638$; shape $\hat{\xi}_C = -0.179$; $\hat{p}_C = P(Z > 4.842) = 1.503$e-08

Table 2. Estimated $\log \widehat{RR}$ and corresponding confidence intervals using the delta method, bootstrap resampling (B = 500), and the proposed likelihood ratio test (LRT)-based method giving a lower bound for the risk ratio. For the bootstrap, 246 of the 500 bootstrap samples are excluded because the bootstrapped $\widehat{RR}$ estimate is infinity. For the LRT-based approach, we consider two cases of uncertainty quantification: first, uncertainty only in estimating $p_C$; second, uncertainty in estimating both $z_A$ and $p_C$.
Table 3. Sensitivity of results to the definition of the event, i.e., different values of $p_O = p_A$.

$p_O = p_A$ | $\hat{z}_A$ | $\hat{p}_C$ | $\log_2 \widehat{RR}$ | one-sided CI for $\log_2 RR$ ($\alpha = .05$) | lower bound of $RR$
0.200 | 3.7 | 2.8e-03 | 6.1 | [3.0, ∞) | 8.0
0.100 | 4.2 | 1.9e-04 | 9.1 | [3.6, ∞) | 11.7
0.050 | 4.6 | 3.1e-06 | 14.0 | [3.9, ∞) | 14.8
0.032 | 4.8 | 1.5e-08 | 21.0 | [4.0, ∞) | 16.1
0.023 | 5.0 | 0 | ∞ | [4.1, ∞) | 16.8
0.010 | 5.3 | 0 | ∞ | [4.1, ∞) | 16.9
Acknowledgments.
References

Allen, M. R., 2003: Liability for climate change. Nature, 421 (6926), 891-892.

Bindoff, N., et al., 2013: Detection and Attribution of Climate Change: from Global to Regional. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)].

Casella, G. and R. L. Berger, 2002: Statistical Inference. 2d ed., Thomson Learning, Pacific Grove, CA.

Christidis, N., P. A. Stott, A. A. Scaife, A. Arribas, G. S. Jones, D. Copsey, J. R. Knight, and W. J. Tennant, 2013: A new HadGEM3-A-based system for attribution of weather- and climate-related extreme events. Journal of Climate, 26, 2756-2783.

Coles, S. G., 2001: An Introduction to Statistical Modeling of Extreme Values. Springer Verlag, New York.

Edwards, L. M., M. J. Bunkers, J. T. Abatzoglou, D. P. Todey, and L. E. Parker, 2014: October 2013 blizzard in western South Dakota [in "Explaining Extremes of 2013 from a Climate Perspective"]. Bulletin of the American Meteorological Society, 95 (9), S23-S26.

Fischer, E. M. and R. Knutti, 2015: Anthropogenic contribution to global occurrence of heavy-precipitation and high-temperature extremes. Nature Climate Change, 5 (6), 560-564.

Furrer, E. M., R. W. Katz, M. D. Walter, and R. Furrer, 2010: Statistical modeling of hot spells and heat waves. Climate Research, 43, 191-205.

Gilleland, E. and R. W. Katz, 2011: New software to analyze how extremes change over time. Eos, 92 (2), 13-14.

Hansen, G., M. Auffhammer, and A. R. Solow, 2014: On the attribution of a single event to climate change. J. Climate, 27, 8297-8301.

Harris, I., P. D. Jones, T. J. Osborn, and D. Lister, 2014: Updated high-resolution grids of monthly climatic observations - the CRU TS3.10 dataset. Int. J. Climatol., 34, 623-642, doi:10.1002/joc.3711.

Herring, S. C., M. P. Hoerling, T. C. Peterson, and P. A. Stott (Eds.), 2014: Explaining Extreme Events of 2013 from a Climate Perspective. Bulletin of the American Meteorological Society, 95 (9), S1-S96.

Hoerling, M., et al., 2013: Anatomy of an extreme event. J. Climate, 26, 2811-2832.

Jaeger, C. C., J. Krause, A. Haas, R. Klein, and K. Hasselmann, 2008: A method for computing the fraction of attributable risk related to climate damages. Risk Analysis, 28 (4), 815-823, doi:10.1111/j.1539-6924.2008.01070.x.

Katz, R. W., M. B. Parlange, and P. Naveau, 2002: Statistics of extremes in hydrology. Advances in Water Resources, 25, 1287-1304.

Kharin, V. V. and F. W. Zwiers, 2005: Estimating extremes in transient climate change simulations. J. Climate, 18, 1156-1173.

Li, H., J. Sheffield, and E. F. Wood, 2010: Bias correction of monthly precipitation and temperature fields from Intergovernmental Panel on Climate Change AR4 models using equidistant quantile matching. J. Geophys. Res., 115 (D10101), doi:10.1029/2009JD012882.

Maurer, E. P. and H. G. Hidalgo, 2008: Utility of daily vs. monthly large-scale climate data: an intercomparison of two statistical downscaling methods. Hydrol. Earth Syst. Sci., 12, 551-563.

Meehl, G. A., J. M. Arblaster, and G. Branstator, 2012: Mechanisms contributing to the warming hole and the consequent U.S. East-West differential of heat extremes. Journal of Climate, 25, 6394-6408.

Min, S.-K., X. Zhang, F. Zwiers, H. Shiogama, Y.-S. Tung, and M. Wehner, 2013: Multimodel detection and attribution of extreme temperature changes. Journal of Climate, 26, 7430-7451.

Pall, P., T. Aina, D. A. Stone, P. A. Stott, T. Nozawa, A. G. J. Hilberts, D. Lohmann, and M. R. Allen, 2011: Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000. Nature, 470 (7334), 382-385.

Palmer, T. N., 1999: A nonlinear dynamical perspective on climate prediction. J. Climate, 12, 575-591.

Panofsky, H. A. and G. W. Brier, 1968: Some Applications of Statistics to Meteorology. The Pennsylvania State University, University Park, PA, USA, 224 pp.

Peterson, T. C., M. P. Hoerling, P. A. Stott, and S. Herring (Eds.), 2013: Explaining Extreme Events of 2012 from a Climate Perspective. Bulletin of the American Meteorological Society, 94 (9), S1-S74.

Peterson, T. C., P. A. Stott, and S. Herring, 2012: Explaining extreme events of 2011 from a climate perspective. Bulletin of the American Meteorological Society, 93, 1041-1067, doi:10.1175/BAMS-D-12-00021.1.

Scarrott, C. and A. MacDonald, 2012: A review of extreme value threshold estimation and uncertainty quantification. REVSTAT - Statistical Journal, 10 (1), 33-60.

Smith, R. L., 1989: Extreme value analysis of environmental time series: An application to trend detection in ground-level ozone (with discussion). Statistical Science, 4, 367-393.

Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor, and H. L. Miller (Eds.), 2007: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Inferring the anthropogenic contribution to local temperature extremes. D A Stone, C J Paciorek, P Prabhat, P Pall, M F Wehner, Proc. Natl. Acad. Sci. USA. 110171543Stone, D. A., C. J. Paciorek, P. Prabhat, P. Pall, and M. F. Wehner, 2013: Inferring the anthropogenic contribution to local temperature extremes. Proc. Natl. Acad. Sci. USA, 110 (17), E1543.
Human contribution to the European heatwave of 2003. P A Stott, D A Stone, M R Allen, Nature. 4327017Stott, P. A., D. A. Stone, and M. R. Allen, 2004: Human contribution to the European heatwave of 2003. Nature, 432 (7017), 610-614.
Attribution of floods in the Okavango basin, Southern Africa. P Wolski, D Stone, M Tadross, M Wehner, B Hewitson, doi:10.1016/ j.jhydrol.2014.01.055Journal of Hydrology. 511Wolski, P., D. Stone, M. Tadross, M. Wehner, and B. Hewitson, 2014: Attribution of floods in the Okavango basin, Southern Africa. Journal of Hydrology, 511, 350-358, doi:10.1016/ j.jhydrol.2014.01.055.
Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs. A Wood, L R Leung, V Sridhar, D P Lettenmaier, Climatic Change. 62Wood, A., L. R. Leung, V. Sridhar, and D. P. Lettenmaier, 2004: Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs. Climatic Change, 62, 189-216.
Mean residual life plot for each dataset: (a) observations, (b) model output under actual scenario, and (c) model output under counterfactual scenario.
| [] |
[
"PREDICTIVE MAINTENANCE IN PHOTOVOLTAIC PLANTS WITH A BIG DATA APPROACH"
] | [
"Alessandro Betti ",
"Maria Luisa Lo Trovato ",
"Fabio Salvatore Leonardi ",
"Giuseppe Leotta ",
"Fabrizio Ruffini ",
"Ciro Lanzetta "
] | [] | [] | This paper presents a novel and flexible solution for fault prediction based on data collected from SCADA system. Fault prediction is offered at two different levels based on a data-driven approach: (a) generic fault/status prediction and (b) specific fault class prediction, implemented by means of two different machine learning based modules built on an unsupervised clustering algorithm and a Pattern Recognition Neural Network, respectively. Model has been assessed on a park of six photovoltaic (PV) plants up to 10 MW and on more than one hundred inverter modules of three different technology brands. The results indicate that the proposed method is effective in (a) predicting incipient generic faults up to 7 days in advance with sensitivity up to 95% and (b) anticipating damage of specific fault classes with times ranging from few hours up to 7 days. The model is easily deployable for on-line monitoring of anomalies on new PV plants and technologies, requiring only the availability of historical SCADA and fault data, fault taxonomy and inverter electrical datasheet. | 10.4229/eupvsec20172017-6dp.2.4 | [
"https://arxiv.org/pdf/1901.10855v1.pdf"
] | 59,413,840 | 1901.10855 | b856240f125234f5a4c76c22c5c100713451c8e0 |
PREDICTIVE MAINTENANCE IN PHOTOVOLTAIC PLANTS WITH A BIG DATA APPROACH
Alessandro Betti
Maria Luisa Lo Trovato
Fabio Salvatore Leonardi
Giuseppe Leotta
Fabrizio Ruffini
Ciro Lanzetta
† i-EM srl, via Aurelio Lampredi 45, Livorno (Italy) ‡ Enel Green Power SPA, Viale Regina Margherita 125, Rome (Italy)Data MiningFault PredictionInverter ModuleKey Performance IndicatorLost Production
This paper presents a novel and flexible solution for fault prediction based on data collected from SCADA system. Fault prediction is offered at two different levels based on a data-driven approach: (a) generic fault/status prediction and (b) specific fault class prediction, implemented by means of two different machine learning based modules built on an unsupervised clustering algorithm and a Pattern Recognition Neural Network, respectively. Model has been assessed on a park of six photovoltaic (PV) plants up to 10 MW and on more than one hundred inverter modules of three different technology brands. The results indicate that the proposed method is effective in (a) predicting incipient generic faults up to 7 days in advance with sensitivity up to 95% and (b) anticipating damage of specific fault classes with times ranging from few hours up to 7 days. The model is easily deployable for on-line monitoring of anomalies on new PV plants and technologies, requiring only the availability of historical SCADA and fault data, fault taxonomy and inverter electrical datasheet.
INTRODUCTION
The provision of a Preventive Maintenance strategy is emerging nowadays as essential to keeping the technical and economic performance of solar PV plants high over time [1]. Analytical monitoring systems have therefore been installed worldwide to detect possible malfunctions in a timely manner through the assessment of PV system performance [2][3].
However, high customization costs, the need to collect a great number of physical variables, and the requirement of a stable Internet connection in the field generally limit their effectiveness, especially for farms in remote places with unreliable communication infrastructures. The lack of a predictive component in the maintenance strategy is also a hindrance to minimizing downtime costs. In order to keep implementation costs and model complexity low, statistical methods based on Data Mining are emerging as a very promising approach both for fault prediction and for early detection. However, while the literature has mainly focused on equipment-level failures in wind farms [4][5], research for PV plants is still at an early stage [6].
The present paper describes an innovative and versatile solution for inverter-level fault prediction based on a data-driven approach, already tested with remarkable performance on six PV plants of variable size up to 10 MW located in Romania and Greece and on three different inverter technologies (Table 1). The proposed approach is easily portable to different plants and technologies and simplifies the update process needed to follow the PV plant's evolution.
METHODOLOGY
The model is composed of two parallel modules, both capable of predicting incipient faults but differing in the level of detail provided and in the remaining operational time after the first indication of a fault: a Supervision-Diagnosis Model (SDM), based on a Self-Organizing Map (SOM), predicting generic failures through a measure of deviations from normal operation, and a Short-Term Fault Prediction Model (FPM), based on an artificial Neural Network (NN), addressing the prediction of specific fault classes. The main steps of the model workflow are described in the following.
Data and Alarm Logbook Import
Historical data extracted from 5-min averaged SCADA data (Table 2) and the inverter manufacturer's electrical parameters for the on-site inverter technology are first collected to train the model. SCADA logbooks, as well as the fault taxonomy, are also imported: for offline performance assessment and normal-training-period selection in the case of the SDM, and for NN training in the case of the FPM. Logbook import is achieved by matching the fault classes listed in the fault taxonomy, which also includes the fault severity, to the fault occurrences recorded in the logbooks and discretizing them on the timestamp grid of the SCADA data. In particular, the k-th fault is assigned to timestamp t_n if the following condition holds:

t_{k,i} ≤ t_n ≤ t_{k,f},    (1)

where t_{k,i} (t_{k,f}) is the initial (final) instant of the fault event. SCADA data labelling is then realized by assigning fault codes (as integer numbers) to the SCADA data. In the case of concurrent fault events, a prioritization rule is adopted, considering only the most severe fault and, if necessary, the most frequent fault in that day.
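As a concrete illustration, the labelling step of Eq. (1) with the severity-based prioritization rule can be sketched as follows (the function name, the convention that a lower severity index means a more severe fault, and the toy data are our assumptions, not the authors' code):

```python
# Hypothetical sketch of the logbook-labelling step (Eq. (1)): each SCADA
# timestamp receives the code of a fault whose interval [t_i, t_f] contains
# it; with concurrent faults, the most severe one (lowest index here) wins.
import numpy as np

def label_timestamps(timestamps, faults):
    """faults: list of (t_start, t_end, code, severity); 0 = normal."""
    labels = np.zeros(len(timestamps), dtype=int)
    best_severity = np.full(len(timestamps), np.inf)
    for t_start, t_end, code, severity in faults:
        mask = (timestamps >= t_start) & (timestamps <= t_end)  # Eq. (1)
        take = mask & (severity < best_severity)                # prioritization
        labels[take] = code
        best_severity[take] = severity
    return labels

# 5-min grid expressed in minutes for simplicity
grid = np.arange(0, 60, 5)
faults = [(10, 25, 7, 2), (20, 30, 3, 1)]  # two overlapping events
labels = label_timestamps(grid, faults)
```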
Data Preprocessing
AC power (PAC) depends primarily on the level of solar irradiance (GTI) and, secondarily, on the ambient temperature (Tamb). A first-order regression of the signals GTI and PAC applied to the training-set samples allows the removal of outliers, corresponding to the points furthest from the fit:

|P_AC − (m · GTI + b)| > thr · (m · GTI + b),

where m and b are the slope and the intercept, respectively, computed by a least-squares approach, and thr is a threshold set by a trial-and-error process.

Further preprocessing steps include the removal of days with a large amount of missing data, setting periodic tags to 0 in the nighttime, data-range checks, and the removal of unphysical plateaus.
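The regression-based outlier filter can be sketched like this (the threshold value and the synthetic data are illustrative assumptions):

```python
# Illustrative sketch of the outlier filter: fit P_AC = m*GTI + b on the
# training samples by least squares and drop points whose residual exceeds
# a relative threshold thr. All names and data are ours, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
gti = rng.uniform(50, 1000, 500)                 # W/m^2
p_ac = 0.8 * gti + 5 + rng.normal(0, 5, 500)     # kW, roughly linear in GTI
p_ac[:5] = 0.0                                   # stuck-inverter outliers

m, b = np.polyfit(gti, p_ac, 1)                  # first-order regression
fit = m * gti + b
thr = 0.5                                        # assumed relative threshold
keep = np.abs(p_ac - fit) <= thr * fit           # outlier condition negated
gti_clean, p_clean = gti[keep], p_ac[keep]
```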
Data Imputation
Missing instances in test set are imputed by means of a k-Nearest Neighbors (k-NN) algorithm using the training set as the reference historical dataset, selecting nearest neighbors according to the Euclidean distance and exploiting hyperbolic weights.
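A minimal sketch of the k-NN imputation with hyperbolic weights, assuming a simple 1/d weighting over the observed tags (the function name and the toy data are ours):

```python
# Our reading of the imputation step: a missing tag in a test instance is
# replaced by a weighted mean over its k nearest training instances
# (Euclidean distance on the observed tags), with hyperbolic weights ~ 1/d.
import numpy as np

def knn_impute(x, train, k=3, eps=1e-9):
    obs = ~np.isnan(x)
    d = np.sqrt(((train[:, obs] - x[obs]) ** 2).sum(axis=1))
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + eps)                      # hyperbolic weights
    filled = x.copy()
    filled[~obs] = (train[nn][:, ~obs] * w[:, None]).sum(axis=0) / w.sum()
    return filled

train = np.array([[1.0, 10.0], [1.1, 11.0], [5.0, 50.0]])
x = np.array([1.05, np.nan])
filled = knn_impute(x, train, k=2)
```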
Feature Engineering: Data De-trending and Scaling
In order to remove the season-dependent variability from the input data, a de-trending procedure has been applied, following a customized approach for each tag. Tmod training data have been de-trended by means of the least-squares method, finding the best-fit line Tfit against Tamb and selecting only low-GTI samples so as to remove the effect of panel heating due to sunlight:

T_mod,detr = (T_mod − T_fit) / T_fit,    (2)

where T_fit = a · T_amb + b is the fitted line, a is the regression slope and b is the intercept. All the remaining input tags, except the DC and AC voltages, have been de-trended according to a classical Moving Average (MA) smoothing to compute the trend component, applying an additive model for the time-series decomposition. In both cases the test instances have also been de-trended, by means of a 1-day-long moving-window mask, for tracking of time-varying input patterns. Finally, input data normalization is performed to avoid unbalance between heterogeneous tags.
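The moving-average de-trending with additive decomposition, followed by the normalization step, could look like this (the window length and the synthetic series are illustrative assumptions):

```python
# Sketch of the additive de-trending used for most tags: the trend is a
# centred moving average and the de-trended series is signal - trend;
# the result is then normalized (zero mean, unit variance).
import numpy as np

def detrend_ma(x, window):
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")  # moving-average trend
    return x - trend                             # additive decomposition

t = np.arange(200, dtype=float)
season = 0.05 * t                                # slow seasonal drift
signal = season + np.sin(2 * np.pi * t / 10)     # fast component to keep
resid = detrend_ma(signal, window=30)
scaled = (resid - resid.mean()) / resid.std()    # normalization step
```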
Data Processing: the Supervision Model (SDM)
SDM has been built by means of an unsupervised clustering approach based on a 20×20 Self-Organizing Map (SOM) algorithm (Figure 1), performing a non-linear mapping from an n-dimensional space (n = 11) to a 2-dimensional space with an online weight-update rule [7], and trained on a normal operation period, identified by a time interval, through a competitive learning process. SOM has the valuable property of preserving the input topology: neighboring neurons in the SOM layer respond to similar input vectors. As a consequence, a change in the distribution of input instances, due for example to an inverter malfunction, leads to a different data mapping in the output grid. A multivariate statistical process control analysis, in the form of a control chart, may therefore be built to detect this change in the pattern distribution.
A Key Performance Indicator (KPI) has been defined as in Eq. (3), measuring a process variation at generation unit level from the normal state towards abnormal operating conditions when a threshold is crossed:
KPI = Σ_{i,j} f_ij · (1 − |f_ij − g_ij|) / (1 + |f_ij − g_ij|),    (3)

where the sums run over the SOM cells, f_ij = N_ij / Σ_{i,j} N_ij is the normalized number of input patterns mapped into cell (i,j) in the training phase, and g_ij = M_ij / Σ_{i,j} M_ij is the normalized number of input patterns mapped into cell (i,j) in a 24-hour test phase (Figure 2).
The ratio factor in Eq. (3) equals 1 for cells whose training and test occupancies coincide and tends to 0 where they strongly differ, so that KPI = 1 corresponds to normal operation; warning levels are then derived from threshold crossings and time-persistence rules (Table 3). Model performances have been evaluated by means of the usual classification metrics (accuracy, sensitivity and specificity), exploiting the alarm-logbook knowledge and assigning a predictive connotation to sensitivity by considering a fault event correctly predicted if, in the last N days prior to the fault, a warning has been triggered by the SDM.
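Our reading of Eq. (3) as code: f and g are the normalized cell-occupancy histograms of the training phase and of a 24-hour test window, and identical mappings give KPI = 1 (the exact form of the ratio factor is our reconstruction from the surrounding text, not the authors' code):

```python
# Sketch of the SDM KPI: cells where training and test occupancies agree
# contribute fully; diverging cells are damped by (1-|f-g|)/(1+|f-g|),
# so KPI = 1 indicates normal operation and KPI < 1 indicates drift.
import numpy as np

def kpi(train_counts, test_counts):
    f = train_counts / train_counts.sum()
    g = test_counts / test_counts.sum()
    ratio = (1.0 - np.abs(f - g)) / (1.0 + np.abs(f - g))
    return float((f * ratio).sum())

rng = np.random.default_rng(1)
normal = rng.integers(1, 50, size=(20, 20)).astype(float)  # 20x20 SOM grid
k_same = kpi(normal, normal)             # identical mapping
shifted = np.roll(normal, 7, axis=0)     # drifted mapping of test patterns
k_drift = kpi(normal, shifted)
```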
2.6 Data Processing: the Short-Term Fault Prediction Model (FPM)

Short-Term FPM is based on an 11-10-2 Pattern Recognition Feed-Forward Back-Propagation NN, fed with 11 input tags (Table 2) and containing one hidden layer with 10 neurons and two output neurons (Figure 1), trained to recognize specific fault classes by means of a Bayesian regularization algorithm to prevent overfitting. The NN architecture has been optimized by building up an ensemble of statistical simulations to maximize the classification metrics. Once the full dataset is labelled (Section 2.1) and preprocessed (Section 2.2), a data-sampling step follows, to compensate for the large unbalance between the number of normal-operation data of the majority class and that of the low-frequency failure data, which would cause a prediction bias affecting event classification.
Sample balancing is achieved by first collecting the number of fault instances N_F,i available for the i-th fault class and then assigning 2/3 of them to the training set, which is finally filled by randomly sampling normal instances up to the percentage required for almost balanced classes. The test set is instead built from the remaining 1/3 × N_F,i fault instances, randomly sampling normal instances up to the proportion necessary to represent the distribution of the labelled SCADA data. Normal and fault samples are sampled separately to avoid an unbalanced fault distribution between the training and test sets.
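The 2/3-1/3 balanced split described above can be sketched as follows (the class sizes and the choice of an exactly balanced training set are illustrative assumptions):

```python
# Sketch of the class-balancing split: 2/3 of the fault instances of a class
# go to training, topped up with randomly drawn normal instances; the
# remaining 1/3 of the faults plus normals at (roughly) the original class
# proportion form the test set. Normal/fault pools are sampled separately.
import numpy as np

rng = np.random.default_rng(2)
n_normal, n_fault = 10_000, 90
normal_idx = np.arange(n_normal)
fault_idx = np.arange(n_fault)

rng.shuffle(fault_idx)
n_tr_fault = 2 * n_fault // 3
train_fault = fault_idx[:n_tr_fault]
test_fault = fault_idx[n_tr_fault:]

# near-balanced training set: as many normals as faults (our choice)
train_normal = rng.choice(normal_idx, size=n_tr_fault, replace=False)
# test set keeps the original (highly unbalanced) proportion
test_normal = rng.choice(
    normal_idx, size=len(test_fault) * (n_normal // n_fault), replace=False
)
```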
Specifically, for predictions ranging from the current time t at which an event occurs (either faulty or normal) back to the previous hours, a different training (test) set has been built by considering instances at time t (t − Δt). In this manner, the NN is trained to recognize events at the time they occur (t) and tested at previous instants (t − Δt). Due to the poor fault statistics available, missing data at time t − Δt have been imputed by means of a k-NN algorithm. Model performances have finally been assessed by setting up a Monte Carlo approach for each timestamp and averaging the ensemble classification metrics.
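The time-shifted evaluation can be sketched like this (Δt is expressed in 5-min rows; the feature matrix and the clipping at the series start are our assumptions):

```python
# Sketch of the time-shifted evaluation: the NN is trained on features at the
# event time t and tested on features Δt earlier (t - Δt), so the score tells
# whether the fault was already recognizable Δt before it occurred.
import numpy as np

n, dim = 1000, 11
rng = np.random.default_rng(3)
features = rng.normal(size=(n, dim))       # one row per 5-min timestamp
event_rows = np.array([500, 700, 900])     # rows where events occur

def shifted_set(features, rows, delta_rows):
    """Features delta_rows timestamps before each event (clipped at 0)."""
    return features[np.clip(rows - delta_rows, 0, len(features) - 1)]

train_X = shifted_set(features, event_rows, 0)     # at time t
test_X_2h = shifted_set(features, event_rows, 24)  # 2 h earlier (24 x 5 min)
```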
RESULTS AND DISCUSSION
SDM has been tested on six different PV plants located in Romania and Greece (Figure 3), corresponding, globally, to more than one hundred inverters of three well-known technology brands and typologies (inverter module, central inverter, master-slave). The time period considered for the offline performance assessment spans from 2014 to 2016, depending on data-source availability.
In Figure 4 the KPI, the warning levels, and the fault-occurrence percentage time series are shown for inverter A.3 of plant #1 in Romania, with an installed capacity of almost 9.8 MW and inverter technology #1. The two thresholds, 3 and 5, are also represented by the dashed and dotted black curves, respectively. According to the alarm logbooks available, a series of thermal-issue damages happened on different inverters in 2014-2015 which led to inverter replacements. In particular, a generalized failure occurred from the 3rd to the 28th of November 2014 on inverter A (AC Switch Open, severity 2/5), but it was recognized only later by the operator, with a severe production loss. As can be seen, the Supervision Model well anticipates the logbook fault events, with a clear correlation between the deep KPI degradation (with warnings triggered up to level 4) and the fault occurrences, even for the events happening on 7-10 January 2015 (DC Ground Fault, i.e. high leakage current to ground on the DC side, severity 1/5) and 23-24 August 2015 (DC Insulation Fault, i.e. overvoltage across the DC capacitors, severity 2/5). Sensitivity degrades roughly from 93% to 84% when anticipating faults from 0 up to 7 days, with an overall accuracy of almost 85%. Figure 5 shows a general overview of the SDM performance on plant #1 at 3 and 7 days in advance with respect to the fault occurrences. As can be seen, at 3 (7) days the sensitivity (SEN) is larger than 60% for 17 (14) devices (out of a total of 23), with a mean sensitivity of 72% (61%) at 3 (7) days. Accuracy (ACC) and specificity (SPE) are instead, on average, almost 80% for both the considered time horizons.
An estimate of the production gain achievable by means of a predictive service may be obtained by computing the lost production as:

LP(t) = ∫_0^t [P_th(t′) − P_AC(t′)] dt′,    (4)

where P_th = (1 − δ) · P_max · GTI / G_STC is the theoretical power in normal conditions [8], G_STC = 1200 W/m² is the irradiance at standard conditions, and δ = γ · (T_mod − 25)/100 if T_mod ≥ 25 °C and 0 otherwise. In Figure 6 the energy yield computed with respect to the ideal case (P_AC(t) = P_th(t) ∀ t) is presented as a function of time for the cases with (assuming P_AC = P_th if a fault is correctly predicted) and without the SDM enabled. As can be seen, an energy-yield improvement (yellow area) of up to 10-15% may be achieved with the SDM.
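A numerical sketch of the lost-production integral of Eq. (4); the thermal-derating coefficient GAMMA, the daily GTI/temperature profiles and the 10% under-performance are illustrative assumptions (G_STC = 1200 W/m² and the 385 kW rating come from the text):

```python
# Numerical sketch of Eq. (4): lost production is the integral of the gap
# between the theoretical power P_th = (1 - delta)*P_max*GTI/G_STC and the
# measured AC power, integrated over time with the trapezoidal rule.
import numpy as np

G_STC = 1200.0   # W/m^2, standard-condition irradiance (from the text)
P_MAX = 385.0    # kW, nameplate of technology #1 (Table 1)
GAMMA = 0.4      # %/degC, assumed thermal derating coefficient

def p_th(gti, t_mod):
    delta = np.where(t_mod >= 25.0, GAMMA * (t_mod - 25.0) / 100.0, 0.0)
    return (1.0 - delta) * P_MAX * gti / G_STC

hours = np.linspace(0.0, 10.0, 121)            # 5-min steps, 10 daylight h
s = np.sin(np.pi * hours / 10.0)
gti = 900.0 * s**2                             # toy clear-sky profile
t_mod = 25.0 + 20.0 * s                        # toy module temperature
p_meas = 0.9 * p_th(gti, t_mod)                # inverter under-performing 10%
gap = p_th(gti, t_mod) - p_meas
lost_kwh = ((gap[:-1] + gap[1:]) / 2.0 * np.diff(hours)).sum()  # Eq. (4)
```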
In Figure 7a-b the KPI and warning levels 1 and 4, respectively, are illustrated as a function of time for inverter B.1 of PV plant #2 (2.8 MW), with installed technology #1. As may be observed, the strong fault spike occurring in August 2015 (red curve), due to an input overcurrent on the DC side (severity 2/5), is well predicted by the SDM, with a sensitivity larger than 95% and an overall accuracy above 80% even at 7 days in advance. The energy-yield gain achievable when applying the SDM is almost 6-7% (Figure 7c).
Some fault classes cannot be predicted because of their instantaneous nature. The SDM, thanks to its parametric structure with respect to inverter technology and plant configuration and to an ad-hoc tuning of the model parameters on each specific plant, guarantees early detection for these faults. In Figure 8 the case of plant #4 in Greece (4.9 MW), with installed inverter technology #3, is reported. As can be seen, in the second half of May 2016 the SDM early-detects a severe anomaly at inverter 3.5, due to an IGBT stack fault which led to inverter replacement. Owing to the sharp KPI decrease below threshold 5, warnings 1 and 3 are triggered almost instantaneously, whereas warnings 2 and 4 follow with a delay of almost 2 days.
The Short-Term FPM has been applied to three PV plants (#1, #2, #3) and two different inverter technologies (#1 for plants #1 and #2, #2 for plant #3). In Figure 9 the classification metrics, as well as the number of detected faults, are shown as a function of the time prior to the fault occurrence for the highest-frequency failure class of PV plants #1, #2 and #3 at a fixed inverter module. As can be seen in Figure 9a, when thousands of occurrences are available, an outstanding prediction capability is achieved, with sensitivity decreasing down to values of almost 50-60% seven days in advance. Consistent with the state of the art [4], prediction performances generally degrade much faster, on time horizons ranging from 1 hour to 12 hours, for low-frequency failures with fewer than 100 fault instances available (Figure 9b), with some exceptions depending on the correlations between predictors and faults (Figure 9c).
In Figure 10 a global FPM performance overview of the inverter park is shown for the classes AC Switch Open, DC Ground Fault and Thermal Fault (Low Ambient Temperature) of PV plant #1, computed 3 days prior to the failure, and for the class Input Over-Current of PV plant #2, calculated 2 hours before the damage. In general, the FPM is capable of detecting incipient faults for up to three or four fault classes per inverter module. Figure 10a highlights, however, a strong correlation between the FPM performances and failure-data availability, with sensitivity (red bars) degrading dramatically from 80% (inverter A) to roughly 30-40% (inverters B-F), on average, when the number of fault instances (black curve) decreases from thousands to one hundred or fewer.
Accuracy (green bars) stays above 80% thanks to the good classification performance on negative samples. Figures 10b-d confirm the previous conclusions: when few fault instances (roughly 30-40) are available, sensitivity is generally satisfactory only on very short time horizons (2 hours in Figure 10d), but in some cases even longer ones (3 days in Figures 10b-c), depending on the strength of the correlations between predictors and failure data.
FINAL REMARKS
An original methodology to predict inverter faults at two different levels of detail (prediction of a status/fault, prediction of a specific fault) has been presented and validated on SCADA data collected from 2014 to 2016 on a park of six PV plants up to 10 MW.
The results demonstrate that the proposed SDM effectively anticipates high-frequency inverter failures up to almost 7 days in advance, with sensitivity up to 95% and specificity of almost 80%. The SDM also guarantees early detection for unpredictable failures. The FPM likewise exhibits excellent predictive capability for high-frequency fault classes up to 7 days prior to the damage but, depending on the fault statistics available, sensitivity may also degrade on horizons of a few hours. The combination of these two prediction modules can therefore enable PV system operators to move from a traditional reactive maintenance activity towards a proactive maintenance strategy, improving the decision-making process thanks to complete information on the incoming failure before the fault occurs. The model is currently being tested for on-line monitoring of anomalies in Romania and Greece and can easily be deployed on new PV plants thanks to the limited amount of information required.
Next steps may include the introduction of a deep-learning-based automated system for fault detection in drone-based thermal images of PV modules [9] and the integration of the predictive model into a smart solar-monitoring software suite, including an intervention management system integrated with alarm handling and a business-intelligence-based reporting tool from intervention up to portfolio level [10].
ACKNOWLEDGMENTS
The authors thank Prof. Mauro Tucci and Prof. Emanuele Crisostomi from the DESTEC Department of the University of Pisa for fruitful discussions and helpful suggestions.
Figure 1: workflow of the model: SDM (SOM-based) and FPM (NN-based), from SCADA input tags to model outputs available in an operator dashboard.

Figure 2: normalized SOM cell occupancies in the training (a) and test (b) phases, considering the full training and test phases. The larger the occupancy difference in (b), the smaller the contribution to the KPI value (Eq. (3)): when the training and test occupancies of a cell strongly differ (one large and the other small, or vice versa), the corresponding term tends to 0. From a physical point of view, Eq. (3) is a robust indicator detecting changes in the underlying non-linear dynamics of the generation unit from its normal status, represented by KPI = 1. To remove the seasonality pattern from Eq. (3), due to the time-varying number of daytime instances, a KPI de-trending procedure has been followed, by means of a linear regression of both signals to compute the KPI trend and selecting the de-trended component.

Figure 3: position of the PV plants. Plants #2 and #3 in Greece are close to each other and their corresponding circles overlap in the figure.

Figure 4: raw (sky-blue) and filtered (blue) KPIs, warning levels (green) and fault-occurrence percentage (red, on the right side) as a function of time for INV. A.3 of PV plant #1 (9.8 MW) and technology #1. Warnings 1 (a) to 4 (b) are shown from top left to bottom right for the period October 2014 to August 2015.

Figure 5: radar chart of the classification metrics, expressed as a percentage 0-100% (ACC: accuracy, SEN: sensitivity, SPE: specificity), computed at 3 and 7 days prior to the fault occurrences for inverters A-F of PV plant #1. Inverters G-H are neglected since no faults happened.

Figure 6: energy-yield time series w.r.t. the ideal case with (blue curve) or without (black) the predictive service: the yellow area represents the energy gain when enabling the SDM. The fault-occurrence percentage is also shown on the right (red). Inset: energy yield in the ideal case, as well as with or without the SDM (INV. A.3 of plant #1).

Figure 7: raw and filtered KPIs, as well as normalized fault occurrences and warning levels 1 (a) and 4 (b), for INV. B.1 of PV plant #2 (2.8 MW) and technology #1. (c): energy-yield time series with respect to the ideal case with or without the SDM enabled, as well as the fault-occurrence percentage as a function of time. Inset: energy yield in the ideal case, and with or without the SDM.

Figure 8: raw and filtered KPIs, as well as normalized fault occurrences and warning levels 1 (a) - 4 (d), for INV. 3.5 of PV plant #4 (4.9 MW) and technology #3.

Figure 9: classification metrics (bar plot, on the left) and number of faults and of detected faults (black and blue curves, on the right, respectively) as a function of the time in advance. (a): fault class AC Switch Open (plant #1); (b): fault class Input Over-Current (plant #2); (c): fault class AC Switch Fault (plant #3).

Figure 10: classification metrics (bar plot, on the left) and number of fault instances (black curve, on the right) as a function of the inverter module for fault classes (a) AC Switch Open, (b) DC Ground Fault, (c) Thermal Fault (Low Tamb) of plant #1, computed at t-3Days, and (d) for class Input Over-Current of plant #2, calculated at t-2Hours.
Table 1: list of tested PV plants.

Plant Number | # of Inverter Modules | Inverter Manufacturer Number | Max Active Power (kW) | Plant Nominal Power (MW)
1 | 35 | 1 | 385 | 9.8
2 | 7 | 1 | 385 | 2.8
3 | 2 | 2 | 731.6 | 1.63
4 | 25 | 3 | 183.4 | 4.9
5 | 34 | 3 | 183.5 | 6.0
6 | 10 | 3 | 183.4 | 1.99
Table 2: electrical and environmental input SCADA data (GHI (GTI): Global Horizontal (Tilted) Irradiance).

DC Electrical tags | AC Electrical tags | Temperature tags | Irradiance tags
Current (IDC) | Current (IAC) | Internal (Tint) | GTI
Voltage (VDC) | Voltage (VAC) | Panel (Tmod) | GHI
Power (PDC) | Power (PAC) | Ambient (Tamb) | —
Table 3: rules for switching on the 4 warning levels (w) of the SDM (d: day). Once the de-trended KPI is available, a low-pass MA filter with a 4-week-long window is applied; finally, the four warning levels of different severity are computed based on threshold crossings and time-persistence rules.

Warning Level (w) | Crossing of Threshold | KPI Derivative | Persistence
1 | 3 | < 0 | 1 d
2 | 3 | < 0 | 2 d of w 1
3 | 5 | < 0 | 1 d
4 | 5 | < 0 | 2 d of w 3
[1] U. Jahn, M. Herz, E. Ndrio, D. Moser et al., Minimizing Technical Risks in Photovoltaic Projects, www.solarbankability.eu
[2] A. Woyte, M. Richter, D. Moser, N. Reich et al., Systems Report IEA-PVPS T13-03:2014 (March 2014)
[3] S. Stettler, P. Toggweiler, E. Wiemken, W. Heydenreich et al., 20th EUPVSEC (Barcelona, Spain, 2005) 2490-2493
[4] A. Kusiak, W. Li, Renewable Energy 36 (2011) 16-23
[5] K. Kim, G. Parthasarathy, O. Uluyol, W. Foslien et al., Proceedings of 2011 Energy Sustainability Conference and Fuel Cell Conference (2011) 1-9
[6] F. A. Olivencia Polo, J. Ferrero Bermejo, J. F. Gomez Fernandez, A. Crespo Marquez, Renewable Energy 81 (2015) 227-238
[7] M. Tucci, M. Raugi, Neurocomputing 74 (2011) 1815-1822
[8] M. Fuentes, G. Nofuentes, J. Aguilera, D. L. Talavera, M. Castro, Solar Energy 81 (2007) 1396-1408
[9] V. Jiri, R. Ilja, K. Jakub, P. Tomas, 32nd EUPVSEC (2016) 1931-1935
[10] White Paper: Beyond standard monitoring practice, www.3e.eu
| [] |
[
"Properties of real metallic surfaces: Effects of density functional semilocality and van der Waals nonlocality"
] | [
"Abhirup Patra \nDepartment of Physics\nTemple University\nPA-19122Philadelphia\n",
"Jefferson E Bates \nDepartment of Physics\nTemple University\nPA-19122Philadelphia\n",
"Jianwei Sun \nDepartment Of Physics\nUniversity of Texas-El Paso\nTX-79902El Paso\n",
"John P Perdew \nDepartment of Physics & Chemistry\nTemple University\nPA-19122Philadelphia\n"
] | [
"Department of Physics\nTemple University\nPA-19122Philadelphia",
"Department of Physics\nTemple University\nPA-19122Philadelphia",
"Department Of Physics\nUniversity of Texas-El Paso\nTX-79902El Paso",
"Department of Physics & Chemistry\nTemple University\nPA-19122Philadelphia"
] | [] | We have computed the surface energies, work functions, and interlayer surface relaxations of clean (111), (110), and (100) surfaces of Al, Cu, Ru, Rh, Pd, Ag, Pt, and Au. Many of these metallic surfaces have technological or catalytic applications. We compare experimental reference values to those of a family of non-empirical semilocal density functionals from the basic local density approximation (LDA) to our most advanced, general-purpose meta-generalized gradient approximation, SCAN. The closest agreement within experimental uncertainty is achieved by the simplest density functional LDA, and by the most sophisticated general-purpose one, SCAN+rVV10. The long-range van der Waals interaction incorporated through rVV10 increases the surface energies by about 10%, and the work functions by about 3%. LDA works for metal surfaces through a stronger-than-usual error cancellation. The PBE generalized gradient approximation tends to underestimate both surface energies and work functions, yielding the least accurate results. Interlayer relaxations from different functionals are in reasonable agreement with one another, and usually with experiment. arXiv:1702.08515v3 [cond-mat.mtrl-sci] | 10.1073/pnas.1713320114 | [
"https://arxiv.org/pdf/1702.08515v3.pdf"
] | 21,373,946 | 1702.08515 | edf73c2c630536416178970c6547c137f167548e |
Properties of real metallic surfaces: Effects of density functional semilocality and van der Waals nonlocality
(Dated: May 19, 2017)
Abhirup Patra
Department of Physics
Temple University
PA-19122Philadelphia
Jefferson E Bates
Department of Physics
Temple University
PA-19122Philadelphia
Jianwei Sun
Department Of Physics
University of Texas-El Paso
TX-79902El Paso
John P Perdew
Department of Physics & Chemistry
Temple University
PA-19122Philadelphia
We have computed the surface energies, work functions, and interlayer surface relaxations of clean (111), (110), and (100) surfaces of Al, Cu, Ru, Rh, Pd, Ag, Pt, and Au. Many of these metallic surfaces have technological or catalytic applications. We compare experimental reference values to those of a family of non-empirical semilocal density functionals from the basic local density approximation (LDA) to our most advanced, general-purpose meta-generalized gradient approximation, SCAN. The closest agreement within experimental uncertainty is achieved by the simplest density functional LDA, and by the most sophisticated general-purpose one, SCAN+rVV10. The long-range van der Waals interaction incorporated through rVV10 increases the surface energies by about 10%, and the work functions by about 3%. LDA works for metal surfaces through a stronger-than-usual error cancellation. The PBE generalized gradient approximation tends to underestimate both surface energies and work functions, yielding the least accurate results. Interlayer relaxations from different functionals are in reasonable agreement with one another, and usually with experiment. arXiv:1702.08515v3 [cond-mat.mtrl-sci]
I. INTRODUCTION
The rapid development of electronic structure theory has made it easier to analyze and describe complex metallic surfaces [1], but understanding the underlying physics behind surface energies, work functions, and interlayer relaxations has remained a long-standing challenge [2]. Metallic surfaces are of particular importance because of their wide range of applications including metal-molecule junctions, junction field-effect transistors, and in catalysis [3][4][5][6][7][8]. A detailed knowledge of the electronic structure is required for accurate theoretical investigations of metallic surfaces [9,10].
Consequently, metal surfaces have played a key role in the development and application of Kohn-Sham density functional theory (KS DFT) [11]. The work of Lang and Kohn [12][13][14] in the early 1970's demonstrated the ability of the simple local density approximation (LDA) [11,15] for the exchange-correlation (xc) energy to capture the surface energies and work functions of real metals. Their work stimulated the effort to understand why simple approximate functionals work and how they can be improved [16,17]. Later, correlated-wavefunction calculations [18,19] gave much higher surface energies for jellium, but were not supported by further studies [20,21] and were eventually corrected by a painstaking Quantum Monte Carlo calculation [22]. The too-low surface energies from the Perdew-Burke-Ernzerhof [23] (PBE) generalized-gradient approximation (GGA) led in part to the AM05 [24] and PBEsol [25] (PBE for solids) GGAs, and to general-purpose meta-GGAs that remain computationally efficient, including the recent strongly constrained and appropriately normed (SCAN) meta-GGA [26,27]. SCAN captures intermediate-range van der Waals (vdW) interactions, but capturing longer-ranged vdW interactions requires the addition of a non-local vdW correction such as from the revised Vydrov-Van Voorhis 2010 (rVV10) functional [28].
Ref. 29 suggests that the vdW interaction is semilocal at short and intermediate range, but displays pairwise full nonlocality at longer ranges, and many-body full nonlocality [30] at the longest and least energetically important distances. Accounting for intermediate- and long-range vdW interactions is especially important for layered materials [31][32][33] and ionic solids [34,35]. Van der Waals interactions are also needed to correct the errors of GGAs for bulk metallic systems [35]. The importance of the vdW contribution to surface properties is something we emphasize below. By naturally accounting for both intermediate- and long-range interactions, SCAN+rVV10 [29] represents a major improvement over previous functionals for many properties of diversely-bonded systems [27]; however, it had not been tested for real metallic surfaces. By studying metallic surfaces with this general-purpose functional we can better understand why LDA can be accidentally accurate, and demonstrate the systematic improvement of SCAN over other non-empirical functionals. Furthermore, we will also be able to extract the impact of intermediate- and long-range dispersion.
The surface energy is the amount of energy required per unit area to cleave an infinite crystal and create a new surface [12]. Accurate theoretical face-dependent surface energies are straightforward to obtain from accurate bulk and surface calculations, since we have absolute control over morphology and purity. Experimentally, however, surface energies have been determined by measuring the surface tension of the liquid metal and then extrapolating to 0 K using a phenomenological method [36]. The surface tension of the liquid phase is generally different from the actual surface free energy of the solid-state metal and can be considered an "average" surface energy.
Available experimental values are also rather old (1970-1980). They provide useful but uncertain estimates for the low-energy faces of bulk crystals. With this in mind, one should be careful when comparing defect-free theoretical predictions with experimental data.
The work function, on the other hand, is easier to measure than the surface energy [37]. One can consider the work function measured from a polycrystalline sample [38] as a "mean" work function of all the crystallographic faces present in the sample. In a recent work [39], Helander et al. argued that ultraviolet photoemission spectroscopy can provide an accurate measurement of the work function of materials, despite the chance of contamination of the exposed surface. However, it remains an open question which experimental work function value theoretical results should be compared with. In practice the work function can be calculated within DFT by accurately determining the Fermi energy and the vacuum potential [13,14] of the surface slab. Since the work function depends on slab thickness, the role of quantum size effects may need to be considered.
Another fundamental property of surfaces is the geometric relaxation of the top surface layer, driven by a strong inward electrostatic attraction that arises mainly from charge-density smoothing at the surface. This effect can be accurately measured experimentally using low-energy electron diffraction (LEED) [36,40] intensity analysis. The role played by the xc functional in surface relaxations was unclear and worth exploring in more detail.
In spite of the theoretical challenges of modeling and explaining these metallic surface properties, density functional theory [11,26,27,41,42] has proven to be one of the leading electronic structure methods for understanding the characteristics of metal surfaces. Lang and Kohn [12][13][14] and Skriver and Rosengaard [43] reported the surface energies and work functions of close-packed metal surfaces from across the periodic table using Green's function techniques based on linear muffin-tin orbitals within the tight-binding and atomic-sphere approximations. In another work, Perdew et al. used the stabilized jellium and liquid drop model (SJM-LDM) [44] to understand the dependence of the surface energies and work functions of simple metals on the bulk electron density as well as on the atomic configuration of the exposed crystal face. Developing functionals that are accurate for surfaces has been an active area of recent research [25,45,46].
Although previous works [16,47,48] gave very reasonable descriptions of fundamental metallic surfaces, we must consider the limitations of local and semilocal xc density functionals for metal surfaces [49]. Wang et al. [50] calculated surface energies and work functions of six close-packed fcc and bcc metal surfaces using LDA and PBE. Their study confirms the face-dependence of the surface energy and work function. Singh-Miller and Marzari [51] used PBE to study surface relaxations, surface energies, and work functions of the low-index metallic surfaces of Al, Pt, Pd, Au and Ti. Ref. 51 found that LDA qualitatively agrees with the experimental surface energies, but that neither LDA nor PBE can be considered a default choice for quantitative comparison with experimental surface properties. Following their suggestion, we will demonstrate that higher rungs of Jacob's ladder in DFT [3], such as meta-GGAs or the random phase approximation [45], must be used to accurately study surface properties.
In this work we investigated the surface energies, work functions and interlayer relaxations of the low-index clean metallic surfaces of Al, Pt, Pd, Cu, Ag, Au, Rh and Ru. Here we focus on three main crystallographic faces, (111), (110), and (100), to explore the face-dependence of the surface properties [52]. Furthermore, we have explored the xc-functional dependence to demonstrate the improvements non-empirical meta-GGAs can achieve compared to GGAs. We utilized the following approximations: LDA [11], the PBE generalized gradient approximation and its modification for solids, PBEsol [25,53], and the newly constructed meta-GGAs SCAN and SCAN+rVV10. Pt (111) was used as a test case to explore the convergence of the surface energy and work function with respect to slab thickness, kinetic energy cutoff, and k-points; see Appendix VI C.
II. THEORY
A. Surface energy
The surface energy, σ, can be defined as the energy per atom, or per area, required to split an infinite crystal into two semi-infinite crystals with two equivalent surfaces [54],
σ = (1/2A) [E_Slab − (N_Slab/N_Bulk) E_Bulk],   (1)
where E_Slab is the total energy of the slab containing N_Slab atoms, E_Bulk is the total energy of the bulk cell containing N_Bulk atoms, and A is the surface area of the slab. The factor of 1/2 in the above equation comes from the fact that each slab is bounded by two symmetric surfaces. Da Silva et al. [2] showed that a dense k-mesh can be used to avoid numerical instabilities in Eq. (1) coming from using different numbers of atoms in the slab and bulk calculations [56]. The linear-fit method is one way to find converged values of the surface energies from slab calculations [55]. Previously, Fiorentini et al. [57] applied this method to the Pt (100) surface. We have applied this method to obtain converged values of the surface energies using equivalent cutoff energies and dense k-meshes for bulk and surface calculations. In the limit of a large number of layers N, one can write the slab energy as
E_Slab ≈ N E_Bulk + 2σA,   (2)
so that the surface energy can be extracted from an extrapolation of the slab energy with respect to the layer thickness.
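As an illustration of this extrapolation, the sketch below fits hypothetical slab energies to Eq. (2); the layer counts, energies, and cell area are made-up placeholders, not values from this work:

```python
import numpy as np

# Hypothetical total energies (eV) of slabs with N = 4..8 layers (one atom
# per layer); purely illustrative numbers, constructed to be exactly linear.
N = np.array([4, 5, 6, 7, 8])
E_slab = np.array([-23.10, -29.25, -35.40, -41.55, -47.70])  # eV
A = 6.65  # assumed surface area of the 1x1 surface cell, in Angstrom^2

# Fit E_slab(N) = N*E_bulk + 2*sigma*A (Eq. 2): the slope recovers the bulk
# energy per atom, the intercept gives twice the surface energy times the area.
slope, intercept = np.polyfit(N, E_slab, 1)
sigma = intercept / (2.0 * A) * 16.0218  # convert eV/Angstrom^2 -> J/m^2
print(f"E_bulk = {slope:.2f} eV/atom, sigma = {sigma:.2f} J/m^2")
```

Using the fitted slope in place of a separately converged bulk energy avoids the instability of Eq. (1) that arises when the bulk and slab calculations are not equally well converged.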
B. Work function
The work function of a metallic system can be determined computationally by taking the difference between the vacuum potential and the Fermi energy [13]:
φ = V_vacuum − E_Fermi.   (3)
The reported anisotropy of the work function implies that it can depend on the particular face due to edge effects [58]. Different surfaces have different electron densities at their edges, which produce different surface dipole barriers D; D is explicitly related to V_vacuum. Face-dependent values of D can thus yield different work functions, since the Fermi energy is solely a bulk property.
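A minimal sketch of Eq. (3) applied to a toy planar-averaged potential; all numbers below are invented for illustration, whereas in practice V(z) and the Fermi energy come from the slab calculation:

```python
import numpy as np

# Toy planar-averaged electrostatic potential V(z) along the surface normal:
# a flat vacuum plateau far from the slab and a well inside the slab region.
z = np.linspace(0.0, 30.0, 301)                    # Angstrom
V = np.where((z > 8.0) & (z < 22.0), -8.0, 4.5)    # eV (illustrative)
E_fermi = -1.1                                     # eV (illustrative)

# The vacuum level is the plateau value far from both surfaces; here we
# average V over the first 2 Angstrom of the cell, deep in the vacuum.
V_vacuum = V[z < 2.0].mean()
phi = V_vacuum - E_fermi                           # Eq. (3)
print(f"work function = {phi:.2f} eV")
```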
C. Surface relaxations
Surface relaxations arise from the minimization of the energy at the surface and can be quantified using the simple formula d_ij% = (d_ij − d_0)/d_0 × 100, where d_ij = |d_i − d_j| is the spacing between the ith and jth layers of the relaxed slab (d_i being the distance of the ith layer from the top layer) and d_0 is the interlayer spacing of the unrelaxed slab.
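For concreteness, the relaxation percentages can be evaluated from the layer positions of a relaxed slab as below; the spacings are invented placeholder numbers, not data from this work:

```python
# Unrelaxed (bulk) interlayer spacing, in Angstrom (assumed value).
d0 = 2.30

# Distances of the top four layers from the top of a relaxed slab (toy data).
z_layers = [0.00, 2.10, 4.42, 6.71]

def relaxation_percent(z, d0):
    """d_ij% = (d_ij - d0)/d0 * 100 for consecutive layer pairs j = i+1."""
    return [(abs(z[i + 1] - z[i]) - d0) / d0 * 100.0 for i in range(len(z) - 1)]

d12, d23, d34 = relaxation_percent(z_layers, d0)
print(f"d12% = {d12:+.1f}, d23% = {d23:+.1f}, d34% = {d34:+.1f}")
```

With this sign convention a negative value means an inward contraction of the outer layer, as found for most of the metals in Tables VI-VIII.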
III. RESULTS & DISCUSSION
A. Surface energy
The surface energies measured experimentally [36] are usually "average" surface energies over all the surfaces present in the sample; hence experimentally measured surface energies should be compared with the average of the surface energies of the (111), (110), and (100) faces [61]. Here we also use the average surface energies to compare with the experimentally measured values, but from a different perspective. LDA is known to yield accurate surface energies for jellium, within the uncertainty of the latest QMC values [62], and an equally-weighted average over the three lowest-index faces from LDA reproduces the experimental surface energies to within their uncertainties. The accuracy of LDA has often been justified through an excellent error cancellation between its exchange and correlation contributions: the LDA exchange contribution to the surface energy is usually an overestimate, while the correlation contribution is a significant underestimate, and their combination results in an accurate prediction.
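The equally-weighted face average used for comparison with experiment is simply (the numbers below are placeholders, not our computed values):

```python
# Equally weighted average over the three low-index faces, compared with a
# single "average" experimental surface energy; all values are illustrative.
sigma = {"111": 1.20, "110": 1.42, "100": 1.35}  # J/m^2, hypothetical
sigma_avg = sum(sigma.values()) / len(sigma)

expt = 1.30  # hypothetical experimental reference, J/m^2
print(f"average surface energy = {sigma_avg:.2f} J/m^2")
print(f"deviation from experiment = {sigma_avg - expt:+.2f} J/m^2")
```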
In Table I we report the mean surface energies calculated using different density functionals, including results from the random phase approximation (RPA) [60]. Figure 1 (left) shows the relative error (in J/m²) of the computed average surface energies compared to the best available experimental result for each metal. The consistent performance of SCAN+rVV10 can be seen in all cases, whereas PBE and SCAN both perform poorly. The RPA results are overall in good agreement with experiment, but the computational cost is higher and the improvement only marginal. One can argue that SCAN+rVV10 is the "best" candidate for predicting metallic surface energies, given its moderate computational cost and high accuracy.
The relative errors and mean absolute percentage errors for the computed average surface energies are shown in Figure 1. The relative errors are calculated with respect to the average experimental value over the three crystallographic surfaces. Our results agree within an acceptable margin with those previously reported in the literature [50,51,61,63]. For Al, the left-hand plot illustrates the accuracy of all methods for simple metals that are close to the jellium limit. Table II demonstrates that there is an overall systematic improvement from PBE to SCAN to SCAN+rVV10 in the Al surface energy due to the step-wise incorporation of intermediate-range dispersion in SCAN and long-range dispersion in rVV10. The long-range contributions from rVV10 in Al account for 12% of the total surface energy, foreshadowing the importance of including this contribution for the d metals. However, we find that the dispersion contribution from SCAN+rVV10 to the total surface energy can be as large as 18%.
Transition metal surfaces are more challenging due to their localized d orbitals, which cause inhomogeneities in the electron density at the surface. These inhomogeneities lead to a wider spread in the results from the different functionals. PBE yields the largest errors for the transition metal surface energies due to its parametrization for slowly varying bulk densities. PBEsol was instead fit to jellium surface exchange-correlation energies and yields a significant improvement over PBE. To improve the results further, vdW interactions need to be incorporated. SCAN was constructed to interpolate the xc enhancement factor between the covalent and metallic bonding limits in order to deliver an improved description of intermediate-range vdW interactions. Consequently, SCAN is more accurate than PBE but not than PBEsol, since no information about surfaces was used in SCAN's construction. Still, even without parameterizing to jellium surfaces, SCAN and PBEsol are similar and exhibit analogous trends for all of the metals. With the addition of long-range vdW from rVV10, SCAN+rVV10 surpasses the accuracy of PBEsol, indicating that the total vdW contribution to the surface energy is more important than previously recognized, since SCAN alone is unable to outperform PBEsol.
Although LDA does not explicitly include vdW interactions, we infer that portions of the long-range part are somehow captured through error cancellation in the exchange and correlation contributions. RPA, which also includes vdW interactions, tends to overestimate the surface energies. This is somewhat expected based upon the results for jellium slabs [64,65], where the xc contributions to the surface energy show similar relative trends between the (111), (110), and (100) surfaces compared to experiment [36,59].
The right-hand plot in Figure 1 shows the mean absolute percentage errors (MAPE). SCAN+rVV10 is clearly the best semilocal density functional, though LDA is a close second. Incorporating vdW interactions is important for describing the interactions of clean metallic surfaces with their surroundings, and SCAN+rVV10 can be expected to perform more systematically than LDA for a broader set of properties. Though RPA provides a good additional benchmark when experimental data are scarce, its higher computational cost limits its applicability to general surface problems and reinforces the utility of a functional such as SCAN+rVV10, which is accurate, efficient, and naturally incorporates dispersion.
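The two error measures plotted in Figure 1 can be reproduced as follows; the calculated and experimental numbers here are placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical average surface energies (J/m^2) for three metals.
calc = np.array([1.27, 1.75, 1.25])  # one functional (illustrative)
expt = np.array([1.14, 1.79, 1.25])  # experimental references (illustrative)

re = calc - expt                                      # relative error, J/m^2 (Fig. 1, left)
mape = np.mean(np.abs((calc - expt) / expt)) * 100.0  # percent (Fig. 1, right)
print("RE (J/m^2):", np.round(re, 2))
print(f"MAPE = {mape:.1f}%")
```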
Table II and Figure 2 illustrate the detailed performance of each method for each crystallographic face. SCAN+rVV10 frequently overlaps with LDA, while the systematic underestimation of the surface energies by PBE is easy to see. We find excellent agreement of our PBEsol results with those of Sun et al. [66], and our LDA and PBE values and trends are in good agreement with others recently reported [50,51,61]. The general trend σ111 < σ100 < σ110 can be seen in Fig. 2 for Al, Cu, Ag, Pt and Au. However, this trend is broken for Ru, Rh and Pd.
B. Work function
The relative errors and MAPE with respect to experiment for the work functions of the (111) surfaces are plotted in Figure 3. Tabulated values of the work function for each face can be found in Table III, and are plotted in Fig. 4. Since we could not find experimental references for Ru (110) and (100), we instead focus on the (111) surface for simplicity. The performance trends for (111) generally hold for the other crystallographic faces as well. Our results for LDA and PBE are generally within ≈ 0.15 eV of those reported in the literature [51,70]. For Al, LDA overestimates the work function of the (111) surface by 0.1 eV, but matches experiment for the other two faces. PBE and SCAN perform similarly for Al, but show larger deviations from one another for the d-block metals. PBEsol and SCAN+rVV10 yield the smallest errors for Al. The effect of geometric relaxation on the work function was not explored, but is likely negligible. Figure 3 also shows the relative errors in the calculated work functions of the transition metals. These systems have entirely or partly filled d orbitals which are localized on the atoms.
Hybridization between the d and s orbitals varies with the crystallographic orientation resulting in changes in the surface-dipole and, consequently, the work function. The redistribution of the d electrons in noble metals also impacts the surface energy due to changes in the bulk Fermi energy, and these changes vary from one face to another [71].
The differences in the work function values predicted by different functionals originate from the different Fermi energies predicted by each functional. The lower Fermi energy predicted by LDA results in an overestimated work function, since the average electrostatic potentials calculated with different functionals differ from each other by less than 0.2 eV. PBE and SCAN predict comparatively larger Fermi energies, leading to underestimated work functions. On the other hand, SCAN+rVV10 not only lowers the Fermi energy but also increases the average electrostatic potential compared to SCAN, and thus gives a much better work function. From Fig. 3 it is clear that PBE systematically underestimates the (111) work functions and that its accuracy is erratic. The underestimation of the (111) work function by PBE persists for the other faces as well. For Al and Cu PBE is accurate, but it shows much larger errors for the other metals. For Al and Ru, PBE and SCAN are fortuitously close, though not for any particular physical reason, and in general SCAN improves upon PBE through its incorporation of vdW contributions to the surface potentials. Though PBEsol and SCAN incorporate different physical limits in their construction, their overall performance for work functions is quite similar, and typically the errors from these functionals are within the experimental uncertainties. They also outperform LDA for the work functions, which was not the case for the surface energies above.
The inclusion of intermediate-range vdW interactions is not enough, however, as the long-range contributions can still raise the work function by an appreciable amount. The (111) surface of Ru is one such case, where the addition of rVV10 to SCAN increases the work function by nearly 0.3 eV, significantly reducing the error compared to experiment. Incorporating the long-range dispersion amounts to between 3 and 6% of the total work function for the (111) surfaces, underscoring the importance of its inclusion. Though LDA and SCAN+rVV10 were of similar quality for the surface energies, SCAN+rVV10 clearly takes the top spot for computing accurate work functions of the (111) surfaces. We note that the trend φ110 < φ100 < φ111 predicted by Smoluchowski [58] is not observed for Ru, Rh and Pt, but is observed for the other metals.
The photoelectrons ejected from the metal experience an image potential, which in DFT is influenced by the behavior of the exchange-correlation potential (functional). The attractive dispersion interaction lowers this potential, systematically increasing the work function. By incorporating a long-range contribution to the potential from rVV10, SCAN+rVV10 systematically and accurately predicts work functions within the experimental uncertainties. Addition of rVV10 to the GGAs would likely reduce their errors as well, provided the bare functional underestimates the experimental reference, but it would worsen the LDA results for all but Rh. The systematic behavior of SCAN for diversely bonded systems lends itself to correction by rVV10, achieving a well-balanced performance for both surface and bulk [26,27] properties.
Since the surface energies are calculated using the relaxed slab model, the effect of surface relaxations is already incorporated in those calculations.
C. Surface relaxation
At the surface, the presence of fewer neighboring ions can change the equilibrium positions of the ions through changes in the interatomic forces. Surface reconstructions have been measured for Ag, Pt and Au [94][95][96]. In such measurements surface reconstructions are possible, since the top layers rearrange in order to reach a new equilibrium position, which can also change the work function. Nevertheless, we can always compare the performance of one functional against another using the best available experimental values.
Tables VI, VII, and VIII show the tabulated values of the percentage relaxation for the top four layers of the three surfaces. Different exchange-correlation functionals may predict different interlayer relaxations compared to the experimental data [40,[97][98][99][100][101]. It is important to note that for d23% and d34% we have found only a few experimental results to compare with.
In most cases SCAN+rVV10 and SCAN predict reasonably accurate interlayer relaxations in comparison to the experimental results. However, the values for the (100) surfaces of Cu, Pd and Ag calculated using SCAN+rVV10 and SCAN are much lower than the experimental results; only PBEsol predicts a reliable value of d12% in these cases. Notice that in many cases the percentage interlayer relaxations calculated by LDA are much larger than the experimental values. Moreover, Tables VI, VII, and VIII also show that the LDA and PBE results calculated in this work are in agreement with Ref. 51.
IV. SUMMARY
We studied three important surface properties of metals using the local density approximation, two generalized gradient approximations (PBE and PBEsol), and a new meta-GGA (SCAN) with and without a van der Waals correction. The surface energy, work function and interlayer relaxation were calculated and compared with the best available experimental values. The choice of exchange-correlation potential has a noticeable effect on the surface properties of metals, especially on the surface energy [102]. The performance of SCAN is comparable to PBEsol, which is a boon for meta-GGA development, since existing GGAs struggle compared to LDA in predicting good surface energies and work functions [51]. Van der Waals interactions are present at metallic surfaces and make a non-negligible contribution to the surface energy and work function, but have a smaller impact on the interlayer relaxation. Ferri et al. [103] also found that van der Waals corrections can improve surface properties. Van der Waals interactions provide an attractive interaction between the two halves of a bulk solid as they are separated, so vdW interactions tend to increase the surface energy. Typically vdW forces lower the energy of a neutral solid more than that of a charged system, so vdW interactions also tend to increase the work function. LDA overestimates the intermediate-range vdW attraction but has no long-range component; these two errors of LDA may cancel almost perfectly for surface properties. Although it is well understood that LDA predicts an exponential decay rather than the true asymptotic form of the surface barrier potential, the discrepancy only matters for cases where the surface states extend into the vacuum, as in photoemission and scanning tunneling microscopy. Surface energies and work functions are more or less properties of the local electron density at the surface, and hence LDA can give excellent results.
PBE underestimates the intermediate-range vdW and has no long-range vdW, so it underestimates surface energies and work functions. PBEsol and SCAN have realistic intermediate-range vdW and no long-range vdW, so they are more accurate than PBE but not as good as LDA for predicting surface properties. The asymptotic long-range vdW interactions missing in semilocal functionals can make up to a 10% difference in the surface energy or work function, which implies there is a limit to the accuracy of these methods. SCAN+rVV10 stands out in this regard as a balanced combination of the most advanced non-empirical semilocal functional to date and the flexible non-local vdW correction from rVV10. In addition to delivering superior performance for layered materials [33], SCAN delivers high-quality surface energies, work functions, and surface relaxations for metallic surfaces. SCAN+rVV10 includes realistic intermediate- and long-range vdW interactions, so it tends to yield more systematic and accurate results than LDA, PBEsol, or SCAN. Accurate measurements of these properties are needed in order to validate the performance of new and existing density functionals. Overall we find that SCAN is a systematic step up in accuracy from PBE, and that adding rVV10 to SCAN yields a highly accurate method for diversely bonded systems.
VI. APPENDICES
A. Stabilized jellium model
The jellium model (JM) studied in Refs. 12, 13, and 118 is a simple model of surface properties in which the positive ionic charge is replaced by a uniform positive background truncated at a planar surface. Although the jellium model shows "universality" in predicting the r_s dependence of metallic surface properties, it is not perfect. It has the following defects:

1. Negative surface energy for r_s ≈ 2 [12].
2. Negative bulk modulus for r_s ≈ 6 [119].

These defects are corrected in the SJM using a "structureless pseudopotential" [120]. The SJM treats the "differential potential" between the pseudopotential of the lattice and the electrostatic potential of the uniform positive background perturbatively, adapting the idea that each bulk ion belongs to a neutral Wigner-Seitz sphere of radius r_0 = z^(1/3) r_s.

We performed first-principles density functional theory (DFT) calculations using the VASP package [123] in combination with the projector augmented wave (PAW) method [124,125]. For both bulk and surface computations, a maximum kinetic energy cutoff of 700 eV was used for the plane-wave expansion. The Brillouin zone was sampled using Γ-centered k-meshes of size 16 × 16 × 16 for the bulk and 16 × 16 × 1 for the surfaces. The top few layers in the slab were fully relaxed until the energy and forces were converged to 0.001 eV and 0.02 eV/Å, respectively. Dipole corrections were employed to cancel the errors in the electrostatic potential, atomic forces, and total energy caused by the periodic boundary conditions.
For the slab geometry, 20 Å of vacuum was used to reduce the Coulomb interaction between the actual surface and its periodic image. For the (111) surfaces, a hexagonal cell was used with one atom per layer. The same procedure was employed with a tetragonal cell for the (100) surfaces and an orthorhombic cell for the (110) surfaces. The cells were built using the theoretical lattice constants obtained by fitting the Birch-Murnaghan (BM) equation of state for the bulk with each functional; see Tab. XV. We do not consider exchange-correlation contributions to the planar-averaged local electrostatic potential (V_vacuum). We used Pt (111) to test the convergence of the surface properties with respect to computational variables such as the k-mesh, the cutoff energy, and the layer and vacuum thicknesses of the slab geometry. All the computed surface properties presented in this work are well converged with respect to these variables.
FIG. 1. The relative error (RE) of the average surface energy (left) and the mean absolute percentage error (MAPE) of the surface energies (right) for each functional.

FIG. 2. σ111 (left), σ110 (middle), and σ100 (right) for the selected metals in this work.

FIG. 3. The relative errors in the work functions (top) predicted by each functional for the (111) surfaces. Mean absolute percentage errors (MAPE) of the calculated work functions (bottom) for the same systems.

FIG. 4. φ111 (left), φ110 (middle), and φ100 (right) for the surfaces studied in this work.
TABLE I. Mean surface energies of the (111) [σ111], (110) [σ110] and (100) [σ100] surfaces of different metals, in J/m².
TABLE VII. Inter-layer relaxations for the (110) surfaces of different metals.
PWPP-LDA ; Ref. 111
h LDA-SGF ; Ref. 67
i LEED ; Ref. 95
j DFT (LDA & PBE) ; Ref. 106

TABLE VIII. Inter-layer relaxations for the (100) surfaces of different metals.
Hex XRD ; Ref. 100
g PBE ; Ref. 51
h PWPP-LDA ; Ref. 117
i LDA-SGF ; Ref. 67
j PWPP-LDA ; Ref. 69
k DFT (LDA & PBE) ; Ref. 106

B. Jellium surface energies from semilocal density functionals

Jellium surface energies for different r_s depend on the semilocal exchange-correlation functional. In this section, we tabulate the calculated values of jellium surface energies for LDA, PBE, PBEsol and SCAN. We mention here that the rVV10 long-range correction to SCAN is less important for the jellium surface than for a real surface. This preserves the fact that the jellium surface is an appropriate norm for SCAN itself.

TABLE XI. Exchange energies ([σ_EX − σ_Exact-EX]/σ_Exact-EX) of the jellium surface for different values of r_s.

TABLE XII. Error in exchange energies of the jellium surface for different values of r_s, calculated for the values in Table XI.

TABLE XIII. Exchange-correlation energies ([σ_XC − σ_Exact-XC]/σ_Exact-XC) of the jellium surface for different values of r_s.

TABLE XIV. Error in exchange-correlation energies of the jellium surface for different values of r_s, calculated for the values in Table XIII.
Metals  Quantity   LDA      PBE      PBEsol   SCAN     SCAN+rVV10   LDA (other work)   PBE (other work)   Expt.
Al      d12%        1.64     1.46     1.55     1.81     1.89        +1.35 g            +1.04 f            1.7 ± 0.3 i
        d23%       -0.66    -0.72    -0.73    -1.27    -1.27        +0.54 g            -0.54 f           -0.5 ± 0.7 i
        d34%        0.1      0.07     0.09     0.17     0.16        +1.04 g            +0.19 f
Cu      d12%       -0.44    -0.34    -0.39    -0.39    -0.51                                             -0.7 ± 0.5 b
        d23%       -5.43     0.01     0.1     -0.1      0.14
        d34%       -4.8     -0.01    -0.1      0.13    -0.09
Ru      d12%      -16.73   -18.41   -19.84   -19.82   -17.26
        d23%      -15.35    -9.58   -11.39   -12.06   -15.8
        d34%       -3.74    -8.43     5.84     6.11    -4.01
Rh      d12%       -1.24    -1.93    -1.56    -1.53    -1.37
        d23%       -0.5     -0.89    -0.36    -0.2     -0.72
        d34%       -1.11     1.05     1.07     1.16     1.29
Pd      d12%       -0.45    -0.45     0.55     0.91     1.07        -0.22 g            +0.25 f           +1.3 ± 1.3 c
        d23%       -0.52    -0.24    -0.3     -0.49    -0.42        -0.53 g            -0.34 f           -1.3 ± 1.3 c
        d34%       -0.48     0.15     0.11     0.2      0.14        -0.33 g            +0.10 f
Ag      d12%        0.15    -0.15    -0.07    -0.43     0.24        -0.53 h            -0.3 h            -0.5 ± 0.3 d
        d23%       -0.11    -0.3     -0.07    -0.12    -0.27        -0.07 h            -0.04 h           -0.4 ± 0.4 d
        d34%       -0.14    -0.15    -0.8     -0.24    -0.49         0.22 h             0.16 h            0 ± 0.4 d
Pt      d12%        1.05     0.89     0.79     2.48     2.67         0.88 g            +0.85 f           +1.1 ± 4.4 e
        d23%       -0.32    -0.71    -0.65    -0.39    -0.15        -0.22 g            -0.56 f
        d34%        0.14    -0.04    -0.03    -0.62    -0.61        -0.17 g            -0.15 f
Au      d12%       -0.42     0.99     0.76     1.09     1.5          0.8 g             -0.04 f
        d23%       -0.58    -0.75    -0.65    -0.78    -0.82        -0.3 g             -1.86 f
        d34%       -0.24    -0.29    -0.18    -0.27    -0.31                           -1.4 f

TABLE VI. Inter-layer relaxations for the (111) surfaces of different metals.
a LEED ; Ref. 40
b LEED ; Ref. 104
c LEED ; Ref. 105
d LEED ; Ref. 101
e LEED ; Ref. 99
f PBE ; Ref. 51
g FLAPW-LDA ; Ref. 2
h DFT (LDA & PBE) ; Ref. 106
surface
JM
SJM
SJ-LDM
Expt
Al
111
−.605 a
0.953
1.096 b
1.14 a
TABLE IX. Surface energies (J/m 2 ) of (111)surfaces of Al
from different jellium models.
surface
JM
SJM
SJ-LDM
Expt
Al
111
3.74 a
4.24
4.09 b
4.3 a
TABLE X. Tabulated values of work function (eV) of (111)
surface of Al using different jellium models.
a Ref. 48 and 121
b Ref. [122]
Metals Surface
LDA
PBE
PBEsol SCAN
SCAN
LDA
PBE
Expt.
+rVV10 (Other work) (Other work)
Al
d12%
-7.04
-7.27
-6.84
-8.86
-8.44
−6.9 g
−5.59 f
−8.5 ± 1.0 a
d23%
5.28
4.1
3.99
6.05
3.88
+2.2 f
+5.5 ± 1.1 a
d34%
-1.02
-0.86
-0.91
-1.12
-0.79
+2.2 ± 1.3 g
−1.29 f
+2.2 ± 1.3 a
Cu
d12%
-9.9
-9.98
-10.07
-11.75
-11.17
−10 ± 2.5 b
d23%
5.26
4.81
5.00
5.82
5.73
0 ± 2.5 b
d34%
-2.91
-1.18
-0.99
-3.89
-2.6
Ru
d12%
-18.44
-19.8
-14.7
-15.75
-17.2
d23%
-9.65
-6.23
-8.55
-9.41
-9.24
d34%
-2.01
0.93
1.64
-0.88
-1.85
Rh
d12%
-14.2
-10.54
-8.97
-11.33
-11.25
d23%
2.74
2.49
3.09
2.87
3.57
d34%
-1.45
1.41
1.91
3.26
3.89
Pd
d12%
-6.88
-5.38
-8.88
-9.5
-7.6
−5.3 g
−8.49 f
−5.8 ± 2.2 c
d23%
4.03
3.84
4.11
4.73
4.00
+3.47 f
+1.0 ± 2.2 c
d34%
-0.35
-0.21
-0.3
-0.28
-0.44
−0.19 f
Ag
d12%
-7.71
-6.87
-8.84
-8.81
-7.38
−8.81 j
−9.19 j
−7.8 ± 2.5 d
d23%
4.71
3.69
4.45
3.99
4.16
3.59 j
4.1 j
d34%
-1.07
-0.97
-1.23
-0.86
-0.42
−1.11 j
−1.5 j
Pt
d12%
-16.47 -17.15
-16.1
-24.5
-23.3
−15.03 f
−18.5 ± 2.2 e
d23%
8.96
10.08
8.85
14.37
14.55
+7.61 f
−24.2 ± 4.3 e
d34%
-1.82
-1.91
-1.82
-2.53
-2.23
−1.7 f
Au
d12%
-14.08 -13.87
-13.52
-14.54
-14.09
−9.8 g
−12.94 f
−20.1 ± 3.5 i
d23%
9.01
9.24
8.71
10.11
10.14
−7.8 g
+7.83 f
−6.2 ± 3.5 i
d34%
-4.00
-3.29
-3.58
-4.17
-3.68
−0.8 g
−2.66 f
a LEED ; Ref. 107
b LEED ; Ref. 108
c LEED ; Ref. 109
d LEED ; Ref. 110
e LEED ; Ref. 99
f PBE ; Ref. 51
g Metals
Surface
LDA
PBE
PBEsol
SCAN
SCAN
LDA
PBE
Expt.
+rVV10
(Other work)
(Other work)
Al
d12%
1.22
1.18
0.9
1.08
1.26
+0.5 g
+1.73 g
1.8 a
d23%
-0.44
-0.65
-0.38
0.0
-0.06
0.47 g
d34%
-0.32
-0.26
-0.24
-0.67
-0.69
−0.27 g
Cu
d12%
-2.88
-2.48
-2.18
-3.98
-3.19
−1.1 ± 0.4 b
d23%
-0.41
1.22
1.15
-0.25
-0.75
1.7 ± 0.6 b
d34%
0.0
0.11
0.03
-0.13
-0.03
Ru
d12%
-17.71
-13.12
-15.79
-15.85
-15.65
d23%
-2.24
0.4
-1.69
-2.76
-2.52
d34%
0.83
3.29
1.32
1.47
1.39
Rh
d12%
-7.71
-4.13
-3.63
-4.45
-4.43
d23%
-2.39
0.47
0.81
1.37
1.83
d34%
-2.23
0.92
1.14
0.98
1.32
Pd
d12%
-0.69
-1.13
-1.17
-0.94
-0.9
−0.6 i
−1.3 g
3.0 ± 1.5 c
d23%
0.28
0.24
0.43
0.32
0.21
0.0 g
+1.0 ± 1.5 c
d34%
-0.54
0.26
0.23
0.49
0.59
+0.35 g
Ag
d12%
-1.13
-1.74
-1.41
-1.78
-1.16
−1.81 k
−1.87 k
0 ± 1.5 d
d23%
0.87
0.79
0.69
0.89
0.83
0.56 k
0.51 k
d34%
0.17
0.2
0.16
-0.06
0.28
0.42 k
0.3 k
Pt
d12%
-2.61
-2.2
-3.84
-4.29
-3.72
−2.37 g
+0.2 ± 2.6 e
d23%
-0.04
0.03
0.38
-0.96
-0.96
−0.55 g
d34%
-1.66
-1.35
-1.39
-1.37
-0.77
+0.29 g
Au
d12%
0.88
0.52
0.6
0.54
0.53
−1.2 g
−1.51 g
−20 ± 3 f
d23%
-0.721
-0.75
-0.65
-0.78
-0.82
0.4 g
0.33 g
+2 ± 3 f
d34%
0.44
0.24
0.17
0.22
0.19
0.24 g
a LEED ; Ref. 112
b LEED ; Ref. 113
c LEED ; Ref. 114
d LEED ; Ref. 115
e LEED ; Ref. 116
f rs
Exact
LDA
PBE
PBEsol
SCAN
(erg/cm 2 )
2
2624
15.7
-7.2
1.6
0.3
3
526
27.2
-11.6
2.7
-7.0
4
157
42.7
-18.5
3.2
-19.1
6
22
15.7
-46.4
4.1
-71.4
TABLE
XI.
Exchange
energies
([σEX −
Error
LDA
PBE
PBEsol
SCAN
ME (Ha)
160.89
-72.3
15.53
-18.66
MARE (Ha)
160.89
-72.3
15.53
22.60
MRE (%)
45.83
-20.93
2.90
-24.30
MARE (%)
45.83
-20.93
2.90
24.45
RMSD (%)
36.31
17.61
1.04
32.40
rs TDDFT
DMC
LDA PBE PBEsol SCAN
(erg/cm 2 )
2
3446
3392 ± 50 -3.2
-5.8
-2.7
-0.7
3
797
768 ± 10 -4.1
-7.0
-2.9
-1.1
4
278
261 ± 8
-4.1
-9.4
-4.0
-1.6
6
58
53
-6.1 -10.3
-2.9
1.6
Error
LDA
PBE
PBEsol
SCAN
ME (Ha)
-41.38
-72.23
-32.37
-9.14
MARE (Ha)
41.38
72.23
32.37
9.60
MRE (%)
-5.5
-20.93
-3.13
-0.45
MARE (%)
5.5
-20.93
3.13
1.25
RMSD (%)
2.4
17.61
0.59
1.42
C. Computational details:
TABLE XV. Calculated values of the lattice constants of the metals. Experimental data are taken form ref126 and 127.Metals
LDA
PBE
PBEsol
SCAN
SCAN+rVV10
Experimental
Al
3.981
4.034
4.008
4.004
3.996
4.018
Cu
3.524
3.631
3.561
3.558
3.545
3.595
Ru
c=4.265
c=4.269
c=4.267
c=4.265
c=4.266
c=4.281
c/a =1.571
c/a=1.574
c/a=1.573
c/a=1.572
c/a=1.575
c/a= 1.582
Rh
3.751
3.825
3.8
3.784
3.773
3.794
Pd
3.834
3.935
3.866
3.896
3.877
3.876
Ag
4.001
4.145
4.079
4.087
4.060
4.062
Pt
3.897
3.967
3.919
3.896
3.888
Au
4.052
4.156
4.079
4.087
4.073
4.063
79 11 Experiment ; Ref. 80 12 Experiment ; Ref. 81 13 Experiment ; Ref. 82 14 FLAPW-LDA ; Ref. 2 15 Experiment ; Ref. 83 16 Experiment ; Ref. 84 17 Experiment ; Ref. 85 18 Experiment ; Ref. 86 19 Experiment. Full potential LMTO. Ref. 87 20 Experiment ; Ref. 88 21 PWPP-LDA ; Ref. 68 22 Experiment ; Ref. 89 23 Experiment ; Ref. 90 24 Experiment ; Ref. 91 25 PWPP-LDA ; Ref. 71 26 Experiment ; Ref. 92 27 Experiment ; Ref. 93Full potential LMTO ; Ref. 67 10 Experiment ; Ref. 79 11 Experiment ; Ref. 80 12 Experiment ; Ref. 81 13 Experiment ; Ref. 82 14 FLAPW-LDA ; Ref. 2 15 Experiment ; Ref. 83 16 Experiment ; Ref. 84 17 Experiment ; Ref. 85 18 Experiment ; Ref. 86 19 Experiment ; Ref. 87 20 Experiment ; Ref. 88 21 PWPP-LDA ; Ref. 68 22 Experiment ; Ref. 89 23 Experiment ; Ref. 90 24 Experiment ; Ref. 91 25 PWPP-LDA ; Ref. 71 26 Experiment ; Ref. 92 27 Experiment ; Ref. 93
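The summary statistics in Tables XII and XIV can be reproduced from the per-r_s relative errors in Tables XI and XIII. A minimal sketch in Python, assuming MRE and MARE are the mean and mean-absolute of the tabulated relative errors, and that the RMSD row is the sample standard deviation (n − 1) of those errors — an inference from the tabulated numbers, not a definition stated in the text:

```python
import math

def summary_stats(rel_errors):
    """Mean relative error, mean absolute relative error, and
    sample standard deviation (n - 1) of relative errors in %."""
    n = len(rel_errors)
    mre = sum(rel_errors) / n
    mare = sum(abs(e) for e in rel_errors) / n
    rmsd = math.sqrt(sum((e - mre) ** 2 for e in rel_errors) / (n - 1))
    return mre, mare, rmsd

# PBEsol exchange-energy errors vs. the exact jellium surface
# exchange energy at r_s = 2, 3, 4, 6 (Table XI): 1.6, 2.7, 3.2, 4.1 %
mre, mare, rmsd = summary_stats([1.6, 2.7, 3.2, 4.1])
print(round(mre, 2), round(mare, 2), round(rmsd, 2))  # → 2.9 2.9 1.04
```

The printed values match the PBEsol column of Table XII (MRE 2.90, MARE 2.90, RMSD 1.04); the PBE column of Table XI likewise reproduces MRE −20.93 and RMSD 17.61.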
[1] A. Gross, Theoretical Surface Science, Vol. 1 (Springer, 2014).
[2] J. L. F. Da Silva, C. Stampfl, and M. Scheffler, "Converged properties of clean metal surfaces by all-electron first-principles calculations," Surface Science 600 (2006).
[3] J. P. Perdew, "Climbing the ladder of density functional approximations," MRS Bulletin 38 (2013).
[4] J. M. White, "Physical and chemical properties of thin metal overlayers and alloy surfaces," MRS Proceedings 83 (1986).
[5] J. Greeley, "Theoretical heterogeneous catalysis: scaling relationships and computational catalyst design," Annual Review of Chemical and Biomolecular Engineering 7 (2016).
[6] T. Sperger, I. A. Sanhueza, I. Kalvet, and F. Schoenebeck, "Computational studies of synthetically relevant homogeneous organometallic catalysis involving Ni, Pd, Ir, and Rh: An overview of commonly employed DFT methods and mechanistic insights," Chemical Reviews 115 (2015).
[7] G. J. Kroes and C. Diaz, "Quantum and classical dynamics of reactive scattering of H2 from metal surfaces," Chemical Society Reviews 45 (2016).
[8] J. K. Nørskov, T. Bligaard, J. Rossmeisl, and C. H. Christensen, "Towards the computational design of solid catalysts," Nature Chemistry 1 (2009).
[9] J. E. Inglesfield, "Surface electronic structure," Reports on Progress in Physics 45 (1982).
[10] V. Sahni and A. Solomatin, Density Functional Theory, Advances in Quantum Chemistry, Vol. 33 (Academic Press, 1998).
[11] W. Kohn and L. J. Sham, "Self-consistent equations including exchange and correlation effects," Physical Review 140 (1965).
[12] N. D. Lang and W. Kohn, "Theory of metal surfaces: Charge density and surface energy," Physical Review B 1 (1970).
[13] N. D. Lang and W. Kohn, "Theory of metal surfaces: Work function," Physical Review B 3 (1971).
[14] N. D. Lang and W. Kohn, "Theory of metal surfaces: Induced surface charge and image potential," Physical Review B 7 (1973).
[15] J. P. Perdew and Y. Wang, "Accurate and simple analytic representation of the electron-gas correlation energy," Physical Review B 45 (1992).
[16] D. C. Langreth and J. P. Perdew, "Exchange-correlation energy of a metallic surface: Wave-vector analysis," Physical Review B 15 (1977).
[17] D. C. Langreth and J. P. Perdew, "Theory of nonuniform electronic systems. I. Analysis of the gradient approximation and a generalization that works," Physical Review B 21 (1980).
[18] E. Krotscheck, W. Kohn, and G.-X. Qian, "Theory of inhomogeneous quantum systems. IV. Variational calculations of metal surfaces," Physical Review B 32 (1985).
[19] P. H. Acioli and D. M. Ceperley, "Diffusion Monte Carlo study of jellium surfaces: Electronic densities and pair correlation functions," Physical Review B 54 (1996).
[20] L. M. Almeida, J. P. Perdew, and C. Fiolhais, "Surface and curvature energies from jellium spheres: Density functional hierarchy and Quantum Monte Carlo," Physical Review B 66 (2002).
[21] L. A. Constantin, J. M. Pitarke, J. F. Dobson, A. Garcia-Lekue, and J. P. Perdew, "High-level correlated approach to the jellium surface energy, without uniform-gas input," Physical Review Letters 100 (2008).
[22] B. Wood, N. D. M. Hine, W. M. C. Foulkes, and P. García-González, "Quantum Monte Carlo calculations of the surface energy of an electron gas," Physical Review B 76 (2007).
[23] J. P. Perdew, K. Burke, and M. Ernzerhof, "Generalized gradient approximation made simple," Physical Review Letters 77 (1996).
[24] R. Armiento and A. E. Mattsson, "Functional designed to include surface effects in self-consistent density functional theory," Physical Review B 72 (2005).
[25] J. P. Perdew, A. Ruzsinszky, G. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou, and K. Burke, "Restoring the density-gradient expansion for exchange in solids and surfaces," Physical Review Letters 100 (2008).
[26] J. Sun, A. Ruzsinszky, and J. P. Perdew, "Strongly constrained and appropriately normed semilocal density functional," Physical Review Letters 115 (2015).
[27] J. Sun, R. C. Remsing, Y. Zhang, Z. Sun, A. Ruzsinszky, H. Peng, Z. Yang, A. Paul, U. Waghmare, X. Wu, M. L. Klein, and J. P. Perdew, "Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional," Nature Chemistry 8 (2016).
[28] R. Sabatini, T. Gorni, and S. de Gironcoli, "Nonlocal van der Waals density functional made simple and efficient," Physical Review B 87 (2013).
[29] H. Peng, Z. H. Yang, J. P. Perdew, and J. Sun, "Versatile van der Waals density functional based on a meta-generalized gradient approximation," Physical Review X 6 (2016).
[30] A. Ambrosetti, N. Ferri, R. A. DiStasio, and A. Tkatchenko, "Wavelike charge density fluctuations and van der Waals interactions at the nanoscale," Science 351 (2016).
[31] S. Lebègue, J. Harl, T. Gould, J. G. Ángyán, G. Kresse, and J. F. Dobson, Phys. Rev. Lett. 105, 196401 (2010).
[32] T. Björkman, A. Gulans, A. V. Krasheninnikov, and R. M. Nieminen, "Van der Waals bonding in layered compounds from advanced density-functional first-principles calculations," Phys. Rev. Lett. 108, 235502 (2012).
[33] H. Peng, Z. H. Yang, J. Sun, and J. P. Perdew, "SCAN+rVV10: A promising van der Waals density functional," arXiv:1510.05712 (2015).
[34] F. Zhang, J. D. Gale, B. P. Uberuaga, C. R. Stanek, and N. A. Marks, "Importance of dispersion in density functional calculations of cesium chloride and its related halides," Phys. Rev. B 88, 054112 (2013).
[35] J. Tao, F. Zhen, J. Gebhardt, J. P. Perdew, and A. M. Rappe, "Screened van der Waals correction to density functional theory for solids," submitted.
[36] W. R. Tyson and W. A. Miller, "Surface free energies of solid metals: Estimation from liquid surface tension measurements," Surface Science 62 (1977).
[37] M. Rohwerder and F. Turcu, "High-resolution Kelvin probe microscopy in corrosion science: Scanning Kelvin probe force microscopy (SKPFM) versus classical scanning Kelvin probe (SKP)," Electrochimica Acta 53 (2007).
[38] J. Hölzl and F. K. Schulte, Solid Surface Physics (Springer Berlin Heidelberg, 1979), Chap. Work function of metals.
[39] M. G. Helander, M. T. Greiner, Z. B. Wang, and Z. H. Lu, "Pitfalls in measuring work function using photoelectron spectroscopy," Applied Surface Science 256 (2010).
[40] J. R. Noonan and H. L. Davis, "Confirmation of an exception to the general rule of surface relaxations," Journal of Vacuum Science & Technology A: Vacuum, Surfaces, and Films 8 (1990).
[41] J. K. Nørskov, F. Abild-Pedersen, F. Studt, and T. Bligaard, "Density functional theory in surface chemistry and catalysis," Proceedings of the National Academy of Sciences 108 (2011).
[42] J. Tao, J. P. Perdew, V. N. Staroverov, and G. E. Scuseria, "Climbing the density functional ladder: Nonempirical meta-generalized gradient approximation designed for molecules and solids," Physical Review Letters 91 (2003).
[43] H. L. Skriver and N. M. Rosengaard, "Surface energy and work function of elemental metals," Physical Review B 46 (1992).
[44] J. P. Perdew, Y. Wang, and E. Engel, "Liquid-drop model for crystalline metals: Vacancy-formation, cohesive, and face-dependent surface energies," Physical Review Letters 66 (1991).
[45] D. C. Langreth and J. P. Perdew, "The exchange-correlation energy of a metallic surface," Solid State Communications 17 (1975).
[46] V. N. Staroverov, G. E. Scuseria, J. Tao, and J. P. Perdew, "Tests of a ladder of density functionals for bulk solids and surfaces," Physical Review B 69 (2004).
[47] J. M. Pitarke and J. P. Perdew, "Metal surface energy: Persistent cancellation of short-range correlation effects beyond the random phase approximation," Physical Review B 67 (2003).
[48] C. Fiolhais and J. P. Perdew, "Energies of curved metallic surfaces from the stabilized-jellium model," Physical Review B 45 (1992).
[49] A. Stroppa and G. Kresse, "The shortcomings of semilocal and hybrid functionals: What we can learn from surface science studies," New Journal of Physics 10 (2008).
[50] J. Wang and S. Wang, "Surface energy and work function of fcc and bcc crystals: Density functional study," Surface Science 630 (2014).
[51] N. E. Singh-Miller and N. Marzari, "Surface energies, work functions, and surface relaxations of low-index metallic surfaces from first principles," Physical Review B 80 (2009).
[52] R. Tran, Z. Xu, B. Radhakrishnan, D. Winston, W. Sun, K. A. Persson, and S. P. Ong, "Surface energies of elemental crystals," Scientific Data 3 (2016).
[53] L. A. Constantin, J. P. Perdew, and J. M. Pitarke, "Exchange-correlation hole of a generalized gradient approximation for solids and surfaces," Physical Review B 79 (2009).
[54] A. Zangwill, Physics at Surfaces (Cambridge University Press, 1988).
[55] J. Boettger, J. R. Smith, U. Birkenheuer, N. Rösch, S. B. Trickey, J. R. Sabin, and S. P. Apell, "Extracting convergent surface formation energies from slab calculations," Journal of Physics: Condensed Matter 10, 893 (1998).
[56] J. C. Boettger, "Nonconvergence of surface energies obtained from thin-film calculations," Physical Review B 49 (1994).
[57] V. Fiorentini and M. Methfessel, "Extracting convergent surface energies from slab calculations," Journal of Physics: Condensed Matter 8 (1996).
[58] R. Smoluchowski, "Anisotropy of the electronic work function of metals," Physical Review 60 (1941).
[59] F. R. De Boer, W. C. M. Mattens, R. Boom, A. R. Miedema, and A. K. Niessen, Cohesion in Metals (North-Holland, 1988).
[60] J. E. Bates, N. Sengupta, and A. Ruzsinszky, in preparation.
[61] S. De Waele, K. Lejaeghere, M. Sluydts, and S. Cottenier, "Error estimates for density-functional theory predictions of surface energy and work function," Physical Review B 94 (2016).
[62] B. Wood, N. D. M. Hine, W. M. C. Foulkes, and P. García-González, "Quantum Monte Carlo calculations of the surface energy of an electron gas," Physical Review B 76 (2007).
[63] L. Vitos, A. V. Ruban, H. L. Skriver, and J. Kollár, "The surface energy of metals," Surface Science 411 (1998).
[64] J. M. Pitarke and A. G. Eguiluz, "Surface energy of a bounded electron gas: Analysis of the accuracy of the local-density approximation via ab initio self-consistent-field calculations," Physical Review B 57 (1998).
[65] S. Kurth and J. P. Perdew, "Density-functional correction of random-phase-approximation correlation with results for jellium surface energies," Physical Review B 59 (1999).
[66] J. Sun, M. Marsman, A. Ruzsinszky, G. Kresse, and J. P. Perdew, "Improved lattice constants, surface energies, and CO desorption energies from a semilocal density functional," Physical Review B 83 (2011).
[67] M. Methfessel, D. Hennig, and M. Scheffler, "Trends of the surface relaxations, surface energies, and work functions of the 4d transition metals," Physical Review B 46 (1992).
[68] A. Y. Lozovoi and A. Alavi, "Reconstruction of charged surfaces: General trends and a case study of Pt(110) and Au(110)," Physical Review B 68 (2003).
[69] B. D. Yu and M. Scheffler, "Physical origin of exchange diffusion on fcc (100) metal surfaces," Physical Review B 56 (1997).
[70] C. J. Fall, N. Binggeli, and A. Baldereschi, "Theoretical maps of work-function anisotropies," Physical Review B 65 (2001).
[71] C. J. Fall, N. Binggeli, and A. Baldereschi, "Work-function anisotropy in noble metals: Contributions from d states and effects of the surface atomic structure," Physical Review B 61 (2000).
[72] C. J. Fall, N. Binggeli, and A. Baldereschi, "Anomaly in the anisotropy of the aluminum work function," Physical Review B 58 (1998).
[73] J. K. Grepstad, P. O. Gartland, and B. J. Slagsvold, "Anisotropic work function of clean and smooth low-index faces of aluminium," Surface Science 57 (1976).
[74] G. N. Derry, M. E. Kern, and E. H. Worth, "Recommended values of clean metal surface work functions," Journal of Vacuum Science & Technology A: Vacuum, Surfaces, and Films 33 (2015).
[75] R. M. Eastment and C. H. B. Mee, "Work function measurements on (100), (110) and (111) surfaces of aluminium," Journal of Physics F: Metal Physics 3 (1973).
[76] G. A. Haas and R. E. Thomas, "Work function and secondary emission studies of various Cu crystal faces," Journal of Applied Physics 48 (1977).
[77] D. Y. Li and W. Li, "Electron work function: A parameter sensitive to the adhesion behavior of crystallographic surfaces," Applied Physics Letters 79 (2001).
[78] P. O. Gartland, S. Berge, and B. J. Slagsvold, "Photoelectric work function of a copper single crystal for the (100), (110), (111), and (112) faces," Physical Review Letters 28 (1972).
[79] H. B. Michaelson, "The work function of the elements and its periodicity," Journal of Applied Physics 48 (1977).
[80] P. Brault, H. Range, J. P. Toennies, and Ch. Wöll, "The low temperature adsorption of oxygen on Rh(111)," Zeitschrift für Physikalische Chemie 198 (1997).
[81] R. Vanselow and X. Q. D. Li, "The work function of kinked areas on clean, thermally rounded Pt and Rh crystallites: its dependence on the structure of terraces and edges," Surface Science 264 (1992).
[82] D. E. Peebles, H. C. Peebles, and J. M. White, "Electron spectroscopic study of the interaction of coadsorbed CO and D2 on Rh(100) at low temperature," Surface Science 136 (1984).
[83] R. Fischer, S. Schuppler, N. Fischer, Th. Fauster, and W. Steinmann, "Image states and local work function for Ag/Pd(111)," Physical Review Letters 70 (1993).
[84] W. Sesselmann, B. Woratschek, J. Küppers, G. Ertl, and H. Haberland, "Interaction of metastable noble-gas atoms with transition-metal surfaces: Resonance ionization and Auger neutralization," Physical Review B 35 (1987).
[85] J. G. Gay, J. R. Smith, F. J. Arlinghaus, and T. W. Capehart, "Electronic structure of palladium (100)," Physical Review B 23 (1981).
[86] H. E. Farnsworth and R. P. Winch, "Photoelectric work functions of (100) and (111) faces of silver single crystals and their contact potential difference," Physical Review 58 (1940).
[87] K. Giesen, F. Hage, F. J. Himpsel, H. J. Riess, and W. Steinmann, "Binding energy of image-potential states: Dependence on crystal structure and material," Physical Review B 35 (1987).
[88] K. Giesen, F. Hage, F. J. Himpsel, H. J. Riess, and W. Steinmann, "Binding energy of image-potential states: Dependence on crystal structure and material," Physical Review B 35 (1987).
[89] M. Salmerón, S. Ferrer, M. Jazzar, and G. A. Somorjai, "Photoelectron-spectroscopy study of the electronic structure of Au and Ag overlayers on Pt(100), Pt(111), and Pt(997) surfaces," Physical Review B 28 (1983).
[90] B. E. Nieuwenhuys, "Influence of the surface structure on the adsorption of hydrogen on platinum, as studied by field emission probe-hole microscopy," Surface Science 59 (1976).
[91] R. Drube, V. Dose, and A. Goldmann, "Empty electronic states at the (1 × 1) and (5 × 20) surfaces of Pt(100): An inverse photoemission study," Surface Science 197 (1988).
[92] D. Pescia and F. Meier, "Spin polarized photoemission from gold using circularly polarized light," Surface Science 117 (1982).
[93] G. V. Hansson and S. A. Flodström, "Photoemission study of the bulk and surface electronic structure of single crystals of gold," Physical Review B 18 (1978).
[94] G. V. Hansson and S. A. Flodström, "Photoemission study of the bulk and surface electronic structure of single crystals of gold," Physical Review B 18 (1978).
[95] K. P. Bohnen and K. M. Ho, "Surface structure of gold and silver (110)-surfaces," Electrochimica Acta 40 (1995).
[96] D. L. Abernathy, D. Gibbs, G. Grubel, K. G. Huang, S. G. J. Mochrie, A. R. Sandy, and D. M. Zehner, "Reconstruction of the (111) and (001) surfaces of Au and Pt: thermal behavior," Surface Science 283 (1993).
Core-and valence-band energy-level shifts in small two-dimensional islands of gold deposited on Pt(100): The effect of step-edge, surface, and bulk atoms. M Salmerón, S Ferrer, M Jazzar, G A , Physical Review B. 28SomorjaiM. Salmerón, S. Ferrer, M. Jazzar, and G. A. Somor- jai, "Core-and valence-band energy-level shifts in small two-dimensional islands of gold deposited on Pt(100): The effect of step-edge, surface, and bulk atoms," Phys- ical Review B 28 (1983).
. J Burchhardt, M M Nielsen, D L Adams, E Lundgren, J N Andersen, Physical Review B. 50J. Burchhardt, M. M. Nielsen, D. L. Adams, E. Lund- gren, and J. N. Andersen, Physical Review B 50 (1994).
Quantitative analysis of low-energy-electron diffraction: Application to Pt(111). D L Adams, H B Nielsen, M A Van Hove, Physical Review B. 20D. L. Adams, H. B. Nielsen, and M. A. Van Hove, "Quantitative analysis of low-energy-electron diffrac- tion: Application to Pt(111)," Physical Review B 20 (1979).
Structure and phases of the Au(001) surface: Absolute x-ray reflectivity. B M Ocko, D Gibbs, K G Huang, D M Zehner, S G J Mochrie, Physical Review B. 44B. M. Ocko, D. Gibbs, K. G. Huang, D. M. Zehner, and S. G. J. Mochrie, "Structure and phases of the Au(001) surface: Absolute x-ray reflectivity," Physical Review B 44 (1991).
Leed intensity analysis of the surface structures of Pd (111) and of co adsorbed on Pd (111) in a ( √ 3 × √ 3)R30 0 arrangement. H Ohtani, M A Van Hove, G A Somorjai, Surface Science. 187H. Ohtani, M.A. Van Hove, and G.A. Somorjai, "Leed intensity analysis of the surface structures of Pd (111) and of co adsorbed on Pd (111) in a ( √ 3 × √ 3)R30 0 arrangement," Surface Science 187 (1987).
Image charge at a metal surface. A E Mohammed, V Sahni, Physical Review B. 31A E. Mohammed and V. Sahni, "Image charge at a metal surface," Physical Review B 31 (1985).
Electronic properties of molecules and surfaces with a self-consistent interatomic van der Waals density functional. N Ferri, R A DistasioJr, A Ambrosetti, R Car, A Tkatchenko, Physical review letters. 114N. Ferri, R A. DiStasio Jr, A. Ambrosetti, R. Car, and A. Tkatchenko, "Electronic properties of molecules and surfaces with a self-consistent interatomic van der Waals density functional," Physical review letters 114 (2015).
Low-energy electron diffraction from Cu(111): Subthreshold effect and energy-dependent inner potential; surface relaxation and metric distances between spectra. S Å Lindgren, L Walldén, J Rundgren, P Westrin, Physical Review B. 29S.Å. Lindgren, L. Walldén, J. Rundgren, and P. Westrin, "Low-energy electron diffraction from Cu(111): Subthreshold effect and energy-dependent in- ner potential; surface relaxation and metric distances between spectra," Physical Review B 29 (1984).
Low-energy electron diffraction study of the thermal expansion of Ag(111). E A Soares, G S Leatherman, R D Diehl, M A Van Hove, Surface Science. 468E A. Soares, G S. Leatherman, R D. Diehl, and M A. Van Hove, "Low-energy electron diffraction study of the thermal expansion of Ag(111)," Surface Science 468 (2000).
Structural and electronic properties of silver surfaces: ab initio pseudopotential density functional study. Y Wang, W Wang, K Fan, J Deng, Surface Science. 490Y. Wang, W. Wang, K. Fan, and J. Deng, "Structural and electronic properties of silver surfaces: ab initio pseudopotential density functional study," Surface Sci- ence 490 (2001).
Truncation-induced multilayer relaxation of the A1(110) surface. J R Noonan, H L Davis, Physical Review B. 29J. R. Noonan and H. L. Davis, "Truncation-induced multilayer relaxation of the A1(110) surface," Physical Review B 29 (1984).
Determination of a Cu(110) surface contraction by LEED intensity analysis. H L Davis, J R Noonan, L H Jenkins, Surface Science. 83H.L. Davis, J.R. Noonan, and L.H. Jenkins, "Determi- nation of a Cu(110) surface contraction by LEED inten- sity analysis," Surface Science 83 (1979).
A LEED structural study of the Pd (110)-(1×1) surface and an alkali-metal-induced (1×2) surface reconstruction. C J Barnes, M Q Ding, M Lindroos, R D Diehl, D A King, Surface Science. 162C.J. Barnes, M.Q. Ding, M. Lindroos, R.D. Diehl, and D.A. King, "A LEED structural study of the Pd (110)- (1×1) surface and an alkali-metal-induced (1×2) surface reconstruction," Surface Science 162 (1985).
Oscillatory relaxation of the Ag(110) surface. Y Kuk, L C Feldman, Physical Review B. 30Y. Kuk and L. C. Feldman, "Oscillatory relaxation of the Ag(110) surface," Physical Review B 30 (1984).
Quantum-size effect in thin Al(110) slabs. A Kiejna, J Peisert, P Scharoch, Surface Science. 432A. Kiejna, J. Peisert, and P. Scharoch, "Quantum-size effect in thin Al(110) slabs," Surface Science 432 (1999).
Anomalous interplanar expansion at the (0001) surface of Be. H L Davis, J B Hannon, K B Ray, E W Plummer, Physical Review Letters. 68H. L. Davis, J. B. Hannon, K. B. Ray, and E. W. Plum- mer, "Anomalous interplanar expansion at the (0001) surface of Be," Physical Review Letters 68 (1992).
Multilayer relaxation in metallic surfaces as demonstrated by LEED analysis. H L Davis, J R Noonan, Surface Science. 126H.L. Davis and J.R. Noonan, "Multilayer relaxation in metallic surfaces as demonstrated by LEED analysis," Surface Science 126 (1983).
Anomalous multilayer relaxation of Pd(001). J Quinn, Y S Li, D Tian, H Li, F Jona, P M Marcus, Physical Review B. 42J. Quinn, Y. S. Li, D. Tian, H. Li, F. Jona, and P. M. Marcus, "Anomalous multilayer relaxation of Pd(001)," Physical Review B 42 (1990).
Low-energy electron diffraction and photoemission study of epitaxial films of Cu on Ag(001). H Li, J Quinn, Y S Li, D Tian, F Jona, P M Marcus, Physical Review B. 43H. Li, J. Quinn, Y. S. Li, D. Tian, F. Jona, and P. M. Marcus, "Low-energy electron diffraction and photoe- mission study of epitaxial films of Cu on Ag(001)," Physical Review B 43 (1991).
Surface relaxation of the platinum (100)(1×1) surface at 175 K. J A Davies, T E Jackman, D P Jackson, P R Norton, Surface Science. 109J.A. Davies, T.E. Jackman, D.P. Jackson, and P.R. Norton, "Surface relaxation of the platinum (100)(1×1) surface at 175 K," Surface Science 109 (1981).
Experimental and theoretical surface core-level shifts of aluminum (100) and (111). M Borg, M Birgersson, M Smedh, A Mikkelsen, D L Adams, R Nyholm, C Almbladh, N Jesper, Physical Review B. 69AndersenM. Borg, M. Birgersson, M. Smedh, A. Mikkelsen, D L. Adams, R. Nyholm, C. Almbladh, and Jesper N. An- dersen, "Experimental and theoretical surface core-level shifts of aluminum (100) and (111)," Physical Review B 69 (2004).
Structures of small alkali-metal clusters. M Manninen, Physical Review B. 34M. Manninen, "Structures of small alkali-metal clus- ters," Physical Review B 34 (1986).
Compressibility and binding energy of the simple metals. N W Ashcroft, D C Langreth, Physical Review. 155N. W. Ashcroft and D C. Langreth, "Compressibility and binding energy of the simple metals," Physical Re- view 155 (1967).
Stabilized jellium: Structureless pseudopotential model for the cohesive and surface properties of metals. J P Perdew, H Q Tran, E D Smith, Physical Review B. 42J P. Perdew, H. Q. Tran, and E D. Smith, "Stabi- lized jellium: Structureless pseudopotential model for the cohesive and surface properties of metals," Physical Review B 42 (1990).
Spherical voids in the stabilized jellium model: Rigorous theorems and padé representation of the void-formation energy. J P Ziesche, P Perdew, C Fiolhais, Physical Review B. 49J P. Ziesche, P.and Perdew and C. Fiolhais, "Spherical voids in the stabilized jellium model: Rigorous theorems and padé representation of the void-formation energy," Physical Review B 49 (1994).
Simple theories for simple metals: Face-dependent surface energies and work functions. J P Perdew, Progress in surface science. 48J P Perdew, "Simple theories for simple metals: Face-dependent surface energies and work functions," Progress in surface science 48 (1995).
Ab-initio simulations of materials using VASP: Density-functional theory and beyond. J Hafner, Journal of computational chemistry. 29J. Hafner, "Ab-initio simulations of materials using VASP: Density-functional theory and beyond," Journal of computational chemistry 29 (2008).
Projector augmented-wave method. E Peter, Blöchl, Physical Review B. 50Peter E Blöchl, "Projector augmented-wave method," Physical Review B 50 (1994).
From ultrasoft pseudopotentials to the projector augmented-wave method. G Kresse, D Joubert, Phyical Review B. 59G. Kresse and D. Joubert, "From ultrasoft pseudopoten- tials to the projector augmented-wave method," Phyical Review B 59 (1999).
Assessing the quality of the random phase approximation for lattice constants and atomization energies of solids. J Harl, L Schimka, G Kresse, Physical Review B. 81J. Harl, L. Schimka, and G. Kresse, "Assessing the quality of the random phase approximation for lattice constants and atomization energies of solids," Physical Review B 81 (2010).
Lattice constants and cohesive energies of alkali, alkaline-earth, and transition metals: Random phase approximation and density functional theory results. L Schimka, R Gaudoin, J Klimeš, M Marsman, G Kresse, Physical Review B. 87L. Schimka, R. Gaudoin, Klimeš J., M. Marsman, and G. Kresse, "Lattice constants and cohesive energies of alkali, alkaline-earth, and transition metals: Random phase approximation and density functional theory re- sults," Physical Review B 87 (2013).
| [] |
[
"Gate Capacitance Coupling of Singled-walled Carbon Nanotube Thin-film Transistors"
] | [
"Qing Cao \nDepartment of Materials Science and Engineering\nDepartment of Physics\nDepartment of Mechanical Science and Engineering\nDepartment of Electrical and Computer Engineering\nDepartment of Chemistry\nDepartment of Physics and Center for Advanced Materials and Nanotechnology\nBeckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory\nUniversity of Illinois at Urbana-Champaign\n405 N.Mathews Ave61801UrbanaILUSA\n",
"Minggang Xia \nDepartment of Materials Science and Engineering\nDepartment of Physics\nDepartment of Mechanical Science and Engineering\nDepartment of Electrical and Computer Engineering\nDepartment of Chemistry\nDepartment of Physics and Center for Advanced Materials and Nanotechnology\nBeckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory\nUniversity of Illinois at Urbana-Champaign\n405 N.Mathews Ave61801UrbanaILUSA\n",
"Coskun Kocabas \nDepartment of Materials Science and Engineering\nDepartment of Physics\nDepartment of Mechanical Science and Engineering\nDepartment of Electrical and Computer Engineering\nDepartment of Chemistry\nDepartment of Physics and Center for Advanced Materials and Nanotechnology\nBeckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory\nUniversity of Illinois at Urbana-Champaign\n405 N.Mathews Ave61801UrbanaILUSA\n",
"Moonsub Shim \nDepartment of Materials Science and Engineering\nDepartment of Physics\nDepartment of Mechanical Science and Engineering\nDepartment of Electrical and Computer Engineering\nDepartment of Chemistry\nDepartment of Physics and Center for Advanced Materials and Nanotechnology\nBeckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory\nUniversity of Illinois at Urbana-Champaign\n405 N.Mathews Ave61801UrbanaILUSA\n",
"John A Rogers \nDepartment of Materials Science and Engineering\nDepartment of Physics\nDepartment of Mechanical Science and Engineering\nDepartment of Electrical and Computer Engineering\nDepartment of Chemistry\nDepartment of Physics and Center for Advanced Materials and Nanotechnology\nBeckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory\nUniversity of Illinois at Urbana-Champaign\n405 N.Mathews Ave61801UrbanaILUSA\n",
"Slava V Rotkin \nLehigh University\n16 Memorial Dr.E18015BethlehemPAUSA\n"
] | [
"Department of Materials Science and Engineering\nDepartment of Physics\nDepartment of Mechanical Science and Engineering\nDepartment of Electrical and Computer Engineering\nDepartment of Chemistry\nDepartment of Physics and Center for Advanced Materials and Nanotechnology\nBeckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory\nUniversity of Illinois at Urbana-Champaign\n405 N.Mathews Ave61801UrbanaILUSA",
"Lehigh University\n16 Memorial Dr.E18015BethlehemPAUSA"
] | [] | The electrostatic coupling between singled-walled carbon nanotube (SWNT) networks/arrays and planar gate electrodes in thin-film transistors (TFTs) is analyzed both in the quantum limit with an analytical model and in the classical limit with finite-element modeling. The computed capacitance depends on both the thickness of the gate dielectric and the average spacing between the tubes, with some dependence on the distribution of these spacings. Experiments on transistors that use sub-monolayer, random networks of SWNTs verify certain aspects of these calculations. The results are important for the development of networks or arrays of nanotubes as active layers in TFTs and other electronic devices. | 10.1063/1.2431465 | [
"https://arxiv.org/pdf/cond-mat/0612012v2.pdf"
] | 119,496,250 | cond-mat/0612012 | adf941d11f0fe210c499a7066029327bafe1b054 |
Gate Capacitance Coupling of Singled-walled Carbon Nanotube Thin-film Transistors
6 Dec 2006
Qing Cao
Department of Materials Science and Engineering
Department of Physics
Department of Mechanical Science and Engineering
Department of Electrical and Computer Engineering
Department of Chemistry
Department of Physics and Center for Advanced Materials and Nanotechnology
Beckman Institute for Advanced Science and Technology and Frederick Seitz Materials Research Laboratory
University of Illinois at Urbana-Champaign
405 N.Mathews Ave61801UrbanaILUSA
Minggang Xia
Coskun Kocabas
Moonsub Shim
John A Rogers
Slava V Rotkin
Lehigh University
16 Memorial Dr.E18015BethlehemPAUSA
The electrostatic coupling between single-walled carbon nanotube (SWNT) networks/arrays and planar gate electrodes in thin-film transistors (TFTs) is analyzed both in the quantum limit with an analytical model and in the classical limit with finite-element modeling. The computed capacitance depends on both the thickness of the gate dielectric and the average spacing between the tubes, with some dependence on the distribution of these spacings. Experiments on transistors that use sub-monolayer, random networks of SWNTs verify certain aspects of these calculations. The results are important for the development of networks or arrays of nanotubes as active layers in TFTs and other electronic devices.
Single-walled carbon nanotube (SWNT) networks/arrays show great promise as active layers in thin-film transistors (TFTs). [1] Considerable progress has been made in the last couple of years in improving their performance and in integrating them with various substrates, including flexible plastics. [2,3] Nevertheless, the physics of the electrostatic coupling between SWNT networks/arrays and the planar gate electrode, which is critical to device operation and can be very different from that in conventional TFTs, is not well established. The simplest procedure, used in many reports of SWNT TFTs, is to treat the coupling as that of a parallel-plate capacitor, for which the gate capacitance, C_i, is given by ǫ/4πd, where ǫ and d are the dielectric constant and the thickness of the gate dielectric, respectively. This procedure enables a useful approximate evaluation of device-level performance, [4,5,6] but it is quantitatively incorrect, especially when the average spacing between SWNTs is large compared to the thickness of the dielectric. [7] Since C_i critically determines many aspects of device operation, accurate knowledge of this parameter is important both for optimizing device designs and for understanding basic transport mechanisms in SWNT networks/arrays.
In this letter, we use a model system, illustrated in Fig. 1, consisting of a parallel array of evenly spaced SWNTs fully embedded in a gate dielectric with a planar gate electrode to examine capacitive coupling in SWNT TFTs. The influence of nonuniform intertube spacings is also evaluated to show the applicability of those results to real devices. Results obtained in the single subband quantum limit and those obtained in the classical limit agree qualitatively in the range of dielectric thicknesses and tube densities (as measured in number of tubes per unit length) explored here. The models are used to provide insights into factors that limit the effective mobilities (µ) achievable in these devices.
We begin by calculating C_i of the model system in the quantum limit, where the charge density of the SWNT in the ground state has full circular symmetry. [8] In this case the charge distributes itself uniformly around the tube, and each tube, when tuned into the metallic region, can be treated as a perfectly conducting wire of radius R with uniform linear charge density ρ. The electrostatic potential induced by such a wire is φ(r) = 2ρ log(R/r), where r is the distance from the center of the wire. The potential generated by an array of such wires is a linear function of the charge densities, and in general the potential induced at the i-th tube depends on ρ of every tube in the array according to
$$\phi^{\mathrm{ind}}([\rho_j]) = \sum_j C^{-1}_{ij}\,\rho_j \qquad (1)$$
where the coefficients C^{-1}_{ij} of the inductive coupling between the i-th and j-th tubes are geometry dependent. [9] For SWNTs in the metallic regime, ρ is proportional to the shift of the Fermi level, which is itself proportional to the average acting potential at the nanotube according to:
$$\rho = -C_Q\,\phi^{\mathrm{act}} = -C_Q\left(\phi^{\mathrm{xt}} + \phi^{\mathrm{ind}}\right) \qquad (2)$$
where the proportionality coefficient is the quantum capacitance C_Q. [8,10,11] The equation is written so as to separate the contribution of the external potential, φ^xt, applied by distant electrodes or generated by any other external source (charge traps, interface states, etc.), from that of the induced potential, φ^ind, given by Eq. (1).
For a SWNT TFT device, we are interested in the solution corresponding to a uniform external potential, φ^xt_i = φ, for which ρ_i = ρ.
Thus ρ is the same for each tube in the array and satisfies
$$\rho = -C_Q\left(\phi + \sum_n C^{-1}_n\,\rho\right),$$
where C^{-1}_n is the reciprocal geometric capacitance between a single tube and its n-th neighbor in the given array geometry; solving for the charge density gives ρ = −φ (C_Q^{-1} + Σ_n C_n^{-1})^{-1}. The total induced potential is φ^ind = ρ Σ_j C^{-1}_{ij}, and the total reciprocal capacitance of a tube is the sum of all the C^{-1}_n plus C^{-1}_Q. The exact analytical expressions for C^{-1}_n and for the sum C^{-1}_∞ = Σ_n C^{-1}_n for each tube in a regular array of SWNTs separated by the distance Λ_0 can be derived as
$$C^{-1}_{\infty} = \frac{1}{\epsilon}\left[2\log\frac{2d}{R} + 2\sum_{n=1}^{\infty}\log\frac{\Lambda_n^2+(2d)^2}{\Lambda_n^2}\right] = \frac{2}{\epsilon}\log\left[\frac{\Lambda_0}{\pi R}\sinh\left(\frac{2\pi d}{\Lambda_0}\right)\right] \qquad (3)$$
where Λ_n = nΛ_0 is the distance between a given tube and its n-th neighbor. To apply this result to the SWNT TFT we calculate the total charge per unit area induced in the array at potential φ:
$$C_i = \frac{Q}{\phi S} = \frac{\rho}{\phi\,\Lambda_0} = \frac{1}{\Lambda_0}\left[C_Q^{-1} + \sum_n C_n^{-1}\right]^{-1} = \left\{\frac{2}{\epsilon}\log\left[\frac{\Lambda_0}{\pi R}\sinh\left(\frac{2\pi d}{\Lambda_0}\right)\right] + C_Q^{-1}\right\}^{-1}\Lambda_0^{-1} \qquad (4)$$
Both C_∞ and C_i depend on two dimensionless parameters: x = 2πd/Λ_0, which governs the inter-tube coupling, and 2d/R, which governs the coupling of a single tube to the gate.
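As a numerical consistency check on Eq. (3), the neighbor sum can be truncated and compared against the sinh closed form. A minimal sketch (dimensionless, with ǫ set to 1; the values of d, Λ_0 and R below are illustrative, not taken from the text):

```python
import math

def c_inf_inv_sum(d, lam0, R, n_max=20000):
    """Truncated neighbor sum of Eq. (3), with eps = 1."""
    s = 2.0 * math.log(2.0 * d / R)            # self term (tube and its image)
    for n in range(1, n_max + 1):              # identical neighbors on both sides
        lam_n = n * lam0
        s += 2.0 * math.log((lam_n**2 + (2.0 * d)**2) / lam_n**2)
    return s

def c_inf_inv_closed(d, lam0, R):
    """Closed form of Eq. (3), with eps = 1."""
    return 2.0 * math.log((lam0 / (math.pi * R)) * math.sinh(2.0 * math.pi * d / lam0))

d, lam0, R = 50.0, 100.0, 0.7                  # nanometres
print(c_inf_inv_sum(d, lam0, R))               # ~12.53
print(c_inf_inv_closed(d, lam0, R))            # ~12.53
```

The agreement follows from the product representation sinh(x)/x = Π(1 + x²/n²π²), which is what collapses the sum into the closed form.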
The physics of the coupling is clearly different in the two regimes set by x (assuming that R is always the smallest length in the problem). In the limit x ≪ 1 (i.e., sparse tube density) the planar-gate contribution dominates, sinh x ∼ x, and C_∞ reduces to that of a single, isolated tube; C_i is then approximately the capacitance of a transistor built on a single isolated SWNT multiplied by the number of tubes per unit length, 1/Λ_0. In the opposite limit, x ≫ 1, C_i approaches that of a parallel plate, primarily because of the higher surface coverage of tubes, while C_∞ decreases because of screening by neighboring tubes. To compare the performance of SWNT network/array TFTs with that of conventional TFTs with continuous, planar channels, we calculate the ratio of the capacitance of the SWNT array to that of a parallel plate (Fig. 2). This capacitance ratio, Ξ, is close to unity for x ≫ 1:
$$\Xi = 1 - \frac{\Lambda_0}{2\pi d}\left[\frac{\epsilon}{2}\left(C^{-1} + C_Q^{-1}\right) - \log 2\right] + \ldots \simeq 1 \qquad (5)$$
where C^{-1} = (2 log(2d/R))/ǫ is the reciprocal capacitance of a single tube. The term in brackets is multiplied by the inverse dimensionless density 1/x = Λ_0/2πd and is therefore negligible when Λ_0 becomes much smaller than d. In the opposite limit, x ≪ 1, Ξ is small and grows linearly with d:
$$\Xi = \frac{2\pi d}{\Lambda_0}\,\frac{2}{\epsilon\left(C^{-1} + C_Q^{-1}\right)} + \ldots$$
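Both limiting forms of Ξ can be checked against the full expression of Eq. (4). A sketch in SI units (the parallel-plate reference is ǫ_0ǫ_r/d; the quantum capacitance C_Q ≈ 4 × 10⁻¹⁰ F/m is an assumed, typical literature value rather than a number quoted here):

```python
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m
EPS_R = 4.0                 # relative dielectric constant of the gate insulator
C_Q = 4.0e-10               # quantum capacitance per unit tube length, F/m (assumed)
R = 0.7e-9                  # tube radius, m

def xi(d, lam0):
    """Full capacitance ratio: Eq. (4) divided by the parallel-plate value."""
    c_tot_inv = (math.log((lam0 / (math.pi * R)) * math.sinh(2 * math.pi * d / lam0))
                 / (2 * math.pi * EPS0 * EPS_R) + 1.0 / C_Q)
    return d / (lam0 * EPS0 * EPS_R * c_tot_inv)

def xi_sparse(d, lam0):
    """Small-x asymptote: sinh(x) -> x reduces Eq. (3) to the single-tube form."""
    c_tot_inv = math.log(2 * d / R) / (2 * math.pi * EPS0 * EPS_R) + 1.0 / C_Q
    return d / (lam0 * EPS0 * EPS_R * c_tot_inv)

print(xi(5e-9, 500e-9), xi_sparse(5e-9, 500e-9))   # sparse regime: small and nearly equal
print(xi(1e-6, 50e-9))                             # dense regime: approaches 1
```

Here Ξ grows roughly linearly with d in the sparse regime and saturates toward unity when 2πd ≫ Λ_0, mirroring the two analytic limits above.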
To verify certain aspects of these calculations, we performed finite-element-method (FEM) electrostatic simulations (FEMLab, Comsol, Inc.) to determine the classical C_i of the same model system, in which the induced charge distributes itself to establish an equipotential over each nanotube. In these calculations, R was set to 0.7 nm, which corresponds to the average radius of SWNTs formed by chemical vapor deposition, [12] and we chose ǫ_r = 4.0. The computed capacitances, shown in Fig. 2, agree qualitatively with those determined from Eq. (4), with deviations that are most significant at small gate-dielectric thicknesses, where quantum effects are significant.
In experimentally achievable SWNT TFTs, the SWNTs are not spaced equally and, except in certain cases, they are completely disordered in the form of random networks. [5,13] To estimate qualitatively the influence of uneven spacings, we constructed an array composed of five hundred parallel SWNTs with a normal distribution of Λ 0 = 100 ± 40 nm (Fig. 3a inset). C i was calculated by inverting the matrix of potential coefficients (Eq. 1). The small difference of computed capacitances (∆C) indicates that Eq.4 can be used for aligned arrays with uneven spacings, and perhaps even random SWNT networks (Fig. 3). Another experimental fact is that most SWNT TFTs are constructed in the bottom-gate structure where nanotubes are in an equilibrium distance above the gate dielectric, ∼ 4Å, [14] due to van der Waals interactions. To account for the effect of low ǫ air medium on C i , we performed the FEM simulation for nanotube arrays either fully embedded in gate dielectric or fully exposed in the air. Comparing the capacitances in these two cases shows that the low ǫ air medium has most significant influence on those systems that use high ǫ dielectrics because of the higher dielectric contrast. Moreover, at x ≪ 1 the effect of the air on SWNT arrays is close to results obtained for devices based on individual tubes (Fig.3 b). [15] However, with increasing x, the screening between neighboring tubes forces electric field lines to terminate on the bottom of nanotubes without fringing through the air and thus the air effect diminishes.
To explore these effects experimentally, we built TFTs that used random networks of SWNTs with fixed 1/Λ 0 (approximately 10 tubes/micron, as evaluated by AFM) and different d. Details on the device fabrication can be found elsewhere. [16] For the range of Λ 0 and d here, the difference in C i that results from the air effect is less than 20%, smaller than the experimental error in determining the transconductance (g m ). So Eq. 4 gives a sufficiently accurate estimate of C i . Figure 4a shows the transfer curves of SWNT TFTs. Figure 4c compares effective mobilities calculated using C i derived from the parallel plate model (µ p ) to those from Eq. 4 (µ). Consistent with the previous discussion, the parallel-plate capacitor model overestimates C i significantly for low tube densities/thin dielectrics. As a result, the effective mobilities calculated in this manner (µ p ) have an apparent linear dependence on d that derives from inaccurate values for C i . On the other hand, effective mobilities calculated using Eq. 4 (µ) show no systematic change with d, which provides a validation of the model.
The computed capacitances also reveal two important guidelines for the development of SWNT TFTs. First, Eq. 3 and Eq. 4 indicate that the effective µ should be close to the intrinsic mobility of SWNTs (µ pertube ), if the contact resistance is neglected, since

µ = [L/(W C i V DS )] ∂I DS /∂V GS = (C ∞ ^{−1} + C Q ^{−1})(L/V DS ) ∂(I DS /N )/∂V GS ≃ µ pertube   (6)

where N is the total number of effective pathways connecting the source/drain electrodes. µ can be slightly smaller than µ pertube because the actual length of an effective pathway is longer than L. The huge difference between µ of devices based on SWNT films [5,6,13] and µ pertube extracted from FETs based on individual tubes [17] suggests that the tube/tube contacts severely limit transport in SWNT networks or partially aligned arrays, either due to a tunneling barrier or to electrostatic screening at the contact, which prevents effective gate modulation at that specific point. [18] Second, efforts to improve g m by increasing the tube density for a given device geometry and V DS are limited by d. From Fig. 2a we can see that at a given d, when x ≪ 1, C i , and thus g m , increases with tube density. However, when x ≫ 1, C i saturates and g m no longer increases with decreasing Λ 0 . This prediction was verified by the almost identical g m for devices with different tube densities on thick gate dielectrics (Fig. 4b).
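The first equality in Eq. (6) is also how µ is extracted from measured transfer curves. A minimal sketch (our names, not the authors' code; assumes operation in the linear regime so the transconductance is the slope of I_DS versus V_GS):

```python
def effective_mobility(i_ds, v_gs, c_i, length, width, v_ds):
    """Effective device mobility mu = (L / (W * C_i * V_DS)) * dI_DS/dV_GS.

    The transconductance g_m is estimated as the least-squares slope of the
    transfer curve (i_ds versus v_gs), which is exact for linear data.
    c_i is the gate capacitance per unit area.
    """
    n = len(v_gs)
    mean_v = sum(v_gs) / n
    mean_i = sum(i_ds) / n
    num = sum((v - mean_v) * (i - mean_i) for v, i in zip(v_gs, i_ds))
    den = sum((v - mean_v) ** 2 for v in v_gs)
    g_m = num / den  # transconductance dI_DS/dV_GS
    return length / (width * c_i * v_ds) * g_m
```

Feeding in C i from the parallel-plate model versus from Eq. 4 is what produces the µ p versus µ comparison of Figure 4c.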
In summary, we evaluated C i of SWNT TFTs. Our analysis shows that the best electrostatic coupling between the gate and the SWNTs occurs in dense arrays of tubes, but the advantage in C i gained from higher tube density starts to saturate as Λ 0 becomes comparable to d. These conclusions are corroborated by vertical scaling experiments on SWNT network TFTs. We further propose two guidelines for improving the performance of SWNT TFTs.
FIG. 1: Schematic illustration of the model system. R is the nanotube radius; the distance between neighboring tubes and the dielectric thickness are Λo and d, respectively.
FIG. 2: (a) Capacitance ratio (Ξ) calculated by FEM versus linear SWNT density (1/Λo) for d ranging from 5 nm (filled square), 10 nm (filled circle), 20 nm (filled triangle), 50 nm (upside-down triangle), 100 nm (open square), 200 nm (open circle) to 500 nm (small square). Lines are Ξ calculated according to Eq. 4 for d ranging from 5 nm to 500 nm, from bottom to top. (b) Ξ calculated by FEM versus d for Λo ranging from 1 µm (black, filled square), 500 nm (filled circle), 200 nm (filled triangle), 100 nm (upside-down triangle), 50 nm (filled diamond), 20 nm (open circle) to 10 nm (small square). Lines are Ξ calculated according to Eq. 4 for Λo ranging from 1 µm to 10 nm, from bottom to top.

FIG. 3: (a) Relative variation of Ci (∆C) induced by uneven Λo versus d. Inset: Λo associated with each nanotube in an array. (b) Relative difference between Ci of fully embedded and fully exposed nanotube arrays (∆C) versus d. Solid lines and dashed lines represent results obtained from normal (ǫr = 4.0) and high ǫr (15) dielectrics, respectively. Solid circles and squares show results obtained for Λo = 100 nm.

FIG. 4: (a) Transfer curves of SWNT TFTs using a high density SWNT network (SEM image shown in part d) with a bilayer dielectric of a 3 nm HfO2 layer and an overcoat epoxy layer of 10 nm (solid line), 27.5 nm (dash line) or 55 nm (dot line) thickness, or a single layer of 100 nm SiO2 (dash-dot line). (b) IDS versus VGS collected from SWNT TFTs using the high density network (short-dash line) and a low density SWNT network (SEM image shown in part e) (dash-dot-dot line) with a single layer 1.6 µm epoxy dielectric. Devices have channel lengths (L) of 100 µm and effective channel widths (W) of 125 µm. (c) µ computed based on the parallel plate and SWNT array models for Ci (µp, solid line, and µ, dashed line, respectively).
ACKNOWLEDGEMENTS

We thank T. Banks for help with the processing. This work was supported by the U. S. Department of Energy under grant DEFG02-91-ER45439 and the NSF through grant NIRT-0403489.
[1] Q. Cao, C. Kocabas, M. A. Meitl, S. J. Kang, J. U. Park, and J. A. Rogers, in Carbon Nanotube Electronics, edited by A. Javey and J. Kong (Springer Verlag GmbH Co., KG, 2007).
[2] C. Kocabas, M. Shim, and J. A. Rogers, J. Am. Chem. Soc. 128, 4540-4541 (2006).
[3] Q. Cao, S.-H. Hur, Z.-T. Zhu, Y. Sun, C. Wang, M. A. Meitl, M. Shim, and J. A. Rogers, Adv. Mater. 18, 304-309 (2006).
[4] K. Bradley, J. C. P. Gabriel, and G. Grüner, Nano Lett. 3, 1353-1355 (2003).
[5] E. S. Snow, P. M. Campbell, M. G. Ancona, and J. P. Novak, Appl. Phys. Lett. 86, 033105 (2005).
[6] S.-H. Hur, C. Kocabas, A. Gaur, M. Shim, O. O. Park, and J. A. Rogers, J. Appl. Phys. 98, 114302 (2005).
[7] J. Guo, S. Goasguen, M. Lundstrom, and S. Datta, Appl. Phys. Lett. 81, 1486-1488 (2002).
[8] S. V. Rotkin, in Applied Physics of Nanotubes, edited by Ph. Avouris (Springer Verlag GmbH Co., KG, 2005).
[9] S. V. Rotkin, unpublished results.
[10] K. A. Bulashevich and S. V. Rotkin, JETP Lett. 75, 205-209 (2002).
[11] S. Rosenblatt, Y. Yaish, J. Park, J. Gore, V. Sazonova, and P. L. McEuen, Nano Lett. 2, 869-872 (2002).
[12] Y. Li, W. Kim, Y. Zhang, M. Rolandi, D. Wang, and H. Dai, J. Phys. Chem. B 105, 11424-11431 (2001).
[13] C. Kocabas, S. H. Hur, A. Gaur, M. A. Meitl, M. Shim, and J. A. Rogers, Small 1, 1110-1116 (2005).
[14] D. Qian, G. J. Wagner, W. K. Liu, M.-F. Yu, and R. S. Ruoff, Appl. Mech. Rev. 55, 495-532 (2002).
[15] O. Wunnicke, Appl. Phys. Lett. 89, 083102 (2006).
[16] Q. Cao, M.-G. Xia, M. Shim, and J. A. Rogers, Adv. Funct. Mater., in press (2006).
[17] X. J. Zhou, J. Y. Park, S. M. Huang, J. Liu, and P. L. McEuen, Phys. Rev. Lett. 95, 146805 (2005).
[18] A. A. Odintsov, Phys. Rev. Lett. 85, 150-153 (2000).
[19] S. J. Kang, C. Kocabas, T. Ozel, M. Shim, S. V. Rotkin, and J. A. Rogers, unpublished results (2006).
arXiv:1512.00386
doi:10.3847/1538-4357/aae5f9
RELATIVISTIC HYDRODYNAMICS WITH WAVELETS

Jackson Debuhr, Bo Zhang, Matthew Anderson
Center for Research in Extreme Scale Technologies, School of Informatics and Computing, Department of Physics and Astronomy, Indiana University, Bloomington, IN 47404

David Neilsen, Eric W. Hirschmann
Brigham Young University, Provo, UT 84602

Draft version December 2, 2015
Preprint typeset using LaTeX style emulateapj v. 01/23/15
Keywords: wavelets, relativistic hydrodynamics
ABSTRACT

Methods to solve the relativistic hydrodynamic equations are a key computational kernel in a large number of astrophysics simulations and are crucial to understanding the electromagnetic signals that originate from the merger of astrophysical compact objects. Because of the many physical length scales present when simulating such mergers, these methods must be highly adaptive and capable of automatically resolving numerous localized features and instabilities that emerge throughout the computational domain across many temporal scales. While this has been historically accomplished with adaptive mesh refinement (AMR) based methods, alternatives based on wavelet bases and the wavelet transformation have recently achieved significant success in adaptive representation for advanced engineering applications. This work presents a new method for the integration of the relativistic hydrodynamic equations using iterated interpolating wavelets and introduces a highly adaptive implementation for multidimensional simulation. The wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to the fluid flow while providing good conservation of fluid quantities. The resulting implementation, oahu, is applied to a series of demanding one- and two-dimensional problems which explore high Lorentz factor outflows and the formation of several instabilities, including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability.
INTRODUCTION
Relativistic fluids are used to model a variety of systems in high-energy astrophysics, such as neutron stars, accretion onto compact objects, supernovae, and gamma-ray burst outflows. Consequently, methods to solve the relativistic hydrodynamic equations are a key scientific kernel in a large number of astrophysics simulations and toolkits. Because of the many physical length scales present when simulating astrophysical phenomena, these methods must also be highly adaptive and capable of automatically resolving many localized emerging features and instabilities throughout the computational domain across many temporal scales. For Eulerian fluid methods, this has been historically accomplished with adaptive mesh refinement (AMR) based methods (Berger and Oliger 1984; Anderson et al. 2006). Other approaches include smoothed particle hydrodynamics (Rosswog 2010), and solving the fluid equations on a moving Voronoi mesh (Springel 2010; Duffell and MacFadyen 2011). Alternative methods based on wavelets and the wavelet transformation have recently achieved significant success in adaptive representation for advanced engineering applications (Paolucci et al. 2014a,b). This has inspired and encouraged their investigation and possible application in relativistic hydrodynamics. This work presents a new method for the integration of the relativistic hydrodynamic equations using iterated interpolating wavelets and introduces a highly adaptive implementation for multidimensional simulations called oahu.

{jdebuhr,zhang416,andersmw}@indiana.edu, {david.neilsen,ehirsch}@physics.byu.edu
The merger of two neutron stars or a black hole and a neutron star is an astrophysical system that has attracted significant interest. The orbital motion of the compact objects generates gravitational waves that are likely to be observed in the new Advanced-LIGO class of gravitational wave detectors. When operating at design sensitivity, these detectors are expected to make many detections of gravitational wave events each year (Abadie et al. 2010). These binaries are also expected to be sources of significant electromagnetic emission, such as magnetosphere interactions that give a precursor signal to merger (Palenzuela et al. 2013), kilonova events from r-process reactions on the neutron-rich ejecta (Li and Paczynski 1998), and short gamma-ray bursts (SGRBs) (Berger 2013). The combination of gravitational wave and electromagnetic observations, known as multi-messenger astronomy, should open new insights into some important questions, such as fundamental tests of general relativity, the neutron star equation of state, the earliest stages of supernova explosions, and models for GRB progenitors.
Computational models of neutron star binary mergers, when compared with observational data, are giving additional insights into such systems. For example, a black hole-neutron star merger can produce enough ejecta to power an SGRB, but only if the black hole has a small mass or a high spin (Chawla et al. 2010;Foucart et al. 2013). Moreover, simulations of neutron star binaries with a soft equation of state produce sufficient ejecta to power an SGRB through accretion, while those with a stiff equation of state produce much less ejecta (Hotokezaka et al. 2013;Sekiguchi et al. 2015;Palenzuela et al. 2015). Given this evidence from computer simulations, it appears that binary neutron star mergers and a softer nuclear equation of state may be preferred for the production of SGRBs. However, this continues to be an active area of research, and we expect these results to be refined as more results become available.
Computer models of neutron star binaries can be challenging to perform. On one hand, these models require a considerable amount of sophisticated physics to be realistic. Such models should include full general relativity for the dynamic gravitational field, a relativistic fluid model that includes a magnetic field (e.g. ideal or resistive magnetohydrodynamics), a finite-temperature equation of state for the nuclear matter, and a radiation hydrodynamics scheme for neutrinos. All of these components must be robust and work for a large range of energies. A second challenge, alluded to above, is the large range of scales that must be resolved. The neutron star radius sets one scale, 10-15 km, as the star must be well resolved on the computational domain. Other length scales are set by the orbital radius, 50-100 km, and the gravitational wave zone, approximately 100-1,000 km. Furthermore, some fluid instabilities can significantly increase the magnetic field strength in the post-merger remnant, making resolutions on the scale of meters advantageous (Kiuchi et al. 2014). Finally, the computer models need to run efficiently on modern high performance computers, requiring them to be highly parallelizable and scalable to run on thousands of computational cores.
We are developing the oahu code to address the challenge of simulating binary mergers with neutron stars. A key component of this code is that we combine a robust high-resolution shock-capturing method with an unstructured dyadic grid of collocation points that conforms to the features of the solution. This grid adaptivity is realized by expanding functions in a wavelet basis and adding refinement only where the solution has small-scale features.
Wavelets allow one to represent a function in terms of a set of basis functions which are localized both spatially and with respect to scale. In comparison, spectral bases are infinitely differentiable, but have global support; basis functions used in finite difference or finite element methods have small compact support, but poor continuity properties. Wavelets with compact support have been applied to the solutions of elliptic, parabolic, and hyperbolic PDEs (Beylkin 1992;Beylkin and Coult 1998;Alpert et al. 2002;Qian and Weiss 1993a,b;Latto and Tenenbaum 1990;Glowinski et al. 1989;Holmström 1999;Dahmen et al. 1997;Urban 2009;Alam et al. 2006;Chegini and Stevenson 2011). Wavelets have also been applied to the solutions of integral equations (Alpert et al. 1993). We note that when applied to nonlinear equations, some of these previous methods will map the space of wavelet coefficients onto the physical space and there compute the nonlinear terms. They then project that result back to the wavelet coefficients space using analytical quadrature or numerical integration. Our approach is rather to combine collocation methods with wavelets thus allowing us to operate in a single space (Bertoluzza and Naldi 1996;Vasilyev and Bowman 2000;Regele and Vasilyev 2009;Vasilyev et al. 1995;Paolucci 1996, 1997).
In astronomy, wavelets have seen extensive use in analysis tasks, from classifying transients (Powell et al. 2015;Varughese et al. 2015), to image processing (Mertens and Lobanov 2015), and to finding solutions to nonlinear initial value problems (Kazemi Nasab et al. 2015). They have not, however, seen much use in solutions of PDEs in astrophysics.
This paper reports on an initial version of oahu that implements the first two elements above, concentrating on the initial tests of the fluid equations and adaptive wavelet grid. A discussion of the Einstein equation solver and parallelization will be presented in subsequent papers. The organization of this paper is as follows. In section 2 we describe our model system and the numerical methods used. Section 3 presents one dimensional tests of the resulting scheme. In section 4 we present the results of applying the method to the relativistic Kelvin-Helmholtz instability. Section 5 presents a stringent test of our method as applied to a relativistic outflow that develops Rayleigh-Taylor generated turbulence. Finally, in section 6 we summarize results and make note of future work suggested by the method.
METHODS
This section describes some of the numerical approaches and algorithms used in oahu.
Relativistic Hydrodynamics
In general relativity the spacetime geometry is described by a metric tensor g µν , and we write the line element in ADM form as
ds² = g_{µν} dx^µ dx^ν   (1)
    = (−α² + β_i β^i) dt² + 2β_i dt dx^i + γ_{ij} dx^i dx^j .   (2)
We write these equations in units where the speed of light is set to unity, c = 1. Repeated Greek indices sum over all spacetime coordinates, 0, 1, 2, 3, and repeated Latin indices sum over the spatial coordinates, 1, 2, 3. α and β i are functions that specify the coordinates and γ ij is the 3-metric on spacelike hypersurfaces. While our code is written for a completely generic spacetime, the tests presented in this paper are all performed in flat spacetime. To simplify the presentation, we write the equations in special relativity (i.e. flat spacetime) in general curvilinear coordinates, and we set α = 1 and β i = 0. In Cartesian coordinates, the flat space metric is the identity γ ij = diag(1, 1, 1), but γ ij is generally a function of the curvilinear coordinates. A perfect fluid in special relativity is described by a stress-energy tensor of the form
T µν = hu µ u ν + P g µν ,(3)
where h is the total enthalpy of the fluid
h = ρ(1 + ε) + P.(4)
The fluid variables ρ, ε, u µ , and P are the rest mass density, the specific internal energy, the four-velocity and the pressure of the fluid, respectively. Once an equation of state of the form P = P (ρ, ε) is adopted, the equations determining the matter dynamics are obtained from the conservation law ∇ µ T µ ν = 0 and the conservation of baryons ∇ µ (ρu µ ) = 0.
We introduce the three-velocity of the fluid, v i , and the Lorentz factor W by writing the four-velocity as
u µ = (W, W v i ) T .(5)
The four-velocity has a fixed magnitude u µ u µ = −1, which gives the familiar relation between W and v i
W 2 = 1 1 − v i v i .(6)
We introduce a set of conservative variables
D̃ ≡ √γ ρW ,   (7)
S̃_i ≡ √γ hW² v_i ,   (8)
τ̃ ≡ √γ (hW² − P − ρW ).   (9)
These quantities correspond in the Newtonian limit to the rest mass density, the momentum, and the kinetic energy of the fluid, respectively. A tilde (˜) indicates that each quantity has been densitized by the geometric factor √ γ, where γ = det γ ij . In terms of these fluid variables, the relativistic fluid equations are
∂_t D̃ + ∂_i (D̃ v^i) = 0   (10)
∂_t S̃_j + ∂_i (v^i S̃_j + √γ P γ^i_j) = ³Γ^i_{kj} (v^k S̃_i + √γ P γ^k_i)   (11)
∂_t τ̃ + ∂_i (S̃^i − D̃ v^i) = 0,   (12)
where 3 Γ i jk are the Christoffel symbols associated with the spatial metric γ ij . The fluid equations of motion can be written in balance law form
∂ t u + ∂ i f i (u) = s(u),(13)
where u is the state vector of conserved variables and f i are the fluxes
u = ( D̃, S̃_j , τ̃ )^T ,   f^i = ( D̃ v^i , v^i S̃_j + √γ P γ^i_j , S̃^i − D̃ v^i )^T .   (14)
The fluid equations in curvilinear coordinates have geometric source terms, which are included in s. Finally, the system of equations is closed with an equation of state. We use the Γ-law equation of state
P = (Γ − 1)ρ ε,(15)
where Γ is the adiabatic constant.
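For flat space in Cartesian coordinates (so √γ = 1), the map from primitive to conserved variables defined by Eqs. (7)-(9), together with the Γ-law EOS of Eq. (15), can be sketched as follows (function and variable names are ours for illustration, not taken from the oahu code):

```python
import math

def prim_to_cons(rho, v, eps_int, gamma_law=5.0 / 3.0):
    """Map primitive variables (rho, v^i, eps) to conserved (D, S_i, tau).

    Flat space, Cartesian coordinates: sqrt(gamma) = 1, gamma_ij = diag(1,1,1),
    so index placement on v is trivial.  Uses the Gamma-law EOS, Eq. (15).
    """
    p = (gamma_law - 1.0) * rho * eps_int                # Eq. (15)
    w = 1.0 / math.sqrt(1.0 - sum(vi * vi for vi in v))  # Lorentz factor, Eq. (6)
    h = rho * (1.0 + eps_int) + p                        # total enthalpy, Eq. (4)
    d = rho * w                                          # Eq. (7)
    s = [h * w * w * vi for vi in v]                     # Eq. (8)
    tau = h * w * w - p - rho * w                        # Eq. (9)
    return d, s, tau
```

In the static limit (v → 0, W → 1) this returns D = ρ and τ = ρε, consistent with the Newtonian interpretation given in the text.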
Sparse Field Representation
In this section we describe how we construct the sparse, adaptive representation of fields. The essential ingredients are the iterative interpolation of Deslauriers and Dubuc (1989) and the wavelet representation of Donoho (1992). This presentation follows that in Holmström (1999). We begin with the one dimensional case; see Section 2.3 for the generalization to higher dimensions.
The method begins with a nested set of dyadic grids, V_j (see Figure 1):

V_j = { x_{j,k} : x_{j,k} = 2^{−j} k∆x }.   (16)

Figure 1. The one dimensional nested dyadic grids, V_j. This example has N = 1 and shows up to level j = 2. The point at j = 2, k = 2 is labeled. In red are shown those points that are part of the alternate grids, W_j, which are defined for j > 1.
Here ∆x is the spacing at level j = 0, called the base level, and k is an integer indexing the points within the various grid levels. Notice that the points in V j will also appear in all higher level grids V l (where l > j). The points of even k at level j will also be in the grid at level j − 1. If the overall domain size is L, and there are N + 1 points in grid V 0 , then ∆x = L/N . Starting with a set of field values at level j, u j,k , we can extend these values to higher levels of the grid using interpolation. For those points in V j+1 that are also in V j , we just copy the value from the coarser grid: u j+1,2k = u j,k . The previous is also the means by which the field values can be restricted to coarser levels: points at coarser levels have values copied from finer levels. For the points first appearing in grid V j+1 we take the nearest p field values from grid V j and interpolate:
u_{j+1,2k+1} = Σ_m h^{j+1,2k+1}_{j,m} u_{j,m} .   (17)

Here h^{l,m}_{j,k} are the coefficients for interpolation from level j to level l. In practice, for a given k, only a small number of these coefficients are nonzero. In this work we use p = 4, so we have

u_{j+1,2k+1} = −(1/16) u_{j,k−1} + (9/16) u_{j,k} + (9/16) u_{j,k+1} − (1/16) u_{j,k+2} .   (18)
The previous applies in the interior of the grid. Near the boundaries, little changes, except the nearest points are no longer symmetric around the refined point, and the coefficients in the sum are different. Having advanced the field values to grid V j+1 , the procedure can be iterated to advance the field to V j+2 . In this way, any level of refinement can be achieved from the initial sequence, and when performed ad infinitum produces a function on the interval [0, L] (see Donoho (1992) for details about the regularity of these functions).
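A minimal sketch of one refinement step for p = 4 (our naming; the interior stencil is exactly Eq. (18), while the one-sided four-point cubic Lagrange boundary stencil is our assumption, consistent with the text's remark that the boundary coefficients differ):

```python
def refine(u):
    """One dyadic refinement step of Deslauriers-Dubuc iterated interpolation,
    p = 4.  Even fine-grid points copy coarse values, u_{j+1,2k} = u_{j,k};
    odd interior points use the (-1/16, 9/16, 9/16, -1/16) stencil of Eq. (18);
    the first/last midpoints use one-sided cubic Lagrange weights.
    Assumes len(u) >= 4.
    """
    n = len(u)
    fine = [0.0] * (2 * n - 1)
    fine[::2] = u  # copy coarse values to even fine points
    for k in range(n - 1):
        if 1 <= k <= n - 3:  # centered stencil, Eq. (18)
            val = (-u[k - 1] + 9 * u[k] + 9 * u[k + 1] - u[k + 2]) / 16.0
        elif k == 0:         # one-sided cubic at the left edge
            val = (5 * u[0] + 15 * u[1] - 5 * u[2] + u[3]) / 16.0
        else:                # one-sided cubic at the right edge (k == n-2)
            val = (u[n - 4] - 5 * u[n - 3] + 15 * u[n - 2] + 5 * u[n - 1]) / 16.0
        fine[2 * k + 1] = val
    return fine
```

Because all of the stencils are exact for cubic polynomials, iterating `refine` reproduces any cubic exactly at every dyadic point, which is the polynomial-reproduction property underlying the p = 4 scheme.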
The linearity of this procedure suggests a natural set of basis functions for iterated interpolation functions formed from Kronecker sequences. That is, start with a sequence that has a single 1 at level j and interpolate, φ j,k (x j,l ) = δ l k . The resulting function, φ j,k (x) has a number of properties, among which is the two-scale relation:
φ_{j,k}(x) = Σ_l c_l φ_{j+1,l}(x).   (19)
Figure 2. The fundamental solution of the iterated interpolation for p = 4, φ(x) (black), and a basis element in level one, φ_{1,1}(x) (gray). Each basis function is a scaled, translated version of the fundamental solution.

One step of the interpolation will produce a sequence on V_{j+1}, which can be written as a weighted sum of the
Kronecker sequences on V j+1 . Indeed, the weights c l are easy to find from the interpolation. For p = 4 these are c l = {..., 0, −1/16, 0, 9/16, 1, 9/16, 0, −1/16, 0, ...}. Each of these functions is a scaled, translated version of a single function φ(x), shown in Figure 2, called the fundamental solution of the interpolation:
φ j,k (x) = φ(2 j x/∆x − k).
The previous functions can be used to form bases for each level of the grid separately. The two-scale relation prevents the full set from being a basis on the full set of collocation points. In particular, note that for each location in the grid, x j,k there is one such function. However, certain locations are represented on multiple levels, for example x j,k = x j+m,2 m k for all m > 0. To form a basis for the full grid, introduce the alternate higher level grids, W j , for j > 0:
W j = x j,k : x j,k = 2 −j k∆x, k odd .(20)
The W j grids are the V j grids with the points from earlier levels removed (see the red points in Figure 1). The set of grids {V 0 , W j } now has each point represented exactly once. Forming a basis for these points is achieved by taking the set of φ j,k (x) functions that correspond to the points in the base grid, and these higher level alternate grids, W j . With this basis, we can represent a field, u, as follows:
u(x) = Σ_{k∈S_0} u_{0,k} φ_{0,k}(x) + Σ_{j≥1} Σ_{k∈S_j} d_{j,k} φ_{j,k}(x),   (21)

where S_0 = {0, 1, ..., N} is the index set for grid V_0 and S_j = {1, 3, ..., 2^j N − 1} is the index set for grid W_j.
The previous is the interpolating wavelet expansion of the field. Intuitively, the expansion contains the coarse picture (level 0) and refinements of that picture at successively finer levels. The expansion coefficients u 0,k are just the field values at the base level points: u 0,k = u(x 0,k ). We can extend this notation to include u j,k = u(x j,k ). The coefficients d j,k , called wavelet coefficients, are computed by comparing the interpolation from the previous level to the field value, u j,k . In particular if we denote the interpolated value from level j at a level j + 1 point,
ũ_{j+1,k} = P(x_{j+1,k}, j), then

d_{j,k} = u_{j,k} − P(x_{j,k}, j − 1).   (22)
Intuitively, the wavelet coefficient measures the failure of the field to be the interpolation from the previous level. The previous is also called the forward wavelet transformation. This transformation starts with field values on the multi-level grid, and produces wavelet coefficients. The transformation can be easily inverted by rearranging the equation, and computing field values given the wavelet coefficients.
There are two descriptions of the field on the multilevel grid. The first, called the Point Representation, is the set of values {u j,k }. The second, called the Wavelet Representation, is the set of values {u 0,k , d j,k }. These representations can be made sparse via thresholding. Starting with the Wavelet Representation, and given a threshold ε, the Sparse Wavelet Representation is formed by removing those points whose wavelet coefficients are below the threshold: |d j,k | < ε. This amounts to discarding those points that are well approximated by interpolation. This naturally cuts down the number of points in the grid, and introduces an a priori error bound (Donoho 1992) on the representation of the field. The points whose values are kept are called essential points. The level 0 points are always kept and are always essential.
The field values at the essential points form the Sparse Point Representation.
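The forward transform and thresholding described above can be sketched in a few lines. This is a minimal illustration assuming the lowest-order case $p = 2$ (linear interpolation) on a 1-D dyadic grid; the paper's basis uses higher-order stencils, and the function names here are ours, not oahu's.

```python
import numpy as np

def forward_transform(u, J):
    """Forward interpolating wavelet transform with p = 2 (linear)
    interpolation: u holds field values on the finest grid of
    2**J * N + 1 points.  Returns the level-0 samples u0 and a list d,
    where d[j-1] holds the level-j wavelet coefficients (Eq. 22)."""
    n = len(u) - 1
    d = []
    for j in range(1, J + 1):
        s = 2 ** (J - j)                  # stride of the level-j grid
        odd = np.arange(s, n, 2 * s)      # points new to level j
        # coefficient = field value minus interpolation from level j-1
        d.append(u[odd] - 0.5 * (u[odd - s] + u[odd + s]))
    return u[::2 ** J].copy(), d

def inverse_transform(u0, d, J):
    """Rebuild finest-grid field values from the wavelet representation."""
    n = (len(u0) - 1) * 2 ** J
    u = np.empty(n + 1)
    u[::2 ** J] = u0
    for j in range(1, J + 1):
        s = 2 ** (J - j)
        odd = np.arange(s, n, 2 * s)
        u[odd] = 0.5 * (u[odd - s] + u[odd + s]) + d[j - 1]
    return u

# Thresholding: zeroing coefficients with |d| < eps gives the sparse
# wavelet representation; the reconstruction error stays of order eps.
J, eps = 5, 1e-3
x = np.linspace(0.0, 1.0, 2 ** J * 4 + 1)
u = np.tanh((x - 0.5) / 0.05)             # field with one sharp feature
u0, d = forward_transform(u, J)
d_sparse = [np.where(np.abs(dj) >= eps, dj, 0.0) for dj in d]
kept = sum(int(np.count_nonzero(dj)) for dj in d_sparse)
total = sum(len(dj) for dj in d)
err = np.max(np.abs(inverse_transform(u0, d_sparse, J) - u))
print(kept, "of", total, "coefficients kept; max error", err)
```

The surviving coefficients cluster around the tanh transition; away from it they fall below the threshold and are dropped, which is exactly the behavior that concentrates grid points near sharp features.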
Higher Dimensions
In multiple dimensions, the construction is not much more involved. The basis functions are taken to be the products of the 1-dimensional functions:
$$\phi_{j,\vec{k}}(x, y, z) = \phi_{j,k_x}(x)\,\phi_{j,k_y}(y)\,\phi_{j,k_z}(z). \tag{23}$$
Here $\vec{k} = (k_x, k_y, k_z)$ is the triple of indices required to label a three-dimensional grid (see Figure 3). Another way to construct these functions is by interpolation in multiple dimensions. Depending on the location of the level-$(j+1)$ point in the level-$j$ grid, this interpolation will involve $p$, $p^2$ or $p^3$ terms (Figure 4). The wavelet coefficient for a point is again computed as the difference between the field value and the interpolated value; the interpolation simply contains more terms. The rest of the method goes through as one might expect. The field is expanded in terms of these basis functions:
$$u(x, y, z) = \sum_{\vec{k}} u_{0,\vec{k}}\,\phi_{0,\vec{k}}(x, y, z) + \sum_{j=1}^{\infty} \sum_{\vec{k}} d_{j,\vec{k}}\,\phi_{j,\vec{k}}(x, y, z). \tag{24}$$
The sparse representation is formed by removing those points whose wavelet coefficients have a magnitude less than the prescribed error threshold $\epsilon$.
2.4. Conservation

It is possible to measure the conservation of the fields being evolved using the wavelet basis. In particular, the quantity

$$\int_\Sigma u(x)\,dx, \tag{25}$$

where $\Sigma$ is the computational domain, is straightforward to compute using the standard expansion of the field in the wavelet basis. Making the substitution yields

$$\int_\Sigma u(x)\,dx = \sum_k u_{0,k} \int_\Sigma \phi_{0,k}(x)\,dx + \sum_{j=1}^{\infty} \sum_{k \in S_j} d_{j,k} \int_\Sigma \phi_{j,k}(x)\,dx. \tag{26}$$

Figure 3. A slice of the 3d grid. The black points (circles) are those in the base grid $V_0$, the blue points (squares) belong to level 1 ($W_1$), and the red points (circles) belong to level 2 ($W_2$). Notice that some points at level 1 line up with points at level 0, and that some points at level 2 line up with points at level 1.

Figure 4. A portion of a slice of the 3d grid at level $j$. Shown are those points at level $j$ (circles) that contribute to the wavelet transformation for the marked points at level $j + 1$ (squares). Depending on the relative placement of the level $j + 1$ points to the level $j$ grid, the wavelet transformation will use $p$ (black) or $p^2$ (red) points. In three dimensions this extends to cases that also use $p^3$ points.
Given the integrals of the basis functions, we can easily compute the total amount of some quantity. Each basis element is a scaled, translated version of the fundamental solution of the interpolation, which implies that, ignoring edges of the computational domain for the moment,
$$\int \phi_{j,k}(x)\,dx = \frac{\Delta x}{2^j} \int \phi(x)\,dx. \tag{27}$$
The prefactor takes into account the difference in spacing for the two functions. The fundamental solution is defined for a spacing of 1. It is straightforward to show that the integral of the fundamental solution is 1 by defining a sequence of approximations to the integral using Riemann sums:
$$I^{(j)} = \sum_k \frac{1}{2^j}\,\phi(x_{j,k}). \tag{28}$$
The interpolation property of the fundamental solution makes it easy to show that $I^{(j)} = I^{(j+1)}$. Then, given the starting point $I^{(0)} = 1$, we have

$$\int \phi(x)\,dx = \lim_{j \to \infty} I^{(j)} = 1. \tag{29}$$
Thus,
$$\int \phi_{j,k}(x)\,dx = \frac{\Delta x}{2^j}. \tag{30}$$
We can use this in the expansion of the field to write
$$\int_\Sigma u(x)\,dx = \sum_k u_{0,k}\,\Delta x + \sum_{j=1}^{\infty} \sum_{k \in S_j} d_{j,k}\,\frac{\Delta x}{2^j}. \tag{31}$$
This expression allows the monitoring of the conserved quantities during the simulation. In every case examined, the conservation is good to the level of the chosen $\epsilon$. Near the edges of the computational grid, the basis elements no longer have unit integrals. To compute these integrals, it is necessary to make use of the two-scale relation for the basis to compute partial integrals:
$$I_a^b \equiv \int_a^b \phi(x)\,dx, \tag{32}$$
where $a$, $b$ are integers, one of which might be infinite. Some are simple, e.g. $I_0^\infty = 0.5$ due to the symmetry of the basis, but others require setting up a linear system using the two-scale relation. Once the set of partial integrals is computed, the basis elements near the edges are written as the sum of an extended basis that stretches past the edge of the computational domain. It is an exercise in algebra to show that each of the original basis elements modified by the edges can be written as a sum of these extended basis elements. The extended basis elements are again translated, scaled versions of the fundamental solution, so we can use the partial integrals for only the interval inside the original domain to compute the integral of the original basis elements. For the case of $p = 4$, with superscripts labeling the location in the grid, we find that
Similar expressions hold at the largest k values. In multiple dimensions, because the basis functions are simple products, the integral of the basis is just the product of the integrals in each direction.
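For the interior of the domain, Eq. (31) is a one-liner over the wavelet data. The sketch below assumes a $p = 2$ (hat-function) basis, for which the two level-0 edge elements integrate to $\Delta x/2$, so the level-0 contribution reduces to the trapezoid rule; for that basis the result agrees exactly with the trapezoid rule on the finest grid. The function name is illustrative, not oahu's.

```python
import numpy as np

def wavelet_integral(u, J, dx):
    """Integral of the field from its wavelet expansion (Eq. 31):
    level-0 samples contribute u_{0,k}*dx (with a factor 1/2 at the two
    edges for the p = 2 basis) and each level-j wavelet coefficient
    contributes d_{j,k}*dx/2**j."""
    n = len(u) - 1
    total = dx * (0.5 * u[0] + u[2 ** J:n:2 ** J].sum() + 0.5 * u[n])
    for j in range(1, J + 1):
        s = 2 ** (J - j)
        odd = np.arange(s, n, 2 * s)
        d = u[odd] - 0.5 * (u[odd - s] + u[odd + s])  # wavelet coeffs
        total += d.sum() * dx / 2 ** j
    return total

J = 4
x = np.linspace(0.0, 1.0, 2 ** J * 8 + 1)
u = np.exp(-((x - 0.5) / 0.1) ** 2)
dx = 1.0 / 8                                   # level-0 spacing
I_wav = wavelet_integral(u, J, dx)
I_fine = np.sum(0.5 * (u[1:] + u[:-1])) * (x[1] - x[0])
print(I_wav, I_fine)
```

Because the decomposition is exact and integration is linear, the wavelet sum telescopes level by level to the integral of the finest-grid interpolant, so conservation can be monitored from the sparse representation alone.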
Fluid Methods
As mentioned in the introduction, one goal of this project is to develop a fully relativistic fluid code to study the binary mergers of compact objects. This involves solving both the Einstein equations of general relativity for the spacetime geometry and coordinate conditions, and the relativistic fluid equations for an arbitrary geometry. With neutron star binary mergers as our model problem, we choose a numerical algorithm that satisfies the following conditions: (1) The fluid density and pressure in a neutron star spans several orders of magnitude, and ejecta from the stars can reach speeds near the speed of light, so we choose a robust high-resolution shockcapturing method.
(2) The proper calculation of normal modes for neutron stars requires high-order reconstruction methods for the fluid variables so we implement both PPM and MP5 reconstructions. (3) The calculation of characteristic variables is computationally intensive for relativistic fluids, especially for relativistic MHD, so we choose a central scheme with an approximate Riemann solver. (4) The Einstein equations-fundamentally equations for classical fields often written in a second-order formulation-are most naturally discretized using finite differences so we choose a finite-difference fluid method to simplify coupling the two sets of equations.
In this section we assume a uniform grid of points labeled by an index, i.e., $x_j = x_{\min} + j\,\Delta x$, and a function evaluated at the point $x_j$ has the shorthand notation $f_j = f(x_j)$.
Our numerical method for solving the fluid equations is based on the finite-difference Convex ENO (CENO) scheme of Liu and Osher for conservation laws (Liu and Osher 1998). For simplicity we present the method for a one-dimensional problem. The extension to multiple dimensions is done by differencing each dimension in turn, as discussed in Section 2.6. We write the conservation law in semi-discrete form as
$$\frac{du_i}{dt} = -\frac{1}{\Delta x}\left(\hat{f}_{i+1/2} - \hat{f}_{i-1/2}\right), \tag{34}$$

where $\hat{f}_{i+1/2}$ is a consistent numerical flux function:

$$\hat{f}_{i+1/2} = \hat{f}(u_{i-k}, \ldots, u_{i+m}), \tag{35}$$
$$\hat{f}(u, \ldots, u) = f(u). \tag{36}$$
Liu and Osher base the CENO method on the local Lax-Friedrichs (LLF) approximate Riemann solver, and they use an ENO interpolation scheme to calculate the numerical flux functions $\hat{f}_{i+1/2}$. In previous work we have found that the CENO scheme is too dissipative to reproduce the normal modes of neutron stars (Anderson et al. 2006), so we use the HLLE numerical flux (Harten et al. 1983; Einfeldt 1988) in place of LLF, and we use higher-order finite-volume reconstruction methods, such as PPM (Colella and Woodward 1984) and MP5 (Suresh and Huynh 1997).
The numerical flux $\hat{f}_{i+1/2}$ requires the fluid state at $u_{i+1/2} = u(x_{i+1/2})$. We use the fluid variables near this point to reconstruct both left and right states at the midpoint, $u^\ell_{i+1/2}$ and $u^r_{i+1/2}$, respectively. The numerical flux can then be written in terms of these new states as $\hat{f}_{i+1/2} = f(u^\ell_{i+1/2}, u^r_{i+1/2})$. We have implemented piecewise-linear (TVD) reconstruction, the Piecewise Parabolic Method (PPM), and MP5 reconstruction. The MP5 reconstruction method usually gives superior results compared to the other methods, so we have used this reconstruction for all tests in this paper, except for Case IV below. As we use a central scheme for the approximate Riemann solver, we reconstruct each fluid variable separately. Moreover, given the difficulty of calculating primitive variables $(\rho, v^i, P)$ from the conserved variables $(D, S_i, \tau)$, we reconstruct the primitive variables and then calculate the corresponding conserved variables. The MP5 method is a polynomial reconstruction of the fluid state that preserves monotonicity (Suresh and Huynh 1997; Mösta et al. 2014). It preserves accuracy near extrema and is computationally efficient. The reconstruction of a variable $q$ proceeds in two steps. We first calculate an interpolated value for the state $q_{i+1/2}$, called the original value. In the second step, limiters are applied to the original value to prevent oscillations, producing the final limited value. The original value at the midpoint is
$$q_{i+1/2} = \frac{1}{60}\left(2q_{i-2} - 13q_{i-1} + 47q_i + 27q_{i+1} - 3q_{i+2}\right). \tag{37}$$

We then compute a monotonicity-preserving value

$$q^{MP} = q_i + \mathrm{minmod}\!\left(q_{i+1} - q_i,\; \alpha\,(q_i - q_{i-1})\right), \tag{38}$$

where $\alpha$ is a constant which we set as $\alpha = 4.0$. The minmod function gives the argument with the smallest magnitude when both arguments have the same sign:

$$\mathrm{minmod}(x, y) = \frac{1}{2}\left(\mathrm{sgn}(x) + \mathrm{sgn}(y)\right)\min\left(|x|, |y|\right). \tag{39}$$

The limiter is not applied to the original value when

$$\left(q_{i+1/2} - q_i\right)\left(q_{i+1/2} - q^{MP}\right) \leq \epsilon\,\|q\|, \tag{40}$$

where $\epsilon = 10^{-10}$ and $\|q\|$ is the $L_2$ norm of $q_i$ over the stencil points $\{q_{i-2}, \ldots, q_{i+2}\}$. The $\|q\|$ factor does not appear in the original algorithm, but we follow Mösta et al. (2014) in adding this term to account for the wide range of scales in the different fluid variables.
When condition Eq. (40) does not hold, we apply a limiter to the original value. We then compute the second derivatives
$$D^-_i = q_{i-2} - 2q_{i-1} + q_i, \tag{41}$$
$$D^0_i = q_{i-1} - 2q_i + q_{i+1}, \tag{42}$$
$$D^+_i = q_i - 2q_{i+1} + q_{i+2}, \tag{43}$$

and

$$D^{M4}_{i+1/2} = \mathrm{minmod}\!\left(4D^0_i - D^+_i,\; 4D^+_i - D^0_i,\; D^0_i,\; D^+_i\right), \tag{44}$$
$$D^{M4}_{i-1/2} = \mathrm{minmod}\!\left(4D^0_i - D^-_i,\; 4D^-_i - D^0_i,\; D^0_i,\; D^-_i\right). \tag{45}$$
The minmod function is easily generalized for an arbitrary number of arguments as
$$\mathrm{minmod}(z_1, \ldots, z_k) = s\,\min\left(|z_1|, \ldots, |z_k|\right), \tag{46}$$

where

$$s = \frac{1}{2}\left(\mathrm{sgn}(z_1) + \mathrm{sgn}(z_2)\right) \times \frac{1}{2}\left(\mathrm{sgn}(z_1) + \mathrm{sgn}(z_3)\right) \times \cdots \times \frac{1}{2}\left(\mathrm{sgn}(z_1) + \mathrm{sgn}(z_k)\right). \tag{47}$$
We then compute the following quantities
$$q^{UL} = q_i + \alpha\,(q_i - q_{i-1}), \tag{48}$$
$$q^{AV} = \frac{1}{2}\left(q_i + q_{i+1}\right), \tag{49}$$
$$q^{MD} = q^{AV} - \frac{1}{2} D^{M4}_{i+1/2}, \tag{50}$$
$$q^{LC} = q_i + \frac{1}{2}\left(q_i - q_{i-1}\right) + \frac{4}{3} D^{M4}_{i-1/2}, \tag{51}$$

to obtain limits for an accuracy-preserving constraint:

$$q^{\min} = \max\left[\min\left(q_i, q_{i+1}, q^{MD}\right),\; \min\left(q_i, q^{UL}, q^{LC}\right)\right], \tag{52}$$
$$q^{\max} = \min\left[\max\left(q_i, q_{i+1}, q^{MD}\right),\; \max\left(q_i, q^{UL}, q^{LC}\right)\right]. \tag{53}$$
Finally, the limited value for the midpoint is
$$q^{\ell,\mathrm{Lim}}_{i+1/2} = q_{i+1/2} + \mathrm{minmod}\!\left(q^{\min} - q_{i+1/2},\; q^{\max} - q_{i+1/2}\right). \tag{54}$$
To compute the right state $q^r_{i-1/2}$, we repeat the algorithm but reflect the stencil elements about the center, replacing $\{q_{i-2}, q_{i-1}, q_i, q_{i+1}, q_{i+2}\}$ with $\{q_{i+2}, q_{i+1}, q_i, q_{i-1}, q_{i-2}\}$.
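The limiter just described, Eqs. (37)-(54), translates almost line-for-line into code. The sketch below is our own scalar transcription (with the stencil RMS standing in for the $\|q\|$ factor), not oahu's implementation:

```python
import numpy as np

def minmod(*z):
    """Generalized minmod (Eqs. 46-47): the smallest-magnitude argument
    when all arguments share the sign of z1, otherwise zero."""
    z = np.asarray(z, dtype=float)
    s = np.sign(z[0])
    if s == 0.0 or np.any(np.sign(z) != s):
        return 0.0
    return float(s * np.min(np.abs(z)))

def mp5_left(q, i, alpha=4.0, eps=1e-10):
    """MP5-limited left state at interface i+1/2 (Eqs. 37-54) for a 1-D
    array q and an interior index i (the stencil needs i-2 .. i+2)."""
    qor = (2 * q[i - 2] - 13 * q[i - 1] + 47 * q[i]
           + 27 * q[i + 1] - 3 * q[i + 2]) / 60.0        # Eq. (37)
    qmp = q[i] + minmod(q[i + 1] - q[i], alpha * (q[i] - q[i - 1]))
    norm = np.sqrt(np.mean(q[i - 2:i + 3] ** 2))          # ||q|| proxy
    if (qor - q[i]) * (qor - qmp) <= eps * norm:          # Eq. (40)
        return qor
    dm = q[i - 2] - 2 * q[i - 1] + q[i]
    d0 = q[i - 1] - 2 * q[i] + q[i + 1]
    dp = q[i] - 2 * q[i + 1] + q[i + 2]
    dM4p = minmod(4 * d0 - dp, 4 * dp - d0, d0, dp)       # at i+1/2
    dM4m = minmod(4 * d0 - dm, 4 * dm - d0, d0, dm)       # at i-1/2
    qul = q[i] + alpha * (q[i] - q[i - 1])
    qav = 0.5 * (q[i] + q[i + 1])
    qmd = qav - 0.5 * dM4p
    qlc = q[i] + 0.5 * (q[i] - q[i - 1]) + (4.0 / 3.0) * dM4m
    qmin = max(min(q[i], q[i + 1], qmd), min(q[i], qul, qlc))
    qmax = min(max(q[i], q[i + 1], qmd), max(q[i], qul, qlc))
    return qor + minmod(qmin - qor, qmax - qor)           # Eq. (54)

# smooth, monotone data: the unlimited fifth-order value is kept
q = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(mp5_left(q, 2))
```

For smooth monotone data the unlimited fifth-order value is returned; across a step the result is clipped into the local $[q^{\min}, q^{\max}]$ interval, which is what suppresses the oscillations.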
The HLLE approximate Riemann solver is a central-upwind flux function that uses the maximum characteristic speeds in each direction to calculate a solution to the Riemann problem (Harten et al. 1983; Einfeldt 1988):

$$\hat{f}^{\mathrm{HLLE}}_{i+1/2} = \frac{\lambda^+ f(u^\ell_{i+1/2}) - \lambda^- f(u^r_{i+1/2}) + \lambda^+ \lambda^-\left(u^r_{i+1/2} - u^\ell_{i+1/2}\right)}{\lambda^+ - \lambda^-}, \tag{55}$$
where λ + and λ − represent the largest characteristic speeds at the interface in the right and left directions, respectively. The largest and smallest characteristic speeds of the relativistic fluid in flat spacetime in the direction x i are
$$\lambda^\pm = \frac{v^i\left(1 - c_s^2\right) \pm \sqrt{c_s^2\left(1 - v^2\right)\left[\gamma^{ii}\left(1 - v^2 c_s^2\right) - v^i v^i\left(1 - c_s^2\right)\right]}}{1 - v^2 c_s^2}, \tag{56}$$
where the sound speed c s is
$$c_s^2 = \frac{1}{h}\left[\frac{\partial P}{\partial \rho} + \frac{P}{\rho^2}\,\frac{\partial P}{\partial \epsilon}\right]. \tag{57}$$
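Putting Eqs. (55)-(57) together for a one-dimensional flat-space $\Gamma$-law fluid gives a compact flux routine. This sketch is illustrative, not oahu's code: it folds Eq. (57) into the equivalent $\Gamma$-law form $c_s^2 = \Gamma P/(\rho h)$ and uses the common Einfeldt practice of clipping the wave speeds at zero.

```python
import numpy as np

def flux_and_speeds(rho, v, P, Gamma=5.0 / 3.0):
    """Conserved state u = (D, S, tau), flux f, and the extremal
    characteristic speeds of Eq. (56) (flat space, gamma^xx = 1) for a
    1-D Gamma-law relativistic fluid."""
    eps = P / ((Gamma - 1.0) * rho)        # specific internal energy
    h = 1.0 + eps + P / rho                # specific enthalpy
    W2 = 1.0 / (1.0 - v * v)               # Lorentz factor squared
    D = rho * np.sqrt(W2)
    S = rho * h * W2 * v
    tau = rho * h * W2 - P - D
    u = np.array([D, S, tau])
    f = np.array([D * v, S * v + P, S - D * v])
    cs2 = Gamma * P / (rho * h)            # Eq. (57) for a Gamma-law EOS
    root = np.sqrt(cs2 * (1.0 - v * v)
                   * ((1.0 - v * v * cs2) - v * v * (1.0 - cs2)))
    lam_p = (v * (1.0 - cs2) + root) / (1.0 - v * v * cs2)
    lam_m = (v * (1.0 - cs2) - root) / (1.0 - v * v * cs2)
    return u, f, lam_m, lam_p

def hlle_flux(prim_l, prim_r):
    """HLLE flux, Eq. (55), from left/right primitive states
    (rho, v, P); wave speeds are clipped at zero (Einfeldt)."""
    ul, fl, lml, lpl = flux_and_speeds(*prim_l)
    ur, fr, lmr, lpr = flux_and_speeds(*prim_r)
    lp = max(lpl, lpr, 0.0)                # fastest right-going speed
    lm = min(lml, lmr, 0.0)                # fastest left-going speed
    return (lp * fl - lm * fr + lp * lm * (ur - ul)) / (lp - lm)

# consistency check: equal left/right states reduce the flux to f(u)
prim = (1.0, 0.0, 1.0)
print(hlle_flux(prim, prim))
```

Note that for $\gamma^{ii} = 1$ the radicand of Eq. (56) collapses to $c_s^2(1 - v^2)^2$, recovering the familiar relativistic velocity-addition form of the wave speeds.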
Time Integration
The conservation equations are written in semi-discrete form:
$$\frac{du_{i,j,k}}{dt} = -\frac{1}{\Delta x}\left(\hat{f}^1_{i+1/2,j,k} - \hat{f}^1_{i-1/2,j,k}\right) - \frac{1}{\Delta y}\left(\hat{f}^2_{i,j+1/2,k} - \hat{f}^2_{i,j-1/2,k}\right) - \frac{1}{\Delta z}\left(\hat{f}^3_{i,j,k+1/2} - \hat{f}^3_{i,j,k-1/2}\right) + s(u_{i,j,k}), \tag{58}$$
where $\hat{f}_{i+1/2}$ is the numerical flux. The flux functions in each direction are evaluated separately. The sparse wavelet representation leads to a scheme for integrating a system of differential equations in time by using the method of lines. The coefficients in the expansion become time dependent and can be integrated in time using any standard time integrator. In this work the classical fourth-order Runge-Kutta method is used, which has a CFL coefficient $\lambda_0 = 2/3$. The velocity and characteristic speeds of the fluid are bounded by the speed of light, so the time step $\lambda = c\,\Delta t/\Delta x$ is bounded by the CFL coefficient, $\lambda \leq \lambda_0$.

As the physical state evolves during the simulation, the set of essential points will change. This means that the method needs to support the promotion of a point to becoming essential and the demotion of a point from being essential. To allow for such changes to the set of essential points, so-called neighboring points are added to the grid. These are the points adjacent to an essential point at the next finer level. In a sense, these points are sentinels waiting to become essential. Given that the points at level 0 are always essential, the points at level 1 will all be at least neighboring, and in this way both the level-0 and level-1 grids will be fully occupied. Both neighboring and essential points participate in time integration and, for this reason, are called active points.
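The method-of-lines update with the classical fourth-order Runge-Kutta integrator and a CFL-limited step can be sketched as follows (a generic illustration; `rhs` stands in for the flux-difference right-hand side of Eq. (58)):

```python
import numpy as np

def rk4_step(u, t, dt, rhs):
    """One classical fourth-order Runge-Kutta step for du/dt = rhs(u, t)."""
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(u + dt * k3, t + dt)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# CFL-limited step: characteristic speeds are bounded by c = 1, and
# the integrator has CFL coefficient lambda_0 = 2/3, so dt <= (2/3)*dx.
dx = 0.01
dt = (2.0 / 3.0) * dx

# quick sanity check on the linear ODE du/dt = -u
u, t = 1.0, 0.0
for _ in range(150):
    u = rk4_step(u, t, dt, lambda u, t: -u)
    t += dt
print(u, np.exp(-t))
```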
At the end of each timestep, every active point has its wavelet transformation computed. Essential points that no longer exceed the error threshold are demoted, and neighboring points that exceed the threshold are promoted. Neighboring points promoted to being essential points will thus require their own neighboring points at the next finer level. In this way, as the solution develops features on finer scales, the grid adapts, adding points to the grid exactly where the resolution is needed. Initially, the field values for a neighboring point can be taken from the initial conditions. Neighbors added after the initial time slice are given field values from the inverse wavelet transformation. That is, they are given a wavelet coefficient of zero, so their fields are equal to the interpolation from the previous level.
Finally, there is a third class of grid points, called nonessential points, that are required to fill out wavelet or other computational stencils of essential and neighboring points. Nonessential points do not participate in time integration and are given values via interpolation.
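A toy version of the status update makes the bookkeeping concrete. The dictionary layout and child-index convention below are our illustrative assumptions, not oahu's data structures:

```python
def update_status(d, eps, jmax):
    """Classify points of a 1-D dyadic grid from their wavelet
    coefficients d[(j, k)]: a point is essential if |d| >= eps, and
    every essential point below jmax recruits its two nearest level
    j+1 points (indices 2k-1 and 2k+1 in this convention) as
    neighboring points -- sentinels that join the time integration."""
    essential = {p for p, c in d.items() if abs(c) >= eps}
    neighbors = set()
    for (j, k) in essential:
        if j + 1 <= jmax:
            neighbors.add((j + 1, 2 * k - 1))
            neighbors.add((j + 1, 2 * k + 1))
    return essential, neighbors - essential

# one point exceeds the threshold and is promoted; one is demoted
ess, nbr = update_status({(1, 1): 0.5, (1, 3): 1e-9}, 1e-4, jmax=3)
print(ess, nbr)
```

In a real run this pass happens at the end of every timestep, with newly added neighbors seeded by the inverse wavelet transformation (i.e., a zero wavelet coefficient).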
In practice, an upper limit on refinement, j max , must be specified. During an evolution, it can happen that the solution naturally attempts to refine past j max . There are two options in this case. The first is for the code to complain and die, and the second is for the code to complain and warn the user. Taking the former approach assures that the error bounds implied by the chosen are not violated, because if the grid attempts to add a point at level j max + 1 it is because the point is needed. However, if there is a discontinuity in the solution, no level of refinement will be sufficient to satisfy the refinement criterion. One possible solution to this problem is to smooth over discontinuities in initial data, and add some viscosity to prevent their formation during the simulation. However, we are interested in sharp features that develop during the simulation and it is for that reason that we use a fluid reconstruction and numerical flux function so as to allow for these sharp features. Thus, with a method that allows for discontinuities, we can never fully satisfy the refinement criterion if a discontinuity develops, and so the grid will want to refine forever. As a result, we take the practical step of limiting the maximum refinement level.
Primitive Solver
An important aspect of any relativistic hydrodynamics code is the inversion between the conserved and the primitive variables. Because the equations are written in conservation form, the conserved variables are the evolved variables. However, the primitive variables are needed as part of the calculation. For Newtonian fluids, this inversion from the conserved to the primitive variables is algebraic, can be done in closed form and results in a unique solution for the primitive variables provided the conserved variables take on physical values. Such is not the case in relativistic situations. As a result, a number of related procedures can be found in the literature to effect this inversion (Duez et al. 2005;Etienne et al. 2012;Noble et al. 2006). Due to its importance, we sketch our approach to performing the inversion.
For this discussion, we revert to the undensitized form of the conserved variables (D, S i , τ ). In terms of the fluid primitives (ρ, v i , P ), and for our chosen (Γ-law) equation of state (with 1 < Γ ≤ 2), the conserved variables are given by the undensitized version of Eqs. (7-9). The inversion can be reduced to a single equation for x = hW 2 /(τ + D), namely,
$$-\left(x - \frac{\Gamma}{2}\right)^2 + \frac{\Gamma^2}{4} - (\Gamma - 1)\beta^2 = (\Gamma - 1)\,\delta\,\sqrt{x^2 - \beta^2}, \tag{59}$$
where we have defined
$$\beta^2 = \frac{S^2}{(\tau + D)^2}, \qquad \delta = \frac{D}{\tau + D}. \tag{60}$$
Note that the left-hand side, call it $f(x)$, is a downward-pointing quadratic, while the right-hand side contains the square root of another quadratic, $g(x) = x^2 - \beta^2$, which has roots $\pm|\beta|$. Solving for $x$ amounts to finding the intersections of $f(x)$ and $(\Gamma - 1)\,\delta\,\sqrt{g(x)}$. For a physical solution, $x > |\beta|$, the largest root of $g(x)$. Therefore, there is a single intersection bracketed by this root of $g(x)$ and the larger root of $f(x)$, namely $|\beta| < x < x^*$, where
$$x^* = \frac{\Gamma}{2} + \left[\frac{\Gamma^2}{4} - (\Gamma - 1)\beta^2\right]^{1/2}. \tag{61}$$
That this root of f (x) is real is guaranteed by the dominant energy condition, given here as β 2 < 1. Hence a unique solution to the primitive inversion exists in the pure hydrodynamics case provided the conserved variables satisfy this inequality. Using a straightforward Newton's method allows us to solve for the primitive variables in virtually every case. Occasionally, when the inequality is violated, rescaling β 2 to bring it within physical bounds is sufficient to allow the primitive solve to proceed to a solution.
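The bracketed solve sketched above can be written as a safeguarded Newton iteration on Eq. (59). The recovery formulas below ($v = \mathrm{sgn}(S)\,|\beta|/x$, $W = x/\sqrt{x^2 - \beta^2}$, $\rho = D/W$, $P = (x - 1)(\tau + D)$) follow from reading the $h$ in $x = hW^2/(\tau + D)$ as the enthalpy density $\rho(1 + \epsilon + P/\rho)$, which makes Eq. (59) come out consistently; this is a 1-D illustration under those conventions, not oahu's production solver.

```python
import numpy as np

def prim_from_cons(D, S, tau, Gamma=5.0 / 3.0, tol=1e-12):
    """Recover (rho, v, P) from undensitized (D, S, tau) by solving
    Eq. (59) for x on the bracket |beta| < x < x*, using Newton steps
    safeguarded by bisection."""
    y = tau + D
    beta2 = (S / y) ** 2
    delta = D / y
    f = lambda x: (-x * x + Gamma * x - (Gamma - 1.0) * beta2
                   - (Gamma - 1.0) * delta * np.sqrt(x * x - beta2))
    fp = lambda x: (-2.0 * x + Gamma
                    - (Gamma - 1.0) * delta * x / np.sqrt(x * x - beta2))
    a = np.sqrt(beta2) + 1e-15                      # f(a) > 0
    b = Gamma / 2.0 + np.sqrt(Gamma ** 2 / 4.0 - (Gamma - 1.0) * beta2)
    x = 0.5 * (a + b)
    for _ in range(200):
        fx = f(x)
        if abs(fx) < tol:
            break
        if fx > 0.0:                                # root lies to the right
            a = x
        else:
            b = x
        xn = x - fx / fp(x)                         # Newton step
        x = xn if a < xn < b else 0.5 * (a + b)     # else bisect
    W = x / np.sqrt(x * x - beta2)                  # Lorentz factor
    rho = D / W
    P = (x - 1.0) * y
    v = np.sign(S) * np.sqrt(beta2) / x
    return rho, v, P

# round trip from known primitives (Gamma = 5/3)
rho0, v0, P0, G = 1.0, 0.3, 0.5, 5.0 / 3.0
h0 = 1.0 + P0 / ((G - 1.0) * rho0) + P0 / rho0
W0 = 1.0 / np.sqrt(1.0 - v0 * v0)
D0, S0 = rho0 * W0, rho0 * h0 * W0 * W0 * v0
tau0 = rho0 * h0 * W0 * W0 - P0 - D0
print(prim_from_cons(D0, S0, tau0, G))
```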
ONE DIMENSIONAL TESTS
Lora-Clavijo et al. (2013) outline a number of simple one-dimensional Riemann problems for relativistic hydrodynamics. Each of the four cases outlined was run with oahu, and the results are reported below. In each case, the base grid was chosen to have $N = 10$, and we have varied both the maximum number of refinement levels and the value of $\epsilon$. For each Riemann problem, we take the overall domain to be from $x = -1$ to $x = 1$ with the two states separated at $x = 0$. The primitive fields on each side of the problem for the four cases are given in Table 1.
Table 1. Initial states for the four Riemann problems tested with oahu. The simulated domain is the interval $x \in [-1, 1]$, and the separation between the left and right states is at $x = 0$.

Case         v           ρ      p
I    x < 0   0           10     13.33
     x > 0   0           1      10^-6
II   x < 0   0           1      10^-6
     x > 0   0           10     13.33
III  x < 0   -0.2        0.1    0.05
     x > 0   0.2         0.1    0.05
IV   x < 0   0.999999    0.001  3.333 × 10^-9
     x > 0   -0.999999   0.001  3.333 × 10^-9
In each case, the results obtained with oahu match closely the exact solution. Figure 5 shows the results for Case I at t = 0.8 with N = 10, j max = 10 and = 10 −5 . The solution found by oahu matches the exact solution extremely well. The final grid has adapted to the features that form during the simulation. The final state shown has only 324 points out of a possible 10,241 points, giving this simulation a very high effective resolution at a savings of over 96 percent.
The bottom right panel of Figure 5 gives the level of each point in the grid. This figure is characteristic of the refinement of the wavelet method. When the solution is smooth, there is very little refinement, and where the solution exhibits sharp features, the refinement proceeds to higher levels. For the very sharp features in this example, the refinement proceeds to the highest level allowed for the particular simulation. A discontinuity in the solution will generically refine as far as is allowed by the simulation. There is no smooth approximation at any resolution to a sharp transition.
It is interesting to explore how the quality of the solution depends on the maximum refinement level allowed and on the refinement criterion, $\epsilon$. Figure 6 shows two close-up views of features for Case I run for three different values of $j_{\max}$. In each case, $N = 10$ and $\epsilon = 10^{-5}$. The overshoot in the density for the $j_{\max} = 10$ case is not an artifact of the adaptive scheme we employ; when run at an equivalent resolution in unigrid ($N = 5120$ and $j_{\max} = 1$), the same overshoot is present. A similar close-up is shown in Figure 7, giving a comparison of a set of runs with $N = 10$, $j_{\max} = 8$ and various values of $\epsilon$. As $\epsilon$ decreases, the solution matches the exact solution more closely. However, there is little difference between $\epsilon = 10^{-4}$ and $10^{-5}$. This demonstrates the interplay between $j_{\max}$ and $\epsilon$: in this case the sharpened refinement criterion would drive more refinement near this feature, but the maximum refinement level has already been reached. Table 2 gives details of an error measure for each simulation:
$$L_2(f) = \sqrt{\frac{1}{N_{\mathrm{occ}}} \sum_{j,k} \left(f(x_{j,k}) - f_{\mathrm{ex}}(x_{j,k})\right)^2}, \tag{62}$$
where $N_{\mathrm{occ}}$ gives the number of occupied points, $j$ and $k$ index each point, $f(x)$ is the computed solution and $f_{\mathrm{ex}}(x)$ is the exact solution. Increasing the maximum refinement level at the same $\epsilon$ tended to decrease the overall error. Similarly, as $\epsilon$ is decreased, the error decreases; though, as indicated in Figure 7, the refinement is reaching its allowed maximum, so the additional refinement that would be generated by the smaller $\epsilon$ is not realized, leading only to modest accuracy gains. The much smaller errors for Case III are not surprising, as this test contains two rarefaction waves; though there are sharp features in the exact solution, there are no discontinuities.
In Table 2, $N_{\mathrm{occ}}$ gives the number of occupied grid points, while $N_{\mathrm{grid}}$ gives the maximum number of available points. The last three columns give the $L_2$-norms of the error of the velocity, density and pressure, as described in the text. Note that the MP5 reconstruction failed in Case IV, and so results using PPM reconstruction are given instead.
oahu also monitors the conservation of the evolved variables during a simulation. For these test cases, the conservation is as good as the specified $\epsilon$. This is to be expected, as the representation keeps details only if those details are larger than $\epsilon$. In each of the cases above, the relative drift in the conserved quantities is on the order of the $\epsilon$ for that case.
RELATIVISTIC KELVIN-HELMHOLTZ INSTABILITY
We applied oahu to the relativistic Kelvin-Helmholtz instability in two dimensions. For comparison, we have used initial conditions identical to those in Radice and Rezzolla (2012). The computational domain is taken to be a periodic box from $x = -0.5$ to $x = 0.5$, and from $y = -1$ to $y = 1$. The shear is introduced via a counter-propagating flow in the $x$ direction:

$$v^x(y) = \begin{cases} V_s \tanh\left[(y - 0.5)/a\right], & y > 0 \\ -V_s \tanh\left[(y + 0.5)/a\right], & y \leq 0. \end{cases} \tag{63}$$

Here $a = 0.01$ is the thickness of the shear layer and $V_s = 0.5$. A small perturbation of the velocity transverse to the shear layer seeds the instability:
$$v^y(x, y) = \begin{cases} A_0 V_s \sin(2\pi x)\,\exp\left[-(y - 0.5)^2/\sigma\right], & y > 0 \\ -A_0 V_s \sin(2\pi x)\,\exp\left[-(y + 0.5)^2/\sigma\right], & y \leq 0, \end{cases} \tag{64}$$
where A 0 = 0.1, and σ = 0.1. For this test, Γ = 4/3 and the pressure is initially constant, P = 1. The density is given by a profile similar to v x superposed on a constant as follows
$$\rho(y) = \begin{cases} \rho_0 + \rho_1 \tanh\left[(y - 0.5)/a\right], & y > 0 \\ \rho_0 - \rho_1 \tanh\left[(y + 0.5)/a\right], & y \leq 0, \end{cases} \tag{65}$$
where $\rho_0 = 0.505$ and $\rho_1 = 0.495$. Shown in Figure 8 is the density of this system at time $t = 3$ for a run having a base grid size of $(N_x, N_y) = (40, 80)$ and a maximum refinement level of $j_{\max} = 6$. Thus, the effective grid size is $2560 \times 5120$. The refinement criterion was $\epsilon = 10^{-4}$. The final state is consistent with the results in Radice and Rezzolla (2012). The conservation of the fluid is consistent with the chosen $\epsilon$: $D$ suffers a relative change of $6.13 \times 10^{-6}$, and $\tau$ suffers a relative change of $1.04 \times 10^{-4}$. Note the appearance of the secondary whirl along the shear boundary. In accord with the results of Radice and Rezzolla (2012), we find that the number and appearance of these secondary instabilities depend on the maximum resolution employed in our simulation, supporting their finding that these secondary instabilities are numerical artifacts.
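For concreteness, the initial data of Eqs. (63)-(65) can be assembled as below (a sketch; the array layout and function name are ours):

```python
import numpy as np

def kh_initial_data(x, y, a=0.01, Vs=0.5, A0=0.1, sigma=0.1,
                    rho0=0.505, rho1=0.495):
    """Initial data for the KH test, Eqs. (63)-(65): shear profile vx,
    transverse perturbation vy, density profile rho, and uniform
    pressure P = 1."""
    upper = y > 0
    yc = np.where(upper, y - 0.5, y + 0.5)     # offset to shear layer
    sgn = np.where(upper, 1.0, -1.0)           # sign flip across y = 0
    vx = sgn * Vs * np.tanh(yc / a)
    vy = sgn * A0 * Vs * np.sin(2.0 * np.pi * x) * np.exp(-yc ** 2 / sigma)
    rho = rho0 + sgn * rho1 * np.tanh(yc / a)
    P = np.ones_like(vx)
    return vx, vy, rho, P

vx, vy, rho, P = kh_initial_data(np.array([0.25]), np.array([0.5]))
print(vx, vy, rho, P)
```

At the center of the upper shear layer ($y = 0.5$) the shear velocity and density perturbation vanish while the seed perturbation is maximal, as Eqs. (63)-(65) require.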
RAYLEIGH-TAYLOR INSTABILITY IN A RELATIVISTIC OUTFLOW
One interesting problem in relativistic fluid dynamics is the interaction of a relativistic blast wave of ejecta from a GRB explosion with the surrounding ISM. As the shock expands adiabatically into the ISM, it loses thermal energy and eventually decelerates. This results in a double-shock system with a forward shock (traveling into the ISM), a contact discontinuity, and a reverse shock (moving into the ejecta). Levinson (2011) showed that the contact discontinuity is unstable to the Rayleigh-Taylor instability. The turbulence generated by this instability can amplify magnetic fields and the emission from the thin shell of material behind the forward shock. Duffell and MacFadyen have studied this system with numerical simulations in a series of papers (Duffell and MacFadyen 2011, 2013), finding that the Rayleigh-Taylor instability can disrupt the forward shock for soft equations of state, which might be typical of radiative systems (Duffell and MacFadyen 2014).
Simulating the relativistic outflow of GRB ejecta constitutes an especially challenging numerical test for an adaptive relativistic fluid code. Relativistic effects compress the width of the thin shell by the Lorentz factor squared, ∆r/r ≈ 1/W 2 . Capturing the Rayleigh-Taylor instability that forms at the contact discontinuity within the shell thus requires very high resolution within a thin shell that propagates outward with a velocity near the speed of light. Duffell and MacFadyen succeeded in simulating this system using an elegant moving mesh code, TESS (Duffell and MacFadyen 2011). The computational cells in TESS are allowed to move with the fluid, giving very high resolution in the shell and at the shocks. As a final test, we repeat the decelerating shock test here with oahu to demonstrate the adaptive capability of the wavelet approach for relativistic hydrodynamics.
We use the initial data for the decelerating shock given in Duffell and MacFadyen (2011). The initial data are spherically symmetric, but this symmetry is broken by the instability, so we perform the simulation in cylindrical coordinates. We label these coordinates {s, z}, where s is the cylindrical radius, and the spherical radius r is given by r 2 = s 2 + z 2 . The initial data for the spherical explosion are
$$\rho = \begin{cases} \rho_0\,(r_0/r_{\min})^{k_0}, & r < r_{\min} \\ \rho_0\,(r_0/r)^{k_0}, & r_{\min} < r < r_0 \\ \rho_0\,(r_0/r)^{k}, & r_0 < r, \end{cases} \tag{66}$$
where the different parameters are chosen to be $k_0 = 4$, $r_0 = 0.1$, and $r_{\min} = 0.001$.
A spherical explosion into a medium with a power-law dependence, $\rho \propto r^{-k}$, will decelerate if $k < 3$. So we choose $k = \{0, 1, 2\}$ for our runs below. The pressure is given by
$$P = \begin{cases} e_0\,\rho/3, & r < r_{\mathrm{exp}} \\ 10^{-6}\,\rho, & r > r_{\mathrm{exp}}, \end{cases} \tag{68}$$
where r exp = 0.003 and the constant e 0 is chosen such that the outgoing shock has a Lorentz factor of W ≈ 10. We set e 0 = {6, 4, 6} for k = {0, 1, 2}, respectively.
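The initial data of Eqs. (66)-(68) in cylindrical coordinates can be sketched as follows; the overall density scale `rho_0` and the fluid starting at rest are our assumptions, as neither is quoted in the text:

```python
import numpy as np

def explosion_initial_data(s, z, k=0, k0=4, r0=0.1, rmin=0.001,
                           rexp=0.003, e0=6.0, rho_0=1.0):
    """Initial data for the decelerating-shock test, Eqs. (66)-(68),
    in cylindrical coordinates (s, z).  rho_0 sets the overall density
    scale and v = 0 starts the fluid at rest; both are assumptions."""
    r = np.sqrt(s ** 2 + z ** 2)
    rr = np.maximum(r, rmin)          # density is flat inside r_min
    rho = np.where(r < r0, rho_0 * (r0 / rr) ** k0,
                   rho_0 * (r0 / rr) ** k)
    P = np.where(r < rexp, e0 * rho / 3.0, 1.0e-6 * rho)
    v = np.zeros_like(r)
    return rho, P, v

rho, P, v = explosion_initial_data(np.array([0.05]), np.array([0.0]))
print(rho, P, v)
```

The hot region $r < r_{\mathrm{exp}}$ sits deep inside the $r^{-k_0}$ density ramp, so the pressure contrast across $r_{\mathrm{exp}}$ is what launches the relativistic blast wave.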
Three different cases, $k = \{0, 1, 2\}$, are simulated to $t = 0.8$, and the solutions at this time are shown in Figures 9, 10, and 11. In each case, ten levels of refinement were used with a refinement criterion of $\epsilon = 10^{-4}$. The Lorentz factor for the shock waves is $\sim 12$. All simulations were evolved in their entirety in 2-D from the initial conditions using the wavelet method described in this work. The results are consistent with those obtained with the TESS code. As $k$ is increased, the width of the blast wave decreases. For $k = 0$ the RT instability is well resolved. For $k = 1$, the instability is evident, though with less detail than for $k = 0$. In the $k = 2$ case, the width of the blast wave is too narrow to properly resolve the substructures in the instability. That there is an instability is apparent, but it lacks the definition of even the $k = 1$ case. These features could be resolved by increasing $j_{\max}$ for the $k = 2$ simulation. The conservation of all conserved variables was externally monitored throughout the simulation. Variation in the conservation of $D$ amounted to less than 0.001%, while variation in the conservation of $\tau$ was somewhat larger at 0.8%, visibly due to boundary effects along the $z$-axis.
SUMMARY
Motivated by the need to efficiently resolve the many emerging localized features and instabilities present in astrophysics simulations such as the merger of two neutron stars, this work has presented a wavelet based approach for solving the relativistic hydrodynamic equations. The resulting implementation of this approach, oahu, has reproduced a number of results in relativistic hydrodynamics, including the one dimensional shock tube tests of Lora-Clavijo et al. (2013), the relativistic Kelvin-Helmholtz instability (Beckwith and Stone 2011;Radice and Rezzolla 2012), and the Rayleigh-Taylor instability resulting from a gamma ray burst outflow model (Duffell and MacFadyen 2011). Unlike adaptive mesh refinement based on nested boxes, the unstructured dyadic grid of collocation points in the wavelet approach conforms to highly localized solution features without creating the box-shaped numerical artifacts typically present in nested box adaptive mesh refinement simulations. Further, the approach presented here demonstrates the efficiency and utility of using the coefficients from the wavelet transformation to drive refinement without requiring problem specific a priori refinement criteria.
The wavelet method described here can be directly applied to the equations of relativistic magnetohydrodynamics (MHD) which describe a plasma of relativistic particles in the limit of infinite conductivity. Investigating MHD with wavelets will be part of future work.
For use in the merger simulations of astrophysical compact objects such as neutron stars and black holes, the wavelet based relativistic hydrodynamics kernel must be integrated with a kernel solving the Einstein equations for gravity. The wavelet method presented in this work has been designed expressly for this purpose and is a crucial feature for its use in astrophysics. The results from fully dynamic gravitational and hydrodynamics simulations as well as the method for integrating the hydrodynamics and gravitation computational kernels with the wavelet approach will be reported in future work.
Although not a focus of this work, the parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking. The scalable asynchronous multi-tasking aspects of this work will be addressed in future work.
ACKNOWLEDGMENTS
It is a pleasure to thank our long-term collaborators Luis Lehner, Steven L. Liebling, and Carlos Palenzuela, with whom we have had many discussions on adaptive methods for relativistic hydrodynamics. We acknowledge Thomas Sterling, with whom we have had many discussions on the parallel implementation of oahu. We also acknowledge many helpful discussions on wavelets with Temistocle Grenga and Samuel Paolucci. This material is based upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002377, the Department of Energy under Award Number DE-SC0008809, the National Science Foundation under Award Number PHY-1308727, and NASA under Award Number BL-4363100.

Figure 9. The $k = 0$ spherical explosion case with a decelerating relativistic shock and a mass excess. After a coasting period, the solution develops two shocks separated by a contact discontinuity unstable to the Rayleigh-Taylor instability. Results are shown at $t = 0.8$ for $j_{\max} = 10$ with $\epsilon = 10^{-4}$. This figure also demonstrates the adaptivity of the method; outside the blast wave there is little refinement, while inside the blast, the small-scale features drive high levels of refinement, leading to a well-resolved Rayleigh-Taylor instability.
Figure 5. Comparison of the results from oahu to the exact solution for the Case I Riemann problem (see the table of Riemann problem statistics). The solution (shown in gray) is at time t = 0.8. The positions of the wavelet grid points and their associated field values are shown in blue. This simulation was performed with N = 10, jmax = 10 and ε = 10^−5. The velocity, pressure and density all match the exact solution well. In the lower right panel the level of the point is plotted against the position of the point. The features in the profiles of v, P, and ρ match the location of refinement in the grid. The resulting wavelet grid is not hand-tuned; instead, it adapts to the evolution of the fluid. At the time shown, only 324 out of a possible 10241 grid points are occupied.
Figure 6. Detail of pressure at one end of the rarefaction wave (top) and the density in the shock (bottom) for Case I, with three different values of jmax. As the maximum refinement level is increased, the wavelet solution matches the exact solution more closely.

Figure 7. Detail of the pressure similar to the top panel of Figure 6, but with fixed jmax = 8 and a varying ε. The refinement criterion has already pushed the refinement to the maximum level with ε = 10^−4, and so there is little change as ε is decreased to 10^−5.

[Displaced body text:] ...periodic box from x = −0.5 to x = 0.5, and from y = −1 to y = 1. The shear is introduced via a counter-propagating flow in the x direction:

v_x(y) = V_s tanh[(y − 0.5)/a] for y > 0,  and  v_x(y) = −V_s tanh[(y + 0.5)/a] for y ≤ 0.
Figure 8. The relativistic Kelvin-Helmholtz instability at time t = 3. The points are colored by their density. This simulation used (Nx, Ny) = (40, 80), jmax = 6 and ε = 10^−4. Compare with
Figure 10. The k = 1 spherical explosion case with a decelerating relativistic shock and a mass excess. Results are shown at t = 0.8 for jmax = 10 with ε = 10^−4.
Table 2. Statistics for the Riemann problems presented in this work.

Case  jmax  ε       Nocc   NP      errors (×10^−3)
I     6     10^−5   201    641     2.88    29.1    3.41
I     8     10^−5   261    2561    2.05    22.2    1.86
I     10    10^−5   324    10241   0.89    21.1    1.87
I     8     10^−3   145    2561    3.63    40.1    3.29
I     8     10^−4   215    2561    2.50    26.9    2.19
II    10    10^−5   320    10241   2.16    21.3    2.19
III   10    10^−5   422    10241   0.009   0.003   0.002
IV    10    10^−5   2759   10241   0.298   0.868   219
Generalized hypergeometric G-functions take linear independent values

Sinnou David, Noriko Hirata-Kohno, Makoto Kawashima

March 1, 2022. arXiv:2203.00207v1 [math.NT]

Keywords: generalized hypergeometric function, G-function, linear independence, irrationality, Padé approximation

Abstract. In this article, we show a new general linear independence criterion for values of G-functions, including the linear independence of values at algebraic points of contiguous hypergeometric functions, which was not known before. Let K be an algebraic number field and v a place of K. Let r ∈ Z with r ≥ 2. Consider a_1, …, a_r, b_1, …, b_{r−1} ∈ Q \ {0}, none of which is a negative integer. Assume that none of the numbers a_k and a_k + 1 − b_j (1 ≤ k ≤ r, 1 ≤ j ≤ r − 1) is a strictly positive integer. Let α_1, …, α_m ∈ K \ {0} be pairwise distinct. By choosing β ∈ Z sufficiently large, depending on K and v, so that the points α_1/β, …, α_m/β are close enough to the origin, we prove that the rm + 1 numbers
\[
{}_rF_{r-1}\left(\begin{matrix} a_1,\ldots,a_r\\ b_1,\ldots,b_{r-1}\end{matrix}\,\middle|\,\frac{\alpha_i}{\beta}\right),\qquad
{}_rF_{r-1}\left(\begin{matrix} a_1+1,\ \ldots,\ \ldots,\ \ldots,\ a_r+1\\ b_1+1,\ldots,b_{r-s}+1,\ b_{r-s+1},\ldots,b_{r-1}\end{matrix}\,\middle|\,\frac{\alpha_i}{\beta}\right)
\]
(1 ≤ i ≤ m, 1 ≤ s ≤ r − 1) and 1 are linearly independent over K. The essential ingredient is our term-wise formal construction of Padé approximants of type II, together with a new non-vanishing argument for the generalized Wronskian.
Introduction
The generalized hypergeometric G-function, in the sense of C. L. Siegel, is a central object both from the analytic point of view and for its number-theoretical interest. In this article, we study arithmetic properties of values of generalized hypergeometric functions, relying on Padé approximations of type II. We provide a new general linear independence criterion for the values of these functions at several distinct points, over a given algebraic number field of any finite degree. Our statement extends previous ones due to D. V. Chudnovsky and D. V. Chudnovsky-G. V. Chudnovsky in [9, Theorem 1], which all dealt with values at one point and over the rational number field. We carry out the construction of Padé approximants by our formal method, generalizing that used in [18,19,20]. We are also inspired by works due to A. I. Galochkin in [22,23], V. N. Sorokin in [45], K. Väänänen in [47] and W. Zudilin in [49], which gave several linear independence criteria, over the field of rational numbers or over imaginary quadratic fields, for values of polylogarithmic functions or hypergeometric G-functions. However, these previous results were either for values at only one point, or restricted in the choice of the ground field. As related work, we refer to the algebraic independence, announced in [16, Theorem 3.4], of the two special values of Gauss' hypergeometric functions
\[
{}_2F_1\left(\begin{matrix}\tfrac12,\ \tfrac12\\ 1\end{matrix}\,\middle|\,\alpha\right)
\quad\text{and}\quad
{}_2F_1\left(\begin{matrix}-\tfrac12,\ \tfrac12\\ 1\end{matrix}\,\middle|\,\alpha\right)
\]
when α is a non-zero algebraic number assumed to be of small modulus; this was later proved by Y. André in [2], together with a p-adic analogue. We also mention that work by F. Beukers concerns the algebraicity of values of such functions [6,7]. A historical survey, with comparisons to earlier works, is given in [18,19]. Our criterion shows in particular the linear independence of values of generalized hypergeometric functions including the contiguous ones, whose linear independence as functions has been discussed in [34,35]. Our main contribution in the proof is a new non-vanishing property for the generalized Wronskian of Hermite type, in the case of generalized hypergeometric G-functions.
Notations and main result
We collect some notation used throughout the article. Let Q be the field of rational numbers and K an algebraic number field of arbitrary finite degree [K : Q] < ∞. Denote by N the set of strictly positive integers. We denote the set of places of K by M_K (the infinite places by M_K^∞ and the finite places by M_K^f). For v ∈ M_K, we denote by K_v the completion of K with respect to v, and by C_v the completion of an algebraic closure of K_v (for v ∈ M_K^∞ as well as for v ∈ M_K^f).

Let p, q ∈ N and let a_1, …, a_p, b_1, …, b_q ∈ Q \ {0}, none of which is a negative integer. We define the generalized hypergeometric function by
\[
{}_pF_q\left(\begin{matrix} a_1,\ldots,a_p\\ b_1,\ldots,b_q\end{matrix}\,\middle|\, z\right)
=\sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{z^k}{k!}\,,
\]
where (a)_k is the Pochhammer symbol: (a)_0 = 1, (a)_k = a(a+1)⋯(a+k−1). For a rational number x, we define
\[
\mu(x)=\prod_{\substack{q\ \mathrm{prime}\\ q\mid \mathrm{den}(x)}} q^{q/(q-1)}\,.
\]
We denote the normalized absolute value |·|_v for v ∈ M_K:
\[
|p|_v = p^{-[K_v:\mathbb{Q}_p]/[K:\mathbb{Q}]}\ \ \text{if } v\in M_K^{f}\ \text{and}\ v\mid p\,,
\qquad
|x|_v = |\iota_v x|^{[K_v:\mathbb{R}]/[K:\mathbb{Q}]}\ \ \text{if } v\in M_K^{\infty}\,,
\]
where p is a prime number and ι_v the embedding K ↪ C corresponding to v. On K_v^n, the norm ‖·‖_v denotes the supremum norm. Then we have the product formula
\[
\prod_{v\in M_K} |\xi|_v = 1 \qquad \text{for } \xi\in K\setminus\{0\}\,.
\]
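For K = Q the product formula is elementary: the places are the archimedean absolute value and one p-adic absolute value per prime. The following sketch (our own illustration, with helper names of our choosing) verifies it for a sample rational:

```python
from fractions import Fraction

def padic_abs(x: Fraction, p: int) -> Fraction:
    """Normalized p-adic absolute value on Q: |x|_p = p^(-v_p(x))."""
    if x == 0:
        raise ValueError("product formula needs x != 0")
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p) ** v

def prime_factors(n: int):
    """Set of primes dividing |n|, by trial division."""
    n = abs(n)
    out, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

xi = Fraction(-84, 55)
primes = prime_factors(xi.numerator) | prime_factors(xi.denominator)
product = abs(xi)          # the archimedean place
for p in primes:
    product *= padic_abs(xi, p)
# Only primes dividing numerator or denominator contribute a factor != 1.
```
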
Let m be a positive integer and β := (β_0, …, β_m) ∈ K^{m+1} \ {0}. Define the absolute height of β by
\[
H(\beta)=\prod_{v\in M_K}\max\{1,|\beta_0|_v,\ldots,|\beta_m|_v\}\,,
\]
and the logarithmic absolute height by h(β) = log H(β). Let v ∈ M_K. We denote log max{1, |β_0|_v, …, |β_m|_v} by h_v(β); then we have h(β) = Σ_{v∈M_K} h_v(β). For a finite set S ⊂ Q, we define the denominator of S by
\[
\mathrm{den}(S)=\min\{1\le n\in\mathbb{Z} \mid n\alpha\ \text{is an algebraic integer for all}\ \alpha\in S\}\,.
\]
Let m, r be strictly positive integers with r ≥ 2 and α := (α_1, …, α_m) ∈ (K \ {0})^m with pairwise distinct coordinates. For β ∈ K \ {0} and a place v_0 ∈ M_K, define the real number
\[
V_{v_0}(\alpha,\beta)=\log|\beta|_{v_0}-rm\,h(\alpha,\beta)-(rm+1)\log\|\alpha\|_{v_0}
+rm\log\|(\alpha,\beta)\|_{v_0}-rm\log 2
+ r\log(rm+1)+rm\log\frac{rm+1}{rm}
-\sum_{j=1}^{r}\left(\log\mu(a_j)+2\log\mu(b_j)
+\frac{\mathrm{den}(a_j)\,\mathrm{den}(b_j)}{\varphi(\mathrm{den}(a_j))\,\varphi(\mathrm{den}(b_j))}\right),
\]
where φ is Euler's totient function. Now we are ready to state our main theorem.
Theorem 2.1. Let v_0 be a place of K. Let a_1, …, a_r, b_1, …, b_{r−1} ∈ Q \ {0}, none of which is a negative integer. Assume that none of the numbers a_k and a_k + 1 − b_j (1 ≤ k ≤ r, 1 ≤ j ≤ r − 1) is a strictly positive integer. Suppose V_{v_0}(α, β) > 0. Then the rm + 1 numbers
\[
{}_rF_{r-1}\left(\begin{matrix} a_1,\ldots,a_r\\ b_1,\ldots,b_{r-1}\end{matrix}\,\middle|\,\frac{\alpha_i}{\beta}\right),\qquad
{}_rF_{r-1}\left(\begin{matrix} a_1+1,\ \ldots,\ \ldots,\ \ldots,\ a_r+1\\ b_1+1,\ldots,b_{r-s}+1,\ b_{r-s+1},\ldots,b_{r-1}\end{matrix}\,\middle|\,\frac{\alpha_i}{\beta}\right)
\quad (1\le i\le m,\ 1\le s\le r-1)
\]
and 1 are linearly independent over K.
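To get a concrete feel for the hypothesis V_{v_0}(α, β) > 0, the following sketch (our own illustration, not part of the paper) evaluates the quantity V_{v_0}(α, β) for K = Q with v_0 the archimedean place, where the heights and norms reduce to logarithms of ordinary absolute values of integers. We pad the list of denominators of the b_j with 1 so that the sum runs up to j = r (an assumption on our part); the point is only that V grows like log|β| once β dominates the data, so the hypothesis holds for β large enough:

```python
import math

def phi(n):
    """Euler's totient function, by trial division."""
    out, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            out -= out // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        out -= out // m
    return out

def mu(d):
    """mu(x) depends only on d = den(x): product of q^(q/(q-1)) over primes q | d."""
    out, m, q = 1.0, d, 2
    while q * q <= m:
        if m % q == 0:
            out *= q ** (q / (q - 1))
            while m % q == 0:
                m //= q
        q += 1
    if m > 1:
        out *= m ** (m / (m - 1))
    return out

def V(alphas, beta, dens_a, dens_b):
    """V_{v0}(alpha, beta) for K = Q, v0 archimedean, integer alphas and beta."""
    r, m = len(dens_a), len(alphas)
    h = math.log(max(1, max(abs(x) for x in alphas), abs(beta)))   # h(alpha, beta)
    norm_a = max(abs(x) for x in alphas)                           # ||alpha||_{v0}
    norm_ab = max(norm_a, abs(beta))                               # ||(alpha, beta)||_{v0}
    val = (math.log(abs(beta)) - r * m * h - (r * m + 1) * math.log(norm_a)
           + r * m * math.log(norm_ab) - r * m * math.log(2)
           + r * math.log(r * m + 1) + r * m * math.log((r * m + 1) / (r * m)))
    for da, db in zip(dens_a, dens_b):
        val -= math.log(mu(da)) + 2 * math.log(mu(db)) + da * db / (phi(da) * phi(db))
    return val

# r = 2, m = 1: a_1 = a_2 = 1/2 (denominator 2), b_1 = 1/3 (denominator 3).
v_small = V([1], 10, [2, 2], [3, 1])        # too small a beta: V < 0
v_large = V([1], 10**12, [2, 2], [3, 1])    # large beta: V > 0, theorem applies
```
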
We mention that a linear independence criterion for values of generalized hypergeometric G-functions with cyclic coefficients also follows from Theorem 2.1, by the same argument as in [20]. We postpone it to a future paper in order to avoid heavy calculations in the present article.

This article is organized as follows. In Section 3.1, we describe our setup for generalized hypergeometric G-functions. In Section 3.2, we carry out our construction of Padé approximants, generalizing the method used in [18,19,20]. Section 4 is devoted to showing the non-vanishing property of the crucial determinant. In Section 6, we give the proof of Theorem 2.1. A more general statement, together with fully effective linear independence measures, is also given in that section, by Theorem 6.1.
Padé approximation of generalized hypergeometric functions
Throughout this section, we denote by K a field of characteristic 0. For a variable z, we denote the operator z·d/dz by θ_z.
Preliminaries
In this subsection, we introduce the generalized hypergeometric function in the form we will use. First, let A(X), B(X) ∈ K[X] be polynomials with max(deg A, deg B) > 0. Assume
\[
A(k)B(k)\neq 0 \qquad (k\ge 0)\,. \tag{1}
\]
Notice that this assumption yields A(θ_t + k), B(θ_t + k) ∈ Aut_K(K[t]) for every non-negative integer k. Next, consider a sequence c := (c_k)_{k≥0} with c_k ∈ K \ {0} satisfying
\[
c_{k+1}=c_k\cdot\frac{A(k)}{B(k+1)} \qquad (k\ge 0)\,. \tag{2}
\]
We introduce the formal power series
\[
F(z):=\sum_{k=0}^{\infty} c_k z^{k+1}\,,
\]
sometimes also called a generalized hypergeometric function.

By the recurrence relation (2), the series F(1/z) is a solution of the differential equation
\[
\big(B(-\theta_z)\,z-A(-\theta_z)\big)f(z)=c_0 B(0)\,.
\]
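This differential equation can be checked coefficientwise: writing f(z) = Σ_k c_k z^{−k−1}, the coefficient of z^{−k} (k ≥ 1) in (B(−θ_z)z − A(−θ_z)) f(z) is c_k B(k) − c_{k−1} A(k−1), which vanishes by the recurrence (2), leaving only the constant term c_0 B(0). A short exact-arithmetic sketch (the sample A, B and c_0 are our own choices):

```python
from fractions import Fraction

# Sample data of our choosing: A(X) = (X + 3/2)(X + 4/3), B(X) = (X + 1/5)(X + 1),
# c_0 = 7/3, and the rest of c generated by the recurrence (2).
A = lambda x: (x + Fraction(3, 2)) * (x + Fraction(4, 3))
B = lambda x: (x + Fraction(1, 5)) * (x + 1)

N = 30
c = [Fraction(7, 3)]
for k in range(N):
    c.append(c[-1] * A(k) / B(k + 1))

# In (B(-theta_z) z - A(-theta_z)) f(z), with f(z) = sum_k c_k z^{-k-1}:
#  - the coefficient of z^0 is c_0 B(0);
#  - the coefficient of z^{-k} (k >= 1) is c_k B(k) - c_{k-1} A(k-1),
#    which telescopes to 0 by the recurrence (2).
const_term = c[0] * B(0)
tail = [c[k] * B(k) - c[k - 1] * A(k - 1) for k in range(1, N)]
```

Here c_0 B(0) = (7/3)·(1/5) = 7/15, and every Laurent coefficient of negative degree vanishes exactly.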
In order to construct Padé approximants of the function F(z), we introduce power series that are, so to speak, contiguous to F(z). Put r = max(deg A, deg B) and take γ_1, …, γ_{r−1} ∈ K. For an integer s with 0 ≤ s ≤ r − 1, we define the power series F_s(z) by
\[
F_0(z)=F(z),\qquad
F_s(z)=\sum_{k=0}^{\infty}(k+\gamma_1)\cdots(k+\gamma_s)\,c_k z^{k+1}
\quad\text{for }1\le s\le r-1\,. \tag{3}
\]
Notice that F_s(1/z) satisfies
\[
F_s(1/z)=(-\theta_z+\gamma_1-1)\cdots(-\theta_z+\gamma_s-1)\big(F_0(1/z)\big)\,.
\]
Remark 3.1. Let p, q ∈ N and a_1, …, a_p, b_1, …, b_q ∈ K \ {0}, none of which is a negative integer. Put A(X) = (X + a_1 + 1)⋯(X + a_p + 1), B(X) = (X + b_1)⋯(X + b_q)(X + 1), and define
\[
c_k=\frac{(a_1)_{k+1}\cdots(a_p)_{k+1}}{(b_1)_{k+1}\cdots(b_q)_{k+1}\,(k+1)!}
\qquad (k\ge 0)\,.
\]
Then (c_k)_{k≥0} satisfies c_{k+1} = c_k · A(k)/B(k+1). For this sequence, we have
\[
F(1/z)={}_pF_q\left(\begin{matrix}a_1,\ldots,a_p\\ b_1,\ldots,b_q\end{matrix}\,\middle|\,\frac1z\right)-1\,.
\]
We now assume r := p − 1 = q. Put γ_1 = 1, γ_2 = b_{r−1}, …, γ_{r−1} = b_2. Then the series F_s(1/z) has the expression
\[
F_0(1/z)={}_rF_{r-1}\left(\begin{matrix}a_1,\ldots,a_r\\ b_1,\ldots,b_{r-1}\end{matrix}\,\middle|\,\frac1z\right)-1\,, \tag{4}
\]
\[
F_s(1/z)=\frac{a_1\cdots a_r}{b_1\cdots b_{r-s}}\cdot\frac1z\cdot
{}_rF_{r-1}\left(\begin{matrix}a_1+1,\ \ldots,\ \ldots,\ \ldots,\ a_r+1\\ b_1+1,\ldots,b_{r-s}+1,\ b_{r-s+1},\ldots,b_{r-1}\end{matrix}\,\middle|\,\frac1z\right) \tag{5}
\]
for 1 ≤ s ≤ r − 1.
Construction of Padé approximants
Let K be a field of characteristic 0. We define the order function ord_∞ at "z = ∞" by
\[
\mathrm{ord}_\infty: K((1/z))\to\mathbb{Z}\cup\{\infty\};\qquad
\sum_{k} c_k z^{-k}\mapsto \min\{k\in\mathbb{Z}\mid c_k\neq 0\}\,.
\]
We first recall the following fact (see [21]).

Lemma 3.2. Let r be a positive integer, f_1(z), …, f_r(z) ∈ (1/z)·K[[1/z]] and n := (n_1, …, n_r) ∈ N^r. Put N := Σ_{i=1}^{r} n_i, and let M be a positive integer with M ≥ N. Then there exists a family of polynomials (P_0(z), P_1(z), …, P_r(z)) ∈ K[z]^{r+1} \ {0} satisfying the following conditions:

(i) deg P_0(z) ≤ M;
(ii) ord_∞(P_0(z) f_j(z) − P_j(z)) ≥ n_j + 1 for 1 ≤ j ≤ r.
Definition 3.3. A family of polynomials (P_0(z), P_1(z), …, P_r(z)) ∈ K[z]^{r+1} satisfying properties (i) and (ii) of Lemma 3.2 is called a weight n, degree M Padé-type approximant of (f_1, …, f_r). For such a family (P_0(z), P_1(z), …, P_r(z)), the family of formal Laurent series (P_0(z) f_j(z) − P_j(z))_{1≤j≤r} is called the weight n, degree M Padé-type approximation of (f_1, …, f_r).
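Lemma 3.2 is pure linear algebra: the order conditions (ii) impose N homogeneous linear equations on the M + 1 coefficients of P_0, so a non-trivial solution exists as soon as M ≥ N. As an illustration (our own toy example, not the construction of Section 3.2), take r = 1 and f(z) = Σ_{k≥1} z^{−k}/k = −log(1 − 1/z), with n_1 = M = 3; normalizing the top coefficient of P_0 to 1 happens to be admissible here:

```python
from fractions import Fraction

def solve(Amat, bvec):
    """Gaussian elimination over Q (assumes the square system has a unique solution)."""
    n = len(Amat)
    M = [row[:] + [bv] for row, bv in zip(Amat, bvec)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return [M[i][n] for i in range(n)]

# Order conditions: the coefficient of z^{-k} in P0(z) f(z) is sum_j p_j/(k+j);
# it must vanish for k = 1, 2, 3.  With p_3 normalized to 1, solve for p_0..p_2.
rows = [[Fraction(1, k + j) for j in range(3)] for k in range(1, 4)]
rhs = [-Fraction(1, k + 3) for k in range(1, 4)]
p = solve(rows, rhs) + [Fraction(1)]     # coefficients of P0, ascending degree

residues = [sum(p[j] * Fraction(1, k + j) for j in range(4)) for k in range(1, 4)]
r4 = sum(p[j] * Fraction(1, 4 + j) for j in range(4))   # first surviving coefficient
```

Here P_1 is simply the polynomial part of P_0(z)f(z); the three residues vanish exactly, and the coefficient of z^{−4} is non-zero, so ord_∞(P_0 f − P_1) is exactly 4 (in this example P_0 turns out to be a shifted Legendre polynomial, the classical denominator for Padé approximation of the logarithm).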
Notation 3.4. (i) For α ∈ K, denote by Eval_α the linear evaluation map K[t] → K, P ↦ P(α). Whenever there is an ambiguity in the choice of variable, we denote this map by Eval_{t→α}.

(ii) For P ∈ K[t], we denote by [P] the multiplication by P (the map Q ↦ PQ).

(iii) For a K-automorphism φ of a K-module M and an integer k, put
\[
\varphi^{k}=
\begin{cases}
\overbrace{\varphi\circ\cdots\circ\varphi}^{k\ \text{times}} & \text{if } k>0\,,\\[2pt]
\mathrm{id}_M & \text{if } k=0\,,\\[2pt]
\overbrace{\varphi^{-1}\circ\cdots\circ\varphi^{-1}}^{-k\ \text{times}} & \text{if } k<0\,.
\end{cases}
\]
Now we explicitly construct Padé approximants of generalized hypergeometric functions at distinct points. The following lemma is a key ingredient.

Lemma 3.5. Let k be a non-negative integer.

(i) Let H(X) ∈ K[X]. Then [t^k] ∘ H(θ_t) = H(θ_t − k) ∘ [t^k].

(ii) Let A, B ∈ K[X] be polynomials satisfying (1), and let c := (c_k)_{k≥0} be a sequence with c_k ∈ K \ {0} satisfying (2) for A, B. Define T_c ∈ Aut_K(K[t]) by
\[
T_c: K[t]\to K[t];\qquad t^k\mapsto \frac{t^k}{c_k}\,. \tag{6}
\]
Then we have, in the ring End_K(K[t]), the relation
\[
[t^k]\circ T_c = T_c\circ A(\theta_t-1)\circ\cdots\circ A(\theta_t-k)\circ B(\theta_t)^{-1}\circ\cdots\circ B(\theta_t-k+1)^{-1}\circ [t^k]\,.
\]
Proof. (i) Let n be a non-negative integer. We may assume H(X) = X n . For any non-negative integer m, we have
[t k ] • θ n t (t m ) = m n t m+k .(7)
On the other hand, we have
(θ t − k) n • [t k ](t m ) = (k + m − k) n t m+k = m n t m+k .
By (7) and the above identity, we obtain the assertion.
(ii) Let m be a non-negative integer. The recurrence relation (2) yields
1/c_{m+k} = ( B(m + k) · · · B(m + 1) / (A(m + k − 1) · · · A(m)) ) · (1/c_m),
hence we obtain
[t^k] • T_c(t^m) = t^{k+m}/c_m = (1/c_{m+k}) · ( A(m + k − 1) · · · A(m) / (B(m + k) · · · B(m + 1)) ) t^{k+m} = T_c • A(θ_t − 1) • · · · • A(θ_t − k) • B(θ_t)^{−1} • · · · • B(θ_t − k + 1)^{−1} • [t^k](t^m),
which completes the proof of (ii).
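On a monomial t^m both sides of the relation in Lemma 3.5 (ii) act by scalars, so the identity can be sanity-checked numerically. A minimal sketch with hypothetical sample polynomials A, B (chosen only so that all values are non-zero):

```python
from fractions import Fraction

# hypothetical sample data: A(X) = (X+2)(X+3), B(X) = (X+1)(X+5)
def A(x): return Fraction((x + 2) * (x + 3))
def B(x): return Fraction((x + 1) * (x + 5))

# build c_0 = 1 and c_{k+1} = c_k * A(k)/B(k+1), the recurrence (2)
c = [Fraction(1)]
for k in range(20):
    c.append(c[-1] * A(k) / B(k + 1))

k, m = 4, 6
# left-hand side on t^m: [t^k] o T_c(t^m) has coefficient 1/c_m on t^{m+k}
lhs = 1 / c[m]
# right-hand side: every operator acts on t^{m+k} by a scalar, since
# theta_t t^{m+k} = (m+k) t^{m+k}
rhs = 1 / c[m + k]
for j in range(1, k + 1):        # A(theta_t - 1) ... A(theta_t - k)
    rhs *= A(m + k - j)
for j in range(0, k):            # B(theta_t)^{-1} ... B(theta_t - k + 1)^{-1}
    rhs /= B(m + k - j)
print(lhs == rhs)   # → True
```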
We are now ready for our construction of Padé approximants of the hypergeometric functions at distinct points. Let c := (c_k)_{k≥0} be a sequence satisfying c_k ∈ K \ {0} together with (2) for polynomials A, B ∈ K[X]. Put r = max(deg A, deg B). Let us fix γ_1, . . . , γ_{r−1} ∈ K. We denote by F_s(z) the power series defined in (5) for γ_1, . . . , γ_{r−1} ∈ K. Let m be a strictly positive integer and α_1, . . . , α_m ∈ K \ {0} which are pairwise distinct. For 1 ≤ i ≤ m and 0 ≤ s ≤ r − 1, we introduce the K-homomorphism ψ_{i,s} ∈ Hom_K(K[t], K) by ψ_{i,s} : K[t] −→ K; t^k ↦ (k + γ_1) · · · (k + γ_s) c_k α_i^{k+1},
where (k + γ 1 ) · · · (k + γ s ) = 1 for s = 0 and k ≥ 0.
Proposition 3.6. (confer [17,Theorem 5.5] ) We use the notations as above. For a non-negative integer ℓ, we define polynomials:
P_ℓ(z) = (1/(n − 1)!^r) • Eval_z • T_c • ∏_{j=1}^{n−1} B(θ_t + j) ( t^ℓ ∏_{i=1}^{m} (t − α_i)^{rn} ), (8)
P_{ℓ,i,s}(z) = ψ_{i,s}( (P_ℓ(z) − P_ℓ(t)) / (z − t) ) for 1 ≤ i ≤ m, 0 ≤ s ≤ r − 1, (9)
where T_c ∈ Aut_K(K[t]) is defined in (6). Then (P_ℓ(z), P_{ℓ,i,s}(z))_{1≤i≤m, 0≤s≤r−1} forms a weight (n, . . . , n) ∈ N^{rm}, degree rmn + ℓ Padé type approximant of (F_s(α_i/z))_{1≤i≤m, 0≤s≤r−1}.
Proof. By the definition of P ℓ (z), we have deg P ℓ (z) = rmn + ℓ .
Hence the required condition on the degree is verified. By the definition of T c and ψ i,s , we have
ψ_{i,s} = ψ_{i,0} • (θ_t + γ_1) • · · · • (θ_t + γ_s), (10)
ψ_{i,0} • T_c = [α_i] • Eval_{α_i} for 1 ≤ i ≤ m. (11)
Put R ℓ,i,s (z) = P ℓ (z)F s (α i /z) − P ℓ,i,s (z). Then, by the definition of R ℓ,i,s (z), we obtain
R_{ℓ,i,s}(z) = P_ℓ(z) ψ_{i,s}( 1/(z − t) ) − P_{ℓ,i,s}(z) = ψ_{i,s}( P_ℓ(t)/(z − t) ) = Σ_{k=0}^{∞} ψ_{i,s}(t^k P_ℓ(t)) / z^{k+1}.
Let k be an integer with 0 ≤ k ≤ n − 1. By Lemma 3.5 (i) (ii), we have
(n − 1)!^r t^k P_ℓ(t) = [t^k] • T_c • ∏_{j=1}^{n−1} B(θ_t + j) ( t^ℓ ∏_{i=1}^{m} (t − α_i)^{rn} )
= T_c • ∏_{j′=1}^{k} A(θ_t − j′) • ∏_{j″=0}^{k−1} B(θ_t − j″)^{−1} • ∏_{j=1}^{n−1} B(θ_t + j − k) ( t^{ℓ+k} ∏_{i=1}^{m} (t − α_i)^{rn} )
= T_c • ∏_{j′=1}^{k} A(θ_t − j′) • ∏_{j=1}^{n−1−k} B(θ_t + j) ( t^{ℓ+k} ∏_{i=1}^{m} (t − α_i)^{rn} ),
where ∏_{j′=1}^{k} A(θ_t − j′) = id_{K[t]} if k = 0. Therefore we have
ψ_{i,s}((n − 1)!^r t^k P_ℓ(t)) = ψ_{i,s} • T_c ( ∏_{j′=1}^{k} A(θ_t − j′) ∏_{j=1}^{n−1−k} B(θ_t + j) ( t^{ℓ+k} ∏_{i=1}^{m} (t − α_i)^{rn} ) )
= ψ_{i,0} • T_c ( ∏_{j″=1}^{s} (θ_t + γ_{j″}) ∏_{j′=1}^{k} A(θ_t − j′) ∏_{j=1}^{n−1−k} B(θ_t + j) ( t^{ℓ+k} ∏_{i=1}^{m} (t − α_i)^{rn} ) ) (12)
= [α_i] • Eval_{α_i} ( ∏_{j″=1}^{s} (θ_t + γ_{j″}) ∏_{j′=1}^{k} A(θ_t − j′) ∏_{j=1}^{n−1−k} B(θ_t + j) ( t^{ℓ+k} ∏_{i=1}^{m} (t − α_i)^{rn} ) ). (13)
Note that, in (12) and (13), we used (10) and (11) respectively. Since we have
deg( ∏_{j″=1}^{s} (X + γ_{j″}) ∏_{j′=1}^{k} A(X − j′) ∏_{j=1}^{n−1−k} B(X + j) ) ≤ s + rk + r(n − 1 − k) ≤ rn − 1,
the Leibniz rule shows that the polynomial ∏_{j″=1}^{s} (θ_t + γ_{j″}) ∏_{j′=1}^{k} A(θ_t − j′) ∏_{j=1}^{n−1−k} B(θ_t + j) ( t^{ℓ+k} ∏_{i=1}^{m} (t − α_i)^{rn} ) is contained in the ideal (t − α_i) = ker Eval_{α_i}. Consequently we have ψ_{i,s}(t^k P_ℓ(t)) = 0 for 0 ≤ k ≤ n − 1, 1 ≤ i ≤ m, 0 ≤ s ≤ r − 1.
By the above expansion of R ℓ,i,s (z), we obtain
ord ∞ R ℓ,i,s (z) ≥ n + 1 for 1 ≤ i ≤ m, 0 ≤ s ≤ r − 1 ,
hence Proposition 3.6 follows.
We should mention that this construction was also considered by D. V. Chudnovsky and G. V. Chudnovsky in [17,Theorem 5.5], but without arithmetic application. See also a related work in [31].
Remark 3.7. The polynomial P_ℓ(z) does not depend on the choice of γ_1, . . . , γ_{r−1} ∈ K. By contrast, the polynomials P_{ℓ,i,s}(z) do depend on this choice.
Remark 3.8. Let r, m be strictly positive integers. Let x ∈ K, supposed not to be a negative integer, and let α_1, . . . , α_m ∈ K \ {0} be pairwise distinct. Put A(X) = B(X) = (X + x + 1)^r and c_k = 1/(k + x + 1)^r. Then we have c_{k+1} = c_k · A(k)/B(k + 1).
Put γ 1 = · · · = γ r−1 = x + 1. This gives us
F_s(α_i/z) = Σ_{k=0}^{∞} α_i^{k+1} / ( (k + x + 1)^{r−s} z^{k+1} ) = Φ_{r−s}(x, α_i/z) (1 ≤ i ≤ m, 0 ≤ s ≤ r − 1), (14)
where Φ_s(x, 1/z) is the s-th Lerch function (generalized polylogarithmic function; cf. [19]). In this case, we have T_c = (θ_t + x + 1)^r / (x + 1)^r and
P_ℓ(z) = ( 1 / ((x + 1)^r · (n − 1)!^r) ) • Eval_z • ∏_{j=1}^{n} (θ_t + x + j)^r ( t^ℓ ∏_{i=1}^{m} (t − α_i)^{rn} ).
The polynomial (x + 1)^r n^r P_ℓ(z) gives the Padé type approximants of the Lerch functions obtained in [18, Theorem 3.8].
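The construction of Proposition 3.6 can be checked numerically in the simplest Lerch case. For r = 1, x = 0, m = 1 and α_1 = 1 (sample values, not from the text), Φ_1(0, 1/z) = −log(1 − 1/z), and the remainder P_ℓ(z)Φ_1(0, 1/z) − P_{ℓ,1,0}(z) should have order at least n + 1 at infinity. A sketch with sympy, implementing the Euler operator θ_t = t d/dt directly:

```python
import sympy as sp

t, w = sp.symbols('t w')                 # w stands for 1/z

def theta(p):
    # Euler operator theta_t = t * d/dt
    return sp.expand(t * sp.diff(p, t))

# r = 1, x = 0, m = 1, alpha = 1:
# P_l(z) = (1/(n-1)!) * Eval_z o (theta+1)...(theta+n) applied to t^l (t-1)^n
n, ell = 3, 0
p = t**ell * (t - 1)**n
for j in range(1, n + 1):
    p = sp.expand(theta(p) + j * p)      # apply (theta_t + j)
p = p / sp.factorial(n - 1)

K = 3 * n                                # truncation order for the series
F = sum(sp.Rational(1, k + 1) * w**(k + 1) for k in range(K))
# w^n * P(1/w) is a polynomial in w (deg P = n here), so the product is too
prod = sp.Poly(sp.expand(p.subs(t, 1/w) * w**n * F), w)

# Pade condition ord_inf(P*F - Q) >= n+1: the coefficients of z^{-1},...,z^{-n}
# of P*F vanish, i.e. the coefficients of w^{n+1},...,w^{2n} of the product vanish
tail = [prod.coeff_monomial(w**(n + j)) for j in range(1, n + 1)]
print(tail)   # → [0, 0, 0]
```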
Non-vanishing of the generalized Wronskian of Hermite type
Let K be a field of characteristic 0 and A(X), B(X) ∈ K[X] satisfying (1). From this section to the last, we assume deg A = deg B > 0 and put deg A = r. We shall choose a sequence c := (c_k)_{k≥0} satisfying c_k ∈ K \ {0} and (2) for the given polynomials A(X), B(X). Let α := (α_1, . . . , α_m) ∈ (K \ {0})^m whose coordinates are pairwise distinct and γ_1, . . . , γ_{r−1} ∈ K. Let us fix a positive integer n. For a non-negative integer ℓ with 0 ≤ ℓ ≤ rm, recall the polynomials P_ℓ(z), P_{ℓ,i,s}(z) defined in (8) and (9). We define column vectors
p_ℓ(z) ∈ K[z]^{rm+1} by p_ℓ(z) = ᵗ( P_ℓ(z), P_{ℓ,1,r−1}(z), . . . , P_{ℓ,1,0}(z), . . . , P_{ℓ,m,r−1}(z), . . . , P_{ℓ,m,0}(z) ), and put ∆_n(z) = ∆(z) = det( p_0(z) · · · p_{rm}(z) ).
The aim of this section is to prove the following proposition.
Proposition 4.1. The determinant ∆(z) satisfies ∆(z) ∈ K \ {0}.
First step
Lemma 4.2. We have ∆(z) ∈ K.
Proof. We denote the remainder functions by R_{ℓ,i,s}(z) := P_ℓ(z)F_s(α_i/z) − P_{ℓ,i,s}(z) (0 ≤ ℓ ≤ rm, 1 ≤ i ≤ m, 0 ≤ s ≤ r − 1). In the matrix defining ∆(z), multiplying the first row by F_s(α_i/z) and adding it to the ((i − 1)r + s + 1)-th row (1 ≤ i ≤ m, 0 ≤ s ≤ r − 1), we obtain
∆(z) = (−1)^{rm} det
( P_0(z) . . . P_{rm}(z)
R_{0,1,r−1}(z) . . . R_{rm,1,r−1}(z)
. . .
R_{0,1,0}(z) . . . R_{rm,1,0}(z)
. . .
R_{0,m,r−1}(z) . . . R_{rm,m,r−1}(z)
. . .
R_{0,m,0}(z) . . . R_{rm,m,0}(z) ).
We denote by ∆_{s,t}(z) the (s, t)-th cofactor of the matrix on the right-hand side above. Expanding along the first row, we obtain:
∆(z) = (−1) rm rm ℓ=0 P ℓ (z)∆ 1,ℓ+1 (z) .(15)
Since we have
ord ∞ R ℓ,i,s (z) ≥ n + 1 for 0 ≤ ℓ ≤ rm, 1 ≤ i ≤ m and 0 ≤ s ≤ r − 1 , we obtain ord ∞ ∆ 1,ℓ+1 (z) ≥ (n + 1)rm .
The fact that deg P_ℓ(z) = rmn + ℓ, combined with the lower bound above, yields
P ℓ (z)∆ 1,ℓ+1 (z) ∈ (1/z) · K[[1/z]] for 0 ≤ ℓ ≤ rm − 1 , and P rm (z)∆ 1,rm+1 (z) ∈ K[[1/z]] .(16)
In the relation above, the constant term of P_{rm}(z)∆_{1,rm+1}(z) equals:
Coefficient of z rm(n+1) of P rm (z) × Coefficient of 1/z rm(n+1) of ∆ 1,rm+1 (z) .
Since ∆(z) is a polynomial in z and, by (15) and (16), also lies in K[[1/z]], it is necessarily a constant. Moreover, the terms of strictly negative valuation in z have to cancel out, hence
∆(z) = (−1) rm · rm ℓ=0 P ℓ (z)∆ 1,ℓ+1 (z) = (−1) rm × Constant term of P rm (z)∆ 1,rm+1 (z) ∈ K .(17)
This completes the proof of Lemma 4.2.
Second step
We now start the second procedure, by factoring ∆ as an element of K(α 1 , . . . , α m ). We use the same notations as in the proof of Lemma 4.2. By the equalities (16) and (17), we have
∆(z) = (−1) rm × Coefficient of z rm(n+1) of Prm(z) × Coefficient of 1/z rm(n+1) of ∆1,rm+1(z) .(18)
Define a column vector q ℓ ∈ K rm by q ℓ = t ψ 1,r−1 (t n P ℓ (t)), . . . , ψ 1,0 (t n P ℓ (t)), . . . , ψ m,r−1 (t n P ℓ (t)), . . . , ψ m,0 (t n P ℓ (t)) .
By the definition of ∆ n,1,rm+1 (z) with the identities
R ℓ,i,s (z) = ∞ k=n ψ i,s (t k P ℓ (t)) z k+1 for 0 ≤ ℓ ≤ rm, 1 ≤ i ≤ m and 0 ≤ s ≤ r − 1 ,
we have Coefficient of 1/z rm(n+1) of ∆ n,1,rm+1 (z) = det q 0 · · · q rm−1 .
By (18) with the above identity, we have
∆(z) = (−1)^{rm} · (1/(rmn + rm)!) (d/dz)^{rmn+rm} P_{rm}(z) · det( q_0 · · · q_{rm−1} ). (19)
Note that, by the definition of P_{rm}(z), we have deg P_{rm} = (n + 1)rm and thus
(1/(rmn + rm)!) (d/dz)^{rmn+rm} P_{rm}(z) ≠ 0.
Third step
Relying on (19), we study here the value
Θ = det q 0 · · · q rm−1 .(20)
From this subsection on, we specify the choice of γ_1, . . . , γ_{r−1} ∈ K as follows. Replacing K by an appropriate finite extension, we may assume A(X), B(X) to be decomposable in K. Put
A(X) = (X + η_1) · · · (X + η_r), B(X) = (X + ζ_1) · · · (X + ζ_r),
where η_1, . . . , η_r, ζ_1, . . . , ζ_r ∈ K \ {0}, none of them being a negative integer. Take a sequence (γ_i)_{1≤i≤r(n+1)−1} of K with γ_1 = ζ_r, . . . , γ_r = ζ_1. For each 0 ≤ s ≤ r − 1, there exists a sequence (a_{k,s})_{0≤k≤rn} ∈ K^{rn+1} with
∏_{j=1}^{n} A(X − j) = Σ_{k=0}^{rn} a_{k,s} ∏_{w=1}^{k} (X + γ_{r−s−1+w}), (21)
where we read ∏_{w=1}^{k} (X + γ_{r−s−1+w}) = 1 if k = 0. We now simplify the determinant Θ using the quantities a_{0,s} to prove the non-vanishing property of Θ.
Lemma 4.3. Put H_ℓ(t) = t^ℓ ∏_{i=1}^{m} (t − α_i)^{rn} for 0 ≤ ℓ ≤ rm − 1. Then we have
Θ = ( ∏_{i=1}^{m} α_i^r ∏_{s=0}^{r−1} a_{0,s}^m / (n − 1)!^{r²m} ) · det( Eval_{α_i} • ∏_{w=0}^{s} (θ_t + γ_{r−s+w})^{−1} (t^n H_ℓ(t)) )_{0≤ℓ≤rm−1; 1≤i≤m, 0≤s≤r−1}.
Proof. Using (10) and (21), we have
ψi,r−1−s • Tc n j=1 A(θt − j) • B(θt) −1 = s k=0 a k,s ψi,0 • Tc r−s−1+k w=1 (θt + γw) • B(θt) −1 (22) + rn k=s+1 a k,s ψi,0 • Tc • B(θt) r−s−1+k w=r+1 (θt + γw) • B(θt) −1 . Since deg r−s−1+k w=r+1 (X + γ w ) = k − s − 1 ≤ rn − 1 (s + 1 ≤ k ≤ rn), by the Leibniz rule, the polynomial r−s−1+k w=r+1 (θ t + γ w )(t n H ℓ (t)) belongs to the ideal (t − α i ) = ker Eval αi . Therefore, using (11), we obtain rn k=s+1 a k,s ψ i,0 • T c • B(θ t ) r−s−1+k w=r+1 (θ t + γ w ) • B(θ t ) −1 (t n H ℓ (t)) = rn k=s+1 a k,s [α i ] • Eval αi r−s−1+k w=r+1 (θ t + γ w )(t n H ℓ (t)) = 0 .
By the above equality with (22), we have
ψ i,r−1−s • T c n j=1 A(θ t − j) • B(θ t ) −1 (t n H ℓ (t)) = s k=0 a k,s ψ i,0 • T c r−s−1+k w=1 (θ t + γ w ) • B(θ t ) −1 (t n H ℓ (t)) = s k=0 a k,s [α i ] • Eval αi s w=k (θ t + γ r−s+w ) −1 (t n H ℓ (t)) .
Interpreting the relations above as linear row operations, which leave the determinant unchanged, completes the proof of Lemma 4.3.
We now study when the quantity ∏_{s=0}^{r−1} a_{0,s}^m does not vanish. The following lemma will be used to calculate each a_{0,s}. Lemma 4.4. Let u be a strictly positive integer and γ̃_1, . . . , γ̃_u, η̃_1, . . . , η̃_u ∈ K. Write
(X + η̃_1) · · · (X + η̃_u) = b_{u,0} + Σ_{k=1}^{u} b_{u,k} (X + γ̃_1) · · · (X + γ̃_k),
with b_{u,0}, b_{u,1}, . . . , b_{u,u} ∈ K. Then we have b_{u,0} = (η̃_1 − γ̃_1) · · · (η̃_u − γ̃_1).
Proof. We prove the lemma by induction on u. In the case of u = 1, we have
X +η 1 =η 1 −γ 1 + (X +γ 1 ) .
This shows b_{1,0} = η̃_1 − γ̃_1, which yields the assertion. Suppose that the lemma holds for some u ≥ 1; we show its validity for u + 1. In this case we get
(X +η 1 ) · · · (X +η u )(X +η u+1 ) = b u,0 + u k=1 b u,k (X +γ 1 ) · · · (X +γ k ) (X +η u+1 ) = b u,0 (X +γ 1 +η u+1 −γ 1 ) + u k=1 b u,k (X +γ 1 ) · · · (X +γ k )(X +γ k+1 +η u+1 −γ k+1 ) . The above identity yields b u+1,0 = b u,0 (η u+1 −γ 1 ). By induction hypothesis for b u,0 , we conclude b u+1,0 = (η 1 −γ 1 ) · · · (η u −γ 1 )(η u+1 −γ 1 ) .
This completes the proof of Lemma 4.4.
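Lemma 4.4 can be verified computationally on sample data (the η̃_i, γ̃_k below are hypothetical values), by expanding the product into the shifted basis 1, (X+γ̃_1), (X+γ̃_1)(X+γ̃_2), . . . and reading off the constant coefficient:

```python
import sympy as sp

X = sp.symbols('X')
eta = [2, 5, 7]        # sample eta-tilde_i (hypothetical values)
gamma = [1, 3, 4]      # sample gamma-tilde_k (hypothetical values)

P = sp.expand(sp.prod(X + e for e in eta))
b = []                 # coefficients b_{u,0}, ..., b_{u,u} in the shifted basis
for g in gamma:
    b.append(P.subs(X, -g))                      # constant term at this stage
    P = sp.quo(sp.expand(P - b[-1]), X + g, X)   # exact division by (X + gamma-tilde_k)
b.append(P)                                      # leading coefficient b_{u,u}

print(b[0], sp.prod(e - gamma[0] for e in eta))  # → 24 24
```

The first peeling step is exactly the proof of the lemma: evaluating at X = −γ̃_1 kills every term with k ≥ 1, leaving b_{u,0} = ∏(η̃_i − γ̃_1).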
Proposition 4.5. The following two properties are equivalent.
(i) The value ∏_{s=0}^{r−1} a_{0,s}^m is non-zero. (ii) For 1 ≤ i, j ≤ r and 1 ≤ k ≤ n, we have η_i − k − ζ_j ≠ 0.
Proof. Let s be an integer with 0 ≤ s ≤ r − 1. Applying Lemma 4.4 with u = rn and (γ 1 , . . . ,γ rn ) = (γ r−s , . . . , γ r(n+1)−s−1 ), (η 1 , . . . ,η rn ) = (η i − k) 1≤i≤r,1≤k≤n , we get
a 0,s = r i=1 n k=1 (η i − k − γ r−s ) , (0 ≤ s ≤ r − 1) .
Since γ r−s = ζ s+1 for 0 ≤ s ≤ r − 1, the proposition follows.
In the following, we assume that η_i − ζ_j is not a strictly positive integer for 1 ≤ i, j ≤ r.
Fourth step
Now we consider the ring K[t_{i,s}]_{1≤i≤m, 0≤s≤r−1}, the ring of polynomials in rm variables over K. Recall that the polynomial B(X) is decomposed as B(X) = (X + ζ_1) · · · (X + ζ_r) with ζ_i ∈ K which are not negative integers. We choose (γ_i)_{1≤i≤r} ∈ K^r by γ_1 = ζ_r, . . . , γ_r = ζ_1. For each variable t_{i,s}, one has a well-defined map for α ∈ K:
ψ̃_{α,i,s} = Eval_{t_{i,s}→α} • ∏_{w=0}^{s} (θ_{t_{i,s}} + γ_{r−s+w})^{−1} : K[t_{i,s}]_{1≤i≤m, 0≤s≤r−1} −→ K[t_{i′,s′}]_{(i′,s′)≠(i,s)} ; t_{i,s}^k ↦ α^k / ∏_{w=0}^{s} (k + γ_{r−s+w}). (23)
In the definition above, K[t_{i,s}]_{1≤i≤m, 0≤s≤r−1} is seen as the one-variable polynomial ring K′[t_{i,s}] over K′ = K[t_{i′,s′}]_{(i′,s′)≠(i,s)}.
We now define, for non-negative integers n, u,
P̂_{n,u}(t_{i,s}) = ∏_{i=1}^{m} ∏_{s=0}^{r−1} t_{i,s}^u ∏_{j=1}^{m} (t_{i,s} − α_j)^{rn} · ∏_{(i_1,s_1)<(i_2,s_2)} (t_{i_2,s_2} − t_{i_1,s_1}),
where the order (i_1, s_1) < (i_2, s_2) is the lexicographical order. In the remainder of this section, the index n will be conveniently omitted to ease reading. Also set (when no confusion is deemed to occur, we omit the subscript α = (α_1, . . . , α_m)):
Ψ = Ψ α := m i=1 r−1 s=0ψ αi,i,s .
Note that, by the definition of Θ (see (20)), we have
Θ = m i=1 α r i r−1 s=0 a m 0,s (n − 1)! r 2 m Ψ(P n,n ) .(24)
Let u be a non-negative integer; we study the value C_{n,u,m} = C_{u,m} := Ψ(P̂_u). In the rest of this subsection, we prove the following property of C_{u,m}.
Proposition 4.6. There exists c_{u,m} ∈ K, independent of α, such that
C_{u,m} = c_{u,m} ∏_{i=1}^{m} α_i^{r(u+1)+r²n+r(r−1)/2} · ∏_{1≤i_1<i_2≤m} (α_{i_2} − α_{i_1})^{(2n+1)r²}.
It is also easy to see that, since all the variables t_{i,s} have been specialized, C_{u,m} ∈ K is a polynomial in the α_i. The statement is then about a factorization of this polynomial. To prove Proposition 4.6, we are going to perform the following steps:
(a) Show that C u,m is homogeneous of degree m[r(u + 1) + r 2 n + r 2 ] + m 2 (2n + 1)r 2 .
(b) Show that m i=1 α r(u+1)+r 2 n+( r 2 ) i divides C u,m . (c) Show that 1≤i1<i2≤m (α i2 − α i1 ) (2n+1)r 2 divides C u,m .
We first prove (a) and (b).
Lemma 4.7. C u,m is homogeneous of degree m[r(u + 1) + r 2 n + r 2 ] + m 2 (2n + 1)r 2 and is divisible by
m i=1 α r(u+1)+r 2 n+( r 2 ) i .
Proof. First, the polynomial P̂_u(t) is a homogeneous polynomial with respect to the variables α_i, t_{i,s} of degree m[ru + r²n + r(r−1)/2] + (m(m−1)/2)(2n+1)r². By the definition of Ψ, it is easy to see that C_{u,m} = Ψ(P̂_u(t)) is a homogeneous polynomial with respect to the variables α_i of degree m[r(u+1) + r²n + r(r−1)/2] + (m(m−1)/2)(2n+1)r². Second, we show the latter assertion. By linear algebra, ψ̃_{α,i,s}(B(t_{i,s})) = α ψ̃_{1,i,s}(B(α t_{i,s})) (i.e. the variable t specializes to 1; cf. Lemma 4.8 (ii) below) for any polynomial B(t_{i,s}) ∈ K[t_{i,s}]. So, by composition, the same holds for Ψ, and, putting 1 = (1, . . . , 1), one gets
C u,m = m i=1 α r i · m i=1 Ψ 1 (P u (α i t i,s ))
.
We now computeP
u (α i t i,s ) = m i=1 α r 2 n+ru+( r 2 ) i · Q u (t) , where Q u (t) = Q n,u,m (t) = m i=1 r−1 s=0 t u i,s j =i (α i t i,s − α j ) rn (t i,s − 1) rn · 1≤i1<i2≤m 0≤s1,s2≤r−1 (α i2 t i2,s2 − α i1 t i1,s1 ) · m i=1 0≤s1<s2≤r−1 (t i,s2 − t i,s1 ) ,
by linearity, we obtain
(26) C u,m = m i=1 α r(u+1)+r 2 n+( r 2 ) i (Ψ 1 (Q u )) .
This concludes the proof of the lemma.
Now we consider (c). Since the statement is trivial for m = 1, we may assume m ≥ 2. We need to show that (α_j − α_i)^{(2n+1)r²} divides C_{u,m}. Without loss of generality, after renumbering, we may assume that j = 2, i = 1. To ease notation, we take advantage of the fact that m ≥ 2 and set X_s = t_{1,s}, Y_s = t_{2,s}, α_1 = α, α_2 = β and ψ̃_{α_1,1,s} = ψ̃_{α,s}, ψ̃_{α_2,2,s} = ψ̃_{β,s}. So our polynomial P̂_u rewrites as
P u (X, Y ) = r−1 s=0 [(X s Y s ) u [(X s − α)(X s − β)(Y s − α)(Y s − β)] rn ] · 0≤i<j≤r−1 (X j − X i ) 0≤i<j≤r−1 (Y j − Y i ) 0≤i,j≤r−1 (Y j − X i ) · c(t i,s ) i≥3 k≥3 r−1 s=0 0≤i,j≤r−1 (t k,s − X i )(t k,s − Y j ) , where c(t i,s ) := m i=3 r−1 s=0 t u i,s m j=1 (t i,s − α j ) rn (i1,s1)<(i2,s2),i1,i2≥3 (t i2,s2 − t i1,s1
) (the precise value of c does not actually matter as it is treated as a scalar by the operatorsψ α,s ,ψ β,s ).
We set Ψ_α = ∏_{s=0}^{r−1} ψ̃_{α,s} and Ψ_β = ∏_{s=0}^{r−1} ψ̃_{β,s} respectively. One has Ψ = Ψ_α • Ψ_β • ψ̃, where ψ̃ := ∏_{i=3}^{m} ∏_{s=0}^{r−1} ψ̃_{α_i,i,s}.
Lemma 4.8. (i) The operators ψ̃_{α,s}, ψ̃_{β,s′} (0 ≤ s, s′ ≤ r − 1) and ψ̃ pairwise commute.
(ii) The operator ∂/∂α commutes with any ψ̃_{β,s} and hence with Ψ_β and with ψ̃.
(iii) We have ∂/∂α • Ψ_α = Ψ_α • ∂/∂α + (1/α) Σ_{s=0}^{r−1} Ψ_α • θ_{X_s}.
Proof. The assertion (i) follows from the definition, since multiplication by a scalar, specialization of one variable and integration with respect to a given variable all pairwise commute; (ii) follows from the commutation of integration with respect to a parameter with differentiation with respect to that parameter. Finally, we prove (iii). If h(X_0, . . . , X_{r−1}) ∈ K[α, 1/α][X_0, . . . , X_{r−1}], we write h(X_0, . . . , X_{r−1}) = Σ_i a_i(α) ∏_{s=0}^{r−1} X_s^{i_s}. By definition, we have
Ψ_α(h) = Σ_i a_i(α) α^{|i|} / ∏_{s=0}^{r−1} (i_s + γ_{r−s}) · · · (i_s + γ_r), where |i| = Σ_{s=0}^{r−1} i_s.
Then
∂/∂α (Ψ_α(h)) = Σ_i (∂a_i/∂α)(α) · α^{|i|} / ∏_{s=0}^{r−1} (i_s + γ_{r−s}) · · · (i_s + γ_r) + Σ_i a_i(α) · (|i|/α) · α^{|i|} / ∏_{s=0}^{r−1} (i_s + γ_{r−s}) · · · (i_s + γ_r).
.
The first term in the sum is easily seen to be equal to Ψ_α • ∂/∂α (h). So the claim (iii) reduces to the statement
Σ_{s=0}^{r−1} Ψ_α • θ_{X_s}(X_0^{i_0} · · · X_{r−1}^{i_{r−1}}) = |i| α^{|i|} / ∏_{s=0}^{r−1} (i_s + γ_{r−s}) · · · (i_s + γ_r).
.
But the left-hand side is
Σ_{s=0}^{r−1} Ψ_α • θ_{X_s}(X_0^{i_0} · · · X_{r−1}^{i_{r−1}}) = Σ_{s=0}^{r−1} i_s α^{|i|} / ∏_{s′=0}^{r−1} (i_{s′} + γ_{r−s′}) · · · (i_{s′} + γ_r) = |i| α^{|i|} / ∏_{s′=0}^{r−1} (i_{s′} + γ_{r−s′}) · · · (i_{s′} + γ_r).
.
This completes the proof of this lemma.
We introduce a specialization morphism for the variable α. Set ∆ = ∆ α : Q[α, 1/α, β, α 2 , . . . , α m ] −→ Q(α); ∆(P (α, β, α 2 , . . . , α m )) = P (α, α, α 2 , . . . , α m ) .
Note thatψ and Ψ α commute so, it is enough to prove that
∂ ℓ ∂α ℓ ∆ α Ψ α • Ψ β (P u ) = 0 for 0 ≤ ℓ ≤ (2n + 1)r 2 − 1 .
We postpone the end of the proof of (c) and start with a few preliminaries. We set
f n,u (α, β, X, Y ) = f (α, β, X, Y ) = c(t i,s ) k≥3 r−1 s=0 1≤i,j≤r (t k,s − X i )(t k,s − Y j ) (27) · r−1 s=0 (X s Y s ) u [(X s − α)(X s − β)(Y s − α)(Y s − β)] rn , and g(X, Y ) = g(α, β, X, Y ) = 0≤i,j≤r−1 (X i − Y j ) 0≤i<j≤r−1 [(X j − X i )(Y j − Y i )] .(28)
So thatP u =P = f g (for the rest of the proof, the index u will not play any role and may be conveniently left off to ease reading).
We now concentrate on a few elementary properties of the maps ψ̃, which we collect here and which will be useful in the sequel:
Now we prepare new notations. Let ξ s := (ξ s,k ) k≥1 for 0 ≤ s ≤ r − 1 be infinite sequences of elements of K. Put Ξ = (ξ s ) 0≤s≤r−1 . For ℓ := (ℓ 0 , . . . , ℓ r−1 ) ∈ Z r with ℓ i ≥ 0, we put ψ α,s,ξ s ,ℓs =ψ α,s ℓs w=1 (θ Xs + ξ s,w ) for 0 ≤ s ≤ r − 1 , Ψ α,Ξ ℓ = r−1 s=0ψ α,s,ξ s ,ℓs ,
where ℓs w=1 (θ Xs + ξ s,w ) = id K[t] if ℓ s = 0.
We remark that, in the case of ℓ = (0, . . . , 0) ∈ Z r , we have Ψ α,Ξ ℓ = Ψ α for any Ξ. Lemma 4.9. Let ξ s := (ξ s,k ) k≥1 and ξ ′ s := (ξ ′ s,k ) k≥1 for 0 ≤ s ≤ r − 1 be infinite sequences of elements of K and ℓ := (ℓ 0 , . . . , ℓ r−1 ),
ℓ ′ := (ℓ ′ 0 , . . . , ℓ ′ r−1 ) ∈ Z r with ℓ i , ℓ ′ j ≥ 0. Put Ξ := (ξ s ) s , Ξ ′ := (ξ ′ s ) s . Assume there exist ℓ i , ℓ ′ j with i w=0 (θ t + γ r−i+w ) −1 • (θ t + ξ i,1 ) • · · · • (θ t + ξ i,ℓi ) = j w ′ =0 (θ t + γ r−j+w ′ ) −1 • (θ t + ξ ′ j,1 ) • · · · • (θ t + ξ ′ j,ℓ ′ j ) ,(29)
and the polynomial P ∈ K[X, Y ] is antisymmetric (any odd permutation of the variables X i , Y j changes P in its opposite). Then we have
∆ • Ψ α,Ξ ℓ • Ψ β,Ξ ′ ℓ ′ (P ) = 0 . Similarly, if there exist ℓ i , ℓ j for 0 ≤ i < j ≤ r − 1 with i w=0 (θ t + γ r−i+w ) −1 • (θ t + ξ i,1 ) • · · · • (θ t + ξ i,ℓi ) = j w ′ =0 (θ t + γ r−j+w ′ ) −1 • (θ t + ξ j,1 ) • · · · • (θ t + ξ j,ℓj ) ,(30)
we have Ψ α,Ξ ℓ (P ) = 0 .
Proof. Let 0 ≤ i, j ≤ r − 1. Let τ be the transposition τ (X i ) = Y j , τ (Y j ) = X i leaving all the other variables invariant. Then τ acts on K[X, Y ] by permutation of the variables. Then we have τ (P ) = −P by antisymmetry. We compute
∆ • Ψ α,Ξ ℓ • Ψ β,Ξ ′ ℓ ′ (P ) = ∆ r−1 s=0ψ α,s,ξ s ,ℓs r−1 s=0ψβ,s,ξ ′ s ,ℓ ′ s (P ) = ∆ r−1 s=0ψ α,s,ξ s ,ℓs r−1 s=0ψβ,s,ξ ′ s ,ℓ ′ s (τ P ) = −∆ • Ψ α,Ξ ℓ • Ψ β,Ξ ′ ℓ ′ (P )
. Note that the second equality is obtained by the assumption (29). Thus we obtain the first assertion. The second statement is a variation of the same argument.
Remark 4.10. Later, in Lemma 4.13, we use the first assertion of Lemma 4.9 only in the case ℓ′ = (0, . . . , 0). Namely, we apply Lemma 4.9 to the case Ψ_{β,Ξ′,ℓ′} = Ψ_β. Lemma 4.11. Let P ∈ K[X, Y] be a polynomial such that (X_s − α)^T | P for some T ≥ 1, and let 0 ≤ ℓ ≤ T − 1 be an integer. Let ξ_1, . . . , ξ_ℓ ∈ K (if ℓ = 0, we mean {ξ_1, . . . , ξ_ℓ} = ∅). Then we have
ψ̃_{α,s} ∏_{w′=0}^{s} (θ_{X_s} + γ_{r−s+w′}) ∏_{w=1}^{ℓ} (θ_{X_s} + ξ_w)(P) = 0.
Proof. Indeed, write P = (X_s − α)^T Q with Q ∈ K[X, Y], and note that
ψ̃_{α,s} ∏_{w′=0}^{s} (θ_{X_s} + γ_{r−s+w′}) ∏_{w=1}^{ℓ} (θ_{X_s} + ξ_w)(P) = Eval_{X_s→α} ∏_{w=1}^{ℓ} (θ_{X_s} + ξ_w)(P).
By the Leibniz formula and the hypothesis ℓ ≤ T − 1, ℓ w=1 (θ Xs + ξ w )(P ) belongs to the ideal (X s − α) and so Eval Xs→α ℓ w=1 (θ Xs + ξ w )(P ) = 0.
Lemma 4.12. Let P ∈ K[X, Y ] be a polynomial such that ((X s − α) T1 (X s − β) T2 ) | P for some nonnegative integers T 1 , T 2 with either T 1 or T 2 is greater than 1 and 0 ≤ ℓ ≤ T 1 + T 2 − 1 an integer. Let ξ 1 , . . . , ξ ℓ ∈ K (if ℓ = 0, we mean {ξ 1 , . . . , ξ ℓ } = ∅). Then, we have
∆ •ψ α,s s w ′ =0 (θ Xs + γ r−s+w ′ ) ℓ w=1 (θ Xs + ξ w )(P ) = 0 .
Proof. This is a variation of the previous lemma: indeed, specialization at β = α doubles the multiplicity, and specialization along β commutes with ψ̃_{α,s} ∏_{w′=0}^{s} (θ_{X_s} + γ_{r−s+w′}) ∏_{w=1}^{ℓ} (θ_{X_s} + ξ_w) (a variation of Lemma 4.8 (i)). Now, let us compute what comes out by iterating property (iii) of Lemma 4.8. We define infinite sequences of elements of K, ξ_s = (ξ_{s,k})_{k≥1}, with
ξ s,k = γ r−s−1+k if 1 ≤ k ≤ s + 1 0 if k > s + 1 ,
and put Ξ = (ξ s ) s=0,...,r−1 . For a non-negative integer ℓ, there exists a sequence (b s,k,ℓ ) k=0,1,...,
ℓ ∈ K ℓ+1 with X ℓ = ℓ k=0 b s,k,ℓ k w=1 (X + ξ s,w ) where k w=1 (X + ξ s,w ) = 1 if k = 0.
Let ℓ, k be non-negative integers with ℓ ≥ k. We define a set of differential operators
X ℓ,k = {V = ∂ 1 • · · · • ∂ ℓ | ∂ i ∈ {1/α, ∂ ∂α }, #{1 ≤ i ≤ ℓ, ∂ i = 1/α} = k} .
One gets that
∂ ℓ ∂α ℓ (Ψ α (P )) = ℓ=(ℓ0,...,ℓr−1)∈Z r ℓi≥0, |ℓ|≤ℓ V ∈X ℓ,|ℓ| Ψ α r−1 s=0 θ ℓs Xs (V (P )) = ℓ=(ℓ0,...,ℓr−1)∈Z r ℓi≥0, |ℓ|≤ℓ V ∈X ℓ,|ℓ| ℓ0 k0=0 · · · ℓr−1 kr−1=0 r−1 s ′ =0 b s ′ ,k s ′ ,ℓ s ′ Ψ α r−1 s=1 ks us=1 (θ Xs + ξ s,us )(V (P )) = ℓ=(ℓ0,...,ℓr−1)∈Z r ℓi≥0, |ℓ|≤l V ∈X ℓ,|ℓ| k=(k0,...,kr−1)≤ℓ ki≥0 r−1 s ′ =0 b s ′ ,k s ′ ,ℓ s ′ Ψ α,Ξ k (V (P )) ,
where k ≤ ℓ means k i ≤ l i for each 0 ≤ i ≤ r − 1. By the Leibniz formula, for V ∈ X ℓ,|ℓ| , V (P ) is a linear combination (over K[1/α]) of the derivatives ∂ j ∂α j (P ) for 0 ≤ j ≤ ℓ − |ℓ|. SinceP = f g (recall the definition of f and g in (27) and (28) respectively), it is a linear combination of g ∂ j ∂α j (f ), for 0 ≤ j ≤ ℓ − |ℓ|.
We now perform the combinatorics argument :
Lemma 4.13. We use the notations as above. Let ℓ be a non-negative integer and ℓ = (ℓ 0 , . . . , ℓ r−1 ) ∈ Z r with ℓ i ≥ 0 such that |ℓ| ≤ ℓ. Assume further either of these three to be true
(i) There exist 0 ≤ i ≤ r − 1 with ℓ i < i + 1. (ii) There exist 0 ≤ i < j ≤ r − 1 with ℓ i ≥ i + 1, ℓ j ≥ j + 1 and ℓ i − (i + 1) = ℓ j − (j + 1). (iii) There exists an index 0 ≤ s ≤ r − 1 such that 0 ≤ ℓ s − (s + 1) < 2rn − ℓ + |ℓ|. Then, ∆ • Ψ α,Ξ ℓ • Ψ β (g ∂ j f ∂α j ) = 0 for all 0 ≤ j ≤ ℓ − |ℓ|.
Proof. If the first condition is satisfied, we have
(θ t + γ r−i ) −1 • · · · • (θ t + γ r ) −1 • (θ t + ξ i,1 ) • · · · • (θ t + ξ i,ℓi ) = (θ t + γ r−i+ℓi ) −1 • · · · • (θ t + γ r ) −1 .
Thus, by antisymmetry of g, the first assertion of Lemma 4.9 ensures vanishing. If the second conditions are satisfied, we have
θ ℓi−i−1 t = i ℓ=0 (θ t + γ r−i+ℓ ) −1 • (θ t + ξ i,1 ) • · · · • (θ t + ξ i,ℓi ) = j ℓ=0 (θ t + γ r−j+ℓ ) −1 • (θ t + ξ j,1 ) • · · · • (θ t + ξ j,ℓj ) = θ ℓj −j−1 t .
By antisymmetry of g, the second assertion of Lemma 4.9 ensures vanishing. If the third condition is satisfied, Lemma 4.12 ensures that
∆ •ψ α,s ks w=1 (θ Xs + ξ s,w )(V (P )) = ∆ •ψ α,s s+1 w=1 (θ Xs + γ r−s+w−1 ) • θ ks−s−1 Xs (V (P )) ,
itself vanishes for all V ∈ X ℓ,|ℓ| since [(X s − α)(X s − β)] rn | f (so ∂ j ∂α j (P ) vanishes at α = β at order at least 2rn − j ≥ 2rn − ℓ + |ℓ| > ℓ s − (s + 1)).
Σ_{s=0}^{r−1} (ℓ_s − s − 1) ≥ r(2rn − ℓ + |ℓ|) + r(r − 1)/2, that is, |ℓ| + r(ℓ − |ℓ|) ≥ 2r²n + r². Since ℓ − |ℓ| ≥ 0, the lemma follows.
End of the proof of Proposition 4.6 (c) :
Lemma 4.14 ensures that
∂ ℓ ∂α ℓ ∆ α (C u,m ) = 0 for all 0 ≤ ℓ ≤ (2n + 1)r 2 − 1 .
This completes the proof of Proposition 4.6.
Last step
We shall reduce by induction the non-vanishing of c_{u,m} to the non-vanishing of c_{u,0} (which is obviously equal to 1). First, we prove:
Lemma 4.15. We have c_{u,m} = (−1)^{r²n(m−1)} c_{u+r(n+1),m−1} · L_m(A(t)), with L_m := ∏_{s=0}^{r−1} ψ̃_{1,m,s} and A(t) as defined below.
Proof. Set L̃ = ∏_{i=1}^{m−1} ∏_{s=0}^{r−1} ψ̃_{1,i,s}, so that Ψ_1 = L̃ • L_m, and recall that by (26),
D_{u,m} := C_{u,m} / ∏_{i=1}^{m} α_i^{r(u+1)+r²n+r(r−1)/2} = c_{u,m} ∏_{1≤i<j≤m} (α_j − α_i)^{(2n+1)r²} = Ψ_1(Q_{u,m}).
We are going to evaluate D u,m at α m = 0 and thus separate the variables in Q u,m first. By definition, one has
Q u,m (t) = Q u,m−1 (t) · A(t)B(t) ,
where
A(t) = ∏_{s=0}^{r−1} t_{m,s}^u (t_{m,s} − 1)^{rn} · ∏_{0≤s_1<s_2≤r−1} (t_{m,s_2} − t_{m,s_1}),
B(t) = ∏_{s=0}^{r−1} ∏_{j≠m} (α_m t_{m,s} − α_j)^{rn} · ∏_{i=1}^{m−1} ∏_{s=0}^{r−1} (α_i t_{i,s} − α_m)^{rn} · ∏_{1≤i<m; 0≤s,s′≤r−1} (α_m t_{m,s′} − α_i t_{i,s}).
Note that Q u,m−1 , A do not depend on α m , and Ψ 1 treats α m as a scalar. Hence,
D u,m | αm=0 = c u,m m−1 i=1 (−α i ) (2n+1)r 2 1≤i<j≤m−1 (α j − α i ) (2n+1)r 2 (31) = Ψ 1 Q u,m−1 (t)A(t) B(t)| αm=0 . But B(t)| αm=0 = m−1 j=1 (−α j ) r 2 n m−1 i=1 r−1 s=0 (α i t i,s ) rn m−1 i=1 r−1 s=0 (−α i t i,s ) r =(−1) r 2 (m−1)(n+1) m−1 i=1 α (2n+1)r 2 i m−1 i=1 r−1 s=0 t r(n+1) i,s .
We now note that L_m treats the variables t_{i,s}, 1 ≤ i ≤ m − 1, as scalars and L̃ treats the variables t_{m,s} as scalars, and remark
Q u,m−1 (t) B(t)| αm=0 = (−1) r 2 (m−1)(n+1) m−1 i=1 α (2n+1)r 2 i Q u+r(n+1),m−1 (t) .
Thus
Ψ 1 Q u,m−1 (t)A(t) B(t)| αm=0 = (−1) r 2 (m−1)(n+1) m−1 i=1 α (2n+1)r 2 iL (Q u+r(n+1),m−1 (t))L m (A(t)) .
Using the relation (31), taking into account D_{u+r(n+1),m−1} = Ψ_1(Q_{u+r(n+1),m−1}(t)) and simplifying, we obtain
c_{u,m} = (−1)^{r²n(m−1)} c_{u+r(n+1),m−1} · L_m(A(t)).
Write B(X) = (X + ζ_1)^{r_1} · · · (X + ζ_d)^{r_d} with ζ_1, . . . , ζ_d pairwise distinct, where r_j is the multiplicity of ζ_j for 1 ≤ j ≤ d. For an integer s, we define the K-homomorphism ϕ_{ζ_j,s} by ϕ_{ζ_j,s} : K[t] −→ K; t^k ↦ 1/(k + ζ_j)^s.
Lemma 4.16. There exists E ∈ K \ {0} with
L_m(A(t)) = E · det( ϕ_{ζ_j,s_j}(t^{u+ℓ}(t − 1)^{rn}) )_{0≤ℓ≤r−1; 1≤j≤d, 1≤s_j≤r_j}. (32)
In particular, the value L_m(A(t)) is non-zero.
Proof. Define
ψ_s : K[t] −→ K; t^k ↦ 1/((k + γ_{r−s}) · · · (k + γ_r)) = 1/((k + ζ_1) · · · (k + ζ_{s+1})), (33)
for 0 ≤ s ≤ r − 1. Then we have L_m(A(t)) = det( ψ_s(t^{u+ℓ}(t − 1)^{rn}) )_{0≤s≤r−1, 0≤ℓ≤r−1}. (34)
For 0 ≤ s ≤ r − 1, there exist 1 ≤ w ≤ d and 1 ≤ s_w ≤ r_w with s + 1 = r_1 + · · · + r_{w−1} + s_w. (35)
Put
p j,k = 1 (r j − k)! d rj −k dX rj −k 1 1≤j ′ ≤w−1 j ′ =j (X + ζ j ′ ) r j ′ (X + ζ w ) sw X=−ζj if 1 ≤ j ≤ w − 1, 1 ≤ k ≤ r j , 1 (s w − k)! d sw −k dX sw−k 1 w−1 j=1 (X + ζ j ) rj X=−ζw if j = w, 1 ≤ k ≤ s w .
Then we have p_{w,s_w} = 1/∏_{j=1}^{w−1} (ζ_j − ζ_w)^{r_j} ≠ 0 and
ψ_s = Σ_{j=1}^{w−1} Σ_{k=1}^{r_j} p_{j,k} ϕ_{ζ_j,k} + Σ_{k=1}^{s_w} p_{w,k} ϕ_{ζ_w,k}. (36)
Put E = ∏_{0≤s≤r−1} p_{w,s_w} ≠ 0, where (w, s_w) is the pair of integers defined as in (35) for s + 1. Then, by the equalities (34), (36) and the linearity of the determinant, we obtain (32). The non-vanishing of the determinant det( ϕ_{ζ_j,s_j}(t^{u+ℓ}(t − 1)^{rn}) )_{0≤ℓ≤r−1; 1≤j≤d, 1≤s_j≤r_j} has been obtained in [20, Proposition 4.12].
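The decomposition (36) is an ordinary partial-fraction expansion in k, and the formula for the leading coefficient p_{w,s_w} can be checked on sample data (here ζ_1 = 1/2 with r_1 = 1 and ζ_2 = 1/3 with r_2 = 2, hypothetical values, so s + 1 = 3 and (w, s_w) = (2, 2)):

```python
import sympy as sp

k = sp.symbols('k')
z1, z2 = sp.Rational(1, 2), sp.Rational(1, 3)   # sample zeta_1, zeta_2
expr = 1 / ((k + z1) * (k + z2)**2)             # 1/((k+zeta_1)(k+zeta_2)^2)

print(sp.apart(expr, k))         # the decomposition (36) into phi-terms

# leading coefficient at (k+zeta_2)^{-2}: p_{w,s_w} = 1/(zeta_1 - zeta_2)^{r_1}
lead = sp.cancel(expr * (k + z2)**2).subs(k, -z2)
print(lead == 1 / (z1 - z2))     # → True  (both equal 6)
```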
Estimates
In this subsection, we use the following notations. Let K be an algebraic number field and v a place of K. Denote by K_v the completion of K at v and by |·|_v the absolute value corresponding to v. Let η_1, . . . , η_r, ζ_1, . . . , ζ_r be strictly positive rational numbers with η_i − ζ_j ∉ N for 1 ≤ i, j ≤ r. Put A(X) = (X + η_1) · · · (X + η_r) and B(X) = (X + ζ_1) · · · (X + ζ_r). We shall choose a sequence c := (c_k)_{k≥0} satisfying c_k ∈ K \ {0} and (2) for the given polynomials A(X), B(X). Let α := (α_1, . . . , α_m) ∈ (K \ {0})^m whose coordinates are pairwise distinct and let n be a non-negative integer. We choose γ_1 = ζ_r, . . . , γ_{r−1} = ζ_2. For a non-negative integer ℓ with 0 ≤ ℓ ≤ rm, recall the polynomials P_ℓ(z), P_{ℓ,i,s}(z) defined as in (8) and (9) for the given data.
Throughout the section, the small o-symbols o(1) and o(n) refer to the limit as n tends to infinity. Put ε_v = 1 if v | ∞ and ε_v = 0 otherwise.
Let I be a non-empty finite set of indices and R = K[α_i]_{i∈I}[z, t] a polynomial ring in the indeterminates α_i, z, t. For a non-negative integer n and ζ ∈ Q \ {0}, we define
S_{n,ζ} : K[t] −→ K[t]; t^k ↦ ( (k + ζ + 1)_n / n! ) t^k.
We set ‖P‖_v = max{|a|_v} where a runs over the coefficients of P. Thus R is endowed with a structure of normed vector space. If φ is an endomorphism of R, we denote by ‖φ‖_v the endomorphism norm defined in the standard way:
‖φ‖_v = inf{M ∈ R, ∀x ∈ R, ‖φ(x)‖_v ≤ M‖x‖_v} = sup{ ‖φ(x)‖_v / ‖x‖_v : 0 ≠ x ∈ R }.
This norm is well defined provided φ is continuous. Unfortunately, we will also have to deal with non-continuous morphisms. In such a situation, we restrict the source space to some appropriate sub-vector space E of R and speak of ‖φ‖_v with φ seen as φ|_E : E −→ R, on which φ is continuous. In case of perceived ambiguity, it will be denoted by ‖φ‖_{E,v}. The degree of an element of R is as usual the total degree.
For a rational number x, we denote by ⌈x⌉ the least integer greater than or equal to x.
Lemma 5.1. Let a, b be positive rational numbers. Put
D_n = den( (a)_0/(b)_0, . . . , (a)_n/(b)_n ).
Then we have
lim sup_{n→∞} (1/n) log D_n ≤ log μ(a) + den(b)/ϕ(den(b)),
where ϕ is Euler's totient function.
Proof. Put
D_{1,n} = den( (a)_0/0!, . . . , (a)_n/n! ), D_{2,n} = den( 0!/(b)_0, . . . , n!/(b)_n ).
Then we have D_n ≤ D_{1,n} · D_{2,n}. The assertion is deduced from
lim sup_{n→∞} (1/n) log D_{1,n} ≤ log μ(a), lim sup_{n→∞} (1/n) log D_{2,n} ≤ den(b)/ϕ(den(b)). (37)
The first inequality is proved in [7, Lemma 2.2]. The second inequality is shown in [28, Lemma 4.1]; however, we sketch the proof here to keep the article self-contained. The argument was originally indicated by Siegel [44, p. 81]. Put d = den(b), c = d · b. Set N_k := c(c + d) · · · (c + (k − 1)d) for a non-negative integer k. Let p be a prime number with p | N_k. Then the following properties hold.
(a) The integers p, d are coprime and, for any integers i, ℓ with ℓ > 0, there exists exactly one integer ν with 0 ≤ ν ≤ p ℓ − 1 with p ℓ |c + (i + ν)d.
(b) Let ℓ be a strictly positive integer with |c| + (k − 1)d < p ℓ . Then N k is not divisible by p ℓ .
(c) Set C p,k := ⌊log(|c| + (k − 1)d)/ log(p)⌋. Then we have
v_p(k!) = Σ_{ℓ=1}^{C_{p,k}} ⌊k/p^ℓ⌋ ≤ v_p(N_k) ≤ Σ_{ℓ=1}^{C_{p,k}} (1 + ⌊k/p^ℓ⌋) = v_p(k!) + C_{p,k},
where v_p denotes the p-adic valuation. These relations imply
log | k!/(b)_k |_p ≤ C_{p,k} log(p) if p | N_k, and log | k!/(b)_k |_p = 0 otherwise,
and thus
log D_{2,n} = Σ_{p prime} max_{0≤k≤n} log | k!/(b)_k |_p ≤ log(|c| + (n − 1)d) · π_{|c|,d}(|c| + (n − 1)d),
where π_{|c|,d}(x) := #{p prime ; p ≡ |c| mod d, p < x} for x > 0. Finally, Dirichlet's theorem on primes in arithmetic progressions concludes the proof of (37).
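The splitting D_n ≤ D_{1,n} · D_{2,n} used at the start of the proof can be checked numerically for sample parameters (a = 1/2, b = 1/3, N = 12, an arbitrary choice): each ratio (a)_k/(b)_k factors as ((a)_k/k!) · (k!/(b)_k), so the denominator of each term divides the product of the two common denominators, and D_n divides D_{1,n} D_{2,n}:

```python
from fractions import Fraction
from math import gcd, factorial

def poch(x, n):
    # Pochhammer symbol (x)_n
    r = Fraction(1)
    for i in range(n):
        r *= x + i
    return r

def den_lcm(vals):
    # least common multiple of the denominators, i.e. den(v_0, ..., v_N)
    d = 1
    for v in vals:
        d = d * v.denominator // gcd(d, v.denominator)
    return d

a, b, N = Fraction(1, 2), Fraction(1, 3), 12     # sample parameters
D  = den_lcm([poch(a, k) / poch(b, k) for k in range(N + 1)])
D1 = den_lcm([poch(a, k) / factorial(k) for k in range(N + 1)])
D2 = den_lcm([Fraction(factorial(k)) / poch(b, k) for k in range(N + 1)])
print((D1 * D2) % D == 0)   # → True: D_n divides (hence is at most) D_{1,n}·D_{2,n}
```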
Lemma 5.2. We have the following norm estimates (we hope the similarity of notation causes no confusion):
(i) Let E N be the subspace of R consisting of polynomials of degree at most N in t. Then for all n ≥ 1 and strictly positive rational number ζ, the morphism S n,ζ satisfies
‖S_{n,ζ}‖_{E_N,v} ≤ 1 if v ∤ ∞ and |ζ|_v ≤ 1, and ‖S_{n,ζ}‖_{E_N,v} ≤ ‖(N + ζ + 1)_n / n!‖_v otherwise.
It acts diagonally on R in the sense that each element of the canonical basis consisting of all monomials is an eigenvector for S n,ζ . This map conserves degrees.
(ii) Let ζ be a strictly positive rational number. Then the morphism θ_t + ζ satisfies ‖θ_t + ζ‖_{E_N,v} ≤ 1 if v is non-archimedean and |ζ|_v ≤ 1, and ‖θ_t + ζ‖_{E_N,v} ≤ |N + ζ|_v otherwise.
It acts diagonally on R in the sense that each element of the canonical basis consisting of all monomials is an eigenvector for θ t + ζ. This map conserves degrees.
(iii) Let c = (c k ) k≥0 satisfying c k ∈ Q \ {0} and (2) for A(X) = (X + η 1 ) · · · (X + η r ), B(X) = (X + ζ 1 ) · · · (X + ζ r ). For a non-negative integer N , we put
D_{c,N} = den( (1 + ζ_1)_k · · · (1 + ζ_r)_k / ((η_1)_k · · · (η_r)_k) )_{0≤k≤N}.
The morphism T c which is defined in (6) satisfies
‖T_c‖_{E_N,v} ≤ e^{o(N)} if v | ∞, and ‖T_c‖_{E_N,v} ≤ |c_0^{−1} D_{c,N}|_v^{−1} otherwise,
for N → ∞. It acts diagonally on R in the sense that each element of the canonical basis consisting of all monomials is an eigenvector for T c . This map conserves degrees.
Proof. (i) and (ii) follow from the very definitions of S_{n,ζ} and θ_t + ζ. We prove (iii). Since we have
T_c(t^k) = (1/c_0) · ( B(1) · · · B(k) / (A(0) · · · A(k − 1)) ) t^k = (1/c_0) · ( (1 + ζ_1)_k · · · (1 + ζ_r)_k / ((η_1)_k · · · (η_r)_k) ) t^k,
we get
‖T_c‖_{E_N,v} ≤ max_{0≤k≤N} | (1/c_0) · (1 + ζ_1)_k · · · (1 + ζ_r)_k / ((η_1)_k · · · (η_r)_k) |_v. (38)
Let v be an archimedian valuation. Since we have
(1 + ζ_j)_k / (η_j)_k ≤ (k / η_j) · C(k + ⌈ζ_j⌉, k)  (k ≥ 0),
we obtain
‖T_c‖_{E_N, v} ≤ | N^r / (c_0 · η_1 ⋯ η_r) · C(N + ⌈ζ_1⌉, N) ⋯ C(N + ⌈ζ_r⌉, N) |_v = e^{o(N)}  (N → ∞).
For a non-archimedian place v, by (38) and the definition of D_{c,N}, we get the desired estimate.
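The eigenvalues T_c(t^k) = (1/c_0)(1 + ζ_1)_k ⋯ (1 + ζ_r)_k / ((η_1)_k ⋯ (η_r)_k) appearing in the proof are ratios of rising factorials; a minimal numeric sketch, with illustrative values r = 1, c_0 = 1, ζ_1 = η_1 = 1 chosen only for the example, is:

```python
from fractions import Fraction

def poch(x, k):
    """Rising factorial (x)_k = x (x + 1) ... (x + k - 1), with (x)_0 = 1."""
    out = Fraction(1)
    for i in range(k):
        out *= Fraction(x) + i
    return out

# With r = 1, zeta_1 = 1 and eta_1 = 1, the eigenvalue of t^k is
# (1 + zeta_1)_k / (eta_1)_k = (2)_k / (1)_k = (k + 1)!/k! = k + 1.
eigs = [poch(2, k) / poch(1, k) for k in range(6)]
print([int(e) for e in eigs])  # [1, 2, 3, 4, 5, 6]
```

Exact rational arithmetic (`Fraction`) is used so the denominators D_{c,N} could be read off directly in a similar way.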
From the preceding lemma, we deduce:
Lemma 5.3.
For a strictly positive integer N, we put
D_{c,N} = den( (1 + ζ_1)_k ⋯ (1 + ζ_r)_k / ((η_1)_k ⋯ (η_r)_k) : 0 ≤ k ≤ N ),
D'_{c,N} = den( (η_1)_k ⋯ (η_r)_k / ((1 + ζ_1)_k ⋯ (1 + ζ_r)_k) : 0 ≤ k ≤ N ).
We denote by w the place of Q such that v | w. One has:
(i) The polynomial P_ℓ(z) = P_{n,ℓ}(α|z) satisfies
‖P_ℓ(z)‖_v ≤ exp( n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) ) if v | ∞, and
‖P_ℓ(z)‖_v ≤ e^{o(1)} |D_{c,rmn}|_v^{−1} · ∏_{j=1}^r |µ_{n−1}(ζ_j)|_v^{−1} if v | p,
where o(1) −→ 0 for n → ∞. Recall that P ℓ (z) is of degree rn in each variable α i , of degree rmn + ℓ in z and constant in t.
(ii) The polynomial P ℓ,i,s (z) = P ℓ,i,s (α|z) satisfies
‖P_{ℓ,i,s}(z)‖_v ≤ exp( n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) ) if v | ∞, and
‖P_{ℓ,i,s}(z)‖_v ≤ e^{o(1)} |D_{c,rmn} · D'_{c,rmn}|_v^{−1} · ∏_{j=1}^r |µ_{n−1}(ζ_j)|_v^{−1} if v | p.
Also, P ℓ,i,s (z) is of degree ≤ rmn + ℓ in z, of degree rn in each of the variables α j except for the index i where it is of degree rn + 1 (recall that ψ i,s involves multiplication by [α i ]).
(iii) For any integer k ≥ 0, the polynomial ψ i,s • [t k+n ](P ℓ (t)) satisfies
‖ψ_{i,s} ∘ [t^{k+n}](P_ℓ(t))‖_v ≤ exp( n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) ) if v | ∞, and
‖ψ_{i,s} ∘ [t^{k+n}](P_ℓ(t))‖_v ≤ e^{o(1)} |D_{c,rmn} · D'_{c,rmn}|_v^{−1} · ∏_{j=1}^r |µ_{n−1}(ζ_j)|_v^{−1} otherwise.
By definition, it is a homogeneous polynomial in just the variables α of degree ≤ rmn+ℓ+k +n+1.
Proof. Let I be of cardinality m, let E_N be the sub-vector space of K[y_1, ..., y_m, z, t] consisting of polynomials of degree at most N in the variables y_i, and let Γ : E_N → R be the morphism defined by Γ(Q(y_1, ..., y_m, z, t)) = Q(t − α_1, ..., t − α_m, z, t). Set B_{n,ℓ}(y, t) = t^ℓ ∏_{i=1}^m y_i^{rn}; since B_{n,ℓ} is a monomial, its norm satisfies ‖B_{n,ℓ}‖_v = 1. By definition, one has P_ℓ(z) = Eval_{t→z} ∘ T_c ∘ ∏_{j=1}^r S_{n−1,ζ_j} ∘ Γ(B_{n,ℓ}), and thus, by sub-multiplicativity of the endomorphism norm,
‖P_ℓ(z)‖_v ≤ 2^{ε_v rmn [K_v:Q_w]/[K:Q]} e^{ε_v o(n)} |D_{c,N}|_v^{−1} · ∏_{j=1}^r |(rmn + ℓ + 1 + ζ_j)_{n−1} / (n − 1)!|_v if v | ∞, and
‖P_ℓ(z)‖_v ≤ ∏_{j=1}^r |(rmn + ℓ + 1 + ζ_j)_{n−1} / (n − 1)!|_v^{δ_v(ζ_j)} otherwise,
where δ_v(ζ_j) = 1 if |ζ_j|_v > 1 and 0 otherwise (one can choose N = rn while using [18, Lemma 5.2 (iv)], and N = r(n + 1)m + ℓ for Lemma 5.2 (i) and (iii), using ℓ ≤ rm; note that the original polynomial is constant in z, so the evaluation map is an isometry).
In the ultrametric case, we obtain the claimed result from
|(rmn + rm + 1 + ζ_j)_{n−1} / (n − 1)!|_v ≤ |µ_{n−1}(ζ_j)|_v^{−1},
where µ_n(ζ_j) := ∏_{q prime, q | den(ζ_j)} q^{n + ⌈n/(q−1)⌉} (cf. [7, Lemma 2.2]), together with Lemma 5.2 (iii) applied with N = rmn + rm.
It remains to treat the archimedian case. We put Y = max_j {⌈ζ_j⌉}. Then we have
(rmn + rm + 1 + ζ_j)_{n−1} / (n − 1)! ≤ C((rm + 1)n + rm + Y, n − 1),
and taking into account the standard Stirling formula, we get
(1/n) log C((rm + 1)n + rm + Y, n − 1) = log(rm + 1) + rm log((rm + 1)/rm) + o(1).
Putting these together, one gets
‖P_ℓ(z)‖_v ≤ exp( n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) ),
where o(1) −→ 0 (n → +∞).
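The Stirling-type limit used in the archimedian estimate, (1/n) log C((rm + 1)n + O(1), n − 1) → log(rm + 1) + rm log((rm + 1)/rm), can be checked numerically; the parameter values below are illustrative, not from the paper.

```python
import math

def log_binom(a, b):
    """log of the binomial coefficient C(a, b), computed via log-Gamma."""
    return math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)

r, m, Y = 2, 3, 5      # illustrative choices for r, m and Y = max_j ceil(zeta_j)
a = r * m + 1          # the binomial in question is C(a*n + r*m + Y, n - 1)
limit = math.log(a) + (a - 1) * math.log(a / (a - 1))

n = 10**6
approx = log_binom(a * n + r * m + Y, n - 1) / n
print(round(approx, 4), round(limit, 4))  # the two values agree to displayed precision
```

The agreement reflects the standard fact (1/n) log C(an, n) → a log a − (a − 1) log(a − 1), which equals log a + (a − 1) log(a/(a − 1)).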
Let E_0 = K[α_i, z] be the sub-vector space of R consisting of elements constant in t. Define Θ : E_0 → R; Q ↦ (Q(α_i, z) − Q(α_i, t)) / (z − t).
By definition, P ℓ,i,s (z) = ψ i,s • Θ(P ℓ (z)) and
ψ i,s = [α i ] • Eval t→αi • T −1 c • (θ t + ζ r ) • · · · • (θ t + ζ r−s+1 ) .
Using [18, Lemma 5.2 (i), (ii)] with N = rn and [18, Lemma 5.2 (iii)] and Lemma 5.2 (iii) for N = rm(n + 1), and since rn = exp(n · o(1)), one gets (ii). Finally, we have
ψ i,s • [t k+n ](P ℓ (t)) = [α i ] • Eval t→αi • T −1 c • (θ t + ζ r ) • · · · • (θ t + ζ r−s+1 ) • [t k+n ](P ℓ (t)) .
Again, using [18, Lemma 5.2] and Lemma 5.2, one gets (iii).
Recall that if P is a homogeneous polynomial in variables y_i, i ∈ I, for some finite set I, then for any point α = (α_i)_{i∈I} ∈ K^{Card(I)}, where ‖·‖_v stands for the sup norm on K^{Card(I)},
(39) |P(α)|_v ≤ C_v(P) ‖P‖_v · ‖α‖_v^{deg(P)}.
So, the preceding lemma trivially yields estimates for the v-adic norm of the polynomials given above.
Lemma 5.4. Let n be a positive integer and β ∈ K with ‖α‖_v < |β|_v. Then for all 1 ≤ i ≤ m, 0 ≤ ℓ ≤ rm, 0 ≤ s ≤ r − 1, we have
|R_{ℓ,i,s}(β)|_v ≤ ‖α‖_v^{rm(n+1)} · (‖α‖_v / |β|_v)^{n+1} · ( ε_v |β|_v / (|β|_v − ‖α‖_v) + (1 − ε_v) ) · exp( n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) ) if v | ∞, and
|R_{ℓ,i,s}(β)|_v ≤ ‖α‖_v^{rm(n+1)} · (‖α‖_v / |β|_v)^{n+1} · ( ε_v |β|_v / (|β|_v − ‖α‖_v) + (1 − ε_v) ) · e^{o(1)} |D_{c,rmn} · D'_{c,rmn}|_v^{−1} · ∏_{j=1}^r |µ_{n−1}(ζ_j)|_v^{−1} otherwise.
Proof. By the definition of P_ℓ(z), as formal power series we have R_{ℓ,i,s}(z) = Σ_{k=0}^∞ ψ_{i,s}(t^{k+n} P_ℓ(t)) / z^{k+n+1}.
Using the triangle inequality, the fact that ℓ ≤ rm and Lemma 5.3 (iii) and inequality (39),
|R_{ℓ,i,s}(β)|_v ≤ ‖α‖_v^{rm(n+1)} Σ_{k=0}^∞ (‖α‖_v / |β|_v)^{n+1+k} · exp( n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) ) if v | ∞, and
|R_{ℓ,i,s}(β)|_v ≤ ‖α‖_v^{rm(n+1)} Σ_{k=0}^∞ (‖α‖_v / |β|_v)^{n+1+k} · e^{o(1)} |D_{c,rmn} · D'_{c,rmn}|_v^{−1} · ∏_{j=1}^r |µ_{n−1}(ζ_j)|_v^{−1} otherwise,
and the lemma follows using geometric series summation.
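The last step is just the geometric series Σ_{k≥0} x^{n+1+k} = x^{n+1}/(1 − x) for 0 < x < 1; a one-line sanity check with illustrative numbers:

```python
x, n = 0.25, 3                                    # illustrative values with 0 < x < 1
tail = sum(x ** (n + 1 + k) for k in range(200))  # truncated series; the remainder is negligible
closed = x ** (n + 1) / (1 - x)                   # closed form used in the proof
print(abs(tail - closed) < 1e-12)  # True
```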
Proof of Theorem 2.1
We use the same notations as in Section 5. To prove Theorem 2.1, we shall prove the following theorem.
Theorem 6.1. For v ∈ M_K, we define the constants
c(x, v) = ε_v [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) ) + (1 − ε_v) Σ_{j=1}^r log |µ(ζ_j)|_v^{−1},
where p_v is the rational prime under v if v is non-archimedian. We also define
V_v(η, ζ, α, β) = log |β|_{v_0} − rm·h(α, β) − (rm + 1) log ‖α‖_{v_0} + rm log ‖(α, β)‖_{v_0} − ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) ) − Σ_{j=1}^r ( log µ(η_j) + 2 log µ(ζ_j) + den(ζ_j) den(η_j) / (φ(den(ζ_j)) φ(den(η_j))) ).
Let v_0 be a place in M_K, either archimedean or non-archimedean, such that V_{v_0}(η, ζ, α, β) > 0. Then the functions F_s(α_i/β), 0 ≤ s ≤ r − 1, converge around α_j/β in K_{v_0}, 1 ≤ j ≤ m, and for any positive number ε with ε < V_{v_0}(η, ζ, α, β) there exists an effectively computable positive number H_0, depending on ε and the given data, such that the following property holds for any λ := (λ_0, λ_{i,s})_{1≤i≤m}, where
µ(η, ζ, α, β, ε) := ( A_{v_0}(η, ζ, α, β) + U_{v_0}(η, ζ, α, β) ) / ( V_{v_0}(η, ζ, α, β) − ε ),
C(η, ζ, α, β, ε) = exp( −( log(2)/(V_{v_0}(η, ζ, α, β) − ε) + 1 ) ( A_{v_0}(η, ζ, α, β) + U_{v_0}(η, ζ, α, β) ) ).
Proof. By Proposition 4.1, the matrix M_n = ( P_ℓ(β)  P_{ℓ,i,s}(β) ) with entries in K is invertible. By Lemma 5.3 (i) together with inequality (39),
log ‖P_ℓ(β)‖_v ≤ ε_v n [K_v : Q_w]/[K : Q] ( rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + o(1) ) + (1 − ε_v) ( log |D_{c,rmn}|_v^{−1} + Σ_{j=1}^r log |µ_n(ζ_j)|_v^{−1} ) + (rmn + ℓ) h_v(α, β) ≤ n ( rm h_v(α, β) + c(x, v) ) + o(n) = U_v(η, ζ, α, β) n + o(n).
Similarly, using this time Lemma 5.3 (ii) and inequality (39),
log ‖P_{ℓ,i,s}(β)‖_v ≤ n ( rm h_v(α, β) + c(x, v) ) + f_v(n),
where f_v : N → R_{≥0}; n ↦ rm h_v(α, β) + (1 − ε_v) log |D_{c,rmn} · D'_{c,rmn}|_v^{−1}. We define F_v(α, β) : N → R_{≥0}; n ↦ n ( rm h_v(α, β) + c(x, v) ) + f_v(n). Recall that
Σ_{v∈M_K} c(x, v) = rm log(2) + r log(rm + 1) + rm log((rm + 1)/rm) + Σ_{j=1}^r log µ(ζ_j),
the remaining contributions being controlled by the quantities log µ(η_j) + log µ(ζ_j) + den(ζ_j) den(η_j) / (φ(den(ζ_j)) φ(den(η_j))).
we conclude
A_{v_0}(η, ζ, α, β) − lim_{n→∞} (1/n) Σ_{v ≠ v_0} F_v(α, β)(n) = V(η, ζ, α, β).
Applying [19,Proposition 5.6] for {θ i,s := F s (α i /β)} 1≤i≤m 0≤s≤r−1 and the above data, we obtain the assertions of Theorem 6.1.
Proof of Theorem 2.1. We use the same notations as in Theorem 2.1 and Theorem 6.1. Put η = (a 1 + 1, . . . , a r + 1), ζ = (b 1 , . . . , b r−1 , 1). Then we have V v0 (α, β) = V v0 (η, ζ, α, β) .
Combining with (4) and (5), Theorem 6.1 yields the assertion of Theorem 2.1.
Proposition 4.6. There exists a constant c_{u,m} ∈ K with
Lemma 4.8. (i) The morphisms ψ̃_{α,s_1}, ψ̃_{β,s_2} pairwise commute for 0 ≤ s_1, s_2 ≤ r − 1.
Lemma 4.14. The smallest integer ℓ for which there exists ℓ = (ℓ_0, ..., ℓ_{r−1}) with |ℓ| ≤ l such that none of the conditions of Lemma 4.13 is satisfied is (2n + 1)r^2. Proof. Assume conditions (i), (ii) and (iii) are false; then the set {ℓ_s − s − 1} contains at least {2rn − ℓ + |ℓ|, ..., 2rn − ℓ + |ℓ| + r − 1} and r−1
Lemma 4.15. Set A(t) = ∏_s (t_{m,s} − 1)^{rn} · ∏_{1≤s<s'≤r} (t_{m,s} − t_{m,s'}) and L_m = ∏_{1≤s≤r} ψ̃_{1,m,s}. Then c_{u,m} = (−1)^{r^2 n(m−1)} c_{u+r(n+1), m−1} · L_m(A(t)).
This completes the proof of Lemma 4.15. By Lemma 4.15, to prove the non-vanishing of the value c_{u,m}, it is enough to show L_m(A(t)) ≠ 0. Denote the cardinality of the set {ζ_1, ..., ζ_r} by d. If necessary, by changing the order, we may assume {ζ_1, ..., ζ_r} = {ζ_1, ..., ζ_d} and (ζ_1, ..., ζ_r) = (ζ_1, ..., ζ_1, ..., ζ_d, ..., ζ_d), where each ζ_j occurs r_j times,
A_v(η, ζ, α, β) = log |β|_{v_0} − (rm + 1) log ‖α‖_{v_0} − c(x, v_0) + (1 − ε_v) lim sup_{n→∞} (1/n) log |D_{c,rmn} · D'_{c,rmn}|_v^{−1},
U_v(η, ζ, α, β) = rm h_v(α, β) + c(x, v) − (1 − ε_v) lim sup_{n→∞} (1/n) log |D_{c,rmn}|_v, and
|λ_0 + Σ_{i,s} λ_{i,s} F_s(x, α_i/β)|_{v_0} > C(η, ζ, α, β, ε) H_{v_0}(λ) H(λ)^{−µ(η,ζ,α,β,ε)},
Since, on the other hand, Lemma 5.4 ensures
− log |R_{ℓ,i,s}(β)|_{v_0} ≤ n log |β|_{v_0} − (rm + 1) n log ‖α‖_{v_0} − n c(x, v_0) + (1 − ε_v) lim sup_{n→∞} (1/n) log |D_{c,rmn} · D'_{c,rmn}|_v^{−1} + o(n) = A_{v_0}(η, ζ, α, β) n + o(n).
[1] K. Alladi and M. L. Robinson, Legendre polynomials and irrationality, J. Reine Angew. Math. 318 (1980), 137-155.
[2] Y. André, G-fonctions et transcendance, J. Reine Angew. Math. 476 (1996), 95-126.
[3] A. I. Apetekarev, A. Branquinho and W. Van Assche, Multiple orthogonal polynomials for classical weights, Trans. Amer. Math. Soc. 355, no. 10 (2003), 3887-3914.
[4] A. Baker, Transcendental Number Theory, Cambridge Univ. Press, 1975.
[5] F. Beukers, A note on the irrationality of ζ(2) and ζ(3), Bull. London Math. Soc. 11 (1979), 268-272.
[6] F. Beukers, Algebraic values of G-functions, J. Reine Angew. Math. 434 (1993), 45-65.
[7] F. Beukers, Irrationality of some p-adic L-values, Acta Math. Sin. 24, no. 4 (2008), 663-686.
[8] G. Christol, Fonctions hypergéométriques bornées, Groupe d'étude d'analyse ultramétrique, 1986/1987, Secrétariat, Institut H. Poincaré, Paris.
[9] G. V. Chudnovsky, Padé approximations to the generalized hypergeometric functions I, J. Math. Pures et Appl. 58 (1979), 445-476.
[10] G. V. Chudnovsky, Hermite-Padé approximations to exponential functions and elementary estimates of the measure of irrationality of π, Lecture Notes in Math. 925 (1982), 299-322.
[11] G. V. Chudnovsky, On the method of Thue-Siegel, Annals of Math. 117 (1983), 325-382.
[12] D. V. Chudnovsky and G. V. Chudnovsky, Recurrences, Padé approximations and their applications, in: Classical and Quantum Models and Arithmetic Problems, Lecture Notes in Pure and Applied Math. 92 (1984), 215-238.
[13] G. V. Chudnovsky, On applications of diophantine approximations, Proc. Nat. Acad. Sci. U.S.A. 81 (1984), 1926-1930.
[14] D. V. Chudnovsky and G. V. Chudnovsky, The Wronskian formalism for linear differential equations and Padé approximations, Advances in Math. 53 (1984), 28-54.
[15] D. V. Chudnovsky and G. V. Chudnovsky, Applications of Padé approximations to diophantine inequalities in values of G-functions, Lecture Notes in Math. 1135 (1985), 9-51.
[16] D. V. Chudnovsky and G. V. Chudnovsky, Approximations and complex multiplication according to Ramanujan, in: Ramanujan Revisited, Proceedings of the Centenary Conference, University of Illinois at Urbana-Champaign, June 1-5, 1987, eds. E. Andrews et al., Academic Press, 1988, 375-472.
[17] D. V. Chudnovsky and G. V. Chudnovsky, Use of computer algebra for diophantine and differential equations, in: Computer Algebra, M. Dekker, NY, 1988, 1-82.
[18] S. David, N. Hirata-Kohno and M. Kawashima, Can polylogarithms at algebraic points be linearly independent?, Mosc. J. Comb. Number Theory 9 (2020), 389-406.
[19] S. David, N. Hirata-Kohno and M. Kawashima, Linear forms in polylogarithms, Ann. Scuola Norm. Sup. Pisa Cl. Sci., in press, available at https://arxiv.org/abs/2010.09167.
[20] S. David, N. Hirata-Kohno and M. Kawashima, Linear independence criteria for generalized polylogarithms with distinct shifts, preprint, available at http://arxiv.org/abs/2202.13931.
[21] N. I. Fel'dman and Yu. V. Nesterenko, Number Theory IV (eds. A. N. Parshin and I. R. Shafarevich), Encyclopaedia of Mathematical Sciences 44, Springer, 1998.
[22] A. O. Galochikin, Lower bounds for polynomials in values of analytic functions of a certain class, Mat. Sb. 95 (1974), 396-417; English transl. in Math. USSR-Sb. 24 (1974).
[23] A. O. Galochikin, Lower bounds for linear forms in values of certain G-functions, Mat. Zametki 18 (1975), 541-552; English transl. in Math. Notes 18 (1975).
[24] M. Hata, On the linear independence of the values of polylogarithmic functions, J. Math. Pures et Appl. 69 (1990), 133-173.
[25] M. Hata, Rational approximations to the dilogarithms, Trans. Amer. Math. Soc. 336, no. 1 (1993), 363-387.
[26] N. Hirata-Kohno, M. Ito and Y. Washio, A criterion for the linear independence of polylogarithms over a number field, RIMS Kokyouroku Bessatu 64 (2017), 3-18.
[27] M. Kawashima, Evaluation of the dimension of the Q-vector space spanned by the special values of the Lerch function, Tsukuba J. Math. 38, no. 2 (2014), 171-188.
[28] M. Kawashima and A. Poëls, Padé approximations for shifted functions and parametric geometry of numbers, preprint.
[29] L. Lewin, Structural Properties of Polylogarithms, Mathematical Surveys and Monographs 37, American Math. Society, 1991.
[30] R. Marcovecchio, Linear independence of forms in polylogarithms, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 5 (2006), 1-11.
[31] T. Matala-aho, Type II Hermite-Padé approximations of generalized hypergeometric series, Constr. Approx. 33, no. 3 (2011), 289-312.
[32] M. A. Miladi, Récurrences linéaires et approximations simultanées de type Padé: applications à l'arithmétique, Thèse, Université des S. et T. de Lille, 2001.
[33] L. M. Milne-Thomson, The Calculus of Finite Differences, Macmillan and Co., London, 1933.
[34] Yu. V. Nesterenko, Hermite-Padé approximants of generalized hypergeometric functions, in: Séminaire de Théorie des Nombres, Paris 1991-92, ed. S. David, Progress in Math. 116 (1993), 191-216.
[35] Yu. V. Nesterenko, Hermite-Padé approximants of generalized hypergeometric functions, Mat. Sb. 185, no. 10 (1994), 39-72; English translation in Russian Acad. Sci. Sb. Math. 83, no. 1 (1995), 189-219.
[36] E. M. Nikisin, On irrationality of the values of the functions F(x, s), Math. USSR Sbornik 37, no. 3 (1980), 381-388 (originally published in Mat. Sbornik 109, no. 3 (1979)).
[37] E. M. Nikisin and V. N. Sorokin, Rational Approximations and Orthogonality, Translations of Mathematical Monographs, American Math. Society, 1991.
[38] T. Rivoal, Simultaneous Padé approximants to the Euler, exponential and logarithmic functions, J. Théorie des Nombres de Bordeaux 27.2 (2015), 565-589 (Actes de la conférence Thue 150).
[39] G. Rhin and P. Toffin, Approximants de Padé simultanés de logarithmes, J. Number Theory 24 (1986), 284-297.
[40] G. Rhin and C. Viola, On a permutation group related to ζ(2), Acta Arith. 77, no. 1 (1996), 23-56.
[41] G. Rhin and C. Viola, The permutation group method for the dilogarithms, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 4, no. 3 (2005), 389-437.
[42] T. Rivoal, Irrationalité d'au moins un des neuf nombres ζ(5), ζ(7), ..., ζ(21), Acta Arith. 103, no. 2 (2002), 157-167.
[43] T. Rivoal, Indépendance linéaire des valeurs des polylogarithmes, J. Théorie des Nombres de Bordeaux 15, no. 2 (2003), 551-559.
[44] C. Siegel, Transcendental Numbers, Annals of Mathematics Studies 16, Princeton Univ. Press, 1950.
[45] V. N. Sorokin, On the irrationality of the values of hypergeometric functions, Sb. Math. 55 (1986), 243-257.
[46] G. Szegő, Orthogonal Polynomials, American Math. Society, 1939.
[47] K. Väänänen, On linear forms of a certain class of G-functions and p-adic G-functions, Acta Arith. 36 (1980), 273-295.
[48] C. Viola and W. Zudilin, Linear independence of dilogarithmic values, J. Reine Angew. Math. 736 (2018), 193-223.
[49] V. V. Zudilin, On a measure of irrationality for values of G-functions, Izvestiya: Mathematics 60:1, 91-118.
arXiv:1608.01488, DOI: 10.1002/jgt.22254
THE 2-SURVIVING RATE OF PLANAR GRAPHS WITH AVERAGE DEGREE LOWER THAN 4 1/2
PRZEMYSŁAW GORDINOWICZ
17 Oct 2016
Institute of Mathematics, Lodz University of Technology, Łódź, Poland
Key words and phrases: firefighter problem, surviving rate, planar graph.
Let G be any connected graph on n vertices, n ≥ 2. Let k be any positive integer. Suppose that a fire breaks out at some vertex of G. Then, in each turn, firefighters can protect at most k vertices of G not yet on fire; next, the fire spreads to all unprotected neighbours. The k-surviving rate of G, denoted by ρ_k(G), is the expected fraction of vertices that can be saved from the fire, provided that the starting vertex is chosen uniformly at random. In this note, it is shown that for any planar graph G with average degree 4 1/2 − ε, where ε ∈ (0, 1], we have ρ_2(G) ≥ (2/9)ε. In particular, the result implies a significant improvement of the bound on the 2-surviving rate for triangle-free planar graphs (Esperet, van den Heuvel, Maffray and Sipma [3]) and for planar graphs without 4-cycles (Kong, Wang, Zhang [11]). The proof is done using the separator theorem for planar graphs. 2000 Mathematics Subject Classification. 05C15.
Introduction
The following Firefighter Problem was introduced by Hartnell [8]. Let G be any connected graph on n vertices, n ≥ 2. Let k be any positive integer. Suppose that a fire breaks out at some vertex v ∈ V(G). Then, in each turn, firefighters can protect k vertices of G not yet on fire, and the protection is permanent. Next the fire spreads to all the unprotected neighbours of vertices already on fire. The process ends when the fire can no longer spread. The goal is to save as much as possible, and the question is how many vertices can be saved. We refer the reader to the survey by Finbow and MacGillivray [4] for more information on the background of the problem and directions for its consideration.
In this note we focus on the following aspect of the problem. Let sn k (G, v) denote the maximum number of vertices of G that k firefighters can save when the fire breaks out at the vertex v. This parameter may depend heavily on the choice of the starting vertex v, for example when the graph G is a star. Therefore Cai and Wang [1] introduced the following graphical parameter: the k-surviving rate ρ k (G) is the expected fraction of vertices that can be saved by k firefighters, provided that the starting vertex is chosen uniformly at random. Namely
ρ_k(G) = (1/|V(G)|^2) Σ_{v∈V(G)} sn_k(G, v).
It is not surprising that the surviving rate is connected with the density of a graph. Prałat [13, 14] has provided a threshold for the average degree which guarantees a positive surviving rate with a given number of firefighters. Precisely, for k ∈ N_+ let us define
τ_k = 30/11 for k = 1, and τ_k = k + 2 − 1/(k + 2) for k ≥ 2.
Then there exists a constant c > 0 such that for any ε > 0, any n ∈ N_+ and any graph G on n vertices with at most (τ_k − ε)n/2 edges one has ρ_k(G) > c · ε > 0. Moreover, there exists a family of graphs with the average degree tending to τ_k and the k-surviving rate tending to 0, which shows that the above result is best possible. In particular, Prałat's results yield that graphs with average degree lower than 3 3/4 − ε have 2-surviving rate at least (8/75)ε. While settled in the general case, the k-surviving rate is still being investigated for particular families of graphs; the most important is the case of planar graphs. Cai and Wang [1] asked about the minimum number k such that ρ_k(G) > c for some positive constant c and any planar graph G. It is easy to see that ρ_1(K_{2,n}) → 0 as n → ∞, hence k ≥ 2. So far, the best known upper bound for this number is 3:
Theorem 1.1 ([7]). Let G be any planar graph. Then ρ_3(G) > 2/21.
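The thresholds τ_k of Prałat quoted above are easy to tabulate; a quick check, using only the formula stated above, confirms that τ_2 = 3 3/4:

```python
from fractions import Fraction

def tau(k):
    """Pralat's average-degree threshold for k firefighters (formula quoted above)."""
    if k == 1:
        return Fraction(30, 11)
    return k + 2 - Fraction(1, k + 2)

# tau_2 = 4 - 1/4 = 15/4 = 3.75, matching the "lower than 3 3/4" bound in the text.
print([float(tau(k)) for k in (1, 2, 3)])
```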
It has been shown that 2 is the upper bound for triangle-free planar graphs [3] and for planar graphs without 4-, 5- or 6-cycles [11, 10, 5], respectively. In particular, Esperet, van den Heuvel, Maffray and Sipma [3] have proven that ρ_2(G) > 1/723636 for any non-trivial triangle-free planar graph, while Kong, Wang and Zhang [11] proved that ρ_2(G) > 1/76 for any non-trivial planar graph without 4-cycles.
In this paper, we improve the above bounds with the following theorem:
Theorem 1.2. Let G be any connected planar graph with n ≥ 2 vertices and m edges. If for some ε ∈ (0, 5/2] one has 2m/n = 9/2 − ε, then ρ_2(G) ≥ (2/9)ε for ε ≤ 1, and ρ_2(G) ≥ (2/9)ε − 1/n otherwise.
In particular, because triangle-free planar graphs have average degree lower than 4, and the same holds true for planar graphs without 4-cycles, we immediately get: Corollary 1.3. Let G be any triangle-free planar graph on at least 2 vertices. Then ρ_2(G) > 1/9.
Corollary 1.4. Let G be any planar graph on at least 2 vertices. If G does not contain any 4-cycle, then ρ_2(G) > 1/9.
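The arithmetic behind both corollaries is the ε ≤ 1 case of Theorem 1.2; a minimal check, where average degree 4 is the boundary value shared by triangle-free planar graphs and planar graphs without 4-cycles:

```python
from fractions import Fraction

def rho2_bound(avg_deg):
    """Theorem 1.2, case eps <= 1: rho_2(G) >= (2/9)*eps with eps = 9/2 - avg_deg."""
    eps = Fraction(9, 2) - Fraction(avg_deg)
    assert 0 < eps <= 1, "this helper only covers the eps <= 1 case"
    return Fraction(2, 9) * eps

# Average degree strictly below 4 gives eps > 1/2, hence a bound of at least 1/9.
print(rho2_bound(4))  # (2/9) * (1/2) = 1/9
```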
The paper is organised as follows. Section 2 contains a brief description of the firefighters' strategy developed from the separator theorem for planar graphs. Section 3 contains the proof of Theorem 1.2. For background on graph theory we refer the reader to the book [2], in particular to Chapter 4 for the terminology and basics regarding planarity and plane embeddings of graphs. Note that, according to [2], we imagine the edges of a plane graph as simple polygonal curves, while by a Jordan curve we mean a simple, closed polygonal curve. Given a particular plane graph G and some Jordan curve C, by in(C), inC, ex(C) and exC we denote the sets of vertices of G in the interior region of C, in the closed interior region (including C), in the exterior region and in the closed exterior region of C, respectively.
Separators and the firefighters' strategy
The proof is done using the lemma given by Lipton and Tarjan to prove the separator theorem for planar graphs [12]. The key lemma of their proof, reformulated for our purpose, is quoted below. A similar approach, applying an analogue of the following lemma to the firefighter problem on planar graphs, was first used by Floderus, Lingas and Persson [6], in the slightly different setting of approximation algorithms. They have proven a theorem analogous to Lemma 2.2. This method was also used to prove Theorem 1.1; the proof and some discussion of this approach are included in [7]. Lemma 2.1. Let G be any n-vertex plane graph (n ≥ 2) and let T be any spanning tree of G. Then there exists an arc uv between two vertices u, v ∈ V(G), not crossing any edge from E(G), such that the unique Jordan curve C, consisting of uv and some edges of T, has the property that the number of vertices in the interior region of C, as well as in the exterior region of C, is lower than 2n/3. Actually, in [12] one considers an arbitrary spanning supergraph H of G which is a plane triangulation, and then the arc uv is some edge of H. To construct the curve C, consider the two T-paths from r to u and to v, respectively. Let z be the last common vertex on these paths. Then the curve C consists of the T-path from z to u, the arc uv and the T-path from v to z.
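The construction of the curve C from the two tree paths and the arc uv can be sketched in code: given the parent pointers of a rooted tree, the tree part of C is found via the last common vertex z. The tree below is a toy example, not a graph from the paper.

```python
def fundamental_cycle(parent, u, v):
    """Vertices of the cycle closed by adding arc uv to a rooted tree:
    the path u..z followed by the path z..v, where z is the last common
    vertex of the root-u and root-v paths."""
    def path_to_root(x):
        p = [x]
        while parent[p[-1]] is not None:
            p.append(parent[p[-1]])
        return p
    pu, pv = path_to_root(u), path_to_root(v)
    ancestors_u = set(pu)
    # walk from v towards the root until we first meet the root-u path: that is z
    z = next(x for x in pv if x in ancestors_u)
    return pu[:pu.index(z) + 1] + pv[:pv.index(z)][::-1]

# Toy tree rooted at 0:
#        0
#       / \
#      1   2
#      |   |
#      3   4
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 2}
print(fundamental_cycle(parent, 3, 4))  # [3, 1, 0, 2, 4]
```

Closing the arc 3-4 turns the returned path into the Jordan curve of the lemma.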
The defending strategy is the following. Let G be any n-vertex connected plane graph, where n ≥ 5. Suppose that the fire breaks out at a vertex r. Consider a tree T obtained by the breadth-first-search algorithm starting from the vertex r. By Lemma 2.1 there is an arc uv joining vertices u and v and determining a Jordan curve C, consisting of uv and some edges of T, such that |inC| > n/3 and |exC| > n/3. Note that the curve C contains at most 2 vertices at any given distance from r. This holds true because the curve C, as mentioned before, is constructed from two shortest paths (as T is a breadth-first-search tree), say u_0u_1 . . . u_i and v_0v_1 . . . v_j, where v_0 = u_0 = r, u_i = u and v_j = v. The firefighters' strategy relies on protecting, in the t-th round for t ≥ 1, the vertices u_t and v_t, until every vertex of both paths, except r, is protected. When the vertex r does not belong to the curve C, the firefighters, protecting the whole separator, can save all the vertices in either inC or exC. When the vertex r belongs to the curve C, the protection along the separator may not be enough, as the fire may spread through the neighbours of r located in the interior as well as in the exterior region of the curve C. Because then either inC or exC contains at most ⌊(deg(r) − 2)/2⌋ neighbours of r, we immediately get Lemma 2.2.
To increase the clarity of the presentation, we have described the firefighters' strategy not entirely precisely. It may happen that a vertex that should be protected according to our description does not exist or is already protected; for example, when u_1 = v_1, or in the t-th round when t > j. In such cases the firefighters may either protect an arbitrary vertex instead or skip the protection altogether. Lemma 2.2. Let G be any n-vertex plane graph, where n ≥ 5. Suppose that the fire breaks out at some vertex r. Then, using 2 + ⌊(deg(r) − 2)/2⌋ firefighters in the first step and 2 in the subsequent steps, one can save more than n/3 − 1 vertices.
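The round-by-round process that Lemma 2.2 reasons about can be simulated directly; below is a minimal sketch, where the graphs and protection schedules are toy examples rather than the separator strategy itself.

```python
def simulate(adj, source, schedule):
    """Play the firefighter process: in round t protect the vertices in
    schedule[t] that are still unburnt, then spread the fire to all
    unprotected neighbours. Returns the number of saved (never burnt) vertices."""
    burning = {source}
    protected = set()
    t = 0
    while True:
        if t < len(schedule):
            protected |= {v for v in schedule[t] if v not in burning}
        new_fire = {u for v in burning for u in adj[v]} - burning - protected
        if not new_fire:  # the process ends when the fire can no longer spread
            break
        burning |= new_fire
        t += 1
    return len(adj) - len(burning)

# Toy example: a cycle on 8 vertices, fire at 0; protecting both neighbours
# of the fire in round 1 stops it immediately.
cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(simulate(cycle, 0, [[1, 7]]))  # 7 vertices saved
```

The same helper can be used to check small cases of the observations in Section 3 by hand.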
For any planar graph with average degree lower than 4 (e.g. triangle-free graphs, graphs without 4-cycles) we get without much effort: Corollary 2.3. Let G be any planar graph with average degree lower than 4, having at least 2 vertices of degree lower than 4. Then ρ_{3,2}(G) ≥ 1/3.
The proof
Let us start the proof of Theorem 1.2 with a simple observation derived from Lemma 2.2.
Observation 3.1. Let G be any n-vertex plane graph, where n ≥ 5, and suppose that the fire breaks out at a vertex r ∈ V(G). Then

sn_2(G, r) > n − 1      if deg(r) ≤ 2,
sn_2(G, r) > n/3 − 1    if deg(r) = 3,
sn_2(G, r) > 1          if deg(r) > 4.
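The case analysis above can be encoded as a small helper; a hypothetical illustration of the bounds (the function name is ours, and the degree-4 case — refined separately by Lemma 3.2 — is covered here only by the trivial bound):

```python
def sn2_lower_bound(n, deg_r):
    """Strict lower bound on sn_2(G, r) suggested by Observation 3.1 (illustrative)."""
    if deg_r <= 2:
        return n - 1        # almost the whole graph can be saved
    if deg_r == 3:
        return n / 3 - 1    # separator-based strategy saves a third
    return 1                # degree >= 4: only the trivial bound here

print(sn2_lower_bound(12, 3))  # 3.0
```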
Dealing with vertices of degree 4 is a bit more complicated. We prove that there is a defending strategy to save a large part of a graph unless more than one neighbour of the vertex where the fire broke out has a high degree. Lemma 3.2. Let G be any n-vertex planar graph, where n ≥ 5. Let r ∈ V(G) be a vertex of degree 4 with at most one neighbour of degree higher than 5. Then

sn_2(G, r) > n/6 − 1.
Proof. Let G be any n-vertex planar graph, where n ≥ 5, and let the plane graph H be a plane embedding of G. Let r ∈ V(H) be a vertex of degree 4. Consider a tree T obtained by the breadth-first-search algorithm starting from the vertex r. By Lemma 2.1 there are vertices u and v and a Jordan curve C ⊆ T + uv such that |inC| > n/3 and |exC| > n/3. Let u_0 u_1 . . . u_i and v_0 v_1 . . . v_j be the T-paths from r = u_0 = v_0 to u = u_i and v = v_j, respectively.
Consider an open disk around r small enough to contain only one line segment from each of the 4 edges adjacent to r. Enumerate the neighbours of r as a, b, c, d using a clockwise ordering applied to the corresponding line segments, so that a = u_1. When v_1 = a, using the strategy described in the introduction to Lemma 2.2, two firefighters can save more than n/3 + 1 vertices of the graph (one extra in the first round, as u_1 = v_1), because the vertex r does not belong to the curve C. Similarly, when v_1 ∈ {b, d}, it is possible to save more than n/3 − 1 vertices.
Therefore suppose now that v_1 = c. If in the graph H there exists some shortest path from the vertex r to u going through vertex b or d, this path splits either the interior or the exterior region of C into 2 pieces. Hence we may use this path and a T-path from r to one vertex from {u, v} to build a Jordan curve C which allows us to save more than n/6 − 1 vertices. See Figure 1 for an illustration.
Solid edges are the edges of the spanning tree T, bold if they belong to the curve C. Dashed edges are the edges forming the ru-path through b (possibly, but not necessarily, belonging to the tree T). Note that the edges not belonging to T are mostly not depicted. It may also happen that in the graph H there exists some shortest path from r to u going through vertex c. From among all such paths, choose one that contains the vertex u_{i′} for the least possible i′, 1 < i′ ≤ i. This path also splits either the interior or the exterior region of C into 2 pieces. If the piece not containing the vertex r has more than n/6 − 1 vertices, we are done. Otherwise let u′ = u_{i′−1} and v′ = u_{i′}. Note that the curve C′ built from the T-path from r to u′, the edge u′v′ and the shortest rv′-path going through vertex c has the property that |inC′| > n/6 + 1 and |exC′| > n/6 + 1. Moreover, there is no shortest path from r to u′ going through vertex b, c or d. Hence we are ready to proceed to the final case with the curve C′ instead of C.
Suppose then that there is no such "shortcut", and both the closed interior and the closed exterior of C contain more than n/6 + 1 vertices. We have dist(b, u) ≥ dist(r, u), dist(c, u) ≥ dist(r, u) and dist(d, u) ≥ dist(r, u). Without loss of generality, assume that deg(c) ≤ deg(a) and that b lies in the interior region of C, while d lies in the exterior. Note that the terms "interior" and "exterior" depend on the particular embedding H of G.
As r has at most one neighbour of degree higher than 5, we have deg(c) ≤ 5. Note that at most 3 neighbours of c = v_1 belong to either inC or exC. Suppose the exterior case holds (the interior one is analogous). One of these neighbours is v_0 = r. Let the second (supposed) neighbour, say x, belong either to C (x = v_2 when v_1 ≠ v) or to exC, while the third one, if it exists, is y ∈ exC. The firefighters' strategy is now the following. In the first round they protect u_1 = a and d. In the second, x and y; then in the t-th round (t > 2), u_{t−1} and v_t. This still allows them to protect the exterior region of C, because otherwise there would exist a path from b or c to u of length smaller than dist(r, u). Hence we are able to save more than n/6 − 1 vertices (r and c are burned). See Figure 1 for details.
The rest of this section is devoted to deriving from the above lemmas the 2-surviving rate of a planar graph of a given average degree, thereby proving the bound given by Theorem 1.2. The bound is trivially true for graphs on not more than 4 vertices. Fix n ∈ N, n ≥ 5, and ε > 0, and let G be any connected planar graph on n vertices of average degree at most 9/2 − ε.
Remarks
Noting the results on the 2-surviving rate for planar graphs without 3-, 4-, 5- or 6-cycles, respectively [3,11,10,5], one may ask the following question: Problem 4.1. Does there exist a function f : N \ {1, 2} → R_+ such that for any k ≥ 3 and any non-trivial planar graph G without k-cycles we have ρ_2(G) > f(k)?
Of course, such a function does exist provided that every planar graph has a positive 2-surviving rate, which is still open.
Figure 1. Illustration of the strategy.
Let us partition the vertex set of G into 4 subsets X, Y, Z and W. The set X is the set of all vertices of degree 1 or 2, while all the vertices of degree 3 are contained in the set Y. According to Lemma 3.2, the set W contains the vertices of degree higher than 4 and those vertices of degree 4 that have at least 2 neighbours of degree at least 6. The remaining vertices of degree 4 are contained in the sets Y and Z. Without much effort one may calculate that the average degree of the vertices in the set W is at least 9/2 (attained when each degree-4 vertex has 2 neighbours of degree 6 and each degree-6 vertex has 6 neighbours of degree 4). Similarly, the average degrees of the vertices in the sets X, Y and Z are bounded from below by 1, 3 and 4, respectively. Reducing the structural properties of the graph G to these degree conditions, one may estimate inequality (1) by solving the linear program (3) with the conditions

x + 3y + 4z + (9/2)w ≤ (9/2 − ε)n,    x, y, z, w ≥ 0.

For ε ∈ (0, 3/2] the minimum of (3) is obtained for x = z = 0 and y = (2/3)εn; then, for ε ∈ (0, 1] we have α_0 ≥ (9/2)ε, and for ε ∈ (1, 3/2] we have α_0 ≥ (9/2)ε − 1/n. For ε ∈ (3/2, 5/2] the minimum of (3) is obtained for w = z = 0 and n − y = x = ((2ε − 3)/6)n, which again gives α_0 ≥ (9/2)ε − 1/n. The above solutions of the linear program (3) yield the desired bound for the surviving rate ρ(G), which closes the proof of Theorem 1.2.
[1] L. Cai, W. Wang, The surviving rate of a graph for the firefighter problem, SIAM J. Discrete Math. 23 (2009) 1814-1826.
[2] R. Diestel, Graph Theory, 4th edition, Springer-Verlag, Heidelberg, 2010.
[3] L. Esperet, J. van den Heuvel, F. Maffray, F. Sipma, Fire containment in planar graphs, J. Graph Theory 73 (2013) 267-279.
[4] S. Finbow, G. MacGillivray, The firefighter problem: a survey of results, directions and questions, Australasian Journal of Combinatorics 43 (2009) 57-77.
[5] S. Finbow, J. Kong, W. Wang, The 2-surviving rate of planar graphs without 6-cycles, Theoret. Comput. Sci. 518 (2014) 22-31.
[6] P. Floderus, A. Lingas, M. Persson, Towards more efficient infection and fire fighting, International Journal of Foundations of Computer Science 24 (2013) 3-14.
[7] P. Gordinowicz, Planar graph is on fire, Theoret. Comput. Sci. 593 (2015) 160-164.
[8] B. Hartnell, Firefighter! An application of domination, presentation at the 25th Manitoba Conference on Combinatorial Mathematics and Computing, University of Manitoba, Winnipeg, Canada, 1995.
[9] J. Kong, W. Wang, X. Zhu, The surviving rate of planar graphs, Theoret. Comput. Sci. 416 (2012) 65-70.
[10] J. Kong, W. Wang, T. Wu, The 2-surviving rate of planar graphs without 5-cycles, J. Comb. Optim. 31 (2016) 1479-1492.
[11] J. Kong, W. Wang, L. Zhang, The 2-surviving rate of planar graphs without 4-cycles, Theoret. Comput. Sci. 457 (2012) 158-165.
[12] R. J. Lipton, R. E. Tarjan, A separator theorem for planar graphs, SIAM Journal on Applied Mathematics 36 (1979) 177-189.
[13] P. Pralat, Graphs with average degree smaller than 30/11 burn slowly, Graphs and Combinatorics 30 (2014) 455-470.
[14] P. Pralat, Sparse graphs are not flammable, SIAM Journal on Discrete Mathematics 27 (2013) 2157-2166.
DOI: 10.1016/j.cam.2018.10.005, arXiv:1712.03269
Numerical methods for thermally stressed shallow shell equations

Hangjie Ji
Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095, USA

Longfei Li
Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA

Keywords: von Karman equations; large deflection of shallow shells; coupled nonlinear PDE; biharmonic equations; mixed boundary conditions

Abstract. We develop efficient and accurate numerical methods to solve a class of shallow shell problems of the von Karman type. The governing equations form a fourth-order coupled system of nonlinear biharmonic equations for the transverse deflection and Airy's stress function. A second-order finite difference discretization with three iterative methods (Picard, Newton and Trust-Region Dogleg) is proposed for the numerical solution of the nonlinear PDE system. Three simple boundary conditions and two application-motivated mixed boundary conditions are considered. Along with the nonlinearity of the system, boundary singularities that appear when mixed boundary conditions are specified are the main numerical challenges. Two approaches that use either a transition function or local corrections are developed to deal with these boundary singularities. All the proposed numerical methods are validated using carefully designed numerical tests, where expected orders of accuracy and rates of convergence are observed. A rough run-time performance comparison is also conducted to illustrate the efficiency of our methods. As an application of the methods, a snap-through thermal buckling problem is considered. The critical thermal loads of shell buckling with various boundary conditions are numerically calculated, and snap-through bifurcation curves are also obtained using our numerical methods together with a pseudo-arclength continuation method. Our results are consistent with previous studies.
Introduction
High quality thin glass sheets are ubiquitously used in modern electronic devices such as smart phone screens and large TV displays. In order to maintain the quality of glass sheets, the manufacturing processes (e.g., Corning's revolutionary "Fusion" process [1]) need to avoid any imperfections that can cause deviations from a desired shape. Small non-idealities in manufacturing may produce non-uniform stress-free deflections that fail to meet the tightening specifications of the glass industry. For large thin glass sheets, variations can be introduced during the glass forming process and the subsequent cooling and transporting processes. The cooling process can yield heterogeneous "frozen-in" thermal stresses [2], and transporting by partially holding or clamping edges of a glass sheet [1] can introduce various boundary stresses. These thermal and boundary stresses can generate further deflections in the products. Therefore, there is a pressing need for further investigation of thermal-elastic deformations in shallow shells so that improved manufacturing procedures can be designed to minimize defects during cooling and transporting.
Over the years, numerous theoretical works have been developed on related areas of elasticity and solid mechanics; for example, theories on the thermal stability of regular-shaped structures like doubly curved, conical, spherical and cylindrical shells are developed in [3,4]. Many mathematical models have also been formulated to capture various aspects of shallow shells, among which models of von Karman type [5] provide a solid foundation for the characterization of shallow shells. The nonlinear governing equations concerning the transverse deflections of the shell and the Airy's stress function are able to characterize large shell deformations that are of primary interest in industrial applications. For a review of the geometrically nonlinear theory of shallow shells exhibiting large displacement, we refer the readers to [6,7] and the reference therein.
Incorporating thermal stresses into shallow shell models is not trivial, and much of the literature studies the nonlinear loaded shell problems and the thermoelastic problems separately [8][9][10]. The governing equations for a flat thin isotropic plate under a thermal stress can be easily derived from the von Karman theory [11]. Thermal buckling of plates and regular-shaped shells was also considered long ago in [3]. We note that the difference between a plate and a shell lies in the precast shape, which is flat for a plate and curved for a shell in the stress-free stage. Only recently did Abbott et al. develop the thermoelastic theory for nonlinear thin shells of general shapes subject to thermal stresses [2]; the model is a system of two biharmonic PDEs nonlinearly coupled together. Analytical solutions can hardly be obtained for this type of PDEs; therefore, numerical approaches are normally applied to investigate the solutions.
To the best of our knowledge, there are no existing numerical studies on solving the nonlinear shallow shell equations under thermal stresses developed in [2]. Nevertheless, a great number of numerical methods [12][13][14] have been proposed to solve the biharmonic equation, which is a fundamental part of the nonlinear shallow shell model. Common numerical methods can be applied successfully to solve the biharmonic equation with ordinary boundary conditions, such as Dirichlet and Neumann boundary conditions. However, when mixed boundary conditions are involved, it is well-known that standard numerical methods perform poorly for elliptic PDEs around boundary singularities, which are introduced by jump discontinuities in the mixed boundary conditions. In this case, both global methods (series-type methods, the Ritz method, etc.) and local methods (finite differences, finite elements, strip elements, etc.) suffer from loss of accuracy. A related benchmark problem, Motz's problem [15], which considers the Laplace equation with Neumann-Dirichlet mixed boundary conditions in a rectangular domain, can be used to reveal the loss of accuracy due to boundary singularities; interested readers are referred to [16] for an extensive survey on this topic.
To maintain the desired accuracy for the biharmonic equation with mixed boundary conditions, global methods usually require extremely high order approximations around the singularities; see for example the series-based method introduced in [17] to solve the biharmonic problem with mixed boundary conditions. Meanwhile, local methods need to be implemented with adaptive mesh refinement around the singularities or combined with singular function approximations, such as the numerical methods developed in [18][19][20] to study the vibration and buckling of plates. Even though standard local methods combined with local mesh refinement can be applied to a large variety of singular problems with fewer requirements, singular function methods are preferred since they are generally more efficient provided appropriate functions are chosen to fit the singularities. Incorporating these function approximations requires an understanding of the analytic forms of the boundary singularities. Among several special methods that take local corrections of solutions into consideration, we in particular mention the methods developed by Richardson [21] and Poullikkas et al. [22]. Richardson [21] applied the Wiener-Hopf method to obtain solutions to the biharmonic equation that involves clamped and simply supported mixed boundary conditions. Poullikkas et al. [22] combined the knowledge of fundamental solutions near the singularity with a least squares routine to determine unknown coefficients in the numerical approximation. There have been numerous other numerical approaches designed to prevent the loss of accuracy for mixed boundary conditions, such as the Galerkin method [23], the Rayleigh-Ritz variational method [24,25] and the domain decomposition method [26,27], to name just a few.
For coupled nonlinear problems of thin plates with large deflections similar to the shallow shell system that we are interested in solving, several finite difference and finite element techniques [28][29][30], boundary element methods [31] and Picard iterations [32] have been developed. In the study of nonlinear dynamics of shallow shells, different methods [33][34][35][36][37] have been applied to numerically solve the system of nonlinear PDEs with various boundary conditions.
In this paper, we focus on developing new efficient and accurate numerical methods to solve the type of nonlinear biharmonic PDEs developed in [2] that incorporates the thermal stresses. The boundary conditions we consider for the coupled system are both the standard simple boundary conditions derived from preserving the energy of the shell and the mixed boundary conditions motivated by engineering applications. We are particularly interested in the partially clamped mixed boundary conditions not only because it is more numerically challenging, but also because it is closely related to the glass manufacturing applications; for example, the glass sheets are partially clamped during the cooling and transporting process in Corning's Fusion technique [1]. We develop and compare three numerical techniques to solve the nonlinear biharmonic system iteratively, and propose two approaches to address the boundary singularities of the partially clamped mixed boundary conditions. In addition, strategies of regularizing the singular system with free boundary conditions are also proposed, noting that the biharmonic system is singular with free boundary conditions since the displacement is only determined up to an arbitrary plane. All our numerical methods are carefully validated with numerical convergence studies, and the various methods are compared with a rough run-time performance comparison. As an application of the proposed numerical methods, we solve a snap-through thermal buckling problem to numerically obtain the critical thermal loads for several boundary conditions. In conjunction with a pseudo-arclength continuation method [38], we are able to obtain snap-through bifurcation curves for those boundary conditions as well.
The remainder of the paper is organized as follows. In section 2, the model for elastic shallow shells subject to thermal stresses is formulated. Three types of simple boundary conditions and two types of mixed boundary conditions are introduced for the problem. In section 3, we propose three iterative schemes (Picard, Newton and Trust-Region Dogleg) based on a common finite difference discretization of the coupled biharmonic system to solve the governing equations. In particular, a transition function approach and a local asymptotic solution approach are developed for special treatments for boundary singularities of the mixed boundary conditions. In section 4, numerical results of the proposed approaches are presented, and the influences of thermal stresses, mixed boundary conditions and geometric nonlinearity to shallow shells are investigated via an example application problem.
Formulation
We consider an elastic thin shallow shell defined on a rectangular domain Ω = [x_a, x_b] × [y_a, y_b] with a precast shape w′_0(x′, y′) under the influence of a temperature field T′; all the primed variables are dimensional quantities. The governing equations of this problem consist of a coupled system of two biharmonic equations for the transverse deflection function w′(x′, y′) and the Airy stress function φ′(x′, y′) [7]:

∇⁴φ′ = −(1/2) Eh L[w′, w′] − Eh L[w′_0, w′] − ∇²N(T′),
D∇⁴w′ = L[φ′, w′] + L[φ′, w′_0] − (1/(1 − ν)) ∇²M(T′) + P′,
where the bilinear operator L is defined as

L[u, v] ≡ u_xx v_yy + u_yy v_xx − 2 u_xy v_xy.
Here h is the thickness of the shell, E is the Young's modulus, ν is the Poisson's ratio, D = Eh³/[12(1 − ν²)] is the bending stiffness and P′ accounts for any external load. In addition, the resultant thermal force N(T′) and thermal moment M(T′) are given by

N(T′) = Eα ∫_{−h/2}^{h/2} T′ dz′  and  M(T′) = Eα ∫_{−h/2}^{h/2} T′ z′ dz′,
where α is the coefficient of thermal expansion, and T′ is the temperature distribution [11]. We note that, for the special case with w′_0 ≡ 0, the governing equations reduce to a classical nonlinear model for thin plates. For a thin shallow shell, the thickness h is assumed to be small compared to the other dimensions; thus the temperature variations through the thickness can be ignored, namely T′ = T′(x′, y′). The thermal force and moment then reduce to N(T′) = EhαT′(x′, y′) and M(T′) = 0, and the model therefore simplifies to
(1/(Eh)) ∇⁴φ′ = −(1/2) L[w′, w′] − L[w′_0, w′] − α∇²T′,   (2.1a)
D∇⁴w′ = L[φ′, w′] + L[φ′, w′_0] + P′.   (2.1b)
The biharmonic-type coupled system (2.1) is a form of the von Karman nonlinear static shallow shell equations [7], and is valid for shallow shells with large transverse displacements.
To non-dimensionalize the nonlinear shell model, we follow the similar scalings for the nonlinear von Karman plate equations used in [39,40]; i.e.,
x′ = Lx,  y′ = Ly,  w′_0 = √(D/(Eh)) w_0,  w′ = √(D/(Eh)) w,  φ′ = Dφ,  T′ = (D/(αEhL²)) T,  P′ = (D/L⁴) √(D/(Eh)) P.
Substituting the scales into the model (2.1) leads to the dimensionless coupled system governing the displacement w and the Airy stress function φ,
∇⁴φ = −(1/2) L[w, w] − L[w_0, w] − f_φ,   (2.2a)
∇⁴w = L[w, φ] + L[w_0, φ] + f_w,   (2.2b)
where the forcing terms are given by f_φ = ∇²T and f_w = P. For a shallow shell with small deformations, the linear shallow shell theory is applicable and leads to a coupled system of linear partial differential equations,
∇⁴φ = −L[w_0, w] − f_φ,   (2.3a)
∇⁴w = L[w_0, φ] + f_w.   (2.3b)
Numerical solutions to this linear system can serve as an initial guess for the iterative methods of solving the nonlinear shell equations (2.2). For the shallow shell models (2.3) and (2.2), we first consider three types of commonly used boundary conditions; these simple boundary conditions are normally derived from conservation of energy [28]. Motivated by industrial applications [1,2], we are interested in understanding the effects of mixed boundary conditions on the static behavior of the shallow shell. To this end, two partially clamped mixed boundary conditions are also considered.
Simple boundary conditions
To be specific, the three simple boundary conditions considered are
• Clamped boundary conditions:
w = ∂w/∂n = 0,  φ = ∂φ/∂n = 0,   (2.4)
• Simply supported boundary conditions:
w = ∂²w/∂n² = 0,  φ = ∂²φ/∂n² = 0,   (2.5)
• Free boundary conditions:
∂²w/∂n² + ν ∂²w/∂t² = 0,  ∂/∂n [∂²w/∂n² + (2 − ν) ∂²w/∂t²] = 0,  φ = ∂φ/∂n = 0,   (2.6)
where n and t are the normal and tangential vectors to the boundary of the domain. The free boundary conditions must be complemented by a corner condition that imposes zero forcing at the corners of the rectangular region [28]:

∂²w/∂x∂y = 0.   (2.7)
Note that, similarly to the Poisson equation, the forcing term f of a biharmonic equation ∇⁴w = f with free boundary conditions has to satisfy a compatibility condition, ∫_Ω f dX = 0. It is also important to point out that, with the assumption of no temperature variation through the shell thickness, the resultant thermal moment M does not affect the boundary conditions, while the influences from other boundary constraints have been investigated in [41,42]. Rectangular shells with various mixed boundary conditions involving several combinations of the aforementioned simple boundary conditions have been studied analytically in [8]. Shells with partially clamped edges have a great number of applications in industry. For example, in the glass industry, large glass panels are sometimes partially clamped during the cooling and transport processes.
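The compatibility condition can be checked discretely before solving; a quick midpoint-rule check on a zero-mean forcing over the unit square (the test forcing is our own choice, not one from the paper):

```python
import math

def mean_zero_check(f, n):
    """Midpoint-rule approximation of the integral of f over the unit square."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f((i + 0.5) * h, (j + 0.5) * h)
    return total * h * h

# f has zero mean over [0,1]^2, so the discrete integral should vanish.
f = lambda x, y: math.sin(2 * math.pi * x) * math.sin(2 * math.pi * y)
print(abs(mean_zero_check(f, 32)) < 1e-12)  # True
```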
Mixed boundary conditions
In this work, we focus on rectangular shells partially clamped on two opposite edges with the rest of the boundary being either simply supported or free. Specifically, we divide the boundary Γ of the region Ω into two parts, Γ = Γ_c ∪ Γ_c̄, where Γ_c represents the collection of the center portions of the two opposite edges that are clamped. The rest of the boundary, Γ_c̄, is denoted by either Γ_s if simply supported or Γ_f if free. As illustrated in Figure 1, the two partially clamped mixed boundary conditions considered are

• Clamped-Supported (CS):
w = ∂w/∂n = 0,  φ = ∂φ/∂n = 0  on Γ_c,   (2.8a)
w = ∂²w/∂n² = 0,  φ = ∂²φ/∂n² = 0  on Γ_s.   (2.8b)
• Clamped-Free (CF):
w = ∂w/∂n = 0,  φ = ∂φ/∂n = 0  on Γ_c,   (2.9a)
∂²w/∂n² + ν ∂²w/∂t² = 0,  ∂/∂n [∂²w/∂n² + (2 − ν) ∂²w/∂t²] = 0,  φ = ∂φ/∂n = 0  on Γ_f.   (2.9b)
We note again that, for the CF boundary condition, the corner condition (2.7) is needed at each of the free corners for completeness. Mixed boundary conditions of this type are sometimes referred to as strongly mixed boundary conditions, since the boundary conditions change at inner points of the domain edges rather than at the domain vertices [43]. The sudden switch of boundary conditions at an interior point of a boundary edge introduces a jump discontinuity (singularity), as the two boundary conditions cannot both be satisfied at the point of discontinuity. The effects of all five boundary conditions on shallow shells will be demonstrated in numerical examples. The difficulties of the numerical computation lie in the nonlinearity of the shallow shell equations, and in the singularities induced by discontinuities in the mixed boundary conditions. The numerical approaches proposed below address both difficulties satisfactorily.
Numerical schemes
In this section, three iterative methods (Picard, Newton and Trust-Region Dogleg) will be proposed for the numerical solution of the coupled nonlinear system (2.2) on a rectangular domain Ω = [x_a, x_b] × [y_a, y_b]. All three iterative methods are based on a common spatial discretization of the coupled system utilizing a second-order accurate centered finite-difference scheme.
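The Picard scheme referred to here freezes the nonlinear terms at the current iterate and re-solves the resulting linear problem until the update stalls; the same fixed-point idea can be illustrated on a scalar toy problem (this sketch is our own and is not the shell solver itself):

```python
import math

def picard(g, x0, tol=1e-10, maxit=200):
    """Fixed-point (Picard) iteration x_{k+1} = g(x_k) until the update is below tol."""
    x = x0
    for _ in range(maxit):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

root = picard(math.cos, 1.0)
print(root)  # ~0.739085, the fixed point of cos
```

Convergence here relies on the map being a contraction near the fixed point; for the shell system, Newton and Trust-Region Dogleg iterations serve as more robust alternatives when this fails.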
Spatial discretization
To be specific, the equations are solved on a Cartesian grid G_N, with grid spacings h_x = (x_b − x_a)/N and h_y = (y_b − y_a)/N, for a positive integer N:
G_N = {x_i = (x_i, y_j) = (x_a + i h_x, y_a + j h_y) : i, j = −2, −1, 0, 1, . . . , N + 2}.   (3.1)
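The index range in (3.1), including its two ghost layers on each side, translates directly into code; a minimal 1-D analogue (the function name is ours):

```python
def grid_1d(a, b, n, nghost=2):
    """1-D grid with `nghost` ghost layers per side: x_i = a + i*h, i = -nghost..n+nghost."""
    h = (b - a) / n
    return [a + i * h for i in range(-nghost, n + nghost + 1)]

x = grid_1d(0.0, 1.0, 10)
print(len(x))       # 15 = 11 physical nodes + 2*2 ghost nodes
print(x[2], x[-3])  # 0.0 1.0, the physical endpoints
```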
Here i = (i, j) is a multi-index. We note that two layers of ghost points are also included at each boundary to aid the discretization, and we use h = min{h_x, h_y} to characterize the grid size. Let Φ_i and W_i be the numerical approximations to φ(x_i) and w(x_i), and denote W_{0i} = w_0(x_i), F_{φi} = f_φ(x_i) and F_{wi} = f_w(x_i) for notational brevity. For each grid index i, the spatially discretized approximation to the coupled system reads
∇⁴_h Φ_i = −(1/2) L_h[W_i, W_i] − L_h[W_{0i}, W_i] − F_{φi},   (3.2a)
∇⁴_h W_i = L_h[W_i, Φ_i] + L_h[W_{0i}, Φ_i] + F_{wi}.   (3.2b)
The discrete operators ∇⁴_h and L_h are the standard centered finite-difference approximations to ∇⁴ and L:

∇⁴_h U_i = (D_xx D_xx + 2 D_xx D_yy + D_yy D_yy) U_i,
L_h[U_i, V_i] = D_xx U_i D_yy V_i + D_yy U_i D_xx V_i − 2 D_xy U_i D_xy V_i,

where

D_xx U_i = (U_{i+1,j} − 2U_{i,j} + U_{i−1,j}) / h_x²,
D_yy U_i = (U_{i,j+1} − 2U_{i,j} + U_{i,j−1}) / h_y²,
D_xy U_i = (U_{i+1,j+1} − U_{i−1,j+1} − U_{i+1,j−1} + U_{i−1,j−1}) / (4 h_x h_y).
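The one-dimensional building blocks of ∇⁴_h can be checked against polynomials, for which the centered stencils are exact (the truncation error of D_xx involves the fourth derivative, and that of D_xx D_xx the sixth). A small sketch, assuming simple Python lists of nodal values:

```python
def dxx(u, i, h):
    """Centered second difference (second-order accurate)."""
    return (u[i + 1] - 2 * u[i] + u[i - 1]) / h**2

def dxxxx(u, i, h):
    """D_xx applied twice: the (1, -4, 6, -4, 1)/h^4 stencil."""
    return (u[i + 2] - 4 * u[i + 1] + 6 * u[i] - 4 * u[i - 1] + u[i - 2]) / h**4

h = 0.1
xs = [0.5 + k * h for k in range(-2, 3)]  # 5-point stencil centered at x = 0.5
u2 = [x**2 for x in xs]  # d^2/dx^2 of x^2 is 2, reproduced exactly
u4 = [x**4 for x in xs]  # d^4/dx^4 of x^4 is 24, reproduced exactly
print(dxx(u2, 2, h), dxxxx(u4, 2, h))
```

The full 2-D operators combine these with the mixed difference D_xy according to the formulas above.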
The discretized system of equations (3.2) can be written as two matrix equations:

M_{∇⁴_h} Φ = −(1/2) L_h[W, W] − L_h[W_0, W] − F_φ,   (3.3a)
M_{∇⁴_h} W = L_h[W, Φ] + L_h[W_0, Φ] + F_w.   (3.3b)
Here Φ denotes the column vector obtained by reshaping the grid function Φ_i, and similarly for all the other grid functions such as W, W_0, F_φ, F_w, etc. Let M_xx, M_yy and M_xy denote the matrices associated with the difference operators D_xx, D_yy and D_xy, respectively; the vector operators M_{∇⁴_h} and L_h can then be written as

L_h[U, V] = M_xx U • M_yy V + M_yy U • M_xx V − 2 M_xy U • M_xy V,
M_{∇⁴_h} = M_xx M_xx + 2 M_xx M_yy + M_yy M_yy,
where U and V denote any column vectors that are of the same size as Φ (and W). Here A • B represents the Hadamard (entrywise) product of two matrices (or vectors) of the same dimensions, and AB is the standard matrix multiplication.
To complete the statement of the discretized problem, appropriate discrete boundary conditions need to be applied to the matrix equation system (3.3). As is already noted in section 2, there are three simple boundary conditions and two partially clamped mixed boundary conditions that are considered for the shallow shell equations.
Numerical implementation of simple boundary conditions
The discretizations of the three simple boundary conditions are straightforward and their formulations are given by
• Supported: W_{i_b} = 0, D²_{n_{i_b}} W_{i_b} = 0, Φ_{i_b} = 0, D²_{n_{i_b}} Φ_{i_b} = 0, (3.4)

• Clamped: W_{i_b} = 0, D_{n_{i_b}} W_{i_b} = 0, Φ_{i_b} = 0, D_{n_{i_b}} Φ_{i_b} = 0, (3.5)

• Free: (D²_{n_{i_b}} + ν D²_{t_{i_b}}) W_{i_b} = 0, D_{n_{i_b}} (D²_{n_{i_b}} + (2 − ν) D²_{t_{i_b}}) W_{i_b} = 0, Φ_{i_b} = 0, D_{n_{i_b}} Φ_{i_b} = 0. (3.6)

Here i_b = (i_b, j_b) denotes the index of a boundary node, and n_{i_b} and t_{i_b} are the normal and tangential vectors at that node, respectively. The directional difference operator is defined by
D a = a 1 D x + a 2 D y ,
where a = (a 1 , a 2 ) is any given direction, and D x and D y are given by
D x U i = U i+1,j − U i−1,j 2h x and D y U i = U i,j+1 − U i,j−1 2h y .
Numerical implementation of mixed boundary conditions
The partially clamped mixed boundary conditions are less straightforward to implement, as the sudden switch of boundary conditions poses a boundary singularity at the point of jump discontinuity. To address this singularity and achieve second order spatial accuracy, we explore the following two approaches to remove the boundary discontinuity.
Transition function approach
The first approach considered removes the discontinuity at the continuous level by introducing a transition function that enables the boundary conditions to change smoothly from one to the other. For simplicity, we restrict the partially clamped region to the top and bottom boundaries of the square domain
Ω = [x a , x b ] × [y a , y b ].
The clamped region is defined by
Γ c = {(x, y) : y = y a or y b , x c − r c < x < x c + r c },
where x c and r c denote the x-coordinate of the center and the radius of the clamped region, respectively. The top-bottom and left-right boundaries are given respectively by
Γ t,b = {(x, y) : y = y a or y b , x a ≤ x ≤ x b } and Γ l,r = {(x, y) : x = x a or x b , y a ≤ y ≤ y b }.
We then define a transition function as follows:

ω(x) = 1 − (1/2) [tanh((|x − x_c| − r_c)/ε) + 1], (3.7)

where ε is a parameter that controls the width of the transition region; we set ε = 0.01 for the rest of this paper. We note that ω(x) = 1 when x is in the clamped region and ω(x) = 0 otherwise (see Fig. 2).
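A direct evaluation of the transition function, reading the tanh argument in (3.7) as scaled by the width ε; the patch center x_c = 0.5 and radius r_c = 0.1 below are illustrative values, not from the text:

```python
import numpy as np

# Transition function (3.7) with the blending width eps = 0.01 used in the
# text; x_c and r_c are example values for a clamped patch.
def omega(x, x_c=0.5, r_c=0.1, eps=0.01):
    return 1.0 - 0.5 * (np.tanh((np.abs(x - x_c) - r_c) / eps) + 1.0)

# omega ~ 1 inside the clamped region and ~ 0 well outside it:
assert abs(omega(0.5) - 1.0) < 1e-8
assert abs(omega(0.9)) < 1e-8
```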
With the introduction of ω(x), we redefine the two partially clamped mixed boundary conditions so that there is no discontinuity at the point where the boundary conditions switch. The mixed clamped-supported boundary conditions are redefined to be

w = 0, ∂²w/∂n² = 0, φ = 0, ∂²φ/∂n² = 0 on Γ_l,r, (3.8a)
w = 0, (1 − ω(x)) ∂²w/∂n² + ω(x) ∂w/∂n = 0 on Γ_t,b, (3.8b)
φ = 0, (1 − ω(x)) ∂²φ/∂n² + ω(x) ∂φ/∂n = 0 on Γ_t,b. (3.8c)

Meanwhile, the mixed clamped-free boundary conditions are redefined to be

∂²w/∂n² + ν ∂²w/∂t² = 0, ∂/∂n [∂²w/∂n² + (ν − 2) ∂²w/∂t²] = 0 on Γ_l,r, (3.9a)
(1 − ω(x)) [∂²w/∂n² + ν ∂²w/∂t²] + ω(x) w = 0 on Γ_t,b, (3.9b)
(1 − ω(x)) ∂/∂n [∂²w/∂n² + (ν − 2) ∂²w/∂t²] + ω(x) ∂w/∂n = 0 on Γ_t,b, (3.9c)
φ = 0, ∂φ/∂n = 0 on Γ. (3.9d)
The discrete version of the redefined mixed boundary conditions are readily obtained:
• Clamped-Supported (CS)

W_{i_b} = 0, D²_{n_{i_b}} W_{i_b} = 0, Φ_{i_b} = 0, D²_{n_{i_b}} Φ_{i_b} = 0 on Γ_l,r, (3.10a)
W_{i_b} = 0, (1 − ω(x_{i_b})) D²_{n_{i_b}} W_{i_b} + ω(x_{i_b}) D_{n_{i_b}} W_{i_b} = 0 on Γ_t,b, (3.10b)
Φ_{i_b} = 0, (1 − ω(x_{i_b})) D²_{n_{i_b}} Φ_{i_b} + ω(x_{i_b}) D_{n_{i_b}} Φ_{i_b} = 0 on Γ_t,b. (3.10c)

• Clamped-Free (CF)

(D²_{n_{i_b}} + ν D²_{t_{i_b}}) W_{i_b} = 0, D_{n_{i_b}} (D²_{n_{i_b}} + (2 − ν) D²_{t_{i_b}}) W_{i_b} = 0 on Γ_l,r, (3.11a)
(1 − ω(x_{i_b})) (D²_{n_{i_b}} + ν D²_{t_{i_b}}) W_{i_b} + ω(x_{i_b}) W_{i_b} = 0 on Γ_t,b, (3.11b)
(1 − ω(x_{i_b})) D_{n_{i_b}} (D²_{n_{i_b}} + (2 − ν) D²_{t_{i_b}}) W_{i_b} + ω(x_{i_b}) D_{n_{i_b}} W_{i_b} = 0 on Γ_t,b, (3.11c)
Φ_{i_b} = 0, D_{n_{i_b}} Φ_{i_b} = 0 on Γ. (3.11d)
Local asymptotic solution approach
The second approach considered removes the discontinuity at the discrete level by using local asymptotic analytical solutions. This approach is related to the Wiener-Hopf technique, which has been applied to determine exact solutions to many problems including biharmonic equations with mixed boundary conditions [21, 44]. Here we briefly describe this approach by demonstrating the numerical schemes for a simple biharmonic plate equation for the transverse deflection w(x, y),

∇⁴ w(x, y) = f_w(x, y), (x, y) ∈ Ω, (3.12)

which is subject to either CS or CF mixed boundary conditions. Since the equation for the Airy stress φ is not considered in (3.12), only the w parts of the boundary conditions (2.8) and (2.9) are needed.
The key point of this approach is to locally construct an analytical solution in a small neighborhood of the singular point that approximately satisfies the boundary conditions. The analytical solution consists of a solution to the homogeneous version of (3.12), ∇ 4 w = 0, that satisfies the boundary conditions exactly and a solution of the inhomogeneous equation (3.12) that asymptotically satisfies the boundary conditions. Then a leading order approximation of the analytical solution is used to design a special numerical scheme that bypasses the singular point with the assumption that the singular point lies on a grid point.
To be specific, we seek solutions of (3.12) in a half-disk domain B(O, r_ε) that is centered at the singular point O with a small radius r_ε. The equation (3.12) is then converted to polar coordinates (r, θ) for convenience. To simplify the discussion of the boundary conditions, we assume that the boundary is clamped at θ = 0, and is either supported or free at θ = π. Note that the normal derivatives in the boundary conditions become θ derivatives at θ = 0 and π, since the domain B(O, r_ε) is a half disk. Using separation of variables and the fact that w satisfies the clamped boundary conditions w = ∂w/∂θ = 0 at θ = 0, we write the solution of the homogeneous biharmonic equation ∇⁴w = 0 for an eigenvalue λ in polar coordinates as
w λ (r, θ) = r λ+1 f λ (θ), (3.13)
where the corresponding eigenfunction f λ (θ) is given by
f λ (θ) = A [cos ((λ + 1)θ) − cos ((λ − 1)θ)] + B sin ((λ + 1)θ) λ + 1 − sin ((λ − 1)θ) λ − 1 .
(3.14)
The unknown coefficients A and B will be determined by boundary conditions at θ = π.
For the CS boundary conditions, we have
w = ∂ 2 w ∂θ 2 = 0 on θ = π. (3.15)
With (3.15) applied to w λ defined in (3.13), the unknown coefficients in (3.14) are determined, and the eigenvalues and their eigenfunctions are found to be
f λ (θ) = cos(λ + 1)θ − cos(λ − 1)θ for λ = 1 2 , 3 2 , 5 2 · · · (λ − 1) sin(λ + 1)θ − (λ + 1) sin(λ − 1)θ for λ = 2, 3, 4 · · · . (3.16)
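A quick numerical check that the leading λ = 1/2 eigenfunction in (3.16) indeed satisfies the clamped conditions f = f′ = 0 at θ = 0 and the supported conditions f = f″ = 0 at θ = π:

```python
import numpy as np

# CS eigenfunction f_lam(theta) = cos((lam+1)theta) - cos((lam-1)theta)
# from (3.16), with its first two analytic derivatives, for lam = 1/2.
lam = 0.5

def f(t):   return np.cos((lam + 1) * t) - np.cos((lam - 1) * t)
def fp(t):  return -(lam + 1) * np.sin((lam + 1) * t) + (lam - 1) * np.sin((lam - 1) * t)
def fpp(t): return -(lam + 1) ** 2 * np.cos((lam + 1) * t) + (lam - 1) ** 2 * np.cos((lam - 1) * t)

assert abs(f(0.0)) < 1e-12 and abs(fp(0.0)) < 1e-12       # clamped edge at theta = 0
assert abs(f(np.pi)) < 1e-12 and abs(fpp(np.pi)) < 1e-12  # supported edge at theta = pi
```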
Motivated by the modified method of fundamental solutions (MFS) proposed in [22] where singular radial basis functions are integrated to approximate the biharmonic solution, we construct an approximate solution to the equation (3.12):
ŵ_cs(r, θ) = α₁ r^{3/2} f_{1/2}(θ) + α₂ r^{5/2} f_{3/2}(θ) + α₃ r^{7/2} f_{5/2}(θ) + α₄ r³ f₂(θ) + α₅ r⁴ f₃(θ) + (b₀/2) r² ln r + (a₀/2 − (b₀/2) ln r_ε) r². (3.17)
We note that the terms with coefficients α_i form a local approximation to the solution of the homogeneous version of (3.12) that satisfies the CS boundary conditions exactly, while the remaining terms represent an approximation to a solution of the inhomogeneous equation (3.12) that satisfies the CS boundary conditions asymptotically as r → 0. Similarly, for the CF mixed boundary conditions, applying the following free boundary conditions
∂ 2 w ∂θ 2 + ν ∂ 2 w ∂r 2 = ∂ ∂θ ∂ 2 w ∂θ 2 + (2 − ν) ∂ 2 w ∂r 2 = 0 on θ = π (3.18)
to w λ defined in (3.13), we determine the unknown coefficients A and B in (3.14), and thus find all the possible eigenvalues,
λ = (n − 1/2) ± iK for n ≥ 1, where tanh(Kπ) = (1 + ν)/2. (3.19)
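Taking the relation in (3.19) as tanh(Kπ) = (1 + ν)/2, K follows in closed form from the inverse hyperbolic tangent; ν = 0.3 below is an illustrative value, not from the text:

```python
import numpy as np

# Imaginary part K of the CF eigenvalues, solving tanh(K*pi) = (1 + nu)/2;
# nu = 0.3 is an illustrative Poisson ratio.
nu = 0.3
K = np.arctanh((1.0 + nu) / 2.0) / np.pi

assert abs(np.tanh(K * np.pi) - (1.0 + nu) / 2.0) < 1e-12
```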
Then we derive a local approximation to the solution of (3.12) associated with the CF boundary conditions,

ŵ_cf(r, θ) = α₁ r^{3/2+iK} f_{1/2+iK}(θ) + α₂ r^{3/2−iK} f_{1/2−iK}(θ) + b₀ r² ln r + a₀ r² + ã₀ r² θ. (3.20)

In this expansion, the first two terms with coefficients α_i are the first complex conjugate pair of the solutions that exactly satisfy the homogeneous biharmonic equation with the mixed CF boundary conditions. The remaining terms form a local approximation to the inhomogeneous equation that satisfies the CF boundary conditions approximately. In particular, the biharmonic terms r² and r²θ asymptotically satisfy the boundary conditions at θ = 0 as r → 0 with a truncation error of O(r²); those terms would satisfy the free boundary conditions exactly at θ = π if proper coefficients were chosen. The term r² ln r is included in the expansion because it is important for matching the local approximation in the singular subdomain to the outer solution at r = O(1).
Based on the idea from the Flexible Local Approximation MEthod (FLAME) [45] that overlapping "patches" with different stencils can be used to approximate local solutions, the analytical approximate solutions ŵ_cs(r, θ) given by (3.17) and ŵ_cf(r, θ) given by (3.20) are incorporated to remove the boundary singularity from the numerical scheme for the CS and CF mixed boundary conditions, respectively. For illustration purposes, we label the grid points near the boundary singularity O as shown in Fig. 3a and Fig. 3b, where p = 8 for the CS boundary conditions and p = 6 for the CF boundary conditions. This numerical boundary condition is then implemented at node 1 in both cases (see Fig. 3a and Fig. 3b). Note that Dirichlet boundary conditions are imposed at node O, and standard centered-difference schemes with appropriate numerical boundary conditions, as discussed in section 3.1, are used at all the other grid points. For example, the diamond-shaped stencils in Fig. 3a and Fig. 3b indicate all the nodes involved in the standard finite difference approximation of the biharmonic operator.
It is important to point out that, without the local asymptotic solution approach to remove the boundary singularity, traditional methods simply enforce one of the two boundary conditions involved in the mixed boundary conditions at the singular point and proceed with the standard numerical schemes and boundary conditions at all the grid points. In these traditional methods, the singularity, which is left untreated in the discretized system, deteriorates the order of accuracy of the whole system. For comparison purposes, a traditional approach that imposes the clamped boundary conditions at the singular point for both the CS and CF cases will be implemented, and its numerical results will be compared with those of the asymptotic analytical solution approach.
Initial guess
The system (3.3) together with the discrete boundary conditions is solved iteratively using one of the following algorithms. We note that all the iterative methods start with a given initial guess, denoted by (Φ⁰, W⁰). Unless otherwise noted, we use the precast shell shape W₀ as the initial guess W⁰, and use the solution of the following Airy stress equation

M_{∇⁴_h} Φ⁰ = −(1/2) L_h[W⁰, W⁰] − L_h[W₀, W⁰] − F_φ

as the initial guess for Φ.
Picard method
Motivated by [32], we propose a Picard-type iterative method to solve the matrix equations, described in Algorithm 1. It is important to note that the Picard method decouples the shallow shell equations by solving the two biharmonic equations (3.23) and (3.24) independently at each iteration step. Each of the biharmonic equations has a matrix dimension that is four times smaller than that of the original coupled system; it could therefore be more efficient in overall performance than the other two algorithms, which solve the coupled system as a single matrix equation. The efficiency of the Picard method is confirmed by the numerical tests presented in §4. We also have an option (δ ∈ [0, 1]) to treat the W equation semi-implicitly; the scheme is explicit for δ = 0 and implicit for δ = 1.
Data: initial guess (Φ⁰, W⁰); step = 0; converged = false;
while not converged and step < maxIter do
    solve the Φ equation:
        M_{∇⁴_h} Φ^{k+1} = −(1/2) L_h[W^k, W^k] − L_h[W₀, W^k] − F_φ; (3.23)
    solve the W equation:
        M_{∇⁴_h} W^{k+1} = δ L_h[W^{k+1}, Φ^{k+1}] + (1 − δ) L_h[W^k, Φ^{k+1}] + L_h[W₀, Φ^{k+1}] + F_w; (3.24)
    if ||Φ^{k+1} − Φ^k||_∞ + ||W^{k+1} − W^k||_∞ < tol then converged = true; end
    prepare for next iteration step: Φ^k = Φ^{k+1}, W^k = W^{k+1}, step++;
end
if converged then solutions obtained: Φ = Φ^k, W = W^k; else iteration failed after max number of iteration steps reached; end

Algorithm 1: A Picard-type iterative method for the coupled system (3.3), where tol is the tolerance and maxIter is the maximum number of iterations allowed. The nonlinear term in the W equation (3.24) is treated semi-implicitly, with δ ∈ [0, 1] representing the degree of implicitness.
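The structure of Algorithm 1 can be sketched on a small model problem. The miniature below is an assumption-laden sketch, not the paper's implementation: it uses supported boundary conditions on interior unknowns, where the discrete biharmonic with w = ∇²w = 0 on the boundary reduces to the squared 5-point Laplacian, and runs the explicit (δ = 0) Picard iteration; the grid size, forcings and precast shape are illustrative choices.

```python
import numpy as np

# Miniature of Algorithm 1 (explicit Picard, delta = 0) on the unit square.
n = 15                              # interior points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

I = np.eye(n)
T = (np.eye(n, k=1) - 2.0 * I + np.eye(n, k=-1)) / h**2
D1 = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h)
Mxx, Myy, Mxy = np.kron(T, I), np.kron(I, T), np.kron(D1, D1)
B = (Mxx + Myy) @ (Mxx + Myy)       # discrete biharmonic under supported BCs

def Lh(u, v):                       # bilinear form L_h[u, v], entrywise products
    return (Mxx @ u) * (Myy @ v) + (Myy @ u) * (Mxx @ v) - 2.0 * (Mxy @ u) * (Mxy @ v)

W0 = (0.1 * np.sin(np.pi * X) * np.sin(np.pi * Y)).ravel()   # precast shape
Fphi, Fw = np.ones(n * n), np.ones(n * n)

Phi_k, W_k, converged = np.zeros(n * n), W0.copy(), False
for step in range(200):             # maxIter = 200, tol = 1e-11
    Phi_new = np.linalg.solve(B, -0.5 * Lh(W_k, W_k) - Lh(W0, W_k) - Fphi)  # (3.23)
    W_new = np.linalg.solve(B, Lh(W_k, Phi_new) + Lh(W0, Phi_new) + Fw)     # (3.24)
    converged = np.abs(Phi_new - Phi_k).max() + np.abs(W_new - W_k).max() < 1e-11
    Phi_k, W_k = Phi_new, W_new
    if converged:
        break

assert converged
# The fixed point satisfies the Phi equation (3.2a) up to iteration tolerance:
res = B @ Phi_k + 0.5 * Lh(W_k, W_k) + Lh(W0, W_k) + Fphi
assert np.abs(res).max() < 1e-5
```

The dense factorization inside the loop is repeated for clarity; in practice B would be factored once (or stored sparse), which is what makes the decoupled Picard solves cheap relative to the full Jacobian solves.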
Newton's method
The most obvious approach to solving a nonlinear system of equations is Newton's method. To this end, we also develop a Newton solver for the shallow shell equations. We rewrite the equations (3.3) as

F(X) = 0, (3.25)

where

X = (Φᵀ, Wᵀ)ᵀ and F(X) = ( M_{∇⁴_h} Φ + (1/2) L_h[W, W] + L_h[W₀, W] + F_φ ; M_{∇⁴_h} W − L_h[W, Φ] − L_h[W₀, Φ] − F_w ).
The key for Newton's method, as well as the Trust-Region Dogleg method discussed below in §3.7, to work efficiently for problems with a large number of grid points is the ability to evaluate the Jacobian matrix of F(X) efficiently. Fortunately, in our case we are able to determine the analytical expression of the Jacobian matrix.
With the introduction of a matrix function
M L h (U) = diag(M xx U)M yy + diag(M yy U)M xx − 2diag(M xy U)M xy ,
the bilinear operator can be written in terms of a matrix product
L h [U, V] = M L h (U)V. Here diag(V)
represents the diagonal matrix with the elements of vector V on the main diagonal. The Jacobian matrix of F(X) is therefore readily obtained:
J(X) = ∂F(X)/∂X = [ M_{∇⁴_h} , M_{L_h}(W) + M_{L_h}(W₀) ; −M_{L_h}(W) − M_{L_h}(W₀) , M_{∇⁴_h} − M_{L_h}(Φ) ]. (3.26)
Our Newton solver is summarized in Algorithm 2.
Data: given initial guess X⁰ = ((Φ⁰)ᵀ, (W⁰)ᵀ)ᵀ; step = 0; converged = false;
while not converged and step < maxIter do
    solve the linear system J(X^k) ∆X = −F(X^k) for the Newton step ∆X, and update X^{k+1} = X^k + ∆X;
    if ||Φ^{k+1} − Φ^k||_∞ + ||W^{k+1} − W^k||_∞ < tol then converged = true; end
    prepare for next iteration step: X^k = X^{k+1}, step++;
end
if converged then solutions obtained: X = X^k; else iteration failed after max number of iteration steps reached; end

Algorithm 2: Newton's method for the coupled system (3.3), where tol is the tolerance and maxIter is the maximum number of iterations allowed.
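The analytical Jacobian (3.26) can be checked numerically: since F(X) is quadratic in X, a centered directional difference reproduces J(X)V to roundoff. The sketch below assembles M_{L_h}(U) from the diag(·) formula above on a small illustrative grid with random data (an assumption-laden test harness, not the paper's solver):

```python
import numpy as np

# Verify the block Jacobian (3.26) against a centered directional difference.
rng = np.random.default_rng(0)
n, h = 8, 1.0 / 9.0
I = np.eye(n)
T = (np.eye(n, k=1) - 2.0 * I + np.eye(n, k=-1)) / h**2
D1 = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h)
Mxx, Myy, Mxy = np.kron(T, I), np.kron(I, T), np.kron(D1, D1)
Mbi = Mxx @ Mxx + 2.0 * Mxx @ Myy + Myy @ Myy
m = n * n

def MLh(u):
    """Matrix of the linear map v -> L_h[u, v]."""
    return (np.diag(Mxx @ u) @ Myy + np.diag(Myy @ u) @ Mxx
            - 2.0 * np.diag(Mxy @ u) @ Mxy)

def F(Phi, W, W0, Fphi, Fw):
    Lh = lambda u, v: MLh(u) @ v
    return np.concatenate([
        Mbi @ Phi + 0.5 * Lh(W, W) + Lh(W0, W) + Fphi,
        Mbi @ W - Lh(W, Phi) - Lh(W0, Phi) - Fw,
    ])

def J(Phi, W, W0):
    return np.block([
        [Mbi, MLh(W) + MLh(W0)],
        [-MLh(W) - MLh(W0), Mbi - MLh(Phi)],
    ])

Phi, W, W0 = rng.standard_normal((3, m))
Fphi, Fw = rng.standard_normal((2, m))
V = rng.standard_normal(2 * m)
t = 1e-3
dF = (F(Phi + t * V[:m], W + t * V[m:], W0, Fphi, Fw)
      - F(Phi - t * V[:m], W - t * V[m:], W0, Fphi, Fw)) / (2.0 * t)
# F is quadratic, so the centered difference matches J V to roundoff:
assert np.allclose(dF, J(Phi, W, W0) @ V, rtol=1e-6, atol=1e-6)
```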
Trust-Region Dogleg method
For comparison purposes, we also solve the discretized system (3.2a)-(3.2b) using the built-in function fsolve of MATLAB (The MathWorks, Inc., Natick, MA). The underlying algorithm that we choose when using fsolve is the Trust-Region Dogleg Method, a variant of the Powell dogleg method [46]. The key difference between this method and Newton's method lies in the procedure for computing the step ∆X. In contrast to Newton's method, which computes the step as in equation (3.27), the Trust-Region Dogleg Method constructs steps from a convex combination of a Cauchy step (a step along the steepest descent direction) and a Gauss-Newton step. The trust-region technique improves robustness and is able to handle the case when the Jacobian matrix is singular. Further details about this method can be found in the documentation of MATLAB's Optimization Toolbox [47].
Displacement regularization for free boundary conditions
We note that the displacement equation (a biharmonic equation) subject to free boundary conditions is singular, since the displacement is only determined up to an arbitrary plane c₁x + c₂y + c₃ (the cᵢ's are arbitrary constants). In addition, similar to a Poisson equation with Neumann boundary conditions, the biharmonic equation is solvable only if the right-hand side satisfies a compatibility condition. In order to solve this singular system, one needs to eliminate three equations and replace them with equations that set the values of w at three points. Instead of picking the equations to be replaced, we prefer a different approach that is better conditioned. This approach is motivated by the method used by Henshaw and Petersson [48] to regularize the pressure Poisson equation with Neumann boundary conditions; it is a crucial step of the split-step scheme proposed by those authors for solving the incompressible Navier-Stokes equations with no-slip wall boundary conditions. Let the biharmonic equation for the displacement with free boundary conditions be denoted as a matrix equation
AW = b,(3.29)
where the matrix A is singular with a three-dimensional null space. Since the solution is determined up to an arbitrary plane, the right null space of A is spanned by Q = [x, y, r], where x and y are column vectors obtained by reshaping the x and y coordinates of all the grid points and r is the vector with all components equal to one. Instead of solving the singular equation (3.29), we seek solutions of the augmented system
A Q Q T 0 3×3 W a = b 0 3×1 . (3.30)
It is well known that the saddle point problem (3.30) is non-singular and has a unique solution [49]. The last three equations (Qᵀ W = 0₃ₓ₁) set the mean values of xw, yw and w to zero.
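The same bordering idea can be illustrated on a simpler singular operator: the 1D Neumann Laplacian, the pressure-Poisson analog mentioned above, whose null space is spanned by the constant vector (the sizes and right-hand side below are illustrative):

```python
import numpy as np

# 1D Neumann Laplacian: singular, with null vector q = ones (A @ q = 0).
n = 50
A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A[0, 0] = A[-1, -1] = -1.0           # homogeneous Neumann closure
q = np.ones(n)

x = np.linspace(0.0, 1.0, n)
b = np.cos(2.0 * np.pi * x)
b -= b.mean()                        # enforce the compatibility condition q^T b = 0

# Bordered (saddle point) system, the analog of (3.30):
Aug = np.block([[A, q[:, None]], [q[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(Aug, np.concatenate([b, [0.0]]))
u, mult = sol[:n], sol[n]

assert np.abs(A @ u - b).max() < 1e-9   # original singular equation satisfied
assert abs(u.sum()) < 1e-9              # mean-zero constraint from the border row
assert abs(mult) < 1e-9                 # multiplier vanishes for compatible b
```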
Numerical results
We now present the results of a sequence of simulations to demonstrate the properties of our numerical approaches for the shallow shell equations. We begin with mesh refinement studies to illustrate some basic properties of our approach. Two simple tests for solving a single biharmonic equation are considered first, because the accurate solution of a biharmonic equation is an essential component of the coupled system. The next set of tests is designed for the coupled system. We perform mesh refinement studies first for the simplified linear system and then for the nonlinear system using all three of the aforementioned iterative schemes. The efficiency of the iterative schemes is also compared. In order to study the effects of boundary conditions, a numerical example of a nonlinear shell with a precast shell shape and localized thermal forcing is considered with all the proposed simple and mixed boundary conditions (2.4)-(2.9). Finally, as an application of the numerical methods, we study the snap-through thermal buckling problem with an unstressed shell shape. A pseudo-arclength continuation (PAC) method [38] is utilized to find the snap-through bifurcation; one of our iterative methods is used to solve the resulting system at each step of the continuation method.
For simplicity, unless otherwise noted all the test problems considered are on a unit square domain, i.e., Ω = [x a , x b ] × [y a , y b ] with x a = y a = 0 and x b = y b = 1, and the partially clamped region on the boundary for the two mixed boundary conditions are both assumed to be Γ c = {(x, y) : y = 0 or 1, 0.4 < x < 0.6}.
Mesh refinement study
We use the method of manufactured solutions to construct exact solutions of test problems by adding forcing functions to the governing equations. The forcing is specified so that a chosen function becomes an exact solution to the forced equations. Here the approach is used to verify the order of accuracy of the numerical solutions of: (I) a single biharmonic equation; (II) the linear coupled system and (III) the nonlinear coupled system. Numerical solutions subject to all of the five boundary conditions (2.4)-(2.9) are obtained separately.
Biharmonic equation
We note that solving a biharmonic equation plays a vital role in all the iterative algorithms proposed for the numerical solution of the shallow shell equations. The Picard method essentially solves two biharmonic equations at each iteration, and for the Newton and Trust-Region Dogleg methods the discretized biharmonic operator forms the diagonal blocks of the Jacobian matrix (3.26) that are inverted at each step. Given its importance, the accuracy of the numerical solution of a single biharmonic equation
∇ 4 w = f w (4.1)
is verified first. The exact solution w e (x, y) of the biharmonic equation is chosen to be either of the following:
1. trigonometric test
w_e(x, y) = sin⁴(2π (x − x_a)/(x_b − x_a)) sin⁴(2π (y − y_a)/(y_b − y_a)),

2. polynomial test

w_e(x, y) = (1/100) [ (x − x_c)(x − x_a)(x − x_b)(y − y_c)(y − y_a)(y − y_b) / (l_x³ l_y³) ]⁷,

where x_c = (x_a + x_b)/2, y_c = (y_a + y_b)/2, l_x = (x_b − x_a)/3 and l_y = (y_b − y_a)/3. Both exact solutions satisfy all the boundary conditions. The forcing term is then given by
f w (x, y) = ∇ 4 w e ,
which is plotted in Fig. 4 for both the trigonometric and polynomial tests. The accuracy of the biharmonic solution is illustrated in Fig. 5 for all the boundary conditions. The first plot in the panel shows the numerical solution of the trigonometric test; the remaining plots show the numerical error for the various boundary conditions. The error at grid point i is given by E(w_i) = w_e(x_i) − W_i. Here we observe that the errors for all the boundary conditions are well behaved, in that their magnitude is small and smooth throughout the domain including the boundaries. The behavior of the errors in the polynomial test is similar. A mesh refinement study is shown in Fig. 6 for all the boundary conditions using the grids G_N defined in (3.1) with N = 10 × 2^j, j = 1, 2, · · · , 6. The maximum-norm errors ||E(w)||_∞ against the grid size for all boundary conditions, together with a second-order reference curve, are plotted in log-log scale in Fig. 6. The results show the expected second-order accuracy for all five choices of boundary conditions, and in particular for the case of free boundary conditions; the problem would otherwise be singular without the technique introduced in §3.8. The accuracy result is consistent with the truncation error of our centered finite difference discretization.
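The second-order behavior can be reproduced in miniature for the trigonometric test by comparing the discrete biharmonic of w_e with its analytic value at a few interior points; the step sizes and sample points below are illustrative choices (a sketch on the unit domain, so w_e = sin⁴(2πx) sin⁴(2πy)):

```python
import numpy as np

# Analytic derivatives of s(t) = sin^4(2*pi*t), via s = 3/8 - cos(2at)/2 + cos(4at)/8.
a = 2.0 * np.pi
s = lambda t: np.sin(a * t) ** 4
s2 = lambda t: 2.0 * a**2 * (np.cos(2 * a * t) - np.cos(4 * a * t))
s4 = lambda t: -8.0 * a**4 * np.cos(2 * a * t) + 32.0 * a**4 * np.cos(4 * a * t)
f_exact = lambda x, y: s4(x) * s(y) + 2.0 * s2(x) * s2(y) + s(x) * s4(y)

def biharm_h(g, x, y, h):
    """Pointwise discrete biharmonic (DxxDxx + 2 DxxDyy + DyyDyy) g."""
    d4 = lambda f: (f(-2) - 4 * f(-1) + 6 * f(0) - 4 * f(1) + f(2)) / h**4
    d4x = d4(lambda k: g(x + k * h, y))
    d4y = d4(lambda k: g(x, y + k * h))
    d2x2y = (g(x + h, y + h) + g(x + h, y - h) + g(x - h, y + h) + g(x - h, y - h)
             - 2 * (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h))
             + 4 * g(x, y)) / h**4
    return d4x + 2 * d2x2y + d4y

w = lambda x, y: s(x) * s(y)
pts = [(0.3, 0.4), (0.15, 0.7), (0.55, 0.25)]
err = lambda h: max(abs(biharm_h(w, x, y, h) - f_exact(x, y)) for x, y in pts)

ratio = err(0.02) / err(0.01)       # halving h should cut the error ~4x
assert 3.0 < ratio < 5.0
```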
Asymptotic analytical solution approach for mixed boundary conditions
As an example to show the convergence property of the asymptotic analytical solution approach for mixed boundary conditions, we consider a simple test problem consisting of the biharmonic equation (4.1) with a constant external forcing f w ≡ 1 on the unit square domain [0, 1] × [0, 1]. The square shell is assumed to be partially clamped on the right half of its boundary, i.e., Γ c = {y = 0, 0.5 ≤ x ≤ 1} ∪ {y = 1, 0.5 ≤ x ≤ 1} ∪ {x = 1, 0 ≤ y ≤ 1}, and the rest of the boundary is either supported or free. This problem without boundary singularities corresponds to the plate sagging problem under gravity, which has been well-studied in classic mechanical engineering textbooks [5].
We first consider the CS mixed boundary conditions; that is, the rest of the boundary is simply supported. The reduced rigidity caused by the simply supported boundary conditions enforced on the left half of the boundary should lead to larger displacements at the center of the plate compared with the case of fully clamped boundary conditions [5]. The contour of the displacement function w(x, y) shown in the top-left image of Fig. 7 agrees with this result: the displacement in the clamped half is significantly smaller than that in the simply supported half. We then consider the CF mixed boundary conditions. The numerical solution for the displacement w is shown in the top-right image of Fig. 7. The free boundary condition on the left half of the boundary has an obvious effect on the sagging of the plate; the lowest point is now located on the free edge instead of at the center of the domain.
A mesh refinement study is also performed for this test problem to reveal the order of accuracy of the local asymptotic solution approach for dealing with mixed boundary conditions. For comparison purposes, we also solve the test problem using traditional finite difference methods; namely, no special treatment is given to the singular point on the boundary. The convergence results for both mixed boundary conditions are shown in the bottom image of Fig. 7. We observe for both CS and CF boundary conditions that the traditional method has a linear rate of convergence, while with the flexible local approximation scheme in (3.22) applied to the inner node adjacent to the singular point, the resulting numerical scheme exhibits a second-order convergence rate. Both the transition function approach and the local asymptotic solution approach are thus effective in maintaining second-order accuracy for solving the biharmonic equation with the mixed boundary conditions. Given that its implementation is much more straightforward, we adopt the transition function approach for the rest of the paper.
Linear coupled system
To illustrate the effectiveness of the iterative schemes for solving the coupled system, we now discuss the linear shallow shell equations (2.3), which are obtained by dropping the nonlinear terms in the nonlinear shallow shell equations (2.2). This linear coupled system is considered here because it is simple and applicable to shallow shells with small deformations (the linear shallow shell theory). Moreover, the solution of this linear system can be utilized as an initial guess in the iterative process of solving the nonlinear shell equations.
Again, the method of manufactured solutions is used here. The exact solutions chosen for this test are given by

φ_e(x, y) = sin⁵(2π (x − x_a)/(x_b − x_a)) sin⁵(2π (y − y_a)/(y_b − y_a)), (4.2a)
w_e(x, y) = sin⁴(2π (x − x_a)/(x_b − x_a)) sin⁴(2π (y − y_a)/(y_b − y_a)). (4.2b)
All five boundary conditions are satisfied by the exact solutions. The precast shell shape is specified as
w 0 (x, y) = sin 2π x − x a x b − x a sin 2π y − y a y b − y a . (4.3)
The forcing functions f_φ(x, y) and f_w(x, y) are obtained accordingly by substituting φ_e, w_e and w_0 into the system, and their contour plots are shown in Fig. 8. We solve the linear coupled system using all three iterative schemes together with all five boundary conditions for completeness. The initial guess used to start the iteration is as discussed in Section 3.4. The results from the Picard method (Algorithm 1) with the implicit factor δ = 0 are presented in this section. Solutions obtained using the other iterative algorithms are similar, and they are not included here to save space. The results for the φ component are collected in Fig. 9, and those for the w component are collected in Fig. 10. We see that the errors of both the φ and w components, subject to all five choices of boundary conditions, are well behaved; the errors are small and smooth throughout the domain including the boundaries.
A careful mesh refinement study is also performed to test the order of accuracy for the linear coupled system. The series of refined grids are G N 's with N = 10 × 2 j (j = 1, 2, . . . , 6). As expected, second-order spatial accuracy is achieved by all three algorithms as is shown in Fig. 11. It is worth noting that the regularization for the w equation with free boundary conditions works well for the linear coupled system regardless of the numerical methods used for iterations.
It is also observed that the Picard method, both explicit and implicit, is more efficient than the Newton and trust-region-dogleg methods (fsolve). At each iteration step, the Picard method solves two N × N matrix equations, while a 2N × 2N (Jacobian) matrix equation is solved with the other two methods. For large N, Newton's method and the trust-region-dogleg method are more computationally expensive at each step, which may result in a longer overall time to solve the system even though Newton's method converges at a faster rate than the Picard method. In addition, computer memory can also be an issue when using the Newton solver and fsolve on a high-resolution grid. For example, both the Newton solver and fsolve encountered out-of-memory errors when solving this problem on grid G_640 (the finest grid considered for the mesh refinement study) using a single processor of a Linux desktop computer equipped with 64GB of memory; the Picard method, however, handles this resolution with no problem. For this reason, the data points for G_640 are absent in plots (c) and (d) of Fig. 11.
Nonlinear coupled system
As a final mesh refinement study, we test our numerical methods by solving the nonlinear shallow shell equations (2.2). The exact solutions and the precast shell shape are specified to be the same as the linear coupled system test, which are given in equations (4.2) and (4.3). The forcing terms are different due to the nonlinear terms, and they are visualized in Fig. 12.
The numerical solutions for the nonlinear case are accurate for all the boundary conditions and all numerical methods considered in this paper. As illustrated in Fig. 13 and Fig. 14, the numerical errors are well behaved in the sense that their magnitudes are small and smooth throughout the domain including the boundaries. Importantly, the technique employed to regularize the displacement equation with free boundary conditions performs well in the context of the nonlinear coupled system, too. The second-order spatial accuracy of all the numerical schemes is again confirmed by the mesh refinement results shown in Fig. 15. We note that the Picard method with free boundary conditions demands a sufficiently fine grid to converge; thus, the mesh refinement study starts from grid G_40 for δ = 0 and from G_80 for δ = 1.

Table 1: Comparison of the run-time performance of the explicit Picard method versus Newton's method for the nonlinear shallow shell equations with free boundary conditions. The column labeled "s/step" gives the CPU time in seconds per iteration step; the column labeled "steps" gives the number of steps taken; and the column labeled "rate" gives the estimated rate of convergence of the corresponding method.
We end the mesh refinement study by providing a rough comparison of the run-time CPU costs of the explicit Picard method (δ = 0) and Newton's method on grids of increasing resolution. In this comparison, the solving process of both methods is profiled, and their performance for solving the nonlinear test problem with free boundary conditions is summarized in Table 1. In this table, we list the average CPU time in seconds per step (s/step), the number of steps taken and the estimated rate of convergence; the convergence rate is approximated, following [50, 51], by the average of
p k+1 = ln ||X k+1 − X k || ∞ ln (||X k − X k−1 || ∞ ) (4.4)
for all steps. Recall that, in contrast to Newton's method, which solves one matrix equation involving the Jacobian matrix, the Picard method solves two matrix equations of much smaller dimension. Therefore, the Picard method is expected to be faster than Newton's method per step, and this is observed in Table 1. However, since Newton's method converges at second order and the Picard method at a slower, first-order rate, the Picard method takes more steps to converge. Overall, Newton's method can still beat the Picard method despite being slower at each step; see, for example, the case with grid G_160 in Table 1. As the grid is refined, Newton's method takes much more time per step, since the size of the Jacobian matrix becomes too big and eventually surpasses the capacity of our computational resources. For the case of G_320, the Picard method is faster than Newton's method both per step and in total time (s/step × steps); and for the case of G_640, Newton's method encounters an out-of-memory issue while the Picard method still works fine. It is important to note that the performance comparison conducted here is only a rough one, which can be affected by many factors, such as the quality of the initial guess. In addition, many improvements can be employed to speed up the iteration, such as Anderson acceleration [52]. The memory issue of Newton's method can also be alleviated by switching the solver for the linear system; solvers based on iterative schemes, such as the biconjugate gradient stabilized method, can be more suitable for problems with large matrix dimensions than the direct QR solver used in this paper. But all these numerical techniques are beyond the scope of this paper.
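The rate estimate (4.4) is straightforward to compute from the recorded iterate differences; for a synthetic quadratically convergent sequence (d_{k+1} = d_k², an illustrative input) it returns p = 2:

```python
import math

# Average convergence rate (4.4) from successive iterate-difference norms.
def conv_rate(diffs):
    ps = [math.log(diffs[k + 1]) / math.log(diffs[k]) for k in range(len(diffs) - 1)]
    return sum(ps) / len(ps)

d = [1e-1, 1e-2, 1e-4, 1e-8]   # ||X^{k+1} - X^k||_inf for a quadratic method
assert abs(conv_rate(d) - 2.0) < 1e-12
```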
Effects of boundary conditions and localized thermal source
In this section, we solve a realistic problem from an industrial application to demonstrate the effectiveness of our scheme in capturing the influence of thermal stresses, precast shell shape, and various boundary supports on the final shell shape. In the process of manufacturing curved glass sheets, the "frozen-in" thermal strain due to a non-ideal cooling history can cause small non-uniformities in the final configuration of the glass sheets. How the thermal stress interplays with the precast shape and the boundary conditions is therefore of great interest. Motivated by this application, we consider a thin shallow shell with a nonuniform precast shape and various boundary conditions. In order to separate the impact of the thermal stress from that of the geometry of the precast shape, we focus on the case where the shell is subjected to a localized thermal loading. The influence of localized thermal heating on plate thermal buckling and post-buckling has been investigated in [53] for plates that are either simply supported or clamped. Here, our new numerical methods allow a more thorough study.
We assume that there is no external forcing on the shell displacement (f_w = 0), and prescribe the precast shell shape w_0 and the thermal loading f_φ using the following functions:
w_0 = 0.1 − 0.4(y − 0.5)^2,  f_φ = 32634.2 max{−100.0((x − 0.75)^2 + (y − 0.25)^2) + 1, 0}.    (4.5)
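As a quick sanity check on (4.5), the two profiles can be tabulated on a grid. The unit-square domain and the 101×101 sampling below are assumptions for illustration (suggested by the centres (0.5, 0.5) and (0.75, 0.25) used in the paper's examples), not statements from the paper.

```python
import numpy as np

# Evaluate the precast shape w0 and the localized thermal loading f_phi
# of eq. (4.5). Domain [0,1]x[0,1] and grid size are assumed here.
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
w0 = 0.1 - 0.4 * (y - 0.5) ** 2
f_phi = 32634.2 * np.maximum(-100.0 * ((x - 0.75) ** 2 + (y - 0.25) ** 2) + 1.0, 0.0)

# w0 varies only in y (a cylindrical precast shape), while f_phi is
# supported on a disk of radius 0.1 centred at (0.75, 0.25), i.e. the
# bottom-right region of the domain.
print(f_phi.max(), round(np.count_nonzero(f_phi) / f_phi.size, 3))
```

The printed fraction confirms that the thermal loading is indeed localized: it is nonzero on only about π·0.1² ≈ 3% of the domain.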
The contour plots of w_0 and f_φ are shown in Fig. 16. This problem is again solved using all three proposed numerical methods subject to each of the five boundary conditions. From the numerical results, we observe that while the unstressed shell shape w_0 defined in (4.5) is uniform in the x direction (a cylindrical shape), both the displacement w and the Airy stress function φ are asymmetric for all boundary conditions, due to the localized thermal forcing at the bottom-right corner of the domain (see the right image of Fig. 16). This suggests that thermal effects can cause small non-uniformities in the shell.
For this problem, we are interested in how the final shell shape (w + w_0) is affected by the various boundary conditions. In Fig. 17, we collect the numerical solutions for the final shape subject to these boundary conditions. Since the results of all the numerical methods are similar, only those from Newton's method are presented here. From Fig. 17, it can be seen that the boundary conditions have a significant impact on the final shell shape. In particular, we observe that the free boundary conditions (Fig. 17(c)) and the CF boundary conditions (Fig. 17(e)) introduce the largest deflections, while the clamped boundary conditions (Fig. 17(b)) preserve the precast shell shape w_0 best. Our numerical results for the clamped and supported boundary conditions agree qualitatively with the results reported in [53], where a similar problem with rectangular plates under localized thermal stresses subject to these boundary conditions is investigated. Results for the other three boundary conditions, which are made possible by our numerical methods, are not available in the literature for comparison.
Snap-through bifurcations
The critical thermal loading at the snap-through bifurcation is important for understanding the maximum temperature a shell or plate structure can sustain. In [4,54], thermal buckling with various types of shell shapes and boundary supports has been studied. It has been shown in [3,55] that the deflection of a perfectly flat plate develops a symmetric pitchfork bifurcation as the temperature is elevated, while a shallow shell undergoes an asymmetric saddle-node bifurcation at a relatively high critical temperature. As an application of our proposed numerical methods for the nonlinear shallow shell equations, the snap-through thermal buckling problem is studied numerically for each of the boundary conditions, so as to demonstrate the effectiveness and accuracy of our methods. Specifically, we solve the nonlinear shallow shell equations (2.2) with the following specifications
f_φ(x, y) = ξ,  f_w(x, y) = 0,  w_0(x, y) = 0.3(1 − (x − 0.5)^2 − (y − 0.5)^2),    (4.6)
where ξ is a spatially uniform thermal loading. The snap-through bifurcation can be traced numerically using path-following (parameter continuation) techniques. The natural choice of continuation parameter for this problem is the constant thermal loading ξ. The idea of parameter continuation is, given the solution at ξ_0, to find a solution of the governing equations at ξ_0 + ∆ξ for a small perturbation ∆ξ, and then to proceed step by step to obtain a global solution path; the solution at each step is computed with our iterative methods using the solution from the previous step as the initial guess. However, it is well known that natural parameter continuation may fail at some step due to singularities on the solution curve (e.g., folds or bifurcation points) [38]. Even though the locations of the folding points can be estimated from the bifurcation branches obtained by natural parameter continuation prior to failure, we would like a more accurate estimate, since the folding points represent the critical thermal loading of our problem. In order to capture the critical thermal loading, the so-called Pseudo-Arclength Continuation (PAC) method is used to circumvent the simple-fold difficulties [38]. The main idea of PAC is to drop the natural parametrization by ξ and use another parametrization. A detailed discussion of the PAC method can be found in the lecture notes by Keller [38]. Here, to be self-contained, we briefly describe the PAC method as used for our problem. To simplify notation, we denote the shallow shell equations (2.2) together with the specifications given in (4.6) as

G(w, φ, ξ) = 0.    (4.7)
Instead of tracing out a solution path by incrementing the natural parameter, we treat ξ as an unknown and solve (4.7) together with a scalar normalization equation,

N(w, φ, ξ; ∆s) ≡ ẇ_p (w − w_p) + φ̇_p (φ − φ_p) + ξ̇_p (ξ − ξ_p) − ∆s = 0,    (4.8)

where N(w, φ, ξ; ∆s) = 0 is the equation of a plane perpendicular to the tangent (ẇ_p, φ̇_p, ξ̇_p) at a distance ∆s from a known solution (w_p, φ_p, ξ_p). This plane intersects the solution path provided ∆s and the curvature of the path are not too large. Here ∆s can be regarded as the increment of the pseudo-arclength along the path. By solving (4.7) and (4.8) step by step, with the solution (w_p, φ_p, ξ_p) from the previous step as the initial guess, we are able to circumvent the simple-fold difficulties and obtain a complete bifurcation branch. We note that at each step the equations (4.7) and (4.8) are solved using the iterative solvers proposed in this paper. In addition, in order to speed up the continuation process, ∆s is set dynamically in our implementation; that is, the step ∆s automatically decreases when approaching a folding point and increases when leaving one. The bifurcation results obtained using the PAC method and the Newton solver for the shallow shell equations are shown in Fig. 18. It is worth noting that with PAC we encounter none of the simple-fold difficulties that had plagued the natural parameter continuation, and that the data points cluster towards the folding points as a result of the dynamic strategy for setting ∆s. For all five boundary conditions, we collect the bifurcation diagrams of the displacement at the center of the domain in Fig. 18 (left), and show the bifurcation diagrams in terms of the L^2 norm of the shell displacement in Fig. 18 (right). The saddle-node bifurcation curves in Fig. 18 (left) all qualitatively agree with the load-deflection curves for thermal buckling of shallow shells in the literature [3,55].
The results reveal the effect of the boundary conditions on the critical thermal loading for snap-through buckling. It is clearly seen from the numerical results that the critical thermal loading for the fully clamped boundary supports is much smaller than for the other ones, while the supported boundary conditions possess the largest critical thermal loading. Letting ξ_bc denote the critical thermal loading for each boundary condition, the locations of the folding points in the left plot of Fig. 18 indicate the following relation: ξ_clamped < ξ_cf < ξ_cs < ξ_free < ξ_supported.
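The predictor-corrector structure behind (4.7)-(4.8) can be illustrated on a scalar toy problem. The sketch below (an assumption for illustration, not the paper's shell solver) traces the branch of G(u, ξ) = u³ − u + ξ = 0, which has a simple fold at ξ = 2/(3√3) ≈ 0.385: at each step, the predictor moves along the tangent of the curve, and a Newton corrector enforces G = 0 together with the pseudo-arclength constraint. The continuation rounds the fold without failing, which is exactly where natural parameter continuation in ξ breaks down.

```python
import numpy as np

# Toy pseudo-arclength continuation for G(u, xi) = u^3 - u + xi = 0.
def G(u, xi):   return u**3 - u + xi
def Gu(u, xi):  return 3.0 * u**2 - 1.0
def Gxi(u, xi): return 1.0

def tangent(u, xi, prev):
    # Null direction of [Gu, Gxi]; orientation follows the previous tangent.
    t = np.array([-Gxi(u, xi), Gu(u, xi)])
    t /= np.linalg.norm(t)
    return t if t @ prev > 0 else -t

x = np.array([0.0, 0.0])             # (u, xi) on the branch: G(0, 0) = 0
prev, ds = np.array([1.0, 1.0]), 0.05
path = [x.copy()]
for _ in range(40):
    t = tangent(x[0], x[1], prev)
    v = x + ds * t                    # predictor along the tangent
    for _ in range(20):               # Newton corrector for [G; N] = 0, cf. (4.8)
        F = np.array([G(v[0], v[1]), t @ (v - x) - ds])
        J = np.array([[Gu(v[0], v[1]), Gxi(v[0], v[1])], [t[0], t[1]]])
        v = v - np.linalg.solve(J, F)
    x, prev = v, t
    path.append(x.copy())

xis = [p[1] for p in path]
# max(xis) approaches the fold at 2/(3*sqrt(3)) ~ 0.385; the final xi has
# passed the fold and decreased again, i.e. the branch was followed around it.
print(round(max(xis), 3), xis[-1] < 0)
```

The key point, visible in the corrector's Jacobian, is that the augmented 2×2 system stays nonsingular at the fold (where Gu = 0), which is why PAC passes through simple folds that stop natural continuation.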
Conclusions
We have developed novel finite-difference-based iterative schemes to solve a von Karman type nonlinear shallow shell model (2.2) that incorporates thermal stresses. The boundary conditions considered for the system are three simple boundary conditions and two application-motivated mixed boundary conditions. To deal with the boundary singularities introduced by the mixed boundary conditions while maintaining second-order accuracy, a transition function approach and a local asymptotic solution approach are proposed. All of the proposed numerical methods for solving the shallow shell equations are verified to be second-order accurate by numerical mesh refinement studies.
As a demonstration of the efficiency and accuracy of our numerical schemes for engineering applications, we also solve two realistic shallow shell problems, namely the localized thermal source problem and the snap-through thermal buckling problem. Our numerical results directly reveal the combined effects of the unstressed shell shape, the thermal stresses, and the boundary conditions on the shallow shell system. In addition, for the snap-through thermal buckling problem, we are able to numerically obtain the snap-through bifurcations using the pseudo-arclength continuation method, with the equations at each continuation step solved by one of our proposed numerical methods. Our results for both problems are consistent with existing studies.
A number of interesting questions remain to be answered. While this paper is devoted to the static von Karman shell equations, we are also interested in extending our methods to the corresponding dynamical systems. For instance, in [28] Bilbao studied the numerical stability of a family of finite difference schemes for the dynamical plate equations. To obtain numerical stability, special treatment of our methods will be needed when applying them to simulate the dynamic evolution of shell structures with mixed boundary conditions.
For the study of the influences of thermal stresses and the precast shell shape, our investigation has focused on the forward problem proposed in [2], which involves numerically solving the governing equations to obtain the overall deflection. Related inverse problems would also be interesting and challenging. For instance, we may ask: is it possible to recover the precast shape given the deflection and the thermal stresses, or to obtain the thermal stresses given the final deflection and the precast shape? Whether these inverse problems are well posed is still unclear, and some regularization may be needed in order to formulate an optimization problem for solving them.
Figure 1: Illustration of the mixed boundary conditions. Left: clamped-supported boundary condition. Right: clamped-free boundary condition.
Figure 2: The plot of the smooth transition function defined in (3.7) for 0 ≤ x ≤ 1. It provides a smooth transition from the non-clamped region to the clamped region (0.4 < x < 0.6). The parameters are specified as x_c = 0.5 and r_c = 0.1.
Figure 3: Schematic illustration of the stencil of the numerical scheme near the singular point O. The boundary condition (3.22) derived from the local asymptotic solution approach for the corresponding boundary conditions is implemented at node 1, with its stencil enclosed by the dashed line. Dirichlet boundary conditions are imposed at O. The standard centered-difference schemes or the standard numerical boundary conditions are implemented at all the other nodes.
Fig. 3b for each case. Applying (3.17) for the CS boundary conditions or (3.20) for the CF boundary conditions to the grid points adjacent to the singular point O, we obtain a set of algebraic equations w_i = ŵ_bc(r_i, θ_i), bc = cs or cf, (3.21), where w_i represents the numerical approximation of the solution w(x, y) at node i and (r_i, θ_i) denote its polar coordinates (see Fig. 3a and Fig. 3b). Here i = 1, ..., 8 for the CS case and i = 1, ..., 6 for the CF case. Eliminating the unknown coefficients in (3.17) and (3.20) using the algebraic equations (3.
Data: given initial guess (Φ^0, W^0). Result: numerical solutions to the shallow shell equations (3.3a) & (3.3b): (Φ, W). Initialization: set Φ^k = Φ^0, W^k = W^0, converged = false and step = 0; while not converged and step < maxIter do: solve the Φ equation:
Data: given initial guess X^0 = [Φ^T, W^T]^T. Result: numerical solutions to the shallow shell equations (3.3a) & (3.3b): X = [Φ^T, W^T]^T. Initialization: set X^k = X^0, converged = false and step = 0; while not converged and step < maxIter do: ∆X = −J(X^k)^{−1} F(X^k); (3.27) X^{k+1} = X^k + ∆X. (3.28)
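The Newton update (3.27)-(3.28) can be sketched on a small toy system. The 2×2 system below (intersection of a circle and a line) is an assumption chosen purely for illustration; the paper applies the same update to the discretized shell equations with X = [Φ^T, W^T]^T and a much larger Jacobian.

```python
import numpy as np

# Toy system F(X) = 0: a circle of radius 2 intersected with the line x = y.
def F(X):
    x, y = X
    return np.array([x**2 + y**2 - 4.0, x - y])

def J(X):
    x, y = X
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0, -1.0]])

X = np.array([1.0, 2.0])                    # initial guess
for _ in range(20):
    dX = -np.linalg.solve(J(X), F(X))       # cf. (3.27)
    X = X + dX                              # cf. (3.28)
    if np.max(np.abs(dX)) < 1e-12:          # stop once the update stalls
        break

print(X)  # converges to (sqrt(2), sqrt(2))
```

The stopping test on ||∆X||_∞ mirrors the convergence monitoring used in the mesh refinement study; for the shell equations the expensive part is assembling and factoring J, which is why the per-step cost grows so quickly with the grid.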
Figure 4: Plots of the forcing term f_w in the biharmonic equation (4.1) for (left) the trigonometric test and (right) the polynomial test.
Figure 5: Contour plots showing the solution and errors of the biharmonic equation (4.1) with various boundary conditions on grid G_640 for the trigonometric test.
Figure 6: A mesh refinement study for the numerical solutions of the biharmonic equation (L_∞ norm).
Figure 7: Numerical results for the test problem ∇^4 w = −1 subject to partially clamped boundary conditions on the right portion of the edges (marked with solid lines). The solution for the CS mixed boundary conditions is shown in the top-left image, and that for the CF mixed boundary conditions is shown in the top-right image. The bottom image shows the mesh refinement studies for the local singularity treatment (3.22) and the traditional finite difference schemes with both CS and CF mixed boundary conditions.
Figure 8: Contour plots of the forcing terms f_φ and f_w given by (4.2) for the linear coupled system (2.3).
Figure 9: Contour plots showing the solution and errors of the φ component of the linear coupled system with various boundary conditions on grid G_640. The tolerance for this simulation is tol = 10^{−6}. Results obtained from the Picard method are shown here; those of the other two algorithms are similar.
Figure 10: Contour plots showing the solution and errors of the w component of the linear coupled system with various boundary conditions on grid G_640. The tolerance for this simulation is tol = 10^{−6}. Results obtained from the Picard method are shown here; those of the other two algorithms are similar.
Figure 11: A mesh refinement study for the numerical solutions of the linear coupled system with various methods and boundary conditions. Errors are in the maximum norm L_∞. The tolerance for this simulation is tol = 10^{−6}.
Figure 12: Contour plots showing the forcing terms in the nonlinear coupled system (2.2).
Figure 13: Contour plots showing the φ component of the solution and errors of the nonlinear coupled system with various boundary conditions on grid G_640. The tolerance for this simulation is tol = 10^{−6}. The Picard method is used here; results of the other two algorithms are similar.
Figure 14: Contour plots showing the w component of the solution and errors of the nonlinear coupled system (2.2) with various boundary conditions on grid G_640. The tolerance for this simulation is tol = 10^{−6}. The Picard method is used here; results of the other two algorithms are similar.
Figure 15: A mesh refinement study for the numerical solutions of the nonlinear coupled system with various methods and boundary conditions. Errors are in the maximum norm L_∞. The tolerance for this simulation is tol = 10^{−6}.
Figure 16: Contour plots of (left) the nonuniform precast shell shape and (right) the localized thermal loading.
Figure 17: Contour plots showing the final shape w + w_0 governed by the nonlinear shell equations (2.2) with the precast shell shape w_0 and thermal loading f_φ in (4.5), subject to various boundary conditions.
Figure 18: Snap-through bifurcations for the nonlinear shallow shell equations subject to constant thermal loading and various boundary conditions. Left: bifurcation diagram of the displacement w at the center of the domain against the thermal loading ξ for the various boundary conditions. Right: bifurcation diagram of the L^2 norm of the displacement w against the thermal loading ξ for the various boundary conditions.
References
[1] How it works: Corning's fusion process. URL https://www.corning.com/worldwide/en/innovation/the-glass-age/science-of-glass/how-it-works-cornings-fusion-process.html
[2] J. Abbott, L. Button, M. Bandegi, A. Bardall, V. Barra, S. Bohun, P. Buchak, D. Crowdy, P. Dubovski, H. Ji, et al., Methods for thin nearly flat elastic shells with stretching and bending.
[3] M. Mahayni, Thermal buckling of shallow shells, International Journal of Solids and Structures 2 (2) (1966) 167-180.
[4] E. A. Thornton, Thermal buckling of plates and shells, Applied Mechanics Reviews 46 (10) (1993) 485-506.
[5] P. Howell, G. Kozyreff, J. Ockendon, Applied Solid Mechanics, no. 43, Cambridge University Press, 2009.
[6] V. Z. Vlasov, General Theory of Shells and Its Applications in Engineering, Vol. 99, National Aeronautics and Space Administration, 1964.
[7] E. Ventsel, T. Krauthammer, Thin Plates and Shells: Theory, Analysis, and Applications, CRC Press, 2001.
[8] S. Timoshenko, S. Woinowsky-Krieger, Theory of Plates and Shells, 2nd Edition, McGraw-Hill, 1959.
[9] L. H. Donnell, Beams, Plates and Shells, McGraw-Hill, 1976.
[10] R. B. Hetnarski, M. R. Eslami, G. Gladwell, Thermal Stresses: Advanced Theory and Applications, Vol. 158, Springer, 2009.
[11] T. Tauchert, Thermal stresses in plates-statical problems, Thermal Stresses I, 1 (1986) 23-141.
[12] B. Palsev, On the expansion of the Dirichlet problem and a mixed problem for biharmonic equation into a series of decomposed problems, Journal of Comput. Math. and Math. Physics 6 (1) (1966) 43-51.
[13] D. Q. A, L. T. Son, Iterative method for solving a problem with mixed boundary conditions for biharmonic equation, Advances in Applied Mathematics and Mechanics 1 (5) (2009) 683-698.
[14] A. Hadjidimos, The numerical solution of a model problem biharmonic equation by using extrapolated alternating direction implicit methods, Numerische Mathematik 17 (4) (1971) 301-317.
[15] H. Motz, The treatment of singularities of partial differential equations by relaxation methods, Quart. Appl. Math. 4 (1946) 371-377.
[16] Z.-C. Li, T.-T. Lu, Singularities and treatments of elliptic boundary value problems, Mathematical and Computer Modelling 31 (8) (2000) 97-145.
[17] Y. Narita, Application of a series-type method to vibration of orthotropic rectangular plates with mixed boundary conditions, Journal of Sound and Vibration 77 (3) (1981) 345-355.
[18] F. Eastep, F. Hemmig, Natural frequencies of circular plates with partially free, partially clamped edges, Journal of Sound and Vibration 84 (3) (1982) 359-370.
[19] T. Mizusawa, T. Kajita, Vibration and buckling of rectangular plates with nonuniform elastic constraints in rotation, International Journal of Solids and Structures 23 (1) (1987) 45-55.
[20] S. Fan, Y. Cheung, Flexural free vibrations of rectangular plates with complex support conditions, Journal of Sound and Vibration 93 (1) (1984) 81-94.
[21] S. Richardson, A 'stick-slip' problem related to the motion of a free jet at low Reynolds numbers, in: Mathematical Proceedings of the Cambridge Philosophical Society, Vol. 67, Cambridge University Press, 1970, pp. 477-489.
[22] A. Poullikkas, A. Karageorghis, G. Georgiou, Methods of fundamental solutions for harmonic and biharmonic boundary value problems, Computational Mechanics 21 (4-5) (1998) 416-423.
[23] C. Chia, Non-linear vibration of anisotropic rectangular plates with non-uniform edge constraints, Journal of Sound and Vibration 101 (4) (1985) 539-550.
[24] A. Leissa, P. Laura, R. Gutierrez, Vibrations of rectangular plates with nonuniform elastic edge supports, Journal of Applied Mechanics 47 (4) (1980) 891-895.
[25] K. Liew, C. Wang, Vibration analysis of plates by the pb-2 Rayleigh-Ritz method: Mixed boundary conditions, reentrant corners, and internal curved supports, Journal of Structural Mechanics 20 (3) (1992) 281-292.
[26] K. Liew, K. Hung, K. Lam, On the use of the substructure method for vibration analysis of rectangular plates with discontinuous boundary conditions, Journal of Sound and Vibration 163 (3) (1993) 451-462.
[27] K. Liew, K. Hung, M. Lim, On the use of the domain decomposition method for vibration of symmetric laminates having discontinuities at the same edge, Journal of Sound and Vibration 178 (2) (1994) 243-264.
[28] S. Bilbao, A family of conservative finite difference schemes for the dynamical von Karman plate equations, Numerical Methods for Partial Differential Equations 24 (1) (2008) 193-216.
[29] A. Leung, S. Mao, A symplectic Galerkin method for non-linear vibration of beams and plates, Journal of Sound and Vibration 183 (3) (1995) 475-491.
[30] P. Ribeiro, M. Petyt, Geometrical non-linear, steady state, forced, periodic vibration of plates, part I: model and convergence studies, Journal of Sound and Vibration 226 (5) (1999) 955-983.
[31] W. Wang, X. Ji, M. Tanaka, A dual reciprocity boundary element approach for the problems of large deflection of thin elastic plates, Computational Mechanics 26 (1) (2000) 58-65.
[32] A. Uscilowska, Implementation of meshless method for a problem of a plate large deflection, in: Computational Modelling and Advanced Simulations, Springer, 2011, pp. 225-239.
[33] E. Dowell, C. Ventres, Modal equations for the nonlinear flexural vibrations of a cylindrical shell, International Journal of Solids and Structures 4 (10) (1968) 975-991.
[34] C. Chia, Nonlinear analysis of doubly curved symmetrically laminated shallow shells with rectangular planform, Archive of Applied Mechanics 58 (4) (1988) 252-264.
[35] C. Chuen-Yuan, Non-linear free vibration and postbuckling of symmetrically laminated orthotropic imperfect shallow cylindrical panels with two adjacent edges simply supported and the other edges clamped, International Journal of Solids and Structures 23 (8) (1987) 1123-1132.
[36] A. Abe, Y. Kobayashi, G. Yamada, Non-linear vibration characteristics of clamped laminated shallow shells, Journal of Sound and Vibration 234 (3) (2000) 405-426.
[37] L. Kurpa, G. Pilgun, M. Amabili, Nonlinear vibrations of shallow shells with complex boundary: R-functions method and experiments, Journal of Sound and Vibration 306 (3) (2007) 580-600.
[38] H. B. Keller, Lectures on Numerical Methods in Bifurcation Problems, Springer-Verlag, 1987.
[39] E. H. Dowell, Aeroelasticity of Plates and Shells, Vol. 1, Springer Science & Business Media, 1974.
[40] T. C. Lyman, L. N. Virgin, R. B. Davis, Application of continuation methods to uniaxially loaded postbuckled plates, Journal of Applied Mechanics 81 (3) (2014) 031010.
[41] D. N. Arnold, R. S. Falk, Edge effects in the Reissner-Mindlin plate theory, in: Presented at the Winter Annual Meeting of the American Society of Mechanical Engineers, Vol. 1, 1989, p. I2.
[42] M. Qatu, A. Leissa, Effects of edge constraints upon shallow shell frequencies, Thin-Walled Structures 14 (5) (1992) 347-379.
[43] Q. A. Dang, H. H. Truong, Simple iterative method for solving problems for plates with partial internal supports, Journal of Engineering Mathematics 86 (1) (2014) 139-155.
[44] I. D. Abrahams, A. M. Davis, S. G. L. Smith, Matrix Wiener-Hopf approximation for a partially clamped plate, The Quarterly Journal of Mechanics and Applied Mathematics 61 (2) (2008) 241-265.
[45] I. Tsukerman, A class of difference schemes with flexible local approximation, Journal of Computational Physics 211 (2) (2006) 659-699.
[46] M. J. D. Powell, A Fortran subroutine for solving systems of nonlinear algebraic equations, in: Numerical Methods for Nonlinear Algebraic Equations, 1970, Ch. 7.
[47] MathWorks, Optimization Toolbox: User's Guide (R2017a), 2017.
[48] W. D. Henshaw, N. A. Petersson, A split-step scheme for the incompressible Navier-Stokes equations, in: M. M. Hafez (Ed.), Numerical Simulation of Incompressible Flows, World Scientific, 2003, pp. 108-125.
[49] M. Benzi, G. H. Golub, J. Leisen, Numerical solution of saddle point problems, Acta Numerica (2005) 1-137.
[50] S. Weerakoon, T. Fernando, A variant of Newton's method with accelerated third-order convergence, Applied Mathematics Letters 13 (2000) 87-93.
[51] A. Cordero, J. R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Applied Mathematics and Computation 190 (2007) 686-698.
[52] H. F. Walker, P. Ni, Anderson acceleration for fixed-point iterations, SIAM Journal on Numerical Analysis 49 (4) (2011) 1715-1735.
[53] R. Kumar, L. Ramachandra, B. Banerjee, Semi-analytical approach for thermal buckling and postbuckling response of rectangular composite plates subjected to localized thermal heating, Acta Mechanica 228 (5) (2017) 1767-1791.
[54] T. R. Tauchert, Thermally induced flexure, buckling, and vibration of plates, Applied Mechanics Reviews 44 (8) (1991) 347-360.
[55] K. D. Murphy, D. Ferreira, Thermal buckling of rectangular plates, International Journal of Solids and Structures 38 (22) (2001) 3979-3994.
[
"Graphs in which the Maxine heuristic produces a maximum independent set",
"Graphs in which the Maxine heuristic produces a maximum independent set"
] | [
"Benjamin Lantz [email protected] \nDepartment of Mathematics\nUniversity of Rhode Island Kingston\n02881RI\n"
] | [
"Department of Mathematics\nUniversity of Rhode Island Kingston\n02881RI"
] | [] | The residue of a graph is the number of zeros left after iteratively applying the Havel-Hakimi algorithm to its degree sequence. Favaron, Mahéo, and Saclé showed that the residue is a lower bound on the independence number. The Maxine heuristic reduces a graph to an independent set of size M . It has been shown that given a graph G, M is bounded between the independence number and the residue of a graph for any application of the Maxine heuristic. We improve upon a forbidden subgraph classification of graphs such that M is equal to the independence number given by Barrus and Molnar in 2015. | null | [
"https://arxiv.org/pdf/1910.06201v1.pdf"
] | 204,512,438 | 1910.06201 | aa4b98d47cd305162d55ebb66d513133c4b38e02 |
Graphs in which the Maxine heuristic produces a maximum independent set
14 Oct 2019 October 15, 2019
Benjamin Lantz [email protected]
Department of Mathematics
University of Rhode Island
Kingston, RI 02881
Introduction
We will be considering simple graphs. We let N (v) represent the neighborhood of a vertex v in a graph, and let u ∼ v mean that u and v are adjacent. For such a graph G and a subset of vertices U , let G[U ] be the subgraph induced on the set U . For a set of graphs S, a graph G is said to be S-free if no graph in S appears as an induced subgraph of G.
Given a degree sequence d = (d 1 , d 2 , . . . , d n ), an iterative step of the Havel-Hakimi algorithm, developed independently by Havel [4] and Hakimi [5], reduces d to d 1 = (d 2 − 1, d 3 − 1, . . . , d d1+1 − 1, d d1+2 , . . . , d n ). After reordering the entries to be non-increasing, the algorithm iterates until no positive entries are present. The algorithm arose to determine when a degree sequence is graphic: that is, a list of integers d is graphic if and only if the Havel-Hakimi algorithm terminates in a list of zeros. The number of these zeros is said to be the residue of the degree sequence, and the residue of a graph G, denoted R(G), is the residue of the degree sequence of G. The residue is of interest because of its connection to the independence number of a graph, α(G). In 1988, the conjecture-making computer program Graffiti [6] proposed the following theorem.

Theorem 1.1 ([2]). For every graph G, R(G) ≤ α(G).
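The reduction just described translates directly into code. A minimal sketch (the function name is ours, not from the paper):

```python
def residue(degrees):
    """Residue of a graphic degree sequence: apply Havel-Hakimi steps
    until no positive entries remain, then count the zeros left over."""
    d = sorted(degrees, reverse=True)
    while d and d[0] > 0:
        d1, rest = d[0], d[1:]
        if d1 > len(rest) or rest[d1 - 1] < 1:
            raise ValueError("sequence is not graphic")
        # subtract 1 from the next d1 entries, then restore sorted order
        d = sorted([x - 1 for x in rest[:d1]] + rest[d1:], reverse=True)
    return len(d)
```

For example, the 5-cycle C5 has degree sequence (2, 2, 2, 2, 2) and residue 2, matching α(C5) = 2.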
This result was proven by Favaron et al. in 1991 and improved upon by Griggs and Kleitman [3], Triesch [8], and Jelen [9] in the 1990s. Determining the independence number is NP-hard, but since it takes only O(E) steps to determine the residue, where E is the number of edges in a graph, it is of interest to know how well R(G) approximates α(G) and when the bound is realized.
To further illustrate the relationship between the residue and the independence number, we can consider the Maxine heuristic, which is the process of iteratively deleting vertices of maximum degree until an independent set of vertices is realized [3]. We will call M the size of the independent set achieved by the Maxine heuristic and note that this is clearly a lower bound on the independence number. Note that the heuristic depends on our choice of deleted vertices, and M can vary accordingly. It was shown by Griggs and Kleitman [3] that

Theorem 1.2 ([3]). If M is the size of the independent set produced by any application of the Maxine heuristic for a graph G, then R(G) ≤ M ≤ α(G).
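The Maxine heuristic itself is equally short. The sketch below (an illustration of ours, using an adjacency-dictionary representation) deletes an arbitrary vertex of maximum degree at each step; different tie-breaking choices can give different values of M:

```python
def maxine(adj):
    """One run of the Maxine heuristic: repeatedly delete a vertex of
    maximum degree until no edges remain; return the surviving set."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while adj:
        v = max(adj, key=lambda u: len(adj[u]))
        if not adj[v]:            # maximum degree 0: an independent set remains
            return set(adj)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return set()

# C5 as an adjacency dictionary; any run returns an independent set of size 2
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
```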
Thus if R(G) = α(G) for some G, then every application of the Maxine heuristic must achieve a maximum independent set.
A vertex in a graph is said to have the Havel-Hakimi property if it is of maximum degree and its neighbors are of maximal degree, i.e. the deletion of this vertex corresponds to the reduction of the degree sequence by one step of the Havel-Hakimi algorithm. Not every graph has a vertex with this property, but every degree sequence has a realization that has such a vertex [7]. If at each step of the Maxine heuristic a vertex with the Havel-Hakimi property is deleted, then R(G) = M . To find when M = α(G) we will consider graphs with certain conditions.
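Whether a given vertex has the Havel-Hakimi property can be tested directly from this definition: deleting it must change the sorted degree sequence exactly as one Havel-Hakimi step does. A sketch (helper name ours):

```python
def has_hh_property(adj, v):
    """True if v has maximum degree and deleting v reduces the degree
    sequence exactly as one Havel-Hakimi step on the sorted sequence."""
    deg = {u: len(adj[u]) for u in adj}
    if deg[v] != max(deg.values()):
        return False
    d = sorted(deg.values(), reverse=True)
    # one Havel-Hakimi step: drop the first entry, decrement the next d[0]
    hh_step = sorted([x - 1 for x in d[1:d[0] + 1]] + d[d[0] + 1:], reverse=True)
    # degree sequence of G - v
    after = sorted((deg[u] - (v in adj[u]) for u in adj if u != v), reverse=True)
    return after == hh_step
```

In the path 0-1-2-3-4-5, for instance, the middle vertex 2 has the property while vertex 1 does not, even though both have maximum degree.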
A vertex v in a graph G is said to have maximum degree-independence conditions (or MDI conditions) if it is has maximum degree and is a part of every maximum independent set. Also we will say that a graph G has maximum degree-independence conditions (or MDI conditions), if there exists a vertex v ∈ V (G) that has MDI conditions.
In 2016, Barrus and Molnar found that if a vertex v in G has MDI conditions, then G must contain an induced copy of C 4 (the cycle on 4 vertices) containing v or an induced copy of P 5 (the path on 5 vertices) with v as the center vertex [1]. From this it can be quickly shown that the Maxine heuristic always produces a maximum independent set when applied to a {C 4 , P 5 }-free graph (Theorem 1.3).
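For small graphs, α(G) can be computed by brute force and compared against the outcome of the Maxine heuristic; a simple (exponential-time) sketch of our own:

```python
from itertools import combinations

def independence_number(adj):
    """Brute-force alpha(G): size of the largest vertex subset
    spanning no edge (only practical for small graphs)."""
    verts = list(adj)
    for k in range(len(verts), 0, -1):
        for S in combinations(verts, k):
            if all(v not in adj[u] for u, v in combinations(S, 2)):
                return k
    return 0
```

For example, the path on five vertices has independence number 3, while the 5-cycle has independence number 2.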
Results
We will work to strengthen Theorem 1.3 by examining the case where v with MDI conditions is in an induced copy of C 4 , since C 4 does not have MDI conditions itself. Since we will only strengthen the condition on C 4 , we will assume that all graphs considered have no induced subgraph isomorphic to P 5 in which the center vertex has MDI conditions. We will call a graph P 5 *-free when referring to this condition on the center vertex, as we will not restrict the existence of an induced P 5 in general. We will refer to the aforementioned MDI conditions as the maximum degree condition and the independence condition separately. To start, we will prove a few lemmas to reduce the search for induced subgraphs needed to strengthen the C 4 condition.

Lemma 2.1. If v ∈ V (G) has MDI conditions and is part of more than one maximum independent set, then there is an induced subgraph of G in which v also has MDI conditions and there is only one maximum independent set.
Proof. Let v belong to maximum independent sets I 1 , I 2 , . . . , I n . Then we can consider the subgraph induced by deleting ∪ i=2,...,n (I i \ I 1 ). The maximum degree condition is not violated, since none of the deleted vertices are adjacent to v, and there is exactly one maximum independent set in the induced subgraph.
Because of Lemma 2.1, we will now only consider a graph G with one maximum independent set I including a vertex v such that v has MDI conditions.
Lemma 2.2. Let x be a vertex such that x ∉ N (v) ∪ I, where I is the lone maximum independent set. Then G \ {x} has MDI conditions as well.
Proof. Deleting x does not change the degree of v and thus the maximum degree condition is unaffected. Furthermore, since x is not in I, the independent set is unaffected as well. Thus v still has MDI conditions in G \ {x}.
If α(G) = 1, G must be a clique, and if there is only one maximum independent set, then G must be an isolated vertex. Furthermore, if α(G) = 2 with maximum independent set {u, v}, then N (v) = N (u) must form a clique and thus every element of N (v) must have strictly larger degree than both u and v. Since we require an element of the maximum independent set to have maximum degree, N (v) must be empty and G must be the graph of two isolated vertices. Hence, if G has MDI conditions and α(G) ≤ 2, then every application of the Maxine heuristic vacuously produces a set of size α.
Thus we will now assume that the size of I is 3 and that I = {u, v, w}, where v is the vertex with MDI conditions, and write I ′ = I \ {v}. Note that if x ∈ N (v) is not adjacent to any other element of I, the maximum independent set, then (I \ {v}) ∪ {x} is another maximum independent set; by Lemma 2.2 we can then delete x and retain the conditions on G. Thus we only need to consider N (v) ∪ I. We will then partition N (v) into Q u and Q w , the vertices in N (v) whose only neighbor in I ′ is u or w, respectively, and we will write Q = Q u ∪ Q w . Let N be the set of vertices in N (v) that are adjacent to both u and w. Since the independence number of G must be 3 and I is the unique independent set of size 3, Q u and Q w must have independence number at most 1; hence Q u and Q w are cliques, since otherwise there would exist another independent set of size 3. Similarly, N must have independence number at most 2. Then, since G must be P 5 *-free, Q must form a clique: every vertex in Q u must dominate Q w and vice versa, as otherwise there exist non-adjacent q u ∈ Q u and q w ∈ Q w , and {u, q u , v, q w , w} induce P 5 with v as the center vertex.
Theorem 2.3. Let G have MDI conditions with α = 3. Then G has at least one of the following induced subgraphs, where Q ′ is a subset of Q and N ′ a subset of N :

1. |Q ′ | = 0 and G[N ′ ] ≅ C n .
2. |Q ′ | = 1 and G[N ′ ∪ Q ′ ] ≅ C n .
3. |Q ′ | = 2 and G[N ′ ∪ Q ′ ] ≅ P n , where the elements of Q are the endpoints of P n in the complement.
Proof. We will first consider the case where |Q| = 0. First note that if |N | = 0, then N (v) is empty, G is only the independent set, and the result follows immediately. Thus we will assume that N is non-empty. Every vertex in N has two non-neighbors in N , as Q is empty and every vertex in N is also adjacent to u, v, and w; otherwise v would not have maximum degree, as N (v) = N ∪ Q. We can then arrange the non-neighbors into one or more disjoint cycle complements. Consider a smallest cycle complement, and label its vertices x 0 , . . . , x m−1 , where x i is non-adjacent to both x i+1 and x i−1 modulo m. If there exists an x i that does not dominate the rest of the cycle complement, then we have a smaller cycle complement, which is a contradiction. Thus x i dominates the rest of the cycle complement for every i, and so G[N ′ ] ≅ C m , where N ′ is the vertex set of the cycle complement.
We will next consider the case where |Q| = 1. We will call q the lone vertex in Q. If |N | = 0, then q has larger degree than v, which is a contradiction so we will assume that N is non-empty. Note that every vertex in N has to have at least 2 non-neighbors in N ∪ Q otherwise v is not of maximum degree, as every vertex in N is also adjacent to u, v, and w. If q dominates N then deg(q) > deg(v) which is a contradiction. Thus there exists a non-neighbor of q in N ; call it x 0 , and call the other guaranteed non-neighbor of x 0 , x 1 . Similarly, x 1 is guaranteed to have another non-neighbor in N ∪ Q as x 1 ∈ N and must have at least two non-neighbors in N ∪ Q. If this other non-neighbor is q then we have that {q, x 0 , x 1 } induce C 3 and we are done. Thus we will assume that the other non-neighbor is in N , call it x 2 . Inductively this creates a sequence of non-neighbors in N , {x i }, as each x i must be adjacent to q otherwise we are done as C i+1 is induced on q ∪ x 1 ∪ · · · ∪ x i . Furthermore each x i must be adjacent to {x 0 , . . . , x i−2 } otherwise we have an induced copy of C n in N for some n. Since we have a finite graph, this sequence must terminate at x m for some m, and thus we have that x m must be non-adjacent to either q or some vertex in {x 0 , . . . , x m−2 } giving the result.
Finally we will show the result if |Q| ≥ 2. We will proceed by induction on the size of Q. We will now consider the base case where |Q| = 2, calling the 2 vertices q 1 , q 2 . Note that q 1 , q 2 are adjacent as Q forms a clique. Similar to the case |Q| = 1, if N is empty then q 1 has strictly larger degree than v which is a contradiction. Thus we will assume that N is non-empty. Each of the vertices has at least one non-neighbor in N ; if they have the same non-neighbor then those three vertices induce the desired P 3 and we are done. Thus we will assume that they have different non-neighbors, call them x 1 and x 2 respectively. If x 1 ≁ x 2 , then the four vertices induce the desired P 4 and we are done, so assume that x 1 ∼ x 2 . Each of these vertices has another non-neighbor in N ; if they share a non-neighbor then the five vertices induce the desired P 5 , so assume that x 1 and x 2 have different non-neighbors call them x 3 and x 4 respectively. Inductively, we have that the pair of vertices x 2i , x 2i+1 are the new non-neighbors of x 2i−2 and x 2i−1 . Note that x 2i and x 2i+1 must be adjacent to Q otherwise there is an induced complement of a cycle and we are done. Furthermore x 2i must be adjacent to each x j for j even and x 2i+1 must be adjacent to each x j for j odd, otherwise we have an induced complement of a cycle in N . Then x 2i must be adjacent to each x j for j odd, and x 2i+1 must be adjacent to each x j for j even, otherwise we have the desired induced complement of a path. We thus have that both x 2i and x 2i+1 must have another non-neighbor in N . Since we have a finite graph, this process must terminate, yielding the result.
We will now show that if |Q| > 2, G has one of the desired induced subgraphs above. We will proceed by induction on |Q|, noting that the base case of |Q| = 2 is done above. Assume the result is true for |Q| < k and consider the case with |Q| = k. We will label the vertices of Q, {q 1 , q 2 , . . . , q k }. Each of these has a non-neighbor in N , call it x i for each q i . Note that these are distinct otherwise we have an induced copy of P 3 with 2 elements of Q has endpoints in the complement. Furthermore q i ∼ x j for all i = j as otherwise we have an induced P 4 . Then there exists another non-neighbor of x 1 in N , call it y 1 . We have that
• y 1 ∼ q 1 , otherwise {q 1 , x 1 , y 1 } induce C 3 .
• y 1 ∼ q j for all j > 1, otherwise {q 1 , x 1 , y 1 , q j } induce P 4 .
• y 1 ∼ x j for all j > 1, otherwise {q 1 , x 1 , y 1 , x j , q j } induce P 5 .
We then have that y 1 must have another non-neighbor in N , call it y 2 . Inductively let y k be the other non-neighbor of y k−1 , where each y i for 1 ≤ i < k dominates all preceding vertices except y i−1 . Then we have that

• y k ∼ q 1 , otherwise {q 1 , x 1 , y 1 , . . . , y k } induce C k+2 .
• y k ∼ q j for all j > 1, otherwise {q 1 , x 1 , y 1 , . . . , y k , q j } induce P k+3 .
• y k ∼ x 1 , otherwise {x 1 , y 1 , . . . , y k } induce C k+1 .
• y k ∼ x j for all j > 1, otherwise {q 1 , x 1 , y 1 , . . . , y k , x j , q j } induce P k+4 .
• y k ∼ y i for all i < k otherwise inductively there is an induced complement of a cycle.
Thus y k has another non-neighbor in N . Since our graph is finite, this process must terminate and the result holds.
We will now extend the result to graphs with independence number greater than 3.

Proof. We will assume the contrary: that there exists such a graph without the desired induced subgraphs, and derive a contradiction.
From Lemma 2.1 we have that G has one maximum independent set, with v a vertex with MDI conditions. First call I the lone independent set, and I ′ = I \ {v}. Furthermore, we will use the notation that a set A ⊆ N (v) induces a subgraph on G ij to mean G[{v} ∪ A ∪ {i, j}], where i, j are elements of I ′ . Then call Q i ⊆ N (v) the set of vertices that are adjacent to exactly i members of I ′ . Then {Q i } i=1,...,k−1 partition N (v), using Lemma 2.2. Also note that, in order for G to have MDI conditions, every vertex in Q i must have i non-neighbors in N (v), otherwise v would not have maximum degree. We will first show that Q k−1 must be empty. Let q ∈ Q k−1 . We will show that q must have at most one non-neighbor in N (v) \ Q k−1 . Suppose that q has two such non-neighbors, x and y.
First suppose x ∼ y. If x and y have distinct neighbors in I ′ , call them u and w respectively, then {q, x, y} induce P 3 in G u,w . Otherwise, without loss of generality, (N (x) ∩ I ′ ) ⊆ (N (y) ∩ I ′ ), and we must have that N (x) ∩ I ′ is non-empty, so it contains an element u, and (N (y) ∩ I ′ ) c is non-empty as y / ∈ Q k−1 , and thus w ∈ (N (y) ∩ I ′ ) c . We then have that, again, {q, x, y} induce P 3 in G u,w .
Then suppose that x ≁ y. We must have that x and y do not have any distinct neighbors in I ′ , say a and b, as otherwise {x, v, y, a, b} would induce P 5 . Then x and y share a neighbor in I ′ , call it u and note that both x, y cannot belong to Q 1 , as Q 1 forms a clique. Thus, without loss of generality, we can say that y has another neighbor, w, in I ′ , and thus {q, x, y} induce C 3 in G u,w .
We thus have that, for each q ∈ Q k−1 , q must have at most one non-neighbor in N (v) \ Q k−1 , and thus must have at least 2 non-neighbors in Q k−1 . As in the proof of the α = 3 case, we can arrange a smallest cycle complement of non-neighbors and thus we have an induced C n in G u,w where u, w are any two members of I ′ . This is a contradiction, and thus Q k−1 must be empty.
We will then proceed by induction to show that Q i is empty for 3 ≤ i ≤ k − 1. We will assume that Q i is empty for all i > ℓ, and we will show that Q ℓ is empty as well.
Let q ∈ Q ℓ , and assume that q has two non-neighbors in N (v) \ Q ℓ , call them x and y. If any pair of {x, y, q} have distinct neighbors in I ′ , then we have an induced P 5 , as seen above. Thus we must have that, without loss of generality, (N (x) ∩ I ′ ) ⊆ (N (y) ∩ I ′ ), and since q has the most neighbors in I ′ , (N (y) ∩ I ′ ) ⊆ (N (q) ∩ I ′ ). Then we argue, in the same way as in the base case of Q k−1 , that q can have at most one non-neighbor in N (v) \ Q ℓ . Thus, q has at least ℓ − 1 non-neighbors in Q ℓ , as Q i is empty for all i > ℓ, and as above this means that we have an induced C n , a contradiction. Thus Q ℓ must be empty. Hence by induction we have that Q i is empty for all i > 2.
Note that N (v) must be non-empty, as we cannot have an edgeless graph, and Q 2 cannot be empty as Q 1 forms a clique, and each element of Q 1 must have at least one non-neighbor in N (v). Then let q ∈ Q 2 . If q has two non-neighbors in Q 1 , adjacent to u and w respectively in I ′ , then q must also be adjacent to u, w, otherwise we have an induced P 5 . Thus the three vertices induce P 2 in G u,w . Then assume that q has exactly one non-neighbor in Q 1 , call it x and a non-neighbor in Q 2 , call it y. We must have that q, y share the same neighbors in I ′ , otherwise we have an induced P 5 , and thus the neighbor of x in I ′ is shared by both q, y. We then have that if x ≁ y, we have that {x, y, q} induce C 3 . We will thus assume that x ∼ y.
Then if all q ∈ Q 2 have 2 non-neighbors in Q 2 , we must have an induced copy of C n in Q 2 . Suppose then that there are 2 vertices q, q ′ in Q 2 that have a non-neighbor in Q 1 , and choose these vertices such that the distance between them in the complement of Q 2 is as small as possible. Note that there must exist a chain of vertices in Q 2 such that q ≁ q 1 ≁ q 2 ≁ · · · ≁ q ′ , where no q i has a non-neighbor in Q 1 . Furthermore, q, q ′ , q i must share the same neighbors in I ′ , otherwise we have an induced copy of P 5 . If q, q ′ have the same non-neighbor in Q 1 , call it x, then {x, q, q ′ , q 1 , . . .} induce C n . If q, q ′ have different non-neighbors x and x ′ in Q 1 , then {x, x ′ , q, q ′ , q 1 , . . .} induce P n . This is a contradiction, and thus for every graph G with MDI conditions and α = k > 3, we have the result.
For ease, we will call F the family of induced subgraphs in Theorem 2.3. We wanted to improve the C 4 condition introduced by Barrus and Molnar, as C 4 itself does not have MDI conditions. By construction, each graph in F , alongside P 5 , does have MDI conditions. We then have the immediate corollary.

Corollary 2.5. The Maxine heuristic always produces a maximum independent set when applied to a {F , P 5 }-free graph.
Open Questions
Barrus and Molnar used their results to show that if a graph is {P 5 , 4-pan, K 2,3 , K + 2,3 , kite, 2P 3 , P 3 + K 3 , stool, co-domino}-free, then R(G) = α(G) [1]. It can be expected that this class of graphs can be expanded using the strengthened conditions shown in this paper. We pose the following open questions and problems:
• Can we fully classify the graphs in which the Maxine heuristic produces a maximum independent set?
• What other conditions, other than forbidding MDI conditions, can be considered to guarantee that the Maxine heuristic produces a maximum independent set?
• Can we fully classify the graphs in which the Maxine heuristic produces a graph with an independent set the same size as the residue? Note that graphs with the Havel-Hakimi property introduced in [1] are a subset of these graphs.
• Can we fully classify the graphs in which the residue equals the independence number?
Theorem 1.3 ([1]). The Maxine heuristic always produces a maximum independent set when applied to a {C 4 , P 5 }-free graph.
Theorem 2.4. Let G have MDI conditions with α = k such that k > 3. Then the result from Theorem 2.3 holds as well.
[1] M. D. Barrus, G. Molnar, Graphs with the strong Havel-Hakimi property, Graphs and Combinatorics 32 (2016) 1689-1697.
[2] O. Favaron, M. Mahéo, J.-F. Saclé, On the residue of a graph, J. Graph Theory 15 (1991) 39-64.
[3] J. Griggs, D. J. Kleitman, Independence and the Havel-Hakimi residue, Graph theory and applications (Hakone, 1990), Discrete Math. 127 (1994) 209-212.
[4] V. Havel, A remark on the existence of finite graphs (Hungarian), Časopis Pěst. Mat. 80 (1955) 477-480.
[5] S. Hakimi, On the realizability of a set of integers as degree sequences of the vertices of a graph, J. SIAM Appl. Math. 10 (1962) 496-506.
[6] S. Fajtlowicz, On conjectures of Graffiti, III, Congr. Numer. 66 (1988) 23-32.
[7] G. Chartrand, L. Lesniak, P. Zhang, Graphs & Digraphs, 5th edn., CRC Press, Boca Raton, 2011.
[8] E. Triesch, Degree sequences of graphs and dominance order, J. Graph Theory 22 (1996) 89-93.
[9] F. Jelen, k-Independence and k-residue of a graph, J. Graph Theory 127 (1999) 209-212.
| [] |
[
"Optical Study of LaO 0.9 F 0.1 FeAs: Evidence for a Weakly Coupled Superconducting State",
"Optical Study of LaO 0.9 F 0.1 FeAs: Evidence for a Weakly Coupled Superconducting State"
] | [
"S.-L Drechsler \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"M Grobosch \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"K Koepernik \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"G Behr \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"A Köhler \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"J Werner \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"A Kondrat \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"N Leps \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"C Hess \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"R Klingeler \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"R Schuster \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"B Büchner \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n",
"M Knupfer \nIFW Dresden\nP.O. Box 270116D-01171DresdenGermany\n"
] | [
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany",
"IFW Dresden\nP.O. Box 270116D-01171DresdenGermany"
We have studied the reflectance of the recently discovered superconductor LaO0.9F0.1FeAs in a wide energy range from the far infrared to the visible regime. We report on the observation of infrared active phonons, the plasma edge (PE) and possible interband transitions. On the basis of these data and the reported in-plane penetration depth λL(0) = 254 nm [H. Luetkens et al., Phys. Rev. Lett. 101, 097009 (2008)], a disorder-sensitive, relatively small value of the total electron-boson coupling constant λtot = λe−ph + λe−sp ∼ 0.6 ± 0.35 can be estimated adopting an effective single-band picture.

The discovery of superconductivity up to 43 K in LaO 1−x F x FeAs [1, 2] has established a new family of superconductors with rather high transition temperatures T c . The substitution of La with other rare earths even yields T c values above 50 K [3]. In addition to these already fascinating reports, both experimental and theoretical studies consider possible unconventional multiband behavior [4,5]. Various scenarios for the superconducting mechanism have been proposed or excluded [6,7,8,9,10,11,12]. A more general related problem under debate [5,13,14,15,16,17,18,19] being of considerable interest is the strength of the correlation effects in these doped pnictides, governed by the local Coulomb interactions U d and J d at Fe sites.

The crystals of LaO 1−x F x FeAs are formed by alternating stacking of LaO 1−x F x and FeAs layers. The electronic structure of these new systems has been mainly addressed by theoretical studies so far. They predict that LaOFeAs and the related compounds harbor quasi-two-dimensional electronic bands, as is suggested by the layered crystal structure already [4,5,6,7,8,9]. The LDA (local density approximation) Fermi surface of the doped compound consists of four cylinder-like sheets. Hence, the electronic behavior is expected to be highly anisotropic concerning e.g.
charge transport and superconducting properties such as the penetration depth λ L (T ) and the upper critical magnetic field. In addition, calculations have provided the expected phonon density of states, which might be relevant for the symmetry of the superconducting order parameter even in the case of a dominant magnetic coupling, in spite of the predicted weak electron-phonon (e-ph) coupling [6,20]. So far, optical data are available for the far-infrared region only, where indications for the opening of a gap in the superconducting state have been found, and infrared active phonons can be observed [21]. To the best of our knowledge a phenomenological estimate of the electron-boson (e-b) coupling strengths is still missing, and the present work provides a first step in that direction.

Here we report on a study of the optical properties of LaO 0.9 F 0.1 FeAs via reflectance measurements. We have determined the reflectance of powder samples in the energy range from 0.009 up to 3 eV at T = 300 K. Our data reveal two prominent and two weaker infrared active phonon modes, and provide evidence for the in-plane plasma energy at ≈ 0.4 eV as well as the appearance of electronic interband transitions at higher energies. These data are used to estimate the strength of the total e-b coupling λ tot responsible for the "mass enhancement" in the plasma energy of the paired electrons, without specifying the nature of this coupling. We compare its value with various microscopic estimates in the literature based on spin fluctuations and e-ph contributions, as well as the empirical unscreened plasma energy with uncorrelated L(S)DA (local spin density approximation) predictions.

A polycrystalline sample of LaO 0.9 F 0.1 FeAs was prepared as described in our previous work [22] and in Ref. [23]. For the optical measurements reported here the pellets have been polished to obtain appropriate surfaces.
In order to characterize the superconducting properties, zero field cooled (shielding signal) and field cooled (Meissner signal) magnetic susceptibility in external fields H = 10 Oe ... 50 kOe have been measured using a SQUID magnetometer. The resistivity has been measured with a standard 4-point geometry employing an alternating DC current. A value of T c ≈ 26.8 K has been extracted from these data, as shown in Fig. 1. The reflectance measurements have been performed using a combination of Bruker IFS113v/IFS88 spectrometers. This allows us to determine the reflectivity from the far infrared up to the visible spectral region, in an energy range from 0.009 up to 3 eV. The measurements have been carried out with different spectral resolutions depending on the energy range; these vary from 0.06 to 2 meV. Since the observed spectral features are significantly broader than these values, this slight variation in resolution does not impact our data analysis. All reflectance measurements were performed at T = 300 K and a pressure of 4 mbar. In Fig. 2 we show the reflectance of our | 10.1103/physrevlett.101.257004 | [
"https://arxiv.org/pdf/0805.1321v2.pdf"
] | 2,815,757 | 0805.1321 | d9caf1ec08987e1736c9772bac6a23d7162bc428 |
Optical Study of LaO 0.9 F 0.1 FeAs: Evidence for a Weakly Coupled Superconducting State
20 Feb 2009 (Dated: February 20, 2009)
S.-L Drechsler, M Grobosch, K Koepernik, G Behr, A Köhler, J Werner, A Kondrat, N Leps, C Hess, R Klingeler, R Schuster, B Büchner, M Knupfer

IFW Dresden, P.O. Box 270116, D-01171 Dresden, Germany
We have studied the reflectance of the recently discovered superconductor LaO0.9F0.1FeAs in a wide energy range from the far infrared to the visible regime. We report on the observation of infrared active phonons, the plasma edge (PE) and possible interband transitions. On the basis of this data and the reported in-plane penetration depth λL(0) = 254 nm [H. Luetkens et al., Phys. Rev. Lett. 101, 097009 (2008)] a disorder-sensitive, relatively small value of the total electron-boson coupling constant λtot = λe−ph + λe−sp ∼ 0.6 ± 0.35 can be estimated adopting an effective single-band picture.
The discovery of superconductivity up to 43 K in LaO 1−x F x FeAs [1,2] has established a new family of superconductors with rather high transition temperatures T c . The substitution of La with other rare earths even yields T c -values above 50 K [3]. In addition to these already fascinating reports, both experimental and theoretical studies consider possible unconventional multiband behavior [4,5]. Various scenarios for the superconducting mechanism have been proposed or excluded [6,7,8,9,10,11,12]. A more general related problem under debate [5,13,14,15,16,17,18,19] being of considerable interest is the strength of the correlation effects in these doped pnictides governed by the local Coulomb interactions U d and J d at Fe sites.
The crystals of LaO1−xFxFeAs are formed by alternating stacking of LaO1−xFx and FeAs layers. The electronic structure of these new systems has been mainly addressed by theoretical studies so far. They predict that LaOFeAs and the related compounds harbor quasi two-dimensional electronic bands, as is suggested by the layered crystal structure already [4,5,6,7,8,9]. The LDA (local density approximation) Fermi surface of the doped compound consists of four cylinder-like sheets. Hence, the electronic behavior is expected to be highly anisotropic concerning e.g. charge transport and superconducting properties such as the penetration depth λL(T) and the upper critical magnetic field. In addition, calculations have provided the expected phonon density of states, which might be relevant for the symmetry of the superconducting order parameter even in the case of a dominant magnetic coupling, in spite of the predicted weak electron-phonon (e-ph) coupling [6,20]. So far, optical data are available for the far-infrared region only, where indications for the opening of a gap in the superconducting state have been found, and infrared active phonons can be observed [21]. To the best of our knowledge a phenomenological estimate of the electron-boson (e-b) coupling strength is still missing and the present work provides a first step in that direction.
Here we report on a study of the optical properties of LaO 0.9 F 0.1 FeAs via reflectance measurements. We have determined the reflectance of powder samples in the energy range from 0.009 up to 3 eV at T = 300 K. Our data reveal two prominent and two weaker infrared active phonon modes, and provide evidence for the in-plane plasma energy at ≈ 0.4 eV as well as the appearance of electronic interband transitions at higher energies. These data are used to estimate the strength of the total e-b coupling λ tot responsible for the "mass enhancement" in the plasma energy of the paired electrons without specifying the nature of this coupling. We compare its value with various microscopic estimates in the literature based on spin fluctuations and e-ph contributions as well as the empirical unscreened plasma energy with uncorrelated L(S)DA (local spin density approximation) predictions.
A polycrystalline sample of LaO 0.9 F 0.1 FeAs was prepared as described in our previous work [22] and in Ref. [23]. For the optical measurements reported here the pellets have been polished to obtain appropriate surfaces. In order to characterize the superconducting properties, zero field cooled (shielding signal) and field cooled (Meissner signal) magnetic susceptibility in external fields H = 10 Oe ... 50 kOe have been measured using a SQUID magnetometer. The resistivity has been measured with a standard 4-point geometry employing an alternating DC current. A value of T c ≈ 26.8 K has been extracted from these data as shown in Fig. 1.
The reflectance measurements have been performed using a combination of Bruker IFS113v/IFS88 spectrometers. This allows us to determine the reflectivity from the far infrared up to the visible spectral region, in an energy range from 0.009 up to 3 eV. The measurements have been carried out with different spectral resolutions depending on the energy range; these vary from 0.06 to 2 meV. Since the observed spectral features are significantly broader than these values, this slight variation in resolution does not impact our data analysis. All reflectance measurements were performed at T = 300 K and a pressure of 4 mbar.
In Fig. 2 we show the reflectance of our LaO 0.9 F 0.1 FeAs sample (note the logarithmic and linear energy scales in the two panels, respectively). These data reveal a number of spectral features with different origin. At low energies (see left panel of Fig. 2) there are two prominent structures at about 12 and 55 meV and two weaker features in-between near 25 and 31 meV, which most likely stem from the excitation of infrared active phonons. We attribute the two strong features to phonons that are polarized along the c-axis of the lattice, since the dielectric screening in this direction is expected to be much weaker as a result of the rather two dimensional electronic system. An assignment of the two weaker phonons is not possible at the present stage, and future work on single crystals will help to clarify their polarization character. A comparison with a calculated phonon density of states [6] allows an assignment of the higher energy phonon (55 meV) to oxygen modes, since above about 40 meV only these vibrations can be expected. In addition, the calculated phonon density of states has a maximum near 12 meV [6] with contribution from all elements in the structure, which would correspond to the strong lowest energy feature in Fig. 2, and the calculations also predict two further maxima which could be associated with weaker structures in our reflectance spectrum.
At higher energies the reflectance drops significantly between 0.13 and 0.4 eV; thereafter it shows a small increase until about 0.6 eV before it starts to slightly decrease again. In addition, at about 2 eV another small upturn is visible. We attribute the two small features at about 0.6 and 2 eV to weak electronic interband transitions at the corresponding energies. This observation is in accord with the behavior of the optical conductivity σ(ω) predicted by Haule et al. using dynamical mean-field theory (DMFT), where an As 4p to Fe 3d interband transition (IBpd) slightly below 2 eV and a weak feature near 0.6 eV have been found for the undoped parent compound LaOFeAs. Again, our measurements on powder samples do not allow the extraction of their polarization. The steepest edge, centered at about 250 meV with a high-energy onset at about 395(20) meV, represents the plasma edge or plasma energy of LaO0.9F0.1FeAs, which is observed at a relatively low value. In consideration of the quasi-2D character of the electronic states in LaO0.9F0.1FeAs and the expected strong anisotropy of the plasma energy for light polarized along the (a,b) and c crystal axes, respectively, the onset of the plasma edge (PE) in Fig. 2 gives a reasonable value of the in-plane plasma energy (i.e. for light polarized within the (a,b) crystal plane) [24].
Within a simple Drude model, this value allows a first, crude estimate of the charge carrier density in LaO0.9F0.1FeAs. Within this model, the plasma energy, ωp, is given by ωp² = ne²/(mε0ε∞), with n, e, m, and ε∞ representing the charge carrier density, the elementary charge, the effective mass of the charge carriers and the background dielectric screening due to higher lying electronic excitations [25]. Taking the effective mass equal to the free electron mass, and ε∞ ∼ 10-12 [26], one would arrive at a rather low charge density of about 4 × 10^20 cm⁻³. We note that this estimate ignores the strongly anisotropic nature of the electronic structure near the Fermi energy EF in LaO0.9F0.1FeAs. Our LDA-FPLO (full-potential localized-orbital minimum-basis code, version 7) [27] band structure calculations for the nonmagnetic ground state provide a plasma frequency within the (a,b) crystal plane of Ωp(LDA) = 2.1 eV and a small value of 0.34 eV for polarization along the c axis, in accord with the 2.3 eV and 0.32 eV given in Ref. [6]. Note that these "bare" values do not describe the screening through ε∞ caused by interband transitions. In order to reproduce the measured ωp = 0.395 eV, an unusually large value of ε∞ ∼ 28.3 to 33.9 would be required, which seems unrealistic [26]; instead ε∞ ∼ 12 would be expected. Then, for the empirical unscreened plasma energy Ωp, a value of about 1.37 eV is expected on the basis of our reflectance data. For a deeper insight we calculated at first the interband contribution to the dielectric function from the in-plane DMFT optical conductivity between 1 and 6 eV for the undoped LaOFeAs given in Fig. 5 of Ref. [13]. From its static value we obtained ε∞ = 5.4 only.
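The Drude arithmetic behind this estimate can be sketched in a few lines of Python. Whether the plasma-edge center (≈0.25 eV) or the high-energy onset (0.395 eV) is inserted changes the result by a factor of ≈2.5, so the numbers below only fix the order of magnitude:

```python
# Drude estimate of the carrier density from the screened plasma energy:
#   omega_p^2 = n e^2 / (m eps0 eps_inf)  =>  n = omega_p^2 m eps0 eps_inf / e^2
# Inputs follow the text: edge center ~0.25 eV (onset 0.395 eV), eps_inf ~ 10-12,
# effective mass taken as the free-electron mass.

HBAR = 1.054571e-34   # J s
E = 1.602177e-19      # C
ME = 9.109384e-31     # kg
EPS0 = 8.854188e-12   # F/m

def drude_density(omega_p_eV, eps_inf, m=ME):
    """Carrier density in cm^-3 for a given screened plasma energy in eV."""
    omega = omega_p_eV * E / HBAR                 # rad/s
    n_m3 = omega**2 * m * EPS0 * eps_inf / E**2   # m^-3
    return n_m3 * 1e-6                            # convert to cm^-3

n_center = drude_density(0.25, 11)    # using the plasma-edge center
n_onset = drude_density(0.395, 11)    # using the high-energy onset
print(f"{n_center:.2e} cm^-3, {n_onset:.2e} cm^-3")
```

With the edge center one lands at a few 10^20 cm⁻³, consistent with the estimate in the text; the onset pushes the value toward 10^21 cm⁻³.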
Note that generally smaller values for the onsite Coulomb repulsion Ud would result in larger ε∞ values, due to an energy shift of the interband transition from the lower Hubbard band and the less suppressed combined density of states for the IBpd transitions, which vanishes at the metal-insulator transition expected at a critical Ud slightly above 4 eV [13]. Hence, a corresponding systematic DMFT study with lower Ud and Jd values would be of interest. In contrast to Haule et al. [13], Shorikov et al. [14] argue that LaOFeAs is in an intermediate Ud regime but strongly affected by the value of the Hund's rule exchange JFe. Hence, ε∞ should significantly increase and the Ud = 4 eV DMFT-based estimate given above can be regarded as a lower limit. But it provides an unrealistic estimate for ε∞ itself. The expected Ud dependence of ε∞ is shown schematically in Fig. 3. It points to strongly reduced Ud values, in accord with the suggestions of Refs. [14,19,37]. Thus, ε∞ provides a convenient direct measure for the strength of the correlation regime, even for small Ud values near 2 eV.
In addition, there is a direct relation between the plasma energy as measured with optical techniques and the London penetration depth in the superconducting state. Within a BCS approach, these two parameters are inversely proportional to each other. Recently, the in-plane penetration depth, λL(a,b), for LaO0.9F0.1FeAs has been determined using muon spin relaxation to be λL(a,b)(0) = 254(2) nm [22,28]. This value slightly exceeds those found for optimally doped high-Tc cuprates (HTC's). This suggests that the plasma energy of LaO0.9F0.1FeAs should be smaller than that of the HTC's. Their ωp is found near 1 eV [29,30,31,32] for optimal doping (i.e. the highest Tc), and taking into account the different penetration depths λL(a,b) one would expect a plasma energy in LaO0.9F0.1FeAs that is close to the measured value. Here we take into account that λL is renormalized by the mass enhancement factor as a consequence of the always present e-b coupling, measured by the Fermi surface averaged coupling constant λtot, whereas the plasma energy is not. This asymmetry in the renormalization is based on the very different temperatures and energies probed in the two measurements: nearly T = 0 and low ω for the penetration depth vs. high energies and high T for the PE; see also Fig. 7 in Ref. [33] and the Note added. In other words, we employ both the T = 0 and the asymptotic high-energy limits of the coupling constant, which is in general ω- and T-dependent. In the effective single-band approach this relation can be rewritten in convenient units as [34]

αΩp [eV] = √[(n/ns(0)) (1 + λtot(0)) (1 + δ)],
where α = λL(0)[nm]/(197.3 nm) contains the experimental (a,b)-plane penetration depth extrapolated to T = 0, Ωp = √ε∞ ωp denotes the empirical unscreened plasma energy, ns(0) is the density of electrons in the condensate at T = 0, n is the total electron density which contributes to the unscreened Ωp, and δ = 0.7γimp/(2Δ(0)) is the disorder parameter, which vanishes in the clean limit. Notice that all three factors under the root are ≥ 1. In the quasi-clean limit δ < 1 at T = 0, for n = ns and using ε∞ = 12 [26], δ = 0.93 for γimp = 125 K estimated from resistivity data [35], and λL(0) = 254 nm [22,28], one estimates λtot ∼ 0.61, i.e. the superconductivity is in a weak coupling regime for our samples with Tc = 27 K. The effect of ε∞/(1 + δ) on the coupling strength is shown in Fig. 3. Note that substantial impurity scattering (δ > 1) beyond our moderate disorder regime should further reduce λtot. We note once more that our empirical Ωp ≈ 1.37 eV differs clearly from the LDA prediction of 2.1 eV for the nonmagnetic ground state and slightly from the 1.6 eV rigid-band estimate of the antiferromagnetic state within the LSDA.
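Under the stated assumptions (n = ns, ε∞ = 12, δ = 0.93, λL(0) = 254 nm), the quoted λtot ≈ 0.61 follows from the single-band relation by simple back-substitution:

```python
import math

# Back out lambda_tot from  alpha * Omega_p = sqrt((n/n_s)(1+lambda_tot)(1+delta)),
# with alpha = lambda_L(0)[nm]/197.3 nm and Omega_p = sqrt(eps_inf) * omega_p.
# All inputs are the values quoted in the text.

lam_L = 254.0        # nm, in-plane penetration depth at T = 0
omega_p = 0.395      # eV, screened (in-plane) plasma energy
eps_inf = 12.0       # background dielectric constant
delta = 0.93         # disorder parameter
n_over_ns = 1.0      # quasi-clean limit, n = n_s

alpha = lam_L / 197.3
Omega_p = math.sqrt(eps_inf) * omega_p                       # ~1.37 eV unscreened
lam_tot = (alpha * Omega_p)**2 / (n_over_ns * (1.0 + delta)) - 1.0
print(round(Omega_p, 2), round(lam_tot, 2))
```

This reproduces both the empirical unscreened plasma energy Ωp ≈ 1.37 eV and λtot ≈ 0.61.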
The microscopic nature of the empirically estimated total e-b coupling constant, and especially its decomposition into the interactions with various boson modes, remains unclear. Since the experimental optical mass deviates by a factor of 2.35 from the LDA predictions based on the neglect of antiferromagnetic fluctuations and correlation effects (and from the LSDA by a factor of 1.37), it is also unclear to what extent the calculated weak e-ph coupling λe−ph(LDA) ≤ 0.21 [6] can be trusted. These deviations might be caused by the filling of the two (three) Fermi surface hole pockets [13] as a result of correlations, a hidden spin density wave [17], or another still unknown many-body effect. Adopting nevertheless such a weak value for λe−ph, one is left with a non-phonon contribution of 0.2 to 0.4. Since the screening by the interband transitions is affected by the value of Ud as mentioned above, Fig. 3 suggests that the stronger the correlation, the weaker the empirical e-b coupling strength.
In general, a classical e-ph mechanism seems to be ruled out by the weakness of λe−ph and the smallness of typical phonon energies, except for an exotic situation with an attractive Coulomb interaction µ* < 0(!) due to a negative dielectric response at large wave vectors [36] or huge pnictide polarizabilities [37]. Multi-band effects ignored here might lead to somewhat larger λ values, since the low-T penetration depth, and λL(0) in particular, might be affected by the weakest coupled group of electrons with the smallest gaps and the largest Fermi velocities (Ωp), whereas the region near Tc is dominated by particles with the largest gaps [38]. The corresponding estimates, e.g. within a two-band model, are postponed to the time when more data will be available.
To summarize, we have studied the optical response of LaO0.9F0.1FeAs powder samples in a wide energy range. Our reflectance data reveal four features near 12, 25, 31, and 55 meV which we assigned to phonon excitations. Furthermore, our data allow us to extract the screened in-plane plasma energy of about 0.4 eV and two interband excitations near 0.6 and 2 eV. The subsequent analysis of the reported penetration depth suggests a quasi-clean-limit, weak-coupling superconducting regime. The analysis of ε∞ in the correlated regime at Ud = 4 eV points to a substantially reduced Coulomb repulsion. This is in accord with Refs. [14,19,37]. Further systematic studies including various dopings, oriented films, single crystals and other Fe-based superconductors are desirable. Note added. While preparing an amended manuscript we learned about an ellipsometry study of LaO0.9F0.1FeAs, where Ω̃p = 0.61 eV has been reported, applying the effective medium approximation (EMA) to obtain σeff(ω) for a polycrystalline sample [26]. However, for spherical grains and σab ≫ σc, as expected for a quasi-2D metal (except for the unscreened c-axis polarized i.r. phonon related peaks), EMA predicts σeff ≈ 0.5σab, which corresponds to Ω̃p(ab) = 0.86 eV. Since Ω̃p has been derived mainly from the low-ω, damped Drude region (below 25 meV), it is still renormalized by the e-b coupling, where a high-energy boson well above 25 meV has been assumed to explain the high Tc at weak coupling. Using our λ = 0.61 one estimates Ωp = 1.23 eV, close to the 1.37 eV suggested above [39]. Finally, the analysis of the Pauli limiting (PL) behavior for the closely related LaO0.9F0.1FeAs1−δ system yields a similar value λ ≈ 0.6 to 0.7, as derived from the strong coupling correction for its PL field BP(0) = 102 T [40]. Rather similar phonon peaks as reported here have been detected recently also for SmO1−xFxFeAs [41].
FIG. 1: Left: field cooled (FC) and zero field cooled (ZFC) magnetic susceptibility for the LaO0.9F0.1FeAs samples studied here. Right: T-dependence of the resistivity near Tc.
FIG. 2: Reflectance of a polished LaO0.9F0.1FeAs sample. The energy is given on a logarithmic (left) and linear (right) scale. The arrows in the left panel highlight the low-energy excitation features ascribed to phonons. In the right panel the arrows denote the plasma edge and the 2 eV interband transition.
FIG. 3: Empirical relation between the total coupling constant from the mass enhancement entering the penetration depth vs. interband screening modeled by the disorder-renormalized dielectric background constant ε* = ε∞(ns/n)(1 + δ). The λL(ab)(0) values are from Refs. [22,28] (left). The improved screening measured by the background dielectric constant ε∞ vs. reduced onsite repulsion Ud on Fe sites (right).
We thank M. Deutschmann, R. Müller, S. Pichl, R. Schönfelder, R. Hübel, S. Müller-Litvanyi and S. Leger for technical support and H. Rosner, A. Boris, M. Kulič, O. Dolgov, H. Klauss, H. Eschrig, A. Koitzsch, V. Gvozdikov, and I. Eremin for discussions.
[1] Y. Kamihara et al., J. Am. Chem. Soc. 130, 3296 (2008).
[2] H. Takahashi et al., Nature 453, 376 (2008).
[3] Z.-A. Ren et al., Chin. Phys. Lett. 55, 2215 (2008).
[4] H. Eschrig, arXiv:0804.0186v2 (2008).
[5] C. Cao et al., Phys. Rev. B 77, 220506(R) (2008).
[6] L. Boeri et al., Phys. Rev. Lett. 101, 026403 (2008).
[7] I. I. Mazin et al., Phys. Rev. Lett. 101, 057003 (2008).
[8] K. Kuroki et al., Phys. Rev. Lett. 101, 087004 (2008).
[9] G. Xu et al., Europhys. Lett. 82, 67002 (2008).
[10] P. Lee et al., Phys. Rev. B 78, 144517 (2008).
[11] Q. Han et al., Europhys. Lett. 82, 37007 (2008).
[12] X. Dai et al., Phys. Rev. Lett. 101, 057008 (2008).
[13] K. Haule et al., Phys. Rev. Lett. 100, 226402 (2008).
[14] A. O. Shorikov et al., arXiv:0804.3283v2.
[15] Q. Si and E. Abrahams, Phys. Rev. Lett. 101, 076401 (2008).
[16] M. Daghofer et al., Phys. Rev. Lett. 101 (2008).
[17] Z.-Yu Weng, arXiv:0804.3228v2.
[18] K. Haule and G. Kotliar, arXiv:0805.0722.
[19] E. Z. Kurmaev et al., arXiv:0805.0668.
[20] D. J. Singh and M.-H. Du, Phys. Rev. Lett. 100, 237003 (2008).
[21] G. F. Chen et al., Phys. Rev. Lett. 101, 057007 (2008).
[22] H. Luetkens et al., Phys. Rev. Lett. 101, 097009 (2008).
[23] X. Zhu et al., Supercond. Sci. Technol. 21, 105001 (2008).
[24] The situation parallels that for the HTC's. Right after their discovery, also only polycrystalline samples were available, and have been studied by reflectance measurements. Further, also the HTC's are quasi-2D metals with the highest PE value for light polarization within the CuO2 plane. In the case of HTC's, there are a number of publications on polycrystalline pellets, exactly as in our studies of FeAsLa(O,F), which demonstrate that the reflectivity onset seen as a kink in the measured curves is a good measure of the in-plane PE value [see e.g.: K. Kamaras et al., Phys. Rev. Lett. 59, 919 (1987)]. Such studies already provided very important insight into the optics of HTC's, e.g. they were used to study the doping dependence of the in-plane PE, and most importantly these results were later corroborated by single-crystal studies (see e.g. Ref. [33]).
[25] F. Wooten, Optical Properties of Solids, Academic Press, New York (1972).
[26] A. V. Boris et al., Phys. Rev. Lett. 102, 027001 (2009).
[27] K. Koepernik and H. Eschrig, Phys. Rev. B 59, 1743 (1999).
[28] After completing the present work we learned about a µSR study for a SmO0.82F0.18FeAs sample by A. J. Drew et al., Phys. Rev. Lett. 101, 097010 (2008). In this work also the data of Ref. [22] were re-analyzed, resulting in a slightly smaller value λL(0) = 242(3) nm, which reduces our λtot to about 0.46 for all other parameters fixed.
[29] N. Nücker et al., Phys. Rev. B 39, 12379 (1989).
[30] S. Uchida et al., ibid. 43, 7942 (1991).
[31] A. Zibold et al., Physica C (Amsterdam) 212, 365 (1993).
[32] M. Knupfer et al., ibid. 230, 121 (1994).
[33] D. Basov and T. Timusk, Rev. Mod. Phys. 77, 721 (2005).
[34] A. Wälte et al., Phys. Rev. B 70, 174503 (2004).
[35] M. Kulič, S.-L. Drechsler, and O. Dolgov, Europhys. Lett. 85 (2009), in press; arXiv:0811.3119v2.
[36] V. Cvetkovic and Z. Tesanovic, Europhys. Lett. 85, 37002 (2009).
[37] G. A. Sawatzky et al., arXiv:0808.1390.
[38] A. A. Golubov et al., Phys. Rev. B 66, 054524 (2002).
[39] The tails of the low-ω interband transitions artificially take oscillator strength from the Drude peak. This way Ωp is somewhat underestimated. In addition, for a strongly correlated system as suggested there, a fit with an ω- and T-dependent γ should be performed, see e.g. Ref. [33].
[40] G. Fuchs et al., Phys. Rev. Lett. 101, 237003 (2008).
[41] S. I. Mirzaei et al., arXiv:0806.2303.
Thermoelectric properties of polycrystalline palladium sulfide

Liu-Cheng Chen, Bin-Bin Jiang, Hao Yu, Hong-Jie Pang, Lei Su, Xun Shi, Li-Dong Chen, Xiao-Jia Chen
DOI: 10.1039/c8ra01613e
Measurement of the electrical, thermal, and structural properties of palladium sulfide (PdS) has been conducted in order to investigate its thermoelectric performance. A tetragonal structure with the space group P4₂/m for PdS was determined from X-ray diffraction measurement. The obtained power factor of 27 μW cm⁻¹ K⁻² at 800 K is the largest value obtained for the transition metal sulfides studied so far. The maximum value of the dimensionless figure of merit is 0.33 at 800 K. These results indicate that binary bulk PdS has promising potential for good thermoelectric performance.
Introduction
Thermoelectric materials, which can directly convert heat to electrical power, have been of interest for many years because of their potential applications and environmentally friendly properties. The efficiency of thermoelectric materials is determined using the dimensionless figure of merit (zT), defined as zT = S²σT/κ, where S is the Seebeck coefficient, σ is the electrical conductivity, T is the absolute temperature and κ is the thermal conductivity. For thermoelectric devices, a conversion efficiency of 15% (zT ≳ 1) is needed and this can be achieved by obtaining large S, high σ, and low κ values. 1,2 Many strategies for enhancing zT have been proven to be effective, including the use of phonon-liquid electron-crystal materials, nanostructure engineering and band structure engineering. 3-5 Among these strategies, the reduction of thermal conductivity by microstructure modification is the most promising. Therefore, searching for thermoelectric materials with intrinsically large power factors, PFs (PF = S²σ), and then modifying them through nanostructuring while maintaining the original electrical properties will be a good method for obtaining superior performance in thermoelectric materials. Recently, transition metal sulfides have attracted much attention because sulfur is cheaper and more earth-abundant than tellurium or selenium. 6,7 Many transition metal sulfides have shown good thermoelectric performance, mainly because of their relatively low thermal conductivities. For instance, the highest zT for PbS has been improved from 0.4 to ∼1 by obtaining a low thermal conductivity through microstructure modification. 8-10 The reduction of thermal conductivity led to a high zT of 0.6 at 873 K for SnS through a doping method. 11 Copper sulfide is an important thermoelectric material with high zTs (zT = 1.4-1.7 at 1000 K). Ultra-low lattice thermal conductivities caused by liquid-like copper ions were proposed to account for such high zTs in CuxS. 12

However, the PFs of those thermoelectric sulfides are not high enough to provide ideal zTs. For example, the best performing material, Cu1.97S, only has a PF of around 8 μW cm⁻¹ K⁻² at 1000 K. Thus, an interesting approach for obtaining a high zT value is to search for a thermoelectric sulfide with an intrinsically large PF. Palladium sulfide (PdS), which belongs to the transition metal sulfide group, has potential applications in semiconducting, photoelectrochemical and photovoltaic fields because of its ideal band gap of 1.6 eV. 13-15 Furthermore, it also has several potential device applications in catalysis and in acid-resistant and high-temperature electrodes. 16-18 In a study of thermoelectric properties, 19 the Seebeck coefficient of PdS thin film was reported to have a large value (about 280 μV K⁻¹) at room temperature. Therefore, it is highly desirable to investigate the bulk thermoelectric properties of PdS, with the purpose of exploring the viability of the sample as a potentially useful thermoelectric material. In addition, it is interesting to gain some insight into the physical mechanisms involved in enhancing the zTs of thermoelectric materials.
In this work, polycrystalline PdS was successfully fabricated using a melt quenching and spark plasma sintering technique. Its thermoelectric properties were investigated through measurements of the electrical conductivity, Seebeck coefficient and thermal conductivity. We found that the power factor was very large (PF = 27 μW cm⁻¹ K⁻²), due to the high values of σ and S. The highest zT value achieved in this paper was 0.33 at 800 K. Our results highlight that polycrystalline PdS is a promising potential thermoelectric material, provided the high κ value can be overcome with microstructure modification.
Experimental details
High purity raw elements, Pd (powder, 99.99%, Alfa Aesar) and S (powder, 99.999%, Alfa Aesar), were weighed out in stoichiometric proportions and then mixed well in an agate mortar. The mixture was pressed into pellets and sealed in quartz tubes under vacuum. Then, the tubes were heated at a rate of 1 °C min⁻¹ to 1373 K. They remained at this temperature for 12 hours before quenching in cold water. Next, the quenched tubes were annealed at 873 K for 7 days. Finally, the products were ground into fine powders and sintered by spark plasma sintering (Sumitomo SPS-2040) at 923 K under a pressure of 65 MPa for 5 min. High-density samples (>99% of the theoretical density) were obtained.
The powders were characterized using X-ray diffraction (XRD) (Rigaku, Rint 2000) under Cu Kα radiation (λ = 1.5405 Å) at room temperature. Measurements were obtained between 20° and 70° with a scan width of 0.02° and a rate of 2° min⁻¹. The Seebeck coefficient and the electrical and thermal conductivities were simultaneously measured between 3 K and 300 K in a thermal transport option (TTO) setup using a Physical Properties Measurement System (PPMS) by Quantum Design. The measurements were carried out in the residual vacuum of a He atmosphere, under a pressure of 10⁻⁵ Torr. The typical size of the PdS sample used in the PPMS was 4.3 × 2.0 × 0.9 mm³, with four Cu wires attached with Ag paste. The Hall coefficient, R_H, was also measured using a conventional four-probe technique in the PPMS, over a temperature range of 7 K to 300 K. The heat capacity, C_p, in a temperature range of 1.8 to 300 K was additionally measured by the PPMS in order to analyze the thermal conductivity data. The high temperature electrical conductivity and Seebeck coefficient were measured using an Ulvac ZEM-3 from 300 to 800 K under a helium atmosphere. The high temperature thermal conductivity was calculated from κ = DC_pρ, where the thermal diffusivity (D) was obtained using a laser flash method (Netzsch, LFA 457) under an argon atmosphere. The specific heat (C_p) was calculated using the Dulong-Petit law, and the density (ρ) was measured using the Archimedes method.
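The high-temperature κ = D·C_p·ρ calculation is essentially a unit-conversion exercise. The helper below is our own sketch, assuming D is reported in mm² s⁻¹ (the usual laser-flash instrument output); the example values are illustrative, not the paper's data.

```python
def thermal_conductivity(D_mm2_per_s, Cp_J_per_gK, rho_g_per_cm3):
    """kappa = D * C_p * rho in SI units (W m^-1 K^-1).

    Assumes D in mm^2/s (typical laser-flash output), C_p in J g^-1 K^-1
    (e.g. a Dulong-Petit estimate) and rho in g cm^-3 (Archimedes method).
    """
    D = D_mm2_per_s * 1e-6     # -> m^2/s
    Cp = Cp_J_per_gK * 1e3     # -> J kg^-1 K^-1
    rho = rho_g_per_cm3 * 1e3  # -> kg m^-3
    return D * Cp * rho

# Illustrative numbers: D = 3.0 mm^2/s, C_p = 0.36 J/(g K), rho = 6.6 g/cm^3
kappa = thermal_conductivity(3.0, 0.36, 6.6)
```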
Results and discussion
Electrical transport properties
The XRD pattern and crystal structure of bulk PdS are shown in Fig. 1. The main diffraction peaks indicate a tetragonal structure (JCPDS no. 25-1234) with the space group P42/m (no. 84), and no other phases were obviously detected from the XRD pattern. The lattice parameters are a = b = 6.441 Å and c = 6.619 Å.
The temperature dependences of the σ, S, and PF of polycrystalline PdS are shown in Fig. 2. It can be seen that the values for σ, S and PF measured in the different setups are consistent with each other. The small deviations of the data around room temperature could be caused by the different errors of the two systems. In Fig. 2(a), the behavior of σ with changing temperature is complex. At high temperatures, σ decreases sharply with decreasing temperature until around 500 K, which is probably caused by the thermally activated bipolar effect in semiconductors. However, in the temperature range of 200-500 K, σ behaves like a constant: it does not show an obvious increase with decreasing temperature. This behavior is consistent with that of a metal, which was not observed previously. 14 Below 200 K, there is a significant change in the temperature dependence of σ, which decreases sharply as the sample is cooled. This reflects the nature of a typical semiconductor. This behavior at low temperatures can be seen more clearly in the inset of Fig. 2(a). Therefore, these contrasting behaviors imply that bulk PdS may have semiconductor-metal and metal-semiconductor transitions around 200 K and 500 K, respectively.
In Fig. 2(b), the temperature dependences of S and PF are shown. The value of S is negative, indicating that the majority charge carriers are electrons. With increasing temperature, the absolute value of S increases to a very large value of approximately 400 μV K⁻¹, without doping, at around 350 K, and then reaches a plateau until about 600 K. Unfortunately, there is an obvious decrease above 600 K, which may be caused by the thermal excitation of the carriers. This behavior of σ and S at high temperatures has been observed in many other intrinsic semiconductor systems and may be ameliorated by doping. 20 Notably, the PF value tends to increase over the entire temperature range, except the intermediate region. The maximum PF value is about 27 μW cm⁻¹ K⁻² at 800 K (the highest recorded temperature in this study). The PF value reported here is very large compared with values obtained for other thermoelectric sulfides. For example, it is about three times larger than the value of optimized Cu1.97S. 12 From these electrical properties, it is obvious that PdS is a potentially useful thermoelectric material.
The Hall effect was measured to give insight into the electrical transport properties of PdS. The temperature dependences of the carrier concentration (n_H) and Hall mobility (μ_H), evaluated from low temperature Hall measurements, are shown in Fig. 3. The Hall coefficient (R_H) is negative over the entire temperature range, indicating that the majority charge carriers are electrons, which is consistent with the negative Seebeck coefficient. The temperature dependent μ_H gently increases up to around 200 K, reaching a maximum value of 230 cm² V⁻¹ s⁻¹. This relatively high value of μ_H could be caused by the covalent bonding characteristics of the tetragonal structure. The value then remains almost unchanged up to 300 K. This behavior is consistent with the temperature dependence of σ below 300 K, as shown in Fig. 2(a). At low temperatures, μ_H fits the curve μ_H ∝ T^{3/2} well, which indicates that ionized impurity scattering is the dominant carrier scattering process. However, at high temperatures, a T^{−3/2} dependence is observed in PdS, suggesting that the major carrier scattering process has changed to acoustic phonon scattering. The temperature dependence of n_H is complex and counterintuitive, especially at low temperatures, which is most likely caused by magnetic transitions. However, the carrier concentration of PdS remains of the same order of magnitude throughout, which indicates that the carrier concentration of PdS has a weak temperature dependence below 300 K, as has also been observed in lead chalcogenides. 21
Heat transport properties
The temperature dependences of the thermal diffusivity and specific heat capacity used for the κ calculation are shown in Table 1. The obtained values for κ with changing temperature are shown in Fig. 4. It can be seen that κ increases sharply with increasing temperature, passes through a maximum (about 130 W m⁻¹ K⁻¹) at 38 K and finally decreases roughly following a T⁻¹ relation. This phenomenon indicates that bulk PdS is a normal crystalline compound. Generally, κ consists of an electronic part, κ_ele, and a lattice part, κ_lat. The electronic part κ_ele is proportional to the electrical conductivity σ through the Wiedemann-Franz relation: 22 κ_ele = LσT, where L is the Lorenz number (L = 2.44 × 10⁻⁸ W Ω K⁻² in theory for semiconductors). Here, κ_ele can be ignored below room temperature, as shown in the inset of Fig. 4. However, the contribution of κ_ele to the total κ increases with increasing temperature and reaches about 12% at 800 K. The high κ of PdS probably comes from the light atomic mass of sulfur and the strong chemical bonds in the crystal. In order to elucidate the reasons for the high thermal conductivity of PdS, the heat capacity, C_p, was measured at low temperatures. The results are shown in Fig. 5. The measured C_p value at 300 K is 0.35 J g⁻¹ K⁻¹, which is close to the theoretical value (0.36 J g⁻¹ K⁻¹). The inset of Fig. 5 displays the heat capacity (C_p) vs. T at low temperatures. The solid red line is a fitted curve based on the Debye model using the relation:
C_p = γT + βT³. 23

The total C_p includes the carrier contribution, γT, and the phonon contribution, βT³. The fitted parameters are γ = 0.34 mJ mol⁻¹ K⁻² and β = 0.13 mJ mol⁻¹ K⁻⁴. The small value of γ indicates that the electronic density of states near the Fermi level is quite low at low temperatures when compared with other thermoelectric materials (e.g. YbFe4Sb12, γ = 141.2 mJ mol⁻¹ K⁻²). 24 This finding is consistent with the low σ observed at low temperatures, as shown in Fig. 2(a).
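The low-temperature heat-capacity fit just described (C_p = γT + βT³ in standard notation) reduces to an ordinary linear regression of C_p/T against T². Below is a self-contained sketch of that fit; the helper is our own, not the authors' analysis code, and the test data are synthetic.

```python
def fit_debye_low_T(T, C):
    """Fit C_p = gamma*T + beta*T^3 by least squares on C_p/T vs. T^2.

    Returns (gamma, beta): ordinary linear regression y = gamma + beta*x
    with x = T^2 and y = C_p/T.
    """
    xs = [t * t for t in T]
    ys = [c / t for c, t in zip(C, T)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    gamma = my - beta * mx
    return gamma, beta
```

On noise-free synthetic data the regression recovers γ and β exactly; on real data the intercept of C_p/T at T² = 0 gives the electronic coefficient γ directly.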
The dimensionless figure of merit
Based on the measured temperature-dependent values for S, σ and κ, the dimensionless figure of merit (zT), which directly determines the energy conversion efficiency of a thermoelectric material, has been calculated. The results are shown in Fig. 6. The calculated value of zT has a positive temperature dependence over the entire measurement range, which is different from other thermoelectric materials, which show maximum peaks at certain temperatures. This phenomenon indicates that the thermoelectric performance of bulk PdS will be more efficient at higher temperatures. The value of zT reaches 0.33 at a temperature of 800 K (the highest recorded temperature in this study). This value is considerably high compared to that of many undoped thermoelectric sulfides at similar temperatures, such as Bi2S3 (zT ≈ 0.2 at 823 K). 25 However, the thermoelectric conversion efficiency is still low compared to that of high performance thermoelectric materials, despite the obvious uptrend at higher temperatures. Thus, it is extremely important to improve the thermoelectric conversion efficiency of bulk PdS. Based on our results, the intrinsically large PF is the most important feature of bulk PdS. Since only nominally undoped samples were studied in this initial report, further improvement of the PF through optimized doping should be a good route, as doping can change the density of states at the Fermi level according to the relation: 26
S = (π²/3)(k_B²T/q) [d ln σ(E)/dE]_{E=E_F}
(here q is the carrier charge and E_F is the Fermi energy). Another way to improve the power factor of PdS is through band structure engineering, as many studies have explored in recent years. For example, the zT of PbTe reached 1.5 at 773 K by distortion of the electronic density of states, 26 and a higher zT value of ≈1.8 was observed in PbTe1−xSex alloys by producing the convergence of many valleys at the desired temperatures. Furthermore, a successful approach of rationally tuning crystal structures in non-cubic materials has been proposed, which has enhanced the zT values of a few carefully selected chalcopyrites. 27 In addition, it can be expected that alloying between Pd and S will lead to a reduction in κ_lat due to the alloy scattering effect. At the same time, the κ of bulk PdS is too large compared to that of other thermoelectric materials with low thermal conductivity (Fig. 4). Therefore, the reduction of κ appears to be very important and useful for the purpose of improving zT. Alloying and nanostructuring are both optional routes to increase phonon scattering for the purpose of reducing κ effectively. 2 Many thermoelectric materials have been improved by these methods, such as the analogous binary PbS, where the zT value of nanostructured PbS is about twice as high as previously reported (zT ≈ 0.4). 9 SiGe is a well-known alloy for high temperature thermoelectric applications. Recently, by controlling their nanoscale structures, the zT values of both p- and n-type SiGe have been enhanced. 28,29 More importantly, nanostructuring has been proven to be an efficient method for lowering thermal conductivities without changing the electrical transport properties too much. 4 Therefore, controlling the nanostructure of binary PdS will be a useful way to improve the zT value.
Conclusions
In summary, measurements of the electrical and thermal transport properties of binary PdS have been carried out in the temperature range of 2-800 K. The large value of the power factor (27 μW cm⁻¹ K⁻²) and the maximum zT of 0.33 at about 800 K indicate the great potential thermoelectric performance of PdS as an undoped thermoelectric material. This study also predicts that κ_lat could be reduced through increased phonon scattering, which can be realized by nanostructuring, alloying or doping. These results suggest that binary bulk PdS has suitable properties to be a potential base thermoelectric material, and many PdS-based materials are expected to show good performance in thermoelectric applications.
Conflicts of interest
There are no conflicts to declare.
Fig. 2 Temperature dependence of the electrical transport properties of bulk PdS; (a) temperature dependence of the electrical conductivity (σ) from 3 to 800 K. The inset shows ρ versus T below room temperature. (b) Temperature dependence of the Seebeck coefficient (S) and the calculated power factor (PF = S²σ) from 3 to 800 K.

Fig. 3 Temperature dependences of the Hall carrier concentration (n_H) for PdS from 6 K to 300 K (black line) and the Hall mobility (μ_H) (olive line) in the same temperature range. The red solid lines are fitted curves, μ_H ∝ T^{3/2} for low temperatures and μ_H ∝ T^{−3/2} for high temperatures.

Fig. 4 Temperature dependence of the total thermal conductivity (κ) of bulk PdS. The inset illustrates the electronic thermal conductivity (κ_ele) vs. T in the temperature range between 3 and 800 K.

Fig. 5 Temperature dependence of the heat capacity (C_p) for bulk PdS from 1.8 K to 300 K. The dashed line is the 3nR line calculated using the Dulong-Petit model. The inset shows the heat capacity below 6 K. The solid red line is the fitted curve, calculated using C_p = γT + βT³.

Fig. 6 Temperature dependence of the dimensionless figure of merit (zT) from 2 K to 800 K. The maximum value of zT is 0.33 at 800 K.

This journal is © The Royal Society of Chemistry 2018
Table 1 Summary of the values for thermal diffusivity, D, heat capacity, C_p, and density, d, which were used to calculate the thermal conductivity, κ, of PdS

T (K):           300     350    400    450    500    550    600    650    700    750    800
D (cm² s⁻¹):     10.137  8.328  7.046  6.015  5.216  4.602  4.083  3.67   3.305  3.035  2.817
C_p (J g⁻¹ K⁻¹): 0.36 for all temperatures
d (g cm⁻³):      —
Acknowledgements

Lei Su acknowledges support from the Natural Science Foun…

Notes and references
1. L. E. Bell, Science, 2008, 321, 1457-1461.
2. G. J. Snyder and E. S. Toberer, Nat. Mater., 2008, 7, 105-114.
3. B. C. Sales, D. Mandrus and R. K. Williams, Science, 1996, 272, 1325-1328.
4. K. Biswas, J. He, I. D. Blum, C. I. Wu, T. P. Hogan, D. N. Seidman, V. P. Dravid and M. G. Kanatzidis, Nature, 2012, 489, 414-418.
5. Y. Z. Pei, X. Shi, A. Lalonde, H. Wang, L. D. Chen and G. J. Snyder, Nature, 2011, 473, 66-69.
6. X. Lu, D. T. Morelli, Y. Xia, F. Zhou, V. Ozolins, H. Chi, X. Y. Zhou and C. Uher, Adv. Energy Mater., 2013, 3, 342-348.
7. C. Wan, Y. Wang, N. Wang, W. Norimatsu, M. Kusunoki and K. Koumoto, Adv. Mater., 2010, 11, 044306.
8. L. D. Zhao, S. H. Lo, J. He, H. Li, K. Biswas, J. Androulakis, C. I. Wu, T. P. Hogan, D. Y. Chung and V. P. Dravid, J. Am. Chem. Soc., 2011, 133, 20476-20487.
9. S. Johnsen, J. He, J. Androulakis, V. P. Dravid, I. Todorov, D. Y. Chung and M. G. Kanatzidis, J. Am. Chem. Soc., 2011, 133, 3460-3470.
10. L. D. Zhao, J. He, S. Hao, C. I. Wu, T. P. Hogan, C. Wolverton, V. P. Dravid and M. G. Kanatzidis, J. Am. Chem. Soc., 2012, 134, 16327-16336.
11. Q. Tan, L. D. Zhao, J. F. Li, C. F. Wu, T. R. Wei, Z. B. Xing and M. G. Kanatzidis, J. Mater. Chem. A, 2014, 2, 17302-17306.
12. Y. He, T. Day, T. Zhang, H. Liu, X. Shi, L. D. Chen and G. J. Snyder, Adv. Mater., 2014, 26, 3974-3978.
13. J. C. W. Folmer, J. A. Turner and B. A. Parkinson, J. Solid State Chem., 1987, 68, 28-37.
14. I. J. Ferrer, P. Díaz-Chao, A. Pascual and C. Sánchez, Thin Solid Films, 2007, 515, 5783-5786.
15. M. Barawi, I. J. Ferrer, J. R. Ares and C. Sánchez, ACS Appl. Mater. Interfaces, 2014, 6, 20544-20549.
16. J. J. Bladon, J. Electrochem. Soc., 1996, 143, 1206-1213.
17. A. Zubkov, T. Fujino, N. Sato and K. Yamada, J. Chem. Thermodyn., 1998, 30, 571-581.
18. C. H. Yang, Y. Y. Wang, C. C. Wan and C. J. Chen, J. Electrochem. Soc., 1996, 143, 3521-3525.
19. A. Pascual, J. R. Ares, I. J. Ferrer and C. R. Sanchez, in International Conference on Thermoelectrics (ICT), 2003, pp. 376-379.
20. Y. Z. Pei, J. Lensch-Falk, E. S. Toberer, D. L. Medlin and G. J. Snyder, Adv. Funct. Mater., 2011, 21, 241-249.
21. W. W. Scanlon, Solid State Phys., 1959, 9, 83.
22. G. S. Kumar, G. Prasad and R. O. Pohl, J. Mater. Sci., 1993, 28, 4261-4272.
23. K. Gofryk, D. Kaczorowski, T. Plackowski, J. Mucha, A. Leithe-Jasper, W. Schnelle and Y. Grin, Phys. Rev. B: Condens. Matter Mater. Phys., 2007, 75, 1-10.
24. N. R. Dilley, E. D. Bauer, M. B. Maple, S. Dordevic, D. N. Basov, F. Freibert, T. W. Darling, A. Migliori, B. C. Chakoumakos and B. C. Sales, Phys. Rev. B: Condens. Matter Mater. Phys., 2000, 61, 4608-4614.
25. Z. H. Ge, B. P. Zhang, Z. X. Yu and J. F. Li, J. Mater. Res., 2011, 26, 2711-2718.
26. J. P. Heremans, V. Jovovic, E. S. Toberer, A. Saramat, K. Kurosaki, A. Charoenphakdee, S. Yamanaka and G. J. Snyder, Science, 2008, 321, 554-557.
27. J. Zhang, R. Liu, N. Cheng, Y. Zhang, J. Yang, C. Uher, X. Shi, L. D. Chen and W. Zhang, Adv. Mater., 2014, 26, 3848-3853.
28. X. W. Wang, H. Lee, Y. C. Lan, G. H. Zhu, G. Joshi, D. Z. Wang, J. Yang, A. J. Muto, M. Y. Tang and J. Klatsky, Appl. Phys. Lett., 2008, 93, 193121.
29. G. Joshi, H. Lee, Y. C. Lan, X. W. Wang, G. H. Zhu, D. Z. Wang, R. W. Gould, D. C. Cuff, M. Y. Tang and M. S. Dresselhaus, Nano Lett., 2008, 8, 4670-4674.
| [] |
[
"Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction",
"Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction"
] | [
"Takuma Yagi [email protected] \nThe University of Tokyo Tokyo\nJapan\n",
"Md Tasnimul Hasan \nThe University of Tokyo Tokyo\nJapan\n",
"Yoichi Sato [email protected] \nThe University of Tokyo Tokyo\nJapan\n"
] | [
"The University of Tokyo Tokyo\nJapan",
"The University of Tokyo Tokyo\nJapan",
"The University of Tokyo Tokyo\nJapan"
] | [] | Every hand-object interaction begins with contact. Despite predicting the contact state between hands and objects is useful in understanding hand-object interactions, prior methods on hand-object analysis have assumed that the interacting hands and objects are known, and were not studied in detail. In this study, we introduce a video-based method for predicting contact between a hand and an object. Specifically, given a video and a pair of hand and object tracks, we predict a binary contact state (contact or no-contact) for each frame. However, annotating a large number of hand-object tracks and contact labels is costly. To overcome the difficulty, we propose a semi-supervised framework consisting of (i) automatic collection of training data with motion-based pseudo-labels and (ii) guided progressive label correction (gPLC), which corrects noisy pseudo-labels with a small amount of trusted data. We validated our framework's effectiveness on a newly built benchmark dataset for hand-object contact prediction and showed superior performance against existing baseline methods. Code and data are available at https: //github.com/takumayagi/hand_object_contact_prediction. | null | [
"https://arxiv.org/pdf/2110.10174v1.pdf"
] | 239,050,534 | 2110.10174 | 8a681d9383fe0667a7d81ee9730988204e12b5dd |
Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction
Takuma Yagi [email protected]
The University of Tokyo Tokyo
Japan
Md Tasnimul Hasan
The University of Tokyo Tokyo
Japan
Yoichi Sato [email protected]
The University of Tokyo Tokyo
Japan
Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction
YAGI, HASAN, SATO: HAND-OBJECT CONTACT PREDICTION 1
Every hand-object interaction begins with contact. Although predicting the contact state between hands and objects is useful in understanding hand-object interactions, prior methods on hand-object analysis have assumed that the interacting hands and objects are known, and the contact state was not studied in detail. In this study, we introduce a video-based method for predicting contact between a hand and an object. Specifically, given a video and a pair of hand and object tracks, we predict a binary contact state (contact or no-contact) for each frame. However, annotating a large number of hand-object tracks and contact labels is costly. To overcome the difficulty, we propose a semi-supervised framework consisting of (i) automatic collection of training data with motion-based pseudo-labels and (ii) guided progressive label correction (gPLC), which corrects noisy pseudo-labels with a small amount of trusted data. We validated our framework's effectiveness on a newly built benchmark dataset for hand-object contact prediction and showed superior performance against existing baseline methods. Code and data are available at https://github.com/takumayagi/hand_object_contact_prediction.
Introduction
Recognizing how hands interact with objects is crucial to understand how we interact with the world. Hand-object interaction analysis contributes to several fields such as action prediction [10], rehabilitation [28], robotics [38], and virtual reality [17].
Every hand-object interaction begins with contact. In determining which hand-object pairs are interacting, it is important to infer when hands and objects are in contact. However, despite its importance, finding the beginning and the end of hand-object interaction has not received much attention. For instance, prior works on action recognition (e.g., [12]) attempt to recognize different types of hand-object interactions at the video clip level, i.e., recognizing one action for each video clip given as input. Some other works on action localization (e.g., [29]) can be used for detecting hand-object interactions, but localized action segments are not necessarily related to the beginning and the end of contact between hands and objects. Contact between a hand and an object has been studied in the context of 3D reconstruction of hand-object interaction [5,18]. However, they assumed that hands and objects are already interacting with each other. Only the moment when hands and objects are interacting in a stable grasp was targeted for analysis.

Figure 1: Overlap on Image = Contact?: Right hand (masked red region) grabs the onion (middle). Surrounding objects (cutting board, knife) overlap with the hand but are not in contact. While it is difficult to determine the contact state from a single image, we can ease the problem by looking at temporal context (left and right).
In this work, we tackle the task of predicting contact between a hand and an object from visual input. Predicting contact between a hand and an object from visual input is not trivial. For example, even if the hand area and the bounding box of an object overlap, it does not necessarily mean that the hand and the object are in contact (see Figure 1). In determining whether a hand and an object are in contact, it is essential to consider the spatiotemporal relationship between them. While some methods claim that the hand contact state can be classified by looking at hand shape [32,40], they did not explicitly predict the contact state between a specific pair of a hand and an object, limiting their utility.
We propose a video-based method for predicting binary contact states (contact or nocontact) between a hand and an object in every frame. We assume tracks of hands and objects specified by bounding boxes (hand-object tracks) as input. However, annotating a large number of hand-object tracks and their contact states can become too costly. To overcome this difficulty, we propose a semi-supervised framework consisting of (i) automatic training data collection with motion-based pseudo-labels and (ii) guided progressive label correction (gPLC) which corrects noisy pseudo-labels with a small amount of trusted data.
Given unlabeled videos, we apply off-the-shelf detection and tracking models to form a set of hand-object tracks. Then we assign pseudo-contact state labels to each track by looking at its motion pattern. Specifically, we assign a contact label when a hand and an object are moving in the same direction and a no-contact label when a hand is moving alone.
While generated pseudo-labels can provide valuable information for determining contact states with various types of objects when training a prediction model, the pseudo-labels also contain errors that hurt the model's performance. To alleviate this problem, we correct those errors under the guidance of an additional model trained on a small amount of trusted data. In gPLC, we train two networks, one on noisy labels and one on trusted labels. During training, we iteratively correct noisy pseudo-labels based on both networks' confidence scores. We use the small-scale trusted data to guide which labels to correct and yield reliable training labels for automatically extracted hand-object tracks.
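One correction pass of the gPLC idea can be sketched as a per-frame rule: flip a pseudo-label only when both the noisy-trained and the trusted-trained networks confidently disagree with it. This is a schematic of the guidance principle, not the authors' algorithm; the function name, the fixed threshold, and the single-pass form are our simplifications (the actual method tightens the threshold progressively over training rounds).

```python
import numpy as np

def gplc_correct(labels, p_noisy, p_clean, threshold=0.9):
    """One schematic label-correction pass in the spirit of gPLC.

    labels:  current (possibly noisy) binary labels, shape (T,)
    p_noisy: P(contact) from the model trained on noisy labels, shape (T,)
    p_clean: P(contact) from the model trained on trusted labels, shape (T,)

    A label is flipped only when BOTH models assign high confidence to the
    opposite label, so the trusted model guides which corrections are safe.
    """
    labels = np.asarray(labels).copy()
    for t in range(len(labels)):
        flipped = 1 - labels[t]
        conf_noisy = p_noisy[t] if flipped == 1 else 1 - p_noisy[t]
        conf_clean = p_clean[t] if flipped == 1 else 1 - p_clean[t]
        if conf_noisy > threshold and conf_clean > threshold:
            labels[t] = flipped
    return labels
```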
Since there was no benchmark suitable for this task, we newly annotated contact states for various types of interactions appearing in the EPIC-KITCHENS dataset [8,9], which includes in-the-wild cooking activities. We show that our prediction model achieves superior performance against frame-based models [32,40], and that the performance is further boosted by using motion-based pseudo-labels along with the proposed gPLC scheme.
Our contributions include: (1) A video-based method of predicting contact between a hand and an object leveraging temporal context; (2) A semi-supervised framework of automatic pseudo-contact state label collection and guided label correction to complement lack of annotations; (3) Evaluation on newly collected annotation over a real-world dataset.
Related Works
Reconstructing hand-object interaction Reconstruction of the spatial configuration of hands and their interacting objects plays a crucial role in understanding hand-object interaction. 2D segmentation [31] and 3D pose/mesh estimation [15,18,30,39,44] of hand-object interaction were studied actively in recent years. However, they assume that (1) 3D CAD models exist for initialization (except [18]) and (2) the hand is interacting with objects, making the methods inapplicable when the hand and object are not interacting with each other. While multiple datasets have appeared for hand-object interaction analysis [4,14,31,40], no dataset has focused on the entire process of interaction, including the beginning and termination of contact. It is worth mentioning DexYCB [6], which captured sequences of picking up an object. However, the performed action was very simple and their analysis focused on 3D pose estimation rather than contact modeling between hands and objects. We study the front stage of the hand-object reconstruction problem: whether the hand interacts with the object or not.
Hand-object contact prediction Contact prediction is a difficult problem because contact cannot be directly observed due to occlusions. To avoid using intrusive hand-mounted sensors, contact and force prediction from visual input has been studied [1,11,37,42]. For example, Pham et al. [37] present an RNN-based force estimation method trained on force and kinematics measurements from force transducers. These methods require a careful setup of sensors, making it hard to apply them in an unconstrained environment. Instead of precise force measurement, a few methods study contact state classification (e.g., no contact, self contact, other-people contact, object contact) from an image [32,40]. Shan et al. [40] collected a large-scale dataset of hand-object interaction along with annotated bounding boxes of hands and objects in contact. They train a network that detects hands and their contact state from their appearance. Narasimhaswamy et al. [32] extend the task to multi-class prediction. While their formulation is simple, they did not explicitly take the relationship between a hand and a specific object into account and were prone to false-positive predictions. To balance utility and convenience, we take the middle way between the two approaches: binary contact state prediction between a hand and an object specified by bounding boxes.
Learning from noisy labels Since dense labels are often costly to collect, methods to learn from large unlabeled data have been studied. While learning features from weak cues has been studied in object recognition [33] and instance segmentation [34], it has not been well studied in sequence prediction tasks. Generated pseudo-labels typically include noise that hurts the model's performance. Various approaches such as loss correction [19,35], label correction [43,45], sample selection [21], and co-teaching [16,26] have been proposed to deal with noisy labels. However, most methods assume feature-independent label noise, which is over-simplified, and only a few works study realistic feature-dependent label noise [7,45]. Zhang et al. [45] propose progressive label correction (PLC), which iteratively corrects labels based on the network's confidence score, with theoretical guarantees against feature-dependent noise patterns. Inspired by PLC [45], we propose gPLC, which iteratively corrects noisy labels using not only the prediction model but also a clean model trained on small-scale trusted labels.
Proposed Method
In contrast to prior works [32,40], we formalize the hand-object contact prediction problem as predicting the contact states between a hand and a specific object appearing in an image sequence. We assume video frames X = {X_1, ..., X_T}, hand instance masks H = {H_1, ..., H_T}, and target object bounding boxes O = {O_1, ..., O_T} as inputs, forming a hand-object track T = (X, H, O). Our goal is to predict a sequence of binary contact states ("no contact" or "contact") y = {y_1, ..., y_T} (y_t ∈ {0, 1}) given a hand-object track T. If any physical contact between the hand and the object exists, the binary contact state y_t is set to 1, otherwise 0. Although we do not explicitly model two-handed manipulation, we consider the presence of another hand as side information (see Section 3.3 for details).
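The track structure above can be represented with a small container; this is an illustrative sketch (the class and field names are our own, not from the paper):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

@dataclass
class HandObjectTrack:
    """A hand-object track T = (X, H, O) with optional per-frame labels y."""
    frames: List[str]                  # ids/paths of video frames X_1..X_T
    hand_masks: List[object]           # per-frame binary hand masks H_1..H_T
    object_boxes: List[Box]            # per-frame object boxes O_1..O_T
    labels: Optional[List[Optional[int]]] = None  # 1=contact, 0=no contact, None=unlabeled

    def __post_init__(self):
        T = len(self.frames)
        assert len(self.hand_masks) == T and len(self.object_boxes) == T
        if self.labels is None:
            self.labels = [None] * T   # frames start unlabeled

    def labeled_frames(self) -> List[int]:
        """Indices of frames that carry a (pseudo-)label."""
        return [t for t, y in enumerate(self.labels) if y is not None]
```

Keeping `None` as a third label state matters later: pseudo-labels are assigned only to a subset of frames, and unlabeled frames must be excluded from the loss.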
However, collecting a large number of hand-object tracks and contact states for training is costly. We address this problem with automatic pseudo-label collection based on motion analysis and a semi-supervised label correction scheme.
Pseudo-Label Generation from Motion Cues
We automatically detect hand-object tracks and assign pseudo-labels to them based on two critical assumptions: (i) when a hand and an object are in contact, they exhibit similar motion patterns; (ii) when a hand and an object are not in contact, the hand moves while the object remains static (see Figure 2, left, for an illustration). Because these assumptions are simple yet applicable regardless of object appearance and motion direction, we can use the resulting motion-based pseudo-labels for training to achieve generalization to novel objects.
Hand-object track generation Given a video clip, we first use the hand-object detection model [40] to detect bounding boxes of hands and candidate objects in each frame. Note that a detected object's contact state is unknown and only objects that overlap with hands are detected. We then apply a hand segmentation model trained on the EGTEA dataset [27] to each hand detection to obtain segmentation masks.
Next, we associate adjacent detections using a Kalman filter-based tracker [3]. However, since [40] does not detect objects away from the hand, we extrapolate object tracks one second before and after using a visual tracker [25], producing H and O. Finally, we construct the hand-object track T by looking for pairs of hand and object tracks with a spatial overlap between the hand mask and the object bounding box.
Contact state assignment We find contact (and no-contact) moments by looking at the correlation between hand and object motion. First, we estimate optical flow between adjacent frames. Since we are interested in the relative movement of hands and objects against the background, we compute the background-motion-compensated optical flow and its magnitude M via homography estimation. Specifically, we sample flow vectors outside the detected bounding boxes as matches between frames and estimate the homography using RANSAC [13].
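The compensation step can be sketched as follows. The paper estimates the homography robustly with RANSAC [13]; for brevity this sketch fits it with a plain direct linear transform (DLT) over already-sampled matches, so the robust loop is omitted and all function names are our own:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT fit of a 3x3 homography from point matches (assumed inlier-only).

    The paper samples matches from flow vectors outside detections and uses
    RANSAC; the robust loop is skipped here for brevity.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector of A
    return H / H[2, 2]

def compensated_magnitude(flow, H, coords):
    """Residual flow magnitude M after subtracting camera-induced motion.

    flow: (N, 2) flow vectors at pixel coordinates coords: (N, 2).
    """
    pts = np.hstack([coords, np.ones((len(coords), 1))])  # homogeneous coords
    warped = pts @ H.T
    warped = warped[:, :2] / warped[:, 2:3]
    background_motion = warped - coords   # displacement explained by the camera
    residual = flow - background_motion
    return np.linalg.norm(residual, axis=1)
```

If the scene motion is purely camera-induced, the residual magnitude is near zero everywhere; independently moving hands and objects stand out as high-magnitude regions.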
Let F = (f_ij) = I(M > σ) be the binary mask of the foreground moving region, i.e., pixels whose motion magnitude exceeds a threshold σ. For each binary hand and object region mask H = (h_ij) and O = (o_ij), we calculate the ratio of the moving region within each region:

h_r = Σ_ij (h_ij · f_ij) / Σ_ij h_ij,   o_r = Σ_ij (o_ij · f_ij) / Σ_ij o_ij.
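These two ratios can be computed directly from the binary masks; a minimal numpy sketch (names are illustrative):

```python
import numpy as np

def moving_ratios(fg_moving, hand_mask, obj_mask):
    """Fraction of each region covered by the foreground-moving mask F = I(M > sigma).

    All inputs are binary HxW arrays; returns (h_r, o_r) as defined above.
    """
    F = fg_moving.astype(bool)
    h_r = (hand_mask.astype(bool) & F).sum() / max(hand_mask.sum(), 1)
    o_r = (obj_mask.astype(bool) & F).sum() / max(obj_mask.sum(), 1)
    return float(h_r), float(o_r)
```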
We assign a contact label to a frame if IoU(H, O) > 0 and both h_r and o_r are above certain thresholds. Similarly, we assign a no-contact label if IoU(H, O) = 0, or if h_r is above its threshold but o_r is below. However, the above procedure may wrongly assign contact labels when the motion directions of hand and object differ (e.g., when the object is handled by the other hand). Thus we calculate the cosine similarity between the average motion vectors of the hand and object regions and assign a contact label if it is above a threshold, otherwise a no-contact label. To deal with errors in flow estimation, we cancel the assignment if the background motion ratio b_r = Σ_ij (b_ij · f_ij) / Σ_ij b_ij (where B = (b_ij) denotes the background mask, i.e., the region other than H and O) is above a threshold.
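Putting the pieces together, the per-frame assignment rule can be sketched as below. The threshold values are illustrative placeholders (the paper tunes separate values for the contact and no-contact rules on a validation set):

```python
import numpy as np

def assign_pseudo_label(iou, h_r, o_r, b_r, hand_flow, obj_flow):
    """Frame-level pseudo-label: 1 (contact), 0 (no contact), or None (skip)."""
    if b_r >= 0.2:              # abrupt background motion: flow unreliable, cancel
        return None
    if h_r < 0.7:               # the hand must be clearly moving
        return None
    if iou == 0:
        return 0                # hand moves with no hand-object overlap
    if o_r >= 0.2:              # both move: check that they move the same way
        cos = float(np.dot(hand_flow, obj_flow) /
                    (np.linalg.norm(hand_flow) * np.linalg.norm(obj_flow) + 1e-8))
        return 1 if cos > 0.5 else 0
    if o_r < 0.05:
        return 0                # hand moves while the object stays static
    return None                 # ambiguous object motion: leave unlabeled
```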
Pseudo-label extension The above procedure assigns labels on hand-moving frames, but not when hands move slowly or are still. To also assign labels to those frames, we extend the assigned contact states as long as the relationship between the hand and the object does not change from the frame at which the pseudo-label was assigned (see Figure 2, right).
To track the hand-object distance, we find point trajectories in the hand and object regions that satisfy forward-backward consistency [41]. We then calculate the distance d between each hand-object point pair and compare the average distance over pairs across frames. We extend the last contact state while the average distance stays within a certain range of its value at the starting frame. Figure 3 shows an example of the generated pseudo-labels.
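A minimal sketch of the extension step, assuming point trajectories have already been tracked; the relative tolerance and the forward-only sweep are our simplifications:

```python
import numpy as np

def extend_labels(labels, hand_pts, obj_pts, tol=0.1):
    """Propagate the last assigned label while hand-object distance is stable.

    labels: list of 1/0/None per frame; hand_pts/obj_pts: (T, K, 2) tracked points.
    We compare the mean pairwise distance to its value at the last labeled frame
    and keep extending while it stays within a relative tolerance.
    """
    def mean_dist(t):
        d = np.linalg.norm(hand_pts[t][:, None, :] - obj_pts[t][None, :, :], axis=-1)
        return float(d.mean())

    out = list(labels)
    last_label, ref = None, None
    for t in range(len(out)):
        if out[t] is not None:
            last_label, ref = out[t], mean_dist(t)   # restart from a labeled frame
        elif last_label is not None:
            if abs(mean_dist(t) - ref) <= tol * ref:
                out[t] = last_label                  # relation unchanged: extend
            else:
                last_label, ref = None, None         # distance changed: stop
    return out
```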
Guided Progressive Label Correction (gPLC)
While the generated pseudo-labels carry useful information for determining contact states, they also include errors induced by irregular motion patterns. The model may overfit to this noise if we train it naively on these labels. To extract reliable labels from the noisy pseudo-labels, we propose a semi-supervised procedure called guided progressive label correction (gPLC), which works with a small number of trusted labels (see Figure 4 for an overview).
Algorithm 1 Guided Progressive Label Correction (gPLC)

Input: noisy dataset S̃ = {(x_i, ỹ_i)}_{i=1}^{N_p}, trusted dataset S = {(x_i, y_i)}_{i=1}^{N_t}, clean dataset Ŝ = ∅, noisy model f(x), clean model g(x), initial and end thresholds (δ_0, δ_end), correction threshold δ = δ_0, flip ratio α, step size β, supervision interval m, total rounds N
Output: trained model f(x)

  Ŝ ← {(x_i, ŷ_i)}_{i=1}^{N_p}, where ŷ_i is a list of empty elements of the same size as ỹ_i   // initialize clean dataset
  for n ← 1, ..., N do
    for i ← 1, ..., N_p do
      z_i ← ỹ_i   // keep previous labels
      for t ← 1, ..., |ỹ_i| do
        if ỹ_i^t ∈ {0, 1} and |f(x_i^t) − 1/2| ≥ 1/2 − δ and I{f(x_i^t) ≥ 1/2} = I{g(x_i^t) ≥ 1/2} then
          ỹ_i^t, ŷ_i^t ← I{f(x_i^t) ≥ 1/2}   // refine or add label by confident prediction
        end if
      end for
      Train f(x) on (x_i, ỹ_i) and g(x) on (x_i, ŷ_i)   // update models
      if #iterations mod m = 0 then
        Train f(x) and g(x) on S   // fine-tune on trusted set
      end if
    end for
    if Σ_{i,t} I{ỹ_i^t ≠ z_i^t} < α · Σ_{i,t} I{ỹ_i^t ∈ {0, 1}} then
      δ ← min(δ + β, δ_end)   // loosen threshold when few labels were flipped
    end if
  end for

We assume a noisy dataset S̃ with generated pseudo-labels and a trusted dataset S with manually annotated trusted labels. We train two identical networks, called the noisy model and the clean model. The noisy model f is trained on both S̃ and S, while the clean model g is trained on S and a clean dataset Ŝ introduced below. We perform label correction on the generated pseudo-labels in S̃ using the predictions of both models.
As the training of f proceeds, f gives high-confidence predictions on some samples. Similar to PLC [45], we correct the labels on which f is highly confident. Note that we correct labels frame-wise, assuming the output contact probability is produced per frame. In gPLC, we correct a label only when f is highly confident and does not contradict the clean network g's prediction. Because S̃ is generated from motion cues, the decision boundary of f may differ from that of the optimal classifier, so label correction by f alone would not converge to the desired decision boundary. Therefore, we guide the correction process using g. Starting with a strict threshold δ, we iteratively correct labels during training. When the number of corrected labels gets small enough, we increase δ to loosen the threshold and continue the same procedure. However, since g is trained on small-scale data, it risks overfitting to S. To prevent this, we iteratively add samples on which f is highly confident to another dataset, the clean dataset Ŝ, and feed them to g so that g also grows through training. Initially Ŝ contains no labels, but high-confidence labels are added over time. See Algorithm 1 for details. In our implementation, f(x) and g(x) are pre-trained on S̃ and S before starting the gPLC iterations.
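One correction sweep of gPLC can be sketched as follows, given the two models' per-frame contact probabilities for a track; the training updates and the threshold schedule are omitted, and the function name and signature are ours:

```python
def gplc_correct(noisy_labels, f_probs, g_probs, delta):
    """One gPLC sweep over a track's pseudo-labels.

    A label is flipped to the noisy model f's prediction only when f is
    confident (|f - 1/2| >= 1/2 - delta) AND the clean model g agrees.
    Returns the corrected labels and the number of flips, which drives
    the threshold-loosening schedule in Algorithm 1.
    """
    corrected, flips = list(noisy_labels), 0
    for t, y in enumerate(noisy_labels):
        if y not in (0, 1):
            continue                      # only correct labeled frames
        pred_f = int(f_probs[t] >= 0.5)
        pred_g = int(g_probs[t] >= 0.5)
        confident = abs(f_probs[t] - 0.5) >= 0.5 - delta
        if confident and pred_f == pred_g and pred_f != y:
            corrected[t] = pred_f
            flips += 1
    return corrected, flips
```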
Contact Prediction Model
Contact states are predicted by an RNN-based model that takes RGB images, optical flow, and mask information as input (see Figure 5). For each modality, we crop the input to the union of the hand region and the object bounding box. The foreground mask is a four-channel binary mask that encodes the presence of the target hand instance mask, the target object bounding box, other detected hand instance masks, and other detected object bounding boxes. The latter two channels prevent confusion when the target hand or object interacts with other entities. RGB and flow images are fed into separate encoder branches, concatenated in the middle, and then passed to another encoder. Both encoders consist of several convolutional blocks, each consisting of a 3×3 convolution followed by a ReLU and a LayerNorm layer [2], and a 2×2 max-pooling layer. The foreground mask encoder consists of three convolutional layers, each followed by a ReLU layer, producing a 1×1 feature map encoding the positional relationship between the target hand, the target object, and the other hands and objects. After concatenating the features extracted from the foreground mask, the contact probability is computed through four bi-directional LSTM layers and a three-layer MLP.
We train the network with a standard binary cross-entropy loss, weighted by the ratio of each label in the training data. We do not propagate errors for unlabeled frames.
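A sketch of this loss, assuming labels use None for unlabeled frames; the weight values would be derived from the label frequencies in the training data (names and signature are illustrative):

```python
import math

def masked_weighted_bce(probs, labels, pos_weight, neg_weight):
    """Binary cross-entropy over labeled frames only, weighted per class.

    probs: per-frame contact probabilities in (0, 1); labels: 1/0/None.
    Unlabeled frames contribute no loss (and hence no gradient).
    """
    total, n = 0.0, 0
    for p, y in zip(probs, labels):
        if y is None:
            continue                      # skip unlabeled frames entirely
        p = min(max(p, 1e-7), 1 - 1e-7)   # clamp for numerical stability
        w = pos_weight if y == 1 else neg_weight
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
        n += 1
    return total / max(n, 1)
```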
Experiments
Since there was no benchmark suitable for our task, we newly annotated hand-object tracks and contact states between hands and objects in videos from the EPIC-KITCHENS dataset [8]. We collected tracks with various objects (e.g., container, pan, knife, sink). The annotation amounts to 1,200 tracks (67,000 frames) in total. We split the data into a training set (240 tracks), a validation set (260 tracks), and a test set (700 tracks). For the noisy dataset, we generated 96,000 tracks with motion-based pseudo-labels.
Implementation Details
We used FlowNet2 [20] for optical flow estimation. We used Adam [22] for optimization with a learning rate of 3e-4. We trained the network for 500,000 iterations with a batch size of one and selected the best model by frame accuracy on the validation set. The hyperparameters were set to δ 0 = 0.05, δ end = 0.25, α = 0.01, β = 0.025, m = 2500.
Evaluation Metrics
We prepared several metrics to evaluate performance. Frame Accuracy: frame-wise accuracy balanced by the ground-truth label ratio; Boundary Score: F-measure of boundary detection, computed via bipartite graph matching between ground-truth and predicted boundaries [36], where a predicted boundary counts as correct if it is within six frames of a ground-truth boundary; Peripheral Accuracy: frame-wise accuracy within six frames of a ground-truth boundary; Edit Score: segmental metric using the Levenshtein distance between segments [23], where both contact and no-contact labels are treated as foreground; Correct Track Ratio: the ratio of tracks with frame accuracy above 0.9 and a boundary score of 1.0.
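As an example of the segmental metrics, the edit score can be sketched as a plain Levenshtein distance over run-length segments; normalization details may differ from the benchmark implementation:

```python
def segments(labels):
    """Collapse a frame-wise label sequence into its run-length segments."""
    segs = []
    for y in labels:
        if not segs or segs[-1] != y:
            segs.append(y)
    return segs

def edit_score(pred, gt):
    """Segmental edit score in [0, 1]: 1 minus the normalized Levenshtein
    distance between segment sequences. Both contact and no-contact segments
    count as foreground; boundary tolerance is not applied here.
    """
    a, b = segments(pred), segments(gt)
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i
    for j in range(n + 1):
        D[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1, D[i - 1][j - 1] + cost)
    return 1.0 - D[m][n] / max(m, n, 1)
```

A prediction that places a boundary a few frames off still yields the same segment sequence, so the edit score rewards getting the sequence of states right rather than the exact boundary positions.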
Baseline Methods
We evaluated several baseline methods. Fixed: always predicts "contact"; IoU: computes the mask IoU between the input hand mask and object bounding box and predicts contact if the score is larger than zero, otherwise no contact; ContactHands [32]: predicts contact if the detected hand's contact state is "object"; Shan-Contact [40]: predicts contact if the corresponding hand's contact state prediction is "portable"; Shan-Bbox [40]: predicts contact if there is enough overlap between the detected object bounding box and the input object bounding box; Shan-Full [40]: combines the predictions of Shan-Contact and Shan-Bbox; Supervised: our proposed prediction model trained on trusted data alone. For the Shan-* baselines, we used the 100K+ego pre-trained model provided by the authors, which is trained on egocentric video datasets including EPIC-KITCHENS.
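The IoU baseline is simple enough to state exactly; a numpy sketch, rasterizing the object box onto the mask grid (names are ours):

```python
import numpy as np

def iou_baseline(hand_mask, obj_box):
    """The IoU baseline: predict contact iff hand mask and object box overlap.

    hand_mask: HxW binary array; obj_box: (x1, y1, x2, y2) in pixel coords.
    Returns 1 (contact) or 0 (no contact).
    """
    box_mask = np.zeros_like(hand_mask, dtype=bool)
    x1, y1, x2, y2 = obj_box
    box_mask[y1:y2, x1:x2] = True         # rasterize the box onto the grid
    hand = hand_mask.astype(bool)
    inter = (hand & box_mask).sum()
    union = (hand | box_mask).sum()
    iou = inter / union if union else 0.0
    return int(iou > 0)
```

Any pixel-level overlap triggers a contact prediction, which is exactly why this baseline produces false positives when a hand merely passes over an object.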
Results
Quantitative results We report the performance in Table 1. Our proposed method consistently outperforms the baseline models on all metrics, achieving double the correct track ratio of IoU, which relies on the overlap between hand and object bounding boxes. The frame-based methods (ContactHands, Shan-*) performed equal to or worse than IoU, producing many false-positive predictions. These results suggest that previous methods claiming contact state prediction fail to infer physical contact between hands and objects. While Supervised performed well, gPLC further boosted the performance, especially the boundary score, by leveraging diverse motion-based cues with label correction. Qualitative results Figure 6 shows qualitative results. As shown at the top, our method distinguishes contact and no-contact states by looking at the interaction between hands and objects, while baseline methods yield false-positive predictions based on box overlap. The middle shows a typical no-contact case of a hand floating above an object. Our model, trained on motion-based pseudo-labels, avoids producing a false-positive prediction.
Comparison against other robust learning methods To show the effectiveness of the proposed gPLC, we report ablations against other robust/semi-supervised learning methods (see Table 2, top). As expected, training only on motion-based pseudo-labels performed worse due to labeling errors. Joint training on noisy and trusted labels gives a marginal gain over the supervised model, but the boundary score remains low since the model overfits to pseudo-label noise. We also applied the existing label correction method [45] to a single network with fine-tuning on trusted labels, but its performance was almost equal to joint training, suggesting that label correction with a single network does not yield good corrections. We also tried typical pseudo-labeling [24] without motion-based labels; it showed only a marginal improvement over the supervised baseline, suggesting that our motion-based pseudo-labels are necessary for better generalization.
Effect of input modality The bottom of Table 2 reports the ablation results for the input modalities. We observed that using RGB images alone hurts the boundary score, suggesting the difficulty of determining contact state changes without motion information.
In contrast, the optical flow-based model achieved nearly the same performance as the full model, suggesting that motion information is crucial for accurate prediction.

Figure 7: Accuracy of noisy labels when initialized with corrupted ground-truth labels. The horizontal axis shows elapsed epochs ("0" denotes the initial labels); the vertical axis shows frame accuracy (solid) and boundary score (dashed).

Figure 8: Accuracy of noisy labels when initialized with motion-based pseudo-labels. The horizontal axis shows elapsed epochs; the vertical axis shows mean frame accuracy per track. Non-labeled frames are ignored.
Error analysis While our method better predicts contact states by utilizing the rich supervision from motion-based pseudo-labels, we observed several failure patterns. As shown at the bottom of Figure 6, our method often ignored contacts when a person instantly touched an object without producing apparent object motion. We also observed failures due to unfamiliar grasps, complex in-hand motion, and failure in determining object regions (see the supplementary material for more results). These errors indicate the limitation of motion-based pseudo-labels, which are assigned only when clear joint motion is observed. To better deal with subtle or complex hand motions, additional supervision or rules for such patterns may be required.
How does gPLC correct noisy labels? To understand the behavior of gPLC, we measured how gPLC corrects labels during training. We included the validation set in the training data with two types of initial labels: (i) randomly corrupted ground-truth labels (with three corruption ratios c_r = 0.1/0.2/0.5); (ii) motion-based pseudo-labels.
We trained the full model and measured label accuracy at every epoch. First, gPLC succeeded in correcting randomly corrupted labels even at the high corruption ratio of 0.5 (see Figure 7). However, at the small corruption ratio of 0.1, gPLC made some wrong corrections, meaning that both the noisy model and the clean model got the prediction wrong. Improved boundary scores show that gPLC can iteratively suppress inconsistent boundary errors. In the more realistic case of motion-based pseudo-labels, pseudo-labels were assigned to around 44% of the total frames, with an initial mean frame accuracy of 91.4% on the labeled frames. While gPLC reduced the error rate by 20%, it sometimes wrongly flipped contact states, which may have harmed the final performance (see Figure 8). Overall, these results indicate that gPLC effectively corrects noisy labels during training.
Conclusion
We have presented a simple yet effective method for predicting the contact state between hands and objects. We have introduced a semi-supervised framework of motion-based pseudo-label generation and guided progressive label correction that corrects noisy pseudo-labels guided by a small amount of trusted data. We have newly collected annotations for evaluation and showed the effectiveness of our framework against several baseline methods.

Table 3: Hyperparameters used for pseudo-label generation.

           | σ   | hand      | object     | background | motion direction
Contact    | 2.0 | h_r ≥ 0.7 | o_r ≥ 0.2  | b_r < 0.2  | sim(d_h, d_o) > 0.5
No-contact | 1.0 | h_r ≥ 0.7 | o_r < 0.05 | b_r < 0.2  | sim(d_h, d_o) < 0.0
A.1 Details on Pseudo-Label Generation
Preprocessing We extracted frames from the videos at either 30 or 25 fps, half of the original frame rate. We processed all frames at a resolution of 854 × 480.
Hyperparameters In the implementation, we used different values of σ and different motion thresholds for contact detection and no-contact detection. The hyperparameters are summarized in Table 3, where d_h and d_o denote the average motion directions of the hand and object regions. We tuned the hyperparameters on the validation set.
For pseudo-label extension, we tracked at most 100 points for each hand and object to estimate the hand-object distance.

Additional examples Figure 9 shows additional pseudo-label generation results. As seen in the figures, our procedure assigns reliable pseudo-labels across various types of interactions. However, a few tracking errors are included (e.g., the rightmost frame in the second example) and the label assignment is not exhaustive, suggesting the need for correction.
A.2 Model Details
Network architecture Figure 10 shows the architecture of the proposed contact prediction model. Input frames are passed in one by one, and temporal dependencies are captured by the bidirectional LSTM layers.
Training During training, if a hand-object track is long, we randomly crop it to a maximum length of 105 frames to fit into GPU memory.
Baseline models For ContactHands and Shan-*, we used the pre-trained models provided by the authors: the combined model for the former and, for the latter, the model trained on the 100DOH dataset and egocentric datasets. In these baselines, we link a predicted hand instance mask to a ground-truth hand instance mask if their IoU is above 0.5. In Shan-Bbox, we predict contact if the IoU between a predicted object bounding box and the ground-truth bounding box is larger than 0.5. Shan-Full combines Shan-Bbox and Shan-Contact with the following rules: (i) if the IoU between the input hand instance mask and the input object bounding box is zero, predict no contact; (ii) if Shan-Bbox predicts no contact, follow its prediction; (iii) otherwise, use the prediction of Shan-Contact. We observed improved performance when combining predictions based on object-in-contact detection and contact state prediction.
A.3 Dataset Details
Pseudo-labels We used 96,000 tracks (9 million frames) with bounding boxes and pseudo-labels for training. Pseudo-labels were assigned to 37.3% of the total frames.
A.4 Additional Experimental Results
Effect of pseudo-label set size Table 4 shows ablation results for varying the noisy dataset size. We sampled 1%, 5%, and 25% of the full noisy dataset and trained with the proposed gPLC algorithm. The results support the fact that training on large-scale pseudo-labeled data helps generalization.
Additional qualitative examples Figure 11 shows additional contact prediction results. Our model focuses on motion rather than spatial overlap to infer contact states. Performance on novel objects is boosted thanks to large-scale pseudo-label training.
We also show additional failure cases in Figure 12. We observed failures due to unfamiliar grasps (top), no movement (middle), and subtle hand movement (bottom). Since we focus on the holistic foreground motion of hands and objects, it is difficult to predict contacts in fine-grained manipulation.

Effect of input modality Figure 13 shows prediction examples for the different input modalities. In general, the model trained on RGB input alone tends to make uncertain predictions near boundaries (Figure 13, top). Together with its lower boundary score, this indicates that distinguishing a contact state from a single image is difficult. The model trained on flow input alone generally behaves similarly to the proposed model; the difference appears when there is no motion in the scene. With no or only subtle motion, the flow model has no clue except temporal context to predict contacts, and RGB images become the only clue (Figure 13, middle). Motion information contributed to accurate prediction in most cases but sometimes failed when complex motion patterns were observed (Figure 13, bottom).

Figure 9: Additional examples of generated pseudo-labels. (Top) Gray and dark gray bars indicate no-contact/contact labels; otherwise no labels are assigned. (Bottom) Representative frames. Red, blue, and green regions denote moving hand, object, and background regions, respectively. Note that a few tracking errors are included in these tracks (e.g., the rightmost frame in the second example). Refer to the video visualization for details.

Figure 10: Detailed network architecture of the contact prediction model (per frame). C denotes a convolutional layer with filter size and number of channels, followed by a ReLU layer. MP denotes max pooling with filter size. LN denotes a layer normalization layer. FC denotes a fully-connected layer with the number of units, followed by a ReLU layer (except the last layer).
Figure 2: (Left) Pseudo-label generation from motion cues. (Right) Pseudo-label extension based on hand-object distance.
Figure 3: Example of generated pseudo-labels. (Top) Gray and dark gray bars indicate no-contact/contact labels; otherwise no labels are assigned. (Bottom) Representative frames. Red, blue, and green regions denote moving hand, object, and background regions, respectively. In the rightmost frame, no label is assigned because of abrupt background motion.
Figure 4: Overview of guided progressive label correction (gPLC).
Figure 5: Architecture of the contact prediction model.
Figure 6: Qualitative examples. The upper chart shows the ground-truth contact state and each model's prediction (gray and blue regions indicate contact, otherwise no contact), with the contact probability as a black line. The lower images correspond to the blue vertical lines in the chart, from left to right; red and blue boxes represent the input hand and object bounding boxes.
Figure 11: Additional qualitative examples. Our model better predicts the correct contact state change points and avoids false positives in difficult cases of image-level overlap between hand and objects. Refer to the video visualization for details.
Figure 12: Additional failure examples.
Figure 13: Qualitative results for different input modalities.
Table 2: Ablations on input modality and other robust learning methods.
Amount of pseudo-labels | Frame Acc. | Boundary Score | Peripheral Acc. | Edit Score | Correct Ratio
0% (supervised-train)   | 0.770      | 0.563          | 0.649           | 0.718      | 0.394
1%                      | 0.784      | 0.595          | 0.728           | 0.729      | 0.397
5%                      | 0.803      | 0.620          | 0.737           | 0.747      | 0.427
25%                     | 0.818      | 0.651          | 0.725           | 0.772      | 0.467
100% (proposed)         | 0.836      | 0.681          | 0.730           | 0.793      | 0.519

Table 4: Ablations on noisy dataset size.

Trusted labels We annotated 67,064 frames across 1,200 tracks. We did not annotate hand instance masks, since the segmentation network described in Section 3.1 produced reliable results. The average track length was 56 frames. To evaluate whether the model can distinguish touched and untouched objects, we included tracks that are stably in contact as well as untouched tracks: 284 tracks in constant contact, 670 tracks with mixed contact states, and 246 tracks never in contact.
AcknowledgementsThis work was supported by JST AIP Acceleration Research Grant Number JPMJCR20U1, JSPS KAKENHI Grant Number JP20H04205 and JP21J11626, Japan. TY was supported by Masason Foundation. We are grateful for the insightful suggestions and feedback from the anonymous reviewers.
[1] Shuichi Akizuki and Yoshimitsu Aoki. Tactile logging for understanding plausible tool use based on human demonstration. In Proceedings of the British Machine Vision Conference, page 334, 2019.
[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. Computing Research Repository, abs/1607.06450, 2016.
[3] Alex Bewley, Zongyuan Ge, Lionel Ott, Fabio Ramos, and Ben Upcroft. Simple online and realtime tracking. In Proceedings of the IEEE International Conference on Image Processing, pages 3464-3468, 2016.
[4] Samarth Brahmbhatt, Cusuh Ham, Charles C. Kemp, and James Hays. ContactDB: Analyzing and predicting grasp contact via thermal imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8709-8719, 2019.
[5] Zhe Cao, Ilija Radosavovic, Angjoo Kanazawa, and Jitendra Malik. Reconstructing hand-object interactions in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 12417-12426, 2021.
[6] Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S. Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, et al. DexYCB: A benchmark for capturing hand grasping of objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9044-9053, 2021.
[7] Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, and Pheng-Ann Heng. Beyond class-conditional assumption: A primary attempt to combat instance-dependent label noise. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
[8] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Scaling egocentric vision: The EPIC-KITCHENS dataset. In Proceedings of the European Conference on Computer Vision, pages 720-736, 2018.
[9] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Jian Ma, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Rescaling egocentric vision. International Journal of Computer Vision, 2021.
[10] Eadom Dessalene, Chinmaya Devaraj, Michael Maynord, Cornelia Fermuller, and Yiannis Aloimonos. Forecasting action through contact representations from first person video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[11] Kiana Ehsani, Shubham Tulsiani, Saurabh Gupta, Ali Farhadi, and Abhinav Gupta. Use the force, Luke! Learning to predict physical forces by simulating effects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 224-233, 2020.
[12] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
[13] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.
[14] Guillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, and Tae-Kyun Kim. First-person hand action benchmark with RGB-D videos and 3D hand pose annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 409-419, 2018.
[15] Shreyas Hampali, Mahdi Rad, Markus Oberweger, and Vincent Lepetit. HOnnotate: A method for 3D annotation of hand and object poses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3196-3206, 2020.
[16] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Neural Information Processing Systems, 2018.
[17] Shangchen Han, Beibei Liu, Randi Cabezas, Christopher D. Twigg, Peizhao Zhang, Jeff Petkau, Tsz-Ho Yu, Chun-Jung Tai, Muzaffer Akbay, Zheng Wang, et al. MEgATrack: Monochrome egocentric articulated hand-tracking for virtual reality. ACM Transactions on Graphics, 39(4):87:1, 2020.
[18] Yana Hasson, Gul Varol, Dimitrios Tzionas, Igor Kalevatykh, Michael J. Black, Ivan Laptev, and Cordelia Schmid. Learning joint reconstruction of hands and manipulated objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11807-11816, 2019.
Using trusted data to train deep networks on labels corrupted by severe noise. Dan Hendrycks, Mantas Mazeika, Duncan Wilson, Kevin Gimpel, Advances in Neural Information Processing Systems. 31Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In Advances in Neural Information Processing Systems, volume 31, 2018.
Flownet 2.0: Evolution of optical flow estimation with deep networks. Eddy Ilg, Nikolaus Mayer, Margret Saikia, Alexey Keuper, Thomas Dosovitskiy, Brox, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionEddy Ilg, Nikolaus Mayer, T Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2462 -2470, 2017.
Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, Li Fei-Fei, International Conference on Machine Learning. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning, pages 2304-2313, 2018.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, International Conference on Learning Representations. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Segmental spatiotemporal cnns for fine-grained action segmentation. Colin Lea, Austin Reiter, René Vidal, Gregory D Hager, Proceedings of the European Conference on Computer Vision. the European Conference on Computer VisionColin Lea, Austin Reiter, René Vidal, and Gregory D Hager. Segmental spatiotemporal cnns for fine-grained action segmentation. In Proceedings of the European Conference on Computer Vision, pages 36-52, 2016.
Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. Dong-Hyun Lee, Workshop on challenges in representation learning, ICML. 3Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learn- ing method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, 2013.
Siamrpn++: Evolution of siamese visual tracking with very deep networks. Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionBo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. Siamrpn++: Evolution of siamese visual tracking with very deep networks. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4282-4291, 2019.
Dividemix: Learning with noisy labels as semi-supervised learning. Junnan Li, Richard Socher, C H Steven, Hoi, International Conference on Learning Representations. Junnan Li, Richard Socher, and Steven CH Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In International Conference on Learning Representations, 2020.
In the eye of beholder: Joint learning of gaze and actions in first person video. Yin Li, Miao Liu, James M Rehg, Proceedings of the European Conference on Computer Vision. the European Conference on Computer VisionYin Li, Miao Liu, and James M Rehg. In the eye of beholder: Joint learning of gaze and actions in first person video. In Proceedings of the European Conference on Computer Vision, pages 619-635, 2018.
Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Jirapat Likitlersuang, R Elizabeth, Tianshi Sumitro, Cao, J Ryan, Sukhvinder Visée, José Kalsi-Ryan, Zariffa, Journal of Neuroengineering and Rehabilitation. 16183Jirapat Likitlersuang, Elizabeth R Sumitro, Tianshi Cao, Ryan J Visée, Sukhvinder Kalsi-Ryan, and José Zariffa. Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home. Journal of Neuroengineering and Rehabil- itation, 16(1):83, 2019.
Bmn: Boundary-matching network for temporal action proposal generation. T Lin, Xiao Liu, Xin Li, Errui Ding, Shilei Wen, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionT. Lin, Xiao Liu, Xin Li, Errui Ding, and Shilei Wen. Bmn: Boundary-matching network for temporal action proposal generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3888-3897, 2019.
Semi-supervised 3d hand-object poses estimation with interactions in time. Shaowei Liu, Hanwen Jiang, Jiarui Xu, Sifei Liu, Xiaolong Wang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionShaowei Liu, Hanwen Jiang, Jiarui Xu, Sifei Liu, and Xiaolong Wang. Semi-supervised 3d hand-object poses estimation with interactions in time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14687- 14697, 2021.
Workinghands: A hand-tool assembly dataset for image segmentation and activity mining. Supreeth Narasimhaswamy, Saif Vazir, Proceedings of the British Machine Vision Conference. the British Machine Vision Conference258Supreeth Narasimhaswamy and Saif Vazir. Workinghands: A hand-tool assembly dataset for image segmentation and activity mining. In Proceedings of the British Ma- chine Vision Conference, page 258, 2019.
Detecting hands and recognizing physical contact in the wild. Supreeth Narasimhaswamy, Trung Nguyen, Minh Hoai, Advances in Neural Information Processing Systems. Supreeth Narasimhaswamy, Trung Nguyen, and Minh Hoai. Detecting hands and rec- ognizing physical contact in the wild. In Advances in Neural Information Processing Systems, 2020.
Learning features by watching objects move. Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionDeepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2701-2710, 2017.
Learning instance segmentation by interaction. Deepak Pathak, Yide Shentu, Dian Chen, Pulkit Agrawal, Trevor Darrell, Sergey Levine, Jitendra Malik, IEEE Conference on Computer Vision and Pattern Recognition Workshops. Deepak Pathak, Yide Shentu, Dian Chen, Pulkit Agrawal, Trevor Darrell, Sergey Levine, and Jitendra Malik. Learning instance segmentation by interaction. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018.
Making deep neural networks robust to label noise: A loss correction approach. Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, Lizhen Qu, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionGiorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944-1952, 2017.
A benchmark dataset and evaluation methodology for video object segmentation. Federico Perazzi, Jordi Pont-Tuset, Brian Mcwilliams, Luc Van Gool, Markus Gross, Alexander Sorkine-Hornung, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionFederico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 724-732, 2016.
Hand-object contact force estimation from markerless visual tracking. Tu-Hoa Pham, Nikolaos Kyriazis, A Antonis, Abderrahmane Argyros, Kheddar, IEEE Transactions on Pattern Analysis and Machine Intelligence. 4012Tu-Hoa Pham, Nikolaos Kyriazis, Antonis A Argyros, and Abderrahmane Kheddar. Hand-object contact force estimation from markerless visual tracking. IEEE Transac- tions on Pattern Analysis and Machine Intelligence, 40(12):2883-2896, 2017.
Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, Sergey Levine, Proceedings of Robotics: Science and Systems. Robotics: Science and SystemsAravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. In Proceedings of Robotics: Science and Systems.
Hands in action: real-time 3d reconstruction of hands in interaction with objects. Javier Romero, Hedvig Kjellström, Danica Kragic, 2010 IEEE International Conference on Robotics and Automation. Javier Romero, Hedvig Kjellström, and Danica Kragic. Hands in action: real-time 3d reconstruction of hands in interaction with objects. In 2010 IEEE International Conference on Robotics and Automation, pages 458-463, 2010.
Understanding human hands in contact at internet scale. Dandan Shan, Jiaqi Geng, Michelle Shu, David F Fouhey, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionDandan Shan, Jiaqi Geng, Michelle Shu, and David F. Fouhey. Understanding human hands in contact at internet scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9869-9878, 2020.
Dense point trajectories by gpu-accelerated large displacement optical flow. Narayanan Sundaram, Thomas Brox, Kurt Keutzer, Proceedings of the European Conference on Computer Vision. the European Conference on Computer VisionNarayanan Sundaram, Thomas Brox, and Kurt Keutzer. Dense point trajectories by gpu-accelerated large displacement optical flow. In Proceedings of the European Con- ference on Computer Vision, pages 438-451, 2010.
Grab: a dataset of whole-body human grasping of objects. Omid Taheri, Nima Ghorbani, J Michael, Dimitrios Black, Tzionas, Proceedings of the European Conference on Computer Vision. the European Conference on Computer VisionOmid Taheri, Nima Ghorbani, Michael J Black, and Dimitrios Tzionas. Grab: a dataset of whole-body human grasping of objects. In Proceedings of the European Conference on Computer Vision, pages 581-600, 2020.
Joint optimization framework for learning with noisy labels. Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, Kiyoharu Aizawa, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionDaiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimiza- tion framework for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5552-5560, 2018.
3d object reconstruction from hand-object interactions. Dimitrios Tzionas, Juergen Gall, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionDimitrios Tzionas and Juergen Gall. 3d object reconstruction from hand-object interac- tions. In Proceedings of the IEEE International Conference on Computer Vision, pages 729-737, 2015.
Learning with feature dependent label noise: a progressive approach. Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, Chao Chen, International Conference on Learning Representations. Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, and Chao Chen. Learning with feature dependent label noise: a progressive approach. In International Conference on Learning Representations, 2021.
| [] |
[
"Large-Scale-Fading Decoding in Cellular Massive MIMO Systems with Spatially Correlated Channels",
"Large-Scale-Fading Decoding in Cellular Massive MIMO Systems with Spatially Correlated Channels"
] | [
"Student Member, IEEETrinh Van Chien \nDepartment of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden\n",
"Christopher Mollén \nDepartment of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden\n",
"Senior Member, IEEEEmil Björnson \nDepartment of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden\n",
"T V Chien \nDepartment of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden\n",
"E Björnson \nDepartment of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden\n"
] | [
"Department of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden",
"Department of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden",
"Department of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden",
"Department of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden",
"Department of Electrical Engineering (ISY)\nLinköping University\n581 83LinköpingSweden"
] | [] | Massive multiple-input-multiple-output (MIMO) systems can suffer from coherent intercell interference due to the phenomenon of pilot contamination. This paper investigates a two-layer decoding method that mitigates both coherent and non-coherent interference in multi-cell Massive MIMO. To this end, each base station (BS) first estimates the channel to intra-cell users using either minimum mean-squared error (MMSE) or element-wise MMSE (EW-MMSE) estimation based on uplink pilots. The estimates are used for local decoding on each BS followed by a second decoding layer where the BSs cooperate to mitigate inter-cell interference. An uplink achievable spectral efficiency (SE) expression is computed for arbitrary two-layer decoding schemes. A closed-form expression is then obtained for correlated Rayleigh fading, maximum-ratio combining, and the proposed large-scale fading decoding (LSFD) in the second layer. We also formulate a sum SE maximization problem with both the data power and LSFD vectors as optimization variables. Since this is an NP-hard problem, we develop a low-complexity algorithm based on the weighted MMSE approach to obtain a local optimum. Numerical results show that both data power control and LSFD improves the sum SE performance over single-layer decoding multi-cell Massive MIMO systems. | 10.1109/tcomm.2018.2889090 | [
"https://arxiv.org/pdf/1807.08071v1.pdf"
] | 49,905,101 | 1807.08071 | 57b86b7e68281e2db523a5ff410cb60ea9195a49 |
Large-Scale-Fading Decoding in Cellular Massive MIMO Systems with Spatially Correlated Channels

Trinh Van Chien, Student Member, IEEE, Christopher Mollén, and Emil Björnson, Senior Member, IEEE
Department of Electrical Engineering (ISY), Linköping University, 581 83 Linköping, Sweden
Index Terms—Massive MIMO, Large-Scale Fading Decoding, Sum Spectral Efficiency Optimization, Channel Estimation
Massive multiple-input-multiple-output (MIMO) systems can suffer from coherent intercell interference due to the phenomenon of pilot contamination. This paper investigates a two-layer decoding method that mitigates both coherent and non-coherent interference in multi-cell Massive MIMO. To this end, each base station (BS) first estimates the channel to intra-cell users using either minimum mean-squared error (MMSE) or element-wise MMSE (EW-MMSE) estimation based on uplink pilots. The estimates are used for local decoding on each BS followed by a second decoding layer where the BSs cooperate to mitigate inter-cell interference. An uplink achievable spectral efficiency (SE) expression is computed for arbitrary two-layer decoding schemes. A closed-form expression is then obtained for correlated Rayleigh fading, maximum-ratio combining, and the proposed large-scale fading decoding (LSFD) in the second layer. We also formulate a sum SE maximization problem with both the data power and LSFD vectors as optimization variables. Since this is an NP-hard problem, we develop a low-complexity algorithm based on the weighted MMSE approach to obtain a local optimum. Numerical results show that both data power control and LSFD improves the sum SE performance over single-layer decoding multi-cell Massive MIMO systems.
I. INTRODUCTION
Massive MIMO BSs, which are equipped with hundreds of antennas, exploit channel reciprocity to estimate the channel based on uplink pilots and spatially multiplex a large number of users on the same time-frequency resource [1], [2]. It is a promising technique to meet the growing demand for wireless data traffic of tomorrow [3], [4]. In a single-cell scenario, there is no need for computationally heavy decoding or precoding methods in Massive MIMO, like successive interference cancellation or dirty paper coding, as both thermal noise and interference combine non-coherently and are effectively suppressed by linear processing, e.g. zero-forcing combining, with a large number of BS antennas [5].
In a multi-cell scenario, however, pilot-based channel estimation is contaminated by the non-orthogonal transmission in other cells. This results in coherent intercell interference in the data transmission, so-called pilot contamination [6], unless some advanced processing schemes are used to suppress it [7]. Pilot contamination causes the gain of having more antennas to decrease and the SE of linear decoding methods, such as maximum-ratio combining (MRC) or zero-forcing, to saturate as the number of antennas grows.
Much work has been done to mitigate the effects of pilot contamination. The first and intuitive approach to mitigate pilot contamination is to increase the length of the pilots. In practical networks, however, it is not possible to make all pilots orthogonal due to the limited channel coherence block [8]. Hence, there is a trade-off between having longer pilots and low pilot overhead. Another method to mitigate pilot contamination is to assign the pilots in a way that reduces the contamination [9], since only a few users from other cells cause substantial contamination. The pilot assignment is a combinatorial problem and heuristic algorithms with low computational complexity can be developed to mitigate the pilot contamination.
In [10], a greedy pilot assignment method is developed that exploits the statistical channel information and mutual interference between users. Pilot assignment approaches still suffer from asymptotic SE saturation since we only change one contaminating user for a less contaminating user. A third method is to utilize the spatial correlation to mitigate the coherent interference using multi-cell minimum-mean-square error (M-MMSE) decoding [7], but this method has high computational complexity.
Instead of combating pilot contamination, one can utilize it using more advanced decoding schemes [11]-[13]. This approach was initially called pilot contamination decoding, since the BSs cooperate in suppressing the pilot contamination [11]. The original form of this technique used simplistic MRC, which does not suppress interference very well, so it required a very large number of antennas to be effective [12]. The latest version of this decoding design, called large-scale fading decoding (LSFD) [13], was designed to be useful also with a practical number of antennas. In the two-layer LSFD framework, each BS applies an arbitrary local linear decoding method in the first layer. The results are then gathered at a common central station that applies so-called LSFD vectors in a second layer to combine the signals from multiple BSs and suppress pilot contamination and other inter-cell interference. This new decoding design overcomes the aforementioned limitations in [11] and attains high SE even with a limited number of BS antennas. However, both [11], [13] assume uncorrelated Rayleigh fading channels and rely on special asymptotic properties of that channel model. Thus, the generalization to more practical correlated channels is non-trivial and has not been considered until now.¹
A. Main Contributions
In this paper, we generalize the LSFD method from [11], [13] to a scenario with correlated Rayleigh fading and arbitrary first-layer decoders, and also develop a method for data power control in the system. We evaluate the performance by deriving an SE expression for the system. Our main contributions are summarized as follows:
• An uplink per-user SE is derived as a function of the second-layer decoding weights. A closed-form expression is then obtained for correlated Rayleigh fading and a system that uses MRC in the first decoding layer and an arbitrary choice of LSFD in the second layer. The second-layer decoding weights that maximize the SE follows in closed form.
• An uplink sum SE optimization problem with power constraints is formulated. Because it is non-convex and NP-hard, we propose an alternating optimization approach that converges to a local optimum with polynomial complexity.
• Numerical results demonstrate the effectiveness of two-layer decoding for Massive MIMO communication systems with correlated Rayleigh fading.
The rest of this paper is organized as follows: Multi-cell Massive MIMO with two-layer decoding is presented in Section II. An SE for the uplink together with the optimal LSFD design is presented in Section III. A maximization problem for the sum SE is formulated and a solution is proposed in Section IV. Numerical results in Section V demonstrate the performance of the proposed system. Section VI states the major conclusions of the paper.
Notation: Lower and upper case bold letters are used for vectors and matrices. The expectation of the random variable X is denoted by E{X} and the Euclidean norm of the vector x by ‖x‖. The transpose and Hermitian transpose of a matrix M are written as M^T and M^H, respectively. The L × L diagonal matrix with diagonal elements d_1, d_2, …, d_L is denoted diag(d_1, d_2, …, d_L). Re(·) and Im(·) are the real and imaginary parts of a complex number. ∇g(x)|_{x_0} denotes the gradient of the multivariate function g at x = x_0. Finally, CN(0, R) denotes a vector of circularly symmetric, complex, jointly Gaussian distributed random variables with zero mean and correlation matrix R.

¹The concurrent work [14] appeared just as we were submitting this paper. It contains the uplink SE for correlated Rayleigh fading described by the one-ring model and MMSE estimation, while we consider arbitrary spatial correlation and use two types of channel estimators. Moreover, they consider joint power control and LSFD for max-min fairness, while we consider sum SE optimization, making the papers complementary.
II. SYSTEM MODEL

We consider a network with L cells. Each cell consists of a BS equipped with M antennas that serves K single-antenna users. The M-dimensional uplink channel vector between user k in cell l and BS l is denoted by $\mathbf{h}_{l,k}^{l} \in \mathbb{C}^M$. We consider the standard block-fading model, where the channels are static within a coherence block of τ_c channel uses and take one independent realization in each block, according to a stationary ergodic random process. Each channel follows a correlated Rayleigh fading model:

$\mathbf{h}_{l,k}^{l} \sim \mathcal{CN}\left(\mathbf{0}, \mathbf{R}_{l,k}^{l}\right),$  (1)

where $\mathbf{R}_{l,k}^{l} \in \mathbb{C}^{M \times M}$ is the spatial correlation matrix of the channel. The BSs know the channel statistics, but have no prior knowledge of the channel realizations, which need to be estimated in every coherence block.
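The correlated Rayleigh model in (1) can be sampled by coloring white complex Gaussian noise with a square root of R. A minimal numpy sketch; the exponential correlation profile below is a placeholder for a real correlation matrix, not a model from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_correlated_rayleigh(R, n, rng):
    """Draw n realizations of h ~ CN(0, R) by coloring white noise with a square root of R."""
    M = R.shape[0]
    w = (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))) / np.sqrt(2)  # CN(0, I_M)
    return np.linalg.cholesky(R) @ w  # Cholesky assumes R is positive definite

# Exponential correlation profile as a stand-in for a measured R (placeholder)
M, r = 8, 0.5
R = r ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

H = sample_correlated_rayleigh(R, n=100_000, rng=rng)
R_hat = H @ H.conj().T / H.shape[1]   # sample correlation matrix, close to R
print(np.max(np.abs(R_hat - R)))
```

The sample correlation matrix converges to R as the number of realizations grows, which is a quick self-check of the coloring step.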
A. Channel Estimation
As in conventional Massive MIMO [7], the channels are estimated by letting the users transmit τ_p-symbol-long pilots in a dedicated part of the coherence block, called the pilot phase. All the cells share a common set of τ_p = K mutually orthogonal pilots $\{\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_K\}$, which are disjointly distributed among the K users in each cell:

$\boldsymbol{\phi}_k^H \boldsymbol{\phi}_{k'} = \begin{cases} \tau_p, & k' = k, \\ 0, & k' \neq k. \end{cases}$  (2)
Without loss of generality, we assume that all the users in different cells that share the same index use the same pilot and thereby cause pilot contamination to each other [2].
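The paper does not prescribe a particular pilot construction; the columns of a scaled DFT matrix are one standard choice that satisfies the orthogonality condition (2):

```python
import numpy as np

tau_p = 4  # number of mutually orthogonal pilots (tau_p = K)

# Columns of a DFT matrix: one standard construction that satisfies (2)
n_idx = np.arange(tau_p)
Phi = np.exp(-2j * np.pi * np.outer(n_idx, n_idx) / tau_p)  # Phi[:, k] is pilot phi_k

G = Phi.conj().T @ Phi  # Gram matrix: tau_p on the diagonal, 0 off the diagonal
print(np.round(G.real, 6))
```

Each pilot symbol has unit magnitude, so every pilot carries energy τ_p and distinct pilots are exactly orthogonal.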
During the pilot phase, the received signals at BS l are collectively denoted by the M × τ_p matrix Y_l, given by

$\mathbf{Y}_l = \sum_{l'=1}^{L} \sum_{k'=1}^{K} \sqrt{\hat{p}_{l',k'}} \, \mathbf{h}_{l',k'}^{l} \boldsymbol{\phi}_{k'}^H + \mathbf{N}_l,$  (3)

where $\hat{p}_{l',k'}$ is the power of the pilot of user k' in cell l' and $\mathbf{N}_l$ is a matrix of independent and identically distributed noise terms, each distributed as CN(0, σ²).

An intermediate observation of the channel from user k to BS l is obtained through correlation with the pilot of user k in the following way:

$\tilde{\mathbf{y}}_{l,k} = \mathbf{Y}_l \boldsymbol{\phi}_k = \tau_p \sqrt{\hat{p}_{l,k}} \, \mathbf{h}_{l,k}^{l} + \sum_{i=1, i \neq l}^{L} \tau_p \sqrt{\hat{p}_{i,k}} \, \mathbf{h}_{i,k}^{l} + \tilde{\mathbf{n}}_{l,k},$  (4)

where $\tilde{\mathbf{n}}_{l,k} \triangleq \mathbf{N}_l \boldsymbol{\phi}_k \sim \mathcal{CN}(\mathbf{0}, \tau_p \sigma^2 \mathbf{I}_M)$ are independent over l and k. The MMSE estimate of $\mathbf{h}_{l,k}^{l}$ and the corresponding estimation error are presented in the following lemma.
Lemma 1. If BS l uses MMSE estimation based on the observation in (4), the estimate of the channel between user k in cell l and BS l is

$\hat{\mathbf{h}}_{l,k}^{l} = \sqrt{\hat{p}_{l,k}} \, \mathbf{R}_{l,k}^{l} \boldsymbol{\Psi}_{l,k}^{-1} \tilde{\mathbf{y}}_{l,k},$  (5)

where $\boldsymbol{\Psi}_{l,k} = \mathbb{E}\{\tilde{\mathbf{y}}_{l,k} \tilde{\mathbf{y}}_{l,k}^H\}/\tau_p$ is given by

$\boldsymbol{\Psi}_{l,k} \triangleq \sum_{l'=1}^{L} \tau_p \hat{p}_{l',k} \mathbf{R}_{l',k}^{l} + \sigma^2 \mathbf{I}_M.$  (6)

The channel estimate is distributed as

$\hat{\mathbf{h}}_{l,k}^{l} \sim \mathcal{CN}\left(\mathbf{0}, \tau_p \hat{p}_{l,k} \mathbf{R}_{l,k}^{l} \boldsymbol{\Psi}_{l,k}^{-1} \mathbf{R}_{l,k}^{l}\right),$  (7)

and the channel estimation error, $\mathbf{e}_{l,k}^{l} \triangleq \mathbf{h}_{l,k}^{l} - \hat{\mathbf{h}}_{l,k}^{l}$, is distributed as

$\mathbf{e}_{l,k}^{l} \sim \mathcal{CN}\left(\mathbf{0}, \mathbf{R}_{l,k}^{l} - \tau_p \hat{p}_{l,k} \mathbf{R}_{l,k}^{l} \boldsymbol{\Psi}_{l,k}^{-1} \mathbf{R}_{l,k}^{l}\right).$  (8)

Proof. The proof follows directly from established MMSE estimation techniques [15], [16, Section 3].
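A Monte-Carlo sketch of Lemma 1 (numpy assumed; the exponential correlation matrices and power values below are placeholders, not parameters from the paper): it builds the de-spread observation (4), applies the estimator (5), and checks the empirical estimate covariance against the theoretical one in (7).

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, tau_p, sigma2 = 4, 3, 8, 1.0
p_hat = np.array([1.0, 0.6, 0.3])   # pilot powers of the pilot-sharing users (placeholders)

def exp_corr(M, r):
    """Exponential correlation profile, a placeholder for R^l_{l',k}."""
    return r ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

R = [exp_corr(M, r) for r in (0.5, 0.3, 0.7)]

# Psi_{l,k} as in (6)
Psi = sum(tau_p * p_hat[i] * R[i] for i in range(L)) + sigma2 * np.eye(M)

def crandn(*shape):
    """i.i.d. CN(0, 1) entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Build n realizations of the de-spread observation (4)
n = 100_000
h = [np.linalg.cholesky(R[i]) @ crandn(M, n) for i in range(L)]
y_tilde = sum(tau_p * np.sqrt(p_hat[i]) * h[i] for i in range(L)) \
          + np.sqrt(tau_p * sigma2) * crandn(M, n)

# MMSE estimate (5) of the serving user's channel (index 0)
h_hat = np.sqrt(p_hat[0]) * R[0] @ np.linalg.solve(Psi, y_tilde)

# Empirical covariance should approach the one in (7)
C_emp = h_hat @ h_hat.conj().T / n
C_thy = tau_p * p_hat[0] * R[0] @ np.linalg.solve(Psi, R[0])
print(np.max(np.abs(C_emp - C_thy)))   # decreases as n grows
```

Since Cov(ỹ) = τ_p Ψ, the estimate ĥ = √p̂ R Ψ⁻¹ ỹ has covariance τ_p p̂ R Ψ⁻¹ R, which is exactly what the empirical covariance converges to.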
Lemma 1 provides the statistical information that the BS needs to construct the decoding and precoding vectors for the up- and downlink data transmission. However, to compute the MMSE estimate, the inverse of the matrix Ψ_{l,k} has to be computed for every user, which can lead to a computational complexity that might be infeasible when there are many antennas. This motivates us to use the simpler estimation technique called element-wise MMSE (EW-MMSE) [16].

To simplify the presentation, it is assumed that the correlation matrix $\mathbf{R}_{l,k}^{l}$ has equal diagonal elements, denoted by $\beta_{l,k}^{l}$. However, EW-MMSE estimation of the channel can also be done when the diagonal elements are different; the generalization to this case is straightforward.
EW-MMSE estimation is given in Lemma 2 together with the statistics of the estimates.

Lemma 2. If BS l uses EW-MMSE estimation based on the observation in (4), the estimate of the channel between user k in cell l and BS l is

$\hat{\mathbf{h}}_{l,k}^{l} = \ell_{l,k}^{l} \, \tilde{\mathbf{y}}_{l,k},$  (9)

where

$\ell_{l,k}^{l} = \frac{\sqrt{\hat{p}_{l,k}} \, \beta_{l,k}^{l}}{\sum_{l'=1}^{L} \tau_p \hat{p}_{l',k} \beta_{l',k}^{l} + \sigma^2},$  (10)

and the channel estimate and estimation error of $\mathbf{h}_{l,k}^{l}$ are distributed as

$\hat{\mathbf{h}}_{l,k}^{l} \sim \mathcal{CN}\left(\mathbf{0}, (\ell_{l,k}^{l})^2 \tau_p \boldsymbol{\Psi}_{l,k}\right),$  (11)

$\mathbf{e}_{l,k}^{l} \sim \mathcal{CN}\left(\mathbf{0}, \mathbf{R}_{l,k}^{l} - (\ell_{l,k}^{l})^2 \tau_p \boldsymbol{\Psi}_{l,k}\right).$  (12)

Proof. The proof for this special case is given in [17].
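A minimal sketch of the EW-MMSE scaling in (10) (numpy assumed; the pilot powers, β values, and the example observation are placeholders). It highlights the two properties discussed in the text: the estimator needs no matrix inversion, and all pilot-sharing estimates are the same observation vector with different scalar scalings.

```python
import numpy as np

tau_p, sigma2 = 8, 1.0
p_hat = np.array([1.0, 0.6, 0.3])   # pilot powers of the pilot-sharing users (placeholders)
beta = np.array([2.0, 0.5, 0.1])    # diagonal values beta^l_{l',k} of the correlation matrices (placeholders)

# Scalar coefficient (10): a single real division, no matrix inversion
denom = np.sum(tau_p * p_hat * beta) + sigma2
ell = np.sqrt(p_hat) * beta / denom  # ell[i] scales the estimate of the user in cell i

# All pilot-sharing estimates reuse the SAME de-spread observation
y_tilde = np.array([1.0 + 2.0j, -0.5j, 0.3, 2.0])  # an example observation from (4)
h_hat = [ell[i] * y_tilde for i in range(3)]

ratio = h_hat[1] / h_hat[0]  # constant vector: the estimates are parallel
print(ratio[0])
```

The constant element-wise ratio is precisely the pilot-contamination effect that the text says cannot be removed by linear processing of the data signal alone.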
As compared to MMSE estimation, EW-MMSE estimation simplifies the computations, since no matrix inversion is involved. Moreover, each BS only needs to know the diagonals of the spatial correlation matrices, which are easier to acquire in practice than the full matrices. The relationship between the channel estimates of two users utilizing non-orthogonal pilots also takes a simple form, as shown in Corollary 1.

Corollary 1. With EW-MMSE estimation, the estimate at BS l of the channel from the pilot-sharing user k in cell l' satisfies

$\hat{\mathbf{h}}_{l',k}^{l} = \frac{\ell_{l',k}^{l}}{\ell_{l,k}^{l}} \, \hat{\mathbf{h}}_{l,k}^{l},$  (13)

where $\hat{\mathbf{h}}_{l',k}^{l} = \ell_{l',k}^{l} \tilde{\mathbf{y}}_{l,k}$ with $\ell_{l',k}^{l} = \sqrt{\hat{p}_{l',k}} \, \beta_{l',k}^{l} / \big( \sum_{l''=1}^{L} \tau_p \hat{p}_{l'',k} \beta_{l'',k}^{l} + \sigma^2 \big)$.
Corollary 1 mathematically shows that the channel estimates of two users with the same pilot signal only differ from each other by a scaling factor. Using EW-MMSE estimation leads to severe pilot contamination that cannot be mitigated by linear processing of the data signal only, at least not with the approach in [7].
B. Uplink Data Transmission
During the data phase, it is assumed that user k' in cell l' sends a zero-mean symbol $s_{l',k'}$ with variance $\mathbb{E}\{|s_{l',k'}|^2\} = 1$. The received signal at BS l is then

$\mathbf{y}_l = \sum_{l'=1}^{L} \sum_{k'=1}^{K} \sqrt{p_{l',k'}} \, \mathbf{h}_{l',k'}^{l} s_{l',k'} + \mathbf{n}_l,$  (14)

where $p_{l',k'}$ denotes the transmit power of user k' in cell l' and $\mathbf{n}_l$ is the additive noise vector. Based on the signals in (14), the BSs decode the symbols with the two-layer decoding technique that is illustrated in Fig. 1.
In the first layer, an estimate of the symbol from user k in cell l is obtained by local linear decoding as

$\tilde{s}_{l,k} = \mathbf{v}_{l,k}^H \mathbf{y}_l = \sum_{l'=1}^{L} \sum_{k'=1}^{K} \sqrt{p_{l',k'}} \, \mathbf{v}_{l,k}^H \mathbf{h}_{l',k'}^{l} s_{l',k'} + \mathbf{v}_{l,k}^H \mathbf{n}_l,$  (15)

where $\mathbf{v}_{l,k}$ is the linear decoding vector. The symbol estimate $\tilde{s}_{l,k}$ generally contains interference and, in Massive MIMO, the pilot contamination from all the users with the same pilot sequence is particularly strong. To mitigate the pilot contamination, all the symbol estimates of the contaminating users are collected in the vector

$\tilde{\mathbf{s}}_k \triangleq [\tilde{s}_{1,k}, \tilde{s}_{2,k}, \ldots, \tilde{s}_{L,k}]^T \in \mathbb{C}^L.$  (16)

After the local decoding, a second layer of centralized decoding is performed on this vector using the LSFD vector $\mathbf{a}_{l,k} \triangleq [a_{l,k}^{1}, a_{l,k}^{2}, \ldots, a_{l,k}^{L}]^T \in \mathbb{C}^L$, where $a_{l,k}^{l'}$ is an LSFD weight. The final estimate of the data symbol from user k in cell l is then given by

$\hat{s}_{l,k} = \mathbf{a}_{l,k}^H \tilde{\mathbf{s}}_k = \sum_{l'=1}^{L} (a_{l,k}^{l'})^* \tilde{s}_{l',k}.$  (17)

In the next section, we use the decoded signals $\hat{s}_{l,k}$ together with the asymptotic channel properties [16, Section 2.5] to derive a closed-form expression of an uplink SE.
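The two-layer decoding chain in (14)-(17) can be sketched end-to-end with random placeholder channels, decoders, and symbols (numpy assumed; the LSFD weights here are arbitrary stand-ins, not the optimized ones):

```python
import numpy as np

rng = np.random.default_rng(2)
L, M, K = 3, 6, 2

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Random placeholders: h[l, l', k'] is the channel from user (l', k') to BS l,
# v[l, k] is BS l's local decoder for its user k, s are unit-power symbols (p = 1)
h = crandn(L, L, K, M)
v = crandn(L, K, M)
s = crandn(L, K)
noise = crandn(L, M)

# Received signals (14) and first-layer symbol estimates (15) at every BS
y = np.einsum('lpkm,pk->lm', h, s) + noise
s_tilde = np.einsum('lkm,lm->lk', v.conj(), y)

# Second layer (16)-(17): combine the L local estimates for pilot group k
k = 0
a = crandn(L)  # placeholder LSFD vector a_{l,k}; not optimized here
s_hat = a.conj() @ s_tilde[:, k]
print(s_hat)
```

The first layer runs independently at each BS; only the scalar estimates in (16) are forwarded to the central station, which keeps the backhaul load per coherence block small.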
III. UPLINK PERFORMANCE ANALYSIS
$$R_{l,k} = \max_{\{a^{l'}_{l,k}\}}\left(1-\frac{\tau_p}{\tau_c}\right)\log_2\big(1+\mathrm{SINR}_{l,k}\big), \qquad (18)$$
where the effective SINR, denoted by $\mathrm{SINR}_{l,k}$, is
$$\mathrm{SINR}_{l,k} = \frac{\mathrm{E}\{|\mathrm{DS}_{l,k}|^2\}}{\mathrm{E}\{|\mathrm{PC}_{l,k}|^2\} + \mathrm{E}\{|\mathrm{BU}_{l,k}|^2\} + \sum_{l'=1}^{L}\sum_{\substack{k'=1\\ k'\neq k}}^{K}\mathrm{E}\{|\mathrm{NI}_{l',k'}|^2\} + \mathrm{E}\{|\mathrm{AN}_{l,k}|^2\}}, \qquad (19)$$
where DS l,k , PC l,k , BU l,k , NI l ,k , and AN l,k stand for the desired signal, the pilot contamination, the beamforming gain uncertainty, the non-coherent interference, and the additive noise, respectively, whose expectations are defined as
$$\mathrm{E}\{|\mathrm{DS}_{l,k}|^2\} \triangleq p_{l,k}\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\,\mathrm{E}\{\mathbf{v}^{H}_{l',k}\mathbf{h}^{l'}_{l,k}\}\Big|^2, \qquad (20)$$
$$\mathrm{E}\{|\mathrm{PC}_{l,k}|^2\} \triangleq \sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\,\mathrm{E}\{\mathbf{v}^{H}_{l'',k}\mathbf{h}^{l''}_{l',k}\}\Big|^2, \qquad (21)$$
$$\mathrm{E}\{|\mathrm{BU}_{l,k}|^2\} \triangleq \sum_{l'=1}^{L} p_{l',k}\,\mathrm{E}\Big\{\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\big(\mathbf{v}^{H}_{l'',k}\mathbf{h}^{l''}_{l',k} - \mathrm{E}\{\mathbf{v}^{H}_{l'',k}\mathbf{h}^{l''}_{l',k}\}\big)\Big|^2\Big\}, \qquad (22)$$
$$\mathrm{E}\{|\mathrm{NI}_{l',k'}|^2\} \triangleq p_{l',k'}\,\mathrm{E}\Big\{\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\,\mathbf{v}^{H}_{l'',k}\mathbf{h}^{l''}_{l',k'}\Big|^2\Big\}, \qquad (23)$$
$$\mathrm{E}\{|\mathrm{AN}_{l,k}|^2\} \triangleq \sum_{l'=1}^{L}|a^{l'}_{l,k}|^{2}\,\mathrm{E}\{|\mathbf{v}^{H}_{l',k}\mathbf{n}_{l'}|^{2}\}. \qquad (24)$$
Note that the lower bound on the uplink ergodic capacity in Lemma 3 can be applied to any linear decoding method and any LSFD design.
Maximizing the SE of user $k$ in cell $l$ is equivalent to selecting the LSFD vector that maximizes a generalized Rayleigh quotient, as shown in the proof of the following theorem. This is the first main contribution of this paper.
Theorem 1. For a given set of pilot and data power coefficients, the SE of user k in cell l is
$$R_{l,k} = \left(1-\frac{\tau_p}{\tau_c}\right)\log_2\bigg(1 + p_{l,k}\,\mathbf{b}^{H}_{l,k}\Big(\sum_{i=1}^{4}\mathbf{C}^{(i)}_{l,k}\Big)^{-1}\mathbf{b}_{l,k}\bigg), \qquad (25)$$
where the matrices $\mathbf{C}^{(1)}_{l,k}, \mathbf{C}^{(2)}_{l,k}, \mathbf{C}^{(3)}_{l,k}, \mathbf{C}^{(4)}_{l,k} \in \mathbb{C}^{L\times L}$ and the vector $\mathbf{b}_{l,k}$ are defined as
$$\mathbf{C}^{(1)}_{l,k} \triangleq \sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\,\mathbf{b}_{l',k}\mathbf{b}^{H}_{l',k}, \qquad (26)$$
$$\mathbf{C}^{(2)}_{l,k} \triangleq \mathrm{diag}\bigg(\sum_{l'=1}^{L} p_{l',k}\Big(\mathrm{E}\{|\mathbf{v}^{H}_{1,k}\mathbf{h}^{1}_{l',k}|^{2}\} - \big|\mathrm{E}\{\mathbf{v}^{H}_{1,k}\mathbf{h}^{1}_{l',k}\}\big|^{2}\Big), \dots, \sum_{l'=1}^{L} p_{l',k}\Big(\mathrm{E}\{|\mathbf{v}^{H}_{L,k}\mathbf{h}^{L}_{l',k}|^{2}\} - \big|\mathrm{E}\{\mathbf{v}^{H}_{L,k}\mathbf{h}^{L}_{l',k}\}\big|^{2}\Big)\bigg), \qquad (27)$$
$$\mathbf{C}^{(3)}_{l,k} \triangleq \mathrm{diag}\bigg(\sum_{l'=1}^{L}\sum_{\substack{k'=1\\ k'\neq k}}^{K} p_{l',k'}\,\mathrm{E}\{|\mathbf{v}^{H}_{1,k}\mathbf{h}^{1}_{l',k'}|^{2}\}, \dots, \sum_{l'=1}^{L}\sum_{\substack{k'=1\\ k'\neq k}}^{K} p_{l',k'}\,\mathrm{E}\{|\mathbf{v}^{H}_{L,k}\mathbf{h}^{L}_{l',k'}|^{2}\}\bigg), \qquad (28)$$
$$\mathbf{C}^{(4)}_{l,k} \triangleq \mathrm{diag}\big(\sigma^{2}\,\mathrm{E}\{\|\mathbf{v}_{1,k}\|^{2}\}, \dots, \sigma^{2}\,\mathrm{E}\{\|\mathbf{v}_{L,k}\|^{2}\}\big), \qquad (29)$$
and the vector $\mathbf{b}_{l',k} \in \mathbb{C}^{L}$, for $l' = 1,\dots,L$, is defined as
$$\mathbf{b}_{l',k} \triangleq \big[\mathrm{E}\{\mathbf{v}^{H}_{1,k}\mathbf{h}^{1}_{l',k}\}, \dots, \mathrm{E}\{\mathbf{v}^{H}_{L,k}\mathbf{h}^{L}_{l',k}\}\big]^{T}. \qquad (30)$$
In order to attain this SE, the LSFD vector is formulated as
$$\mathbf{a}_{l,k} = \Big(\sum_{i=1}^{4}\mathbf{C}^{(i)}_{l,k}\Big)^{-1}\mathbf{b}_{l,k}, \qquad \forall l,k. \qquad (31)$$
Proof. The proof is available in Appendix B.
We stress that the LSFD vector in (31) is designed to maximize the SE in (25) for every user in the network for a given data and pilot power and a given first-layer decoder. Different from previous work [13], [18], which only considered uncorrelated Rayleigh fading channels, the results in Theorem 1 consider the more general, and practically relevant, correlated Rayleigh fading model with either MMSE or EW-MMSE and an arbitrary first-layer decoder.
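The optimality of (31) rests on the fact that the SINR is a generalized Rayleigh quotient in $\mathbf{a}_{l,k}$. The sketch below uses randomly generated stand-ins for $\mathbf{C} = \sum_{i}\mathbf{C}^{(i)}_{l,k}$ and $\mathbf{b}_{l,k}$ (they are not derived from any channel model) and checks numerically that $\mathbf{a} = \mathbf{C}^{-1}\mathbf{b}$ attains the maximum value $p\,\mathbf{b}^{H}\mathbf{C}^{-1}\mathbf{b}$ and beats random LSFD vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
L, p = 4, 0.2

# random Hermitian positive definite stand-in for sum_i C^(i)_{l,k}
A = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
C = A @ A.conj().T + np.eye(L)
b = rng.standard_normal(L) + 1j * rng.standard_normal(L)

def sinr(a):
    # generalized Rayleigh quotient p |a^H b|^2 / (a^H C a)
    return p * abs(a.conj() @ b) ** 2 / np.real(a.conj() @ C @ a)

a_opt = np.linalg.solve(C, b)   # LSFD vector of (31)
best = sinr(a_opt)              # equals p * b^H C^{-1} b

assert np.isclose(best, p * np.real(b.conj() @ a_opt))
for _ in range(200):            # no random vector does better
    a_rnd = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    assert sinr(a_rnd) <= best + 1e-9
```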
The following theorem states a closed-form expression of the SE for the case of MRC, i.e., v l,k =ĥ l l,k . This is the second main contribution of this paper.
Theorem 2. When MRC is used, the SE in (18) of user k in cell l is given by
$$R_{l,k} = \left(1-\frac{\tau_p}{\tau_c}\right)\log_2\big(1+\mathrm{SINR}_{l,k}\big), \qquad (32)$$
where the SINR value in (32) is
$$\mathrm{SINR}_{l,k} = \frac{p_{l,k}\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k}\Big|^{2}}{\sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L} p_{l',k'}\,|a^{l''}_{l,k}|^{2}\, c^{l',k'}_{l'',k} + \sum_{l'=1}^{L}|a^{l'}_{l,k}|^{2}\, d_{l',k}}. \qquad (33)$$
The values $b^{l'}_{l'',k}$, $c^{l'',k'}_{l',k}$, and $d_{l',k}$ are different depending on the channel estimation technique. MMSE estimation results in
$$b^{l'}_{l'',k} = \sqrt{\tau_p}\sqrt{\hat{p}_{l',k}\,\hat{p}_{l'',k}}\;\mathrm{tr}\big(\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{R}^{l'}_{l',k}\mathbf{R}^{l'}_{l'',k}\big), \qquad (34)$$
$$c^{l'',k'}_{l',k} = \hat{p}_{l',k}\,\mathrm{tr}\big(\mathbf{R}^{l'}_{l'',k'}\mathbf{R}^{l'}_{l',k}\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{R}^{l'}_{l',k}\big), \qquad (35)$$
$$d_{l',k} = \sigma^{2}\hat{p}_{l',k}\,\mathrm{tr}\big(\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{R}^{l'}_{l',k}\mathbf{R}^{l'}_{l',k}\big), \qquad (36)$$
whereas EW-MMSE results in
$$b^{l'}_{l'',k} = \sqrt{\tau_p}\,\ell^{l'}_{l',k}\,\ell^{l'}_{l'',k}\,\mathrm{tr}\big(\boldsymbol{\Psi}_{l',k}\big), \qquad (37)$$
$$c^{l'',k'}_{l',k} = \big(\ell^{l'}_{l',k}\big)^{2}\,\mathrm{tr}\big(\mathbf{R}^{l'}_{l'',k'}\boldsymbol{\Psi}_{l',k}\big), \qquad (38)$$
$$d_{l',k} = \big(\ell^{l'}_{l',k}\big)^{2}\sigma^{2}\,\mathrm{tr}\big(\boldsymbol{\Psi}_{l',k}\big). \qquad (39)$$
Proof. The proofs consist of computing the moments of complex Gaussian distributions.
They are available in Appendix C and Appendix D for MMSE and EW-MMSE estimation, respectively.
Theorem 2 describes the exact impact of the spatial correlation of the channel on the system performance through the coefficients $b^{l'}_{l'',k}$, $c^{l'',k'}_{l',k}$, and $d_{l',k}$. The numerator of (33) grows as the square of the number of antennas, $M^2$, since the trace in (34) is the sum of $M$ terms. This gain comes from the coherent combination of the signals from the $M$ antennas. Theorem 2 also shows that the pilot contamination in (17) combines coherently; that is, its variance, the first term in the denominator that contains $b^{l''}_{l',k}$, grows as $M^2$. The other terms in the denominator represent the impact of non-coherent interference and Gaussian noise, respectively, and only grow as $M$. Since the interference terms contain products of correlation matrices of different users, the interference is smaller between users that have very different spatial correlation characteristics [16].
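As a sanity check on this scaling, consider the special case of uncorrelated channels, $\mathbf{R} = \beta\mathbf{I}_M$; all numbers below are arbitrary and not taken from the paper's simulation setup. Then $\mathrm{tr}(\boldsymbol{\Psi}^{-1}\mathbf{R}\mathbf{R})$ grows linearly in $M$, so the squared coherent terms grow as $M^2$ while the non-coherent terms grow only as $M$:

```python
import numpy as np

# arbitrary illustrative values
beta, tau_p, p_hat, sigma2 = 1.0, 10, 0.2, 0.1

def trace_term(M):
    # tr(Psi^{-1} R R) with R = beta*I_M and Psi = (tau_p*p_hat*beta + sigma2)*I_M
    return beta ** 2 * M / (tau_p * p_hat * beta + sigma2)

t50, t200 = trace_term(50), trace_term(200)
# squared coherent term scales as M^2, noise/interference terms scale as M
assert np.isclose(t50 ** 2 / 50 ** 2, t200 ** 2 / 200 ** 2)
assert np.isclose(t50 / 50, t200 / 200)
```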
The following corollary gives the optimal LSFD vector a l,k that maximizes the SE of every user in the network for a given set of pilot and data powers, which is expected to work well when each BS is equipped with a practical number of antennas.
Corollary 2. For a given set of data and pilot powers, by using MRC and LSFD, the SE in Theorem 2 is given in the closed form as
$$R_{l,k} = \left(1-\frac{\tau_p}{\tau_c}\right)\log_2\Big(1 + p_{l,k}\,\mathbf{b}^{H}_{l,k}\mathbf{C}^{-1}_{l,k}\mathbf{b}_{l,k}\Big), \qquad (40)$$
where C l,k ∈ C L×L and b l,k ∈ C L are defined as
$$\mathbf{C}_{l,k} \triangleq \sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\,\mathbf{b}_{l',k}\mathbf{b}^{H}_{l',k} + \mathrm{diag}\Big(\sum_{l'=1}^{L}\sum_{k'=1}^{K} p_{l',k'}\, c^{l',k'}_{1,k} + d_{1,k},\; \dots,\; \sum_{l'=1}^{L}\sum_{k'=1}^{K} p_{l',k'}\, c^{l',k'}_{L,k} + d_{L,k}\Big), \qquad (41)$$
$$\mathbf{b}_{l',k} \triangleq [b^{1}_{l',k}, \dots, b^{L}_{l',k}]^{T}. \qquad (42)$$
The SE in (40) is obtained by using LSFD vector defined as
$$\mathbf{a}_{l,k} = \mathbf{C}^{-1}_{l,k}\mathbf{b}_{l,k}. \qquad (43)$$
Even though Corollary 2 is a special case of Theorem 1 when MRC is used, its contributions are twofold: the LSFD vector $\mathbf{a}_{l,k}$ is computed in closed form and is independent of the small-scale fading, so it is easy to compute and store. Moreover, this LSFD vector generalizes the vector given in [13] to the larger class of correlated Rayleigh fading channels.
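A quick numerical sketch of why the second decoding layer can only help: with random stand-ins for the vectors $\mathbf{b}_{l',k}$ and the diagonal interference-plus-noise terms of (41) (none of these values come from the paper's setup), the LSFD vector (43) never yields a lower SINR than single-layer decoding, which corresponds to $\mathbf{a}_{l,k}$ being the $l$th canonical basis vector:

```python
import numpy as np

rng = np.random.default_rng(3)
L, p, l = 4, 0.2, 0

# hypothetical stand-ins for b_{l',k} (columns of B) and diagonal terms of (41)
B = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
diag_terms = rng.uniform(0.5, 1.0, L)

# C_{l,k} of (41): rank-one contamination terms plus a positive diagonal
C = sum(p * np.outer(B[:, lp], B[:, lp].conj()) for lp in range(L) if lp != l)
C = C + np.diag(diag_terms).astype(complex)

def sinr(a):
    return p * abs(a.conj() @ B[:, l]) ** 2 / np.real(a.conj() @ C @ a)

a_lsfd = np.linalg.solve(C, B[:, l])   # optimal LSFD vector of (43)
a_single = np.eye(L)[l] + 0j           # single-layer decoding uses only BS l

assert sinr(a_lsfd) >= sinr(a_single) - 1e-12
```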
IV. DATA POWER CONTROL AND LSFD DESIGN FOR SUM SE OPTIMIZATION
In this section, we investigate how to choose the data powers $\{p_{l,k}\}$ (power control) and the LSFD vectors to maximize the sum SE. The sum SE maximization problem for a multi-cell Massive MIMO system is first formulated based on the results of the previous sections. Next, an iterative algorithm based on solving a series of convex subproblems is proposed to efficiently find a stationary point.
A. Problem Formulation
We consider sum SE maximization:
$$\underset{\{p_{l,k}\ge 0\},\{\mathbf{a}_{l,k}\}}{\text{maximize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K} R_{l,k} \qquad \text{subject to}\;\; p_{l,k} \le P_{\max,l,k},\;\forall l,k. \qquad (44)$$
Using the rate (32) in (44) and removing the constant pre-log factor, we obtain the equivalent formulation
$$\underset{\{p_{l,k}\ge 0\},\{\mathbf{a}_{l,k}\}}{\text{maximize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K}\log_2\big(1+\mathrm{SINR}_{l,k}\big) \qquad \text{subject to}\;\; p_{l,k}\le P_{\max,l,k},\;\forall l,k. \qquad (45)$$
Sum SE maximization with imperfect CSI is a non-convex and NP-hard problem, as can be shown using the methodology for small-scale multi-user MIMO systems in [19]. Therefore, the global optimum is difficult to find in general. Nevertheless, solving the ergodic sum SE maximization (45) for a Massive MIMO system is more practical than maximizing the instantaneous SEs for a small-scale MIMO network and a given realization of the small-scale fading [20], [21]. In contrast, the sum SE maximization in (45) only depends on the large-scale fading coefficients, which simplifies matters and allows the solution to be used for a long period of time. Another key difference from prior work is that we jointly optimize the data powers and LSFD vectors.
Instead of seeking the global optimum to (45), which has an exponential computational complexity, we will follow the weighted MMSE approach [22], [23] and adapt it to the problem at hand to obtain a stationary point to (45) in polynomial time. To this end, we first formulate the weighted MMSE problem from (45) as shown in Theorem 3.
Theorem 3. The optimization problem
$$\underset{\substack{\{p_{l,k}\ge 0\},\{\mathbf{a}_{l,k}\},\\ \{w_{l,k}\ge 0\},\{u_{l,k}\}}}{\text{minimize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K}\big(w_{l,k}\, e_{l,k} - \ln(w_{l,k})\big) \qquad \text{subject to}\;\; p_{l,k}\le P_{\max,l,k},\;\forall l,k, \qquad (46)$$
where e l,k is defined as
$$e_{l,k} \triangleq |u_{l,k}|^{2}\bigg(\sum_{l'=1}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L} p_{l',k'}|a^{l''}_{l,k}|^{2}\, c^{l',k'}_{l'',k} + \sum_{l'=1}^{L}|a^{l'}_{l,k}|^{2}\, d_{l',k}\bigg) - 2\sqrt{p_{l,k}}\,\mathrm{Re}\Big(u_{l,k}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k}\Big) + 1, \qquad (47)$$
is equivalent to the sum SE optimization problem (45) in the sense that (45) and (46) have the same global optimal power solution {p l,k }, ∀l, k, and the same LSFD elements {a l l,k }, ∀l, k, l .
Proof. The proof is available in Appendix E.
B. Iterative Algorithm
We now find a stationary point to (46) by decomposing it into a sequence of subproblems, each having a closed-form solution. By the change of variables $\rho_{l,k} = \sqrt{p_{l,k}}$, the optimization problem (46) is equivalent to
$$\underset{\substack{\{\rho_{l,k}\ge 0\},\{\mathbf{a}_{l,k}\},\\ \{w_{l,k}\ge 0\},\{u_{l,k}\}}}{\text{minimize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K}\big(w_{l,k}\, e_{l,k} - \ln(w_{l,k})\big) \qquad \text{subject to}\;\; \rho^{2}_{l,k}\le P_{\max,l,k},\;\forall l,k, \qquad (48)$$
where $e_{l,k}$ is
$$e_{l,k} = |u_{l,k}|^{2}\bigg(\sum_{l'=1}^{L}\rho^{2}_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L}\rho^{2}_{l',k'}|a^{l''}_{l,k}|^{2}\, c^{l',k'}_{l'',k} + \sum_{l'=1}^{L}|a^{l'}_{l,k}|^{2}\, d_{l',k}\bigg) - 2\rho_{l,k}\,\mathrm{Re}\Big(u_{l,k}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k}\Big) + 1. \qquad (49)$$
As a third main contribution of this paper, the following theorem provides an algorithm that relies on alternating optimization to find a stationary point to (48).
Theorem 4. A stationary point to (48) is obtained by iteratively updating $\{\mathbf{a}_{l,k}, u_{l,k}, w_{l,k}, \rho_{l,k}\}$. Let $\mathbf{a}^{(n-1)}_{l,k}, u^{(n-1)}_{l,k}, w^{(n-1)}_{l,k}, \rho^{(n-1)}_{l,k}$ denote the values after iteration $n-1$. At iteration $n$, the optimization parameters are updated in the following way:
• u l,k is updated as
$$u^{(n)}_{l,k} = \frac{\rho^{(n-1)}_{l,k}\sum_{l'=1}^{L} a^{l',(n-1)}_{l,k}\,(b^{l'}_{l,k})^{*}}{\tilde{u}^{(n-1)}_{l,k}}, \qquad (50)$$
where the value $\tilde{u}^{(n-1)}_{l,k}$ is defined as
$$\tilde{u}^{(n-1)}_{l,k} = \sum_{l'=1}^{L}\big(\rho^{(n-1)}_{l',k}\big)^{2}\Big|\sum_{l''=1}^{L}\big(a^{l'',(n-1)}_{l,k}\big)^{*}\, b^{l''}_{l',k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L}\big(\rho^{(n-1)}_{l',k'}\big)^{2}\big|a^{l'',(n-1)}_{l,k}\big|^{2}\, c^{l',k'}_{l'',k} + \sum_{l'=1}^{L}\big|a^{l',(n-1)}_{l,k}\big|^{2}\, d_{l',k}. \qquad (51)$$
• w l,k is updated as
$$w^{(n)}_{l,k} = \big(e^{(n)}_{l,k}\big)^{-1}, \qquad (52)$$
where e (n) l,k is defined as
$$e^{(n)}_{l,k} = \big|u^{(n)}_{l,k}\big|^{2}\,\tilde{u}^{(n-1)}_{l,k} - 2\rho^{(n-1)}_{l,k}\,\mathrm{Re}\Big(u^{(n)}_{l,k}\sum_{l'=1}^{L}\big(a^{l',(n-1)}_{l,k}\big)^{*}\, b^{l'}_{l,k}\Big) + 1. \qquad (53)$$
• a l,k is updated as
$$\mathbf{a}^{(n)}_{l,k} = \frac{\tilde{u}^{(n-1)}_{l,k}}{\sum_{l'=1}^{L}\big(a^{l',(n-1)}_{l,k}\big)^{*}\, b^{l'}_{l,k}}\,\big(\mathbf{C}^{(n-1)}_{l,k}\big)^{-1}\mathbf{b}_{l,k}, \qquad (54)$$
where $\mathbf{C}^{(n-1)}_{l,k}$ is computed as
$$\mathbf{C}^{(n-1)}_{l,k} = \sum_{l'=1}^{L}\big(\rho^{(n-1)}_{l',k}\big)^{2}\,\mathbf{b}_{l',k}\mathbf{b}^{H}_{l',k} + \mathrm{diag}\Big(\sum_{l'=1}^{L}\sum_{k'=1}^{K}\big(\rho^{(n-1)}_{l',k'}\big)^{2} c^{l',k'}_{1,k} + d_{1,k},\,\dots,\,\sum_{l'=1}^{L}\sum_{k'=1}^{K}\big(\rho^{(n-1)}_{l',k'}\big)^{2} c^{l',k'}_{L,k} + d_{L,k}\Big). \qquad (55)$$
• $\rho_{l,k}$ is updated as
$$\rho^{(n)}_{l,k} = \min\Bigg(\frac{w^{(n)}_{l,k}\,\mathrm{Re}\big(u^{(n)}_{l,k}\sum_{l'=1}^{L}(a^{l',(n)}_{l,k})^{*}\, b^{l'}_{l,k}\big)}{\sum_{l'=1}^{L} w^{(n)}_{l',k}\big|u^{(n)}_{l',k}\big|^{2}\Big|\sum_{l''=1}^{L}\big(a^{l'',(n)}_{l',k}\big)^{*}\, b^{l''}_{l,k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K} w^{(n)}_{l',k'}\big|u^{(n)}_{l',k'}\big|^{2}\sum_{l''=1}^{L}\big|a^{l'',(n)}_{l',k'}\big|^{2}\, c^{l,k}_{l'',k'}},\;\sqrt{P_{\max,l,k}}\Bigg). \qquad (56)$$
If we denote by $\{u^{\mathrm{opt}}_{l,k}, w^{\mathrm{opt}}_{l,k}, \mathbf{a}^{\mathrm{opt}}_{l,k}, \rho^{\mathrm{opt}}_{l,k}\}$ the stationary point to (48) that is attained by the above iterative algorithm as $n \to \infty$, then $\{\mathbf{a}^{\mathrm{opt}}_{l,k}\}, \{(\rho^{\mathrm{opt}}_{l,k})^{2}\}$ is also a stationary point to the problem (45). Proof. The proof is available in Appendix F.
The iterative algorithm is summarized in Algorithm 1. With initial data power values in the feasible set, the related LSFD vectors are computed by using Corollary 2. After that, the iterative algorithm in Theorem 4 is used to obtain a stationary point to the sum SE optimization problem (44). Algorithm 1 can be terminated when the variation between two consecutive iterations is small. In particular, for a given $\epsilon \ge 0$, the stopping criterion can, for instance, be defined as
$$\sum_{l=1}^{L}\sum_{k=1}^{K} R^{(n)}_{l,k} - \sum_{l=1}^{L}\sum_{k=1}^{K} R^{(n-1)}_{l,k} \le \epsilon. \qquad (57)$$
Algorithm 1 Alternating optimization approach for (48)
Input: Maximum data powers $P_{\max,l,k}$, $\forall l,k$; large-scale fading coefficients $\beta^{l'}_{l,k}$, $\forall l,k,l'$; initial coefficients $\rho^{(0)}_{l,k}$, $\forall l,k$. Set $n = 1$ and compute $\mathbf{a}^{(0)}_{l,k}$, $\forall l,k$, using Corollary 2.
1. Iteration $n$:
1.1. Update $u^{(n)}_{l,k}$ using (50).
1.2. Update $w^{(n)}_{l,k}$ using (52), where $e^{(n)}_{l,k}$ is computed as in (53).
1.3. Update $\mathbf{a}^{(n)}_{l,k}$ using (54), where $\mathbf{C}^{(n-1)}_{l,k}$ is computed as in (55) and $\mathbf{b}_{l,k}$ as in (42).
1.4. Update $\rho^{(n)}_{l,k}$ using (56).
2. If the stopping criterion (57) is satisfied, stop. Otherwise, go to Step 3.
3. Store the currently best solution $\rho^{(n)}_{l,k}$ and $\mathbf{a}^{(n)}_{l,k}$, $\forall l,k$; set $n = n + 1$ and go to Step 1.
Output: The solutions $\rho^{\mathrm{opt}}_{l,k} = \rho^{(n)}_{l,k}$ and $\mathbf{a}^{\mathrm{opt}}_{l,k} = \mathbf{a}^{(n)}_{l,k}$, $\forall l,k$.
² We observe faster convergence with a hierarchical initialization of $\rho^{(0)}_{l,k}$, $\forall l,k$, than with an all-equal initialization. In the simulations, we initialize $\rho^{(0)}_{l,k}$, $\forall l,k$, as uniformly distributed over the range $[0, \sqrt{P_{\max,l,k}}]$.
C. Sum SE Optimization Without Using LSFD
For completeness, we also study a multi-cell Massive MIMO system that only uses one-layer decoding. This scenario is considered as a benchmark to investigate the improvements of our proposed joint data power control and LSFD design in the previous section. Mathematically, this is a special case of the above analysis in which the LSFD vector $\mathbf{a}_{l,k}$, $\forall l,k$, is fixed to the $l$th canonical basis vector (i.e., $a^{l'}_{l,k} = 1$ for $l' = l$ and $a^{l'}_{l,k} = 0$ otherwise). The problem (48) then becomes
$$\underset{\{\rho_{l,k}\ge 0\},\{w_{l,k}\ge 0\},\{u_{l,k}\}}{\text{minimize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K}\big(w_{l,k}\, e_{l,k} - \ln(w_{l,k})\big) \qquad \text{subject to}\;\; \rho^{2}_{l,k}\le P_{\max,l,k},\;\forall l,k, \qquad (59)$$
where $e_{l,k}$ is defined as
$$e_{l,k} \triangleq |u_{l,k}|^{2}\bigg(\sum_{l'=1}^{L}\rho^{2}_{l',k}\,\big|b^{l}_{l',k}\big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\rho^{2}_{l',k'}\, c^{l',k'}_{l,k} + d_{l,k}\bigg) - 2\rho_{l,k}\,\mathrm{Re}\big(u_{l,k}\, b^{l}_{l,k}\big) + 1. \qquad (60)$$
The alternating optimization approach in Algorithm 1 can also be applied to the problem in (59) to obtain a stationary point as shown in the following corollary.
Corollary 3.
A stationary point to (59) is obtained by iteratively updating {u l,k , w l,k , ρ l,k }.
At iteration n, these optimization parameters are updated as • u l,k is updated as
$$u^{(n)}_{l,k} = \frac{\rho^{(n-1)}_{l,k}\,(b^{l}_{l,k})^{*}}{\tilde{u}^{(n-1)}_{l,k}}, \qquad (61)$$
where $\tilde{u}^{(n-1)}_{l,k}$ is computed as
$$\tilde{u}^{(n-1)}_{l,k} = \sum_{l'=1}^{L}\big(\rho^{(n-1)}_{l',k}\big)^{2}\big|b^{l}_{l',k}\big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\big(\rho^{(n-1)}_{l',k'}\big)^{2}\, c^{l',k'}_{l,k} + d_{l,k}. \qquad (62)$$
• w l,k is updated as:
$$w^{(n)}_{l,k} = \big(e^{(n)}_{l,k}\big)^{-1}, \qquad (63)$$
where $e^{(n)}_{l,k}$ is computed as
$$e^{(n)}_{l,k} = \big|u^{(n)}_{l,k}\big|^{2}\,\tilde{u}^{(n-1)}_{l,k} - 2\rho^{(n-1)}_{l,k}\,\mathrm{Re}\big(u^{(n)}_{l,k}\, b^{l}_{l,k}\big) + 1. \qquad (64)$$
• $\rho_{l,k}$ is updated as $\rho^{(n)}_{l,k} = \min\big(\tilde{\rho}^{(n)}_{l,k}, \sqrt{P_{\max,l,k}}\big)$, where
$$\tilde{\rho}^{(n)}_{l,k} = \frac{w^{(n)}_{l,k}\,\mathrm{Re}\big(u^{(n)}_{l,k}\, b^{l}_{l,k}\big)}{\sum_{l'=1}^{L} w^{(n)}_{l',k}\big|u^{(n)}_{l',k}\big|^{2}\,\big|b^{l'}_{l,k}\big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K} w^{(n)}_{l',k'}\big|u^{(n)}_{l',k'}\big|^{2}\, c^{l,k}_{l',k'}}. \qquad (65)$$
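To illustrate the structure of these updates, the sketch below runs the Corollary 3 iteration on a tiny toy network. All coefficients are random nonnegative stand-ins for $b^{l}_{l',k}$, $c^{l',k'}_{l,k}$, and $d_{l,k}$ (none of them come from a real channel model). Each sweep can be checked to never decrease the sum SE, which is the monotonicity property that underpins the convergence of this class of weighted-MMSE algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)
L, K, Pmax = 2, 2, 1.0

# hypothetical coefficients: b[l, lp, k] plays b^l_{lp,k},
# c[l, k, lp, kp] plays c^{lp,kp}_{l,k}, d[l, k] plays d_{l,k}
b = rng.uniform(0.5, 1.0, (L, L, K))
c = rng.uniform(0.01, 0.05, (L, K, L, K))
d = rng.uniform(0.05, 0.10, (L, K))
rho = np.full((L, K), np.sqrt(Pmax))        # feasible starting powers

def u_tilde(l, k):
    # total received power in user (l,k)'s decoder, cf. (62)
    pc = sum(rho[lp, k] ** 2 * b[l, lp, k] ** 2 for lp in range(L))
    ni = sum(rho[lp, kp] ** 2 * c[l, k, lp, kp] for lp in range(L) for kp in range(K))
    return pc + ni + d[l, k]

def sum_se():
    return sum(np.log2(1 + rho[l, k] ** 2 * b[l, l, k] ** 2 /
                       (u_tilde(l, k) - rho[l, k] ** 2 * b[l, l, k] ** 2))
               for l in range(L) for k in range(K))

trace = [sum_se()]
for _ in range(50):
    u = np.array([[rho[l, k] * b[l, l, k] / u_tilde(l, k) for k in range(K)]
                  for l in range(L)])                                    # cf. (61)
    e = np.array([[u[l, k] ** 2 * u_tilde(l, k)
                   - 2 * rho[l, k] * u[l, k] * b[l, l, k] + 1
                   for k in range(K)] for l in range(L)])                # cf. (64)
    w = 1.0 / e                                                          # cf. (63)
    for l in range(L):
        for k in range(K):                                               # cf. (65)
            num = w[l, k] * u[l, k] * b[l, l, k]
            den = (sum(w[lp, k] * u[lp, k] ** 2 * b[lp, l, k] ** 2 for lp in range(L))
                   + sum(w[lp, kp] * u[lp, kp] ** 2 * c[lp, kp, l, k]
                         for lp in range(L) for kp in range(K)))
            rho[l, k] = min(num / den, np.sqrt(Pmax))
    trace.append(sum_se())

# the weighted-MMSE sweeps never decrease the sum SE
assert all(t2 >= t1 - 1e-9 for t1, t2 in zip(trace, trace[1:]))
```

The monotonicity follows because each variable update is the exact minimizer of the weighted MMSE objective with the other variables held fixed.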
where the decibel value of the shadow fading, z l,k , has a Gaussian distribution with zero mean and standard deviation 7. The spatial correlation matrix of the channel from user k in cell l to BS l' is described by the exponential correlation model, which models a uniform linear array [25]:
$$\mathbf{R}^{l'}_{l,k} = \beta^{l'}_{l,k}\begin{bmatrix} 1 & (r^{l'}_{l,k})^{*} & \cdots & \big((r^{l'}_{l,k})^{*}\big)^{M-1}\\ r^{l'}_{l,k} & 1 & \cdots & \big((r^{l'}_{l,k})^{*}\big)^{M-2}\\ \vdots & \vdots & \ddots & \vdots\\ (r^{l'}_{l,k})^{M-1} & (r^{l'}_{l,k})^{M-2} & \cdots & 1 \end{bmatrix}, \qquad (68)$$
where the correlation coefficient is $r^{l'}_{l,k} = \varsigma e^{j\theta^{l'}_{l,k}}$, the correlation magnitude $\varsigma$ lies in the range $[0, 1]$, and $\theta^{l'}_{l,k}$ is the user's incidence angle to the array boresight. The power of each pilot symbol is fixed to 200 mW, which is also the maximum power that each user can allocate to a data symbol, i.e., $P_{\max,l,k} = 200$ mW.
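The exponential correlation matrix of (68) is Hermitian Toeplitz and can be built directly from the large-scale fading coefficient, the correlation magnitude, and the incidence angle; the numbers below are arbitrary illustrative values:

```python
import numpy as np

def exp_corr(M, beta, varsigma, theta):
    # Exponential correlation model of (68): [R]_{m,n} = beta * r^(m-n) for m >= n,
    # with r = varsigma * exp(j*theta); below the diagonal it is the conjugate power.
    diff = np.subtract.outer(np.arange(M), np.arange(M))   # m - n
    return beta * (varsigma ** np.abs(diff)) * np.exp(1j * theta * diff)

R = exp_corr(M=4, beta=2.0, varsigma=0.8, theta=0.3)
assert np.allclose(R, R.conj().T)            # Hermitian
assert np.allclose(np.diag(R), 2.0)          # diagonal equals beta
assert np.min(np.linalg.eigvalsh(R)) > 0     # positive definite for varsigma < 1
```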
Extensive numerical results will be presented for the following methods, with either MMSE or EW-MMSE estimation: (ii) Single-layer decoding system with data power control: This benchmark is similar to (i), but the data powers are optimized using the weighted MMSE algorithm in Corollary 3.
(iii) Two-layer decoding system with fixed data power and LSFD vectors: The network deploys the two-layer decoding as shown in Fig. 1, using MRC and LSFD. The data symbols have fixed power 200 mW and the LSFD vectors are computed using Corollary 2.
(iv) Two-layer decoding system with fixed data power and approximate LSFD vectors: This benchmark is similar to (iii), but the LSFD vectors are computed using only the diagonal elements of the channel correlation matrices. This allows us to study how inaccurate LSFD vectors degrade the sum SE.
(v) Two-layer decoding system with optimized data power and LSFD vectors: This benchmark is similar to (iii), but the data powers and LSFD vectors are computed using the weighted MMSE algorithm as in Theorem 4.
(vi) Two-layer decoding system with optimized data power and approximate LSFD vectors:
This benchmark is similar to (v), but the LSFD vectors are computed by Corollary 2 based on only the diagonal coefficients of the channel correlation matrices. For a system that uses MMSE estimation and LSFD, the sum SE per cell is about 22.2% better at the stationary point than at the initial point. The corresponding improvement for the system that uses EW-MMSE estimation is about 24.7%. By using MMSE estimation, the two-layer decoding system gives 2.4% better sum SE than a system with single-layer decoding. The corresponding gain for EW-MMSE estimation is up to 7.5%. Besides, MMSE estimation gives an SE that is up to 7.1% higher than EW-MMSE. The proposed optimization methods need around 100 iterations to converge, but the complexity is low since every update in the algorithm consists of evaluating a closed-form expression.
A. Convergence
The approximation in (vi) of the channel correlation matrix as diagonal breaks the convergence statement in Theorem 4, so it is not included in Fig. 3. Hereafter, when we consider (vi) for comparison, we select the highest sum SE among 500 iterations. First, we observe the large gains in sum SE attained by using LSFD detection. With MMSE estimation, the sum SE increases by up to 7.5% in the case of equal fixed data powers, while the gain is about 7.9% when jointly optimizing data powers and LSFD vectors. The same maximum gains are observed when using EW-MMSE estimation, since these gains occur when ς = 0 (in which case MMSE and EW-MMSE coincide). The performance of EW-MMSE estimation is worse than that of MMSE when the correlation magnitude is increased, because EW-MMSE does not use knowledge of the spatial correlation to improve the estimation quality. The advantage of EW-MMSE is its reduced computational complexity. In the case of MMSE estimation, by increasing the number of BS antennas from 100 to 300, the gain of using LSFD increases from 4.0% to 7.7% with optimized data power, and from 3.8% to 6.8% with equal data power. In the case of EW-MMSE estimation and a fixed transmit power level, LSFD increases the sum SE by 5.5% to 8.6% compared to using only MRC. Moreover, by optimizing the data powers, the gain from using LSFD is between 5.5% and 9.4%.
B. Impact of Spatial Correlation
Definition 1 (Stationary point, [27]). Consider the optimization problem
$$\underset{\mathbf{x}\in\mathcal{X}}{\text{minimize}}\;\; g(\mathbf{x}), \qquad (70)$$
where the feasible set $\mathcal{X}$ is convex and $g(\mathbf{x}): \mathbb{R}^{n}\to\mathbb{R}$ is differentiable. A point $\mathbf{y}\in\mathcal{X}$ is a stationary point to the optimization problem (70) if the following property holds for all $\mathbf{x}\in\mathcal{X}$:
$$(\mathbf{x} - \mathbf{y})^{T}\,\nabla g(\mathbf{x})\big|_{\mathbf{x}=\mathbf{y}} \ge 0. \qquad (71)$$
Note that a stationary point y of g(x) can be obtained by solving the equation ∇g(x) = 0.
APPENDIX B
PROOF OF THEOREM 1
The numerator of (19) is reformulated into
$$\mathrm{E}\{|\mathrm{DS}_{l,k}|^{2}\} = p_{l,k}\,\big|\mathbf{a}^{H}_{l,k}\mathbf{b}_{l,k}\big|^{2}. \qquad (72)$$
Meanwhile, the pilot contamination term in the denominator of (19) is rewritten as
$$\mathrm{E}\{|\mathrm{PC}_{l,k}|^{2}\} = \sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\,\mathrm{E}\{\mathbf{v}^{H}_{l'',k}\mathbf{h}^{l''}_{l',k}\}\Big|^{2} = \sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\,\mathbf{a}^{H}_{l,k}\mathbf{b}_{l',k}\mathbf{b}^{H}_{l',k}\mathbf{a}_{l,k} = \mathbf{a}^{H}_{l,k}\mathbf{C}^{(1)}_{l,k}\mathbf{a}_{l,k}. \qquad (73)$$
The beamforming gain uncertainty term in the denominator of (19) is rewritten as
$$\mathrm{E}\{|\mathrm{BU}_{l,k}|^{2}\} = \mathbf{a}^{H}_{l,k}\mathbf{C}^{(2)}_{l,k}\mathbf{a}_{l,k}. \qquad (74)$$
Similarly, the non-coherent interference term in the denominator is computed as
$$\sum_{l'=1}^{L}\sum_{\substack{k'=1\\ k'\neq k}}^{K}\mathrm{E}\{|\mathrm{NI}_{l',k'}|^{2}\} = \sum_{l'=1}^{L}\sum_{\substack{k'=1\\ k'\neq k}}^{K}\mathrm{E}\Big\{\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\sqrt{p_{l',k'}}\,\mathbf{v}^{H}_{l'',k}\mathbf{h}^{l''}_{l',k'}\Big|^{2}\Big\} = \mathbf{a}^{H}_{l,k}\mathbf{C}^{(3)}_{l,k}\mathbf{a}_{l,k}, \qquad (75)$$
and the additive noise term is computed as
$$\mathrm{E}\{|\mathrm{AN}_{l,k}|^{2}\} = \mathrm{E}\Big\{\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\,\mathbf{v}^{H}_{l',k}\mathbf{n}_{l'}\Big|^{2}\Big\} = \mathbf{a}^{H}_{l,k}\mathbf{C}^{(4)}_{l,k}\mathbf{a}_{l,k}. \qquad (76)$$
The lower-bound on the uplink capacity given in Lemma 3 is written as
$$R_{l,k} = \left(1-\frac{\tau_p}{\tau_c}\right)\log_2\Bigg(1 + \frac{p_{l,k}\big|\mathbf{a}^{H}_{l,k}\mathbf{b}_{l,k}\big|^{2}}{\mathbf{a}^{H}_{l,k}\big(\sum_{i=1}^{4}\mathbf{C}^{(i)}_{l,k}\big)\mathbf{a}_{l,k}}\Bigg). \qquad (77)$$
Since the SINR expression in (77) is a generalized Rayleigh quotient with respect to a l,k , we can apply [16,Lemma B.10] to obtain the maximizing vector a l,k as in (31). Hence, using (31) in (77), maximizes the SE for both MMSE and EW-MMSE estimation. The maximum SE is given by (25) in the theorem.
APPENDIX C
PROOF OF THEOREM 2 IN CASE OF MMSE
Here the expectations in (33) are computed. The numerator of (33), $\mathrm{E}\{|\mathrm{DS}_{l,k}|^{2}\}$, becomes
$$p_{l,k}\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\,\mathrm{E}\Big\{\sqrt{\tau_p\hat{p}_{l',k}}\big(\mathbf{R}^{l'}_{l',k}\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{y}_{l',k}\big)^{H}\mathbf{h}^{l'}_{l,k}\Big\}\Big|^{2} = p_{l,k}\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\sqrt{\tau_p\hat{p}_{l',k}}\,\mathrm{tr}\big(\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{R}^{l'}_{l',k}\,\mathrm{E}\{\mathbf{h}^{l'}_{l,k}\mathbf{y}^{H}_{l',k}\}\big)\Big|^{2} = p_{l,k}\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\,\tau_p\sqrt{\hat{p}_{l',k}\hat{p}_{l,k}}\,\mathrm{tr}\big(\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{R}^{l'}_{l',k}\mathbf{R}^{l'}_{l,k}\big)\Big|^{2} = \tau_p\, p_{l,k}\Big|\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k}\Big|^{2}, \qquad (78)$$
The variance of the pilot contamination in the denominator of (33) is computed as
$$\sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\,\tau_p\sqrt{\hat{p}_{l'',k}\hat{p}_{l',k}}\,\mathrm{tr}\big(\boldsymbol{\Psi}^{-1}_{l'',k}\mathbf{R}^{l''}_{l'',k}\mathbf{R}^{l''}_{l',k}\big)\Big|^{2} = \tau_p\sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2}. \qquad (79)$$
The variance of the beamforming gain uncertainty, E{|BU l,k | 2 }, is evaluated as
L l =1 p l ,k L l =1 |a l l,k | 2 E |(ĥ l l ,k ) H h l l ,k | 2 I 1 − E{(ĥ l l ,k ) H h l l ,k } = τ 2 pp l ,kpl ,k |tr(Ψ Ψ Ψ −1 l ,k R l l ,k R l l ,k )| 2 = τ p (b l l ,k ) 2 .(84)
Combining (80), (81), and (84), we obtain the variance of the beamforming gain uncertainty
as L l =1 L l =1 p l ,k τ ppl ,k |a l l,k | 2 tr(R l l ,k R l l ,k Ψ Ψ Ψ −1 l ,k R l l ,k ) = L l =1 L l =1 τ p p l ,k |a l l,k | 2 c l ,k l ,k .(85)
The variance of the non-coherent interference term, L l =1
K k =1 k k E{|NI l ,k | 2 }, is computed based on the independent channel properties L l =1 K k =1 k k p l ,k L l =1 |a l l,k | 2 E (ĥ l l ,k ) H h l l ,k 2 = L l =1 K k =1 k k L l =1 p l ,k τ ppl ,k |a l l,k | 2 tr R l l ,k R l l ,k Ψ Ψ Ψ −1 l ,k R l l ,k = L l =1 K k =1 k k L l =1 τ p p l ,k |a l l,k | 2 c l ,k l ,k .(86)
The last expectation in the denominator is computed as
$$\mathrm{E}\{|\mathrm{AN}_{l,k}|^{2}\} = \sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\,\mathrm{E}\{|(\hat{\mathbf{h}}^{l'}_{l',k})^{H}\mathbf{n}_{l'}|^{2}\} = \sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\,\sigma^{2}\tau_p\hat{p}_{l',k}\,\mathrm{tr}\big(\mathbf{R}^{l'}_{l',k}\boldsymbol{\Psi}^{-1}_{l',k}\mathbf{R}^{l'}_{l',k}\big) = \sum_{l'=1}^{L}\tau_p\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}. \qquad (87)$$
Using (78), (79), and (85)–(87) in (19), we obtain the SINR expression in (33) for MMSE estimation.
APPENDIX D
PROOF OF THEOREM 2 IN CASE OF EW-MMSE
The variance of the beam uncertainty term in the denominator of (19) is computed as
L l =1 p l ,k L l =1 |a l l,k | 2 E |(ĥ l l ,k ) Hĥl l ,k | 2 − E{(ĥ l l ,k ) Hĥl l ,k } 2 + L l =1 p l ,k E L l =1 (a l l,k ) * (ĥ l l ,k ) H e l l ,k 2 .(91)
By using the relationship between the two channel estimatesĥ l l ,k andĥ l l ,k in Corollary 1,
(91) is equal to L l =1 p l ,k L l =1 |a l l,k | 2p l ,k (β l l ,k ) 2 p l ,k (β l l ,k ) 2 E{ ĥ l l ,k 4 } − E{ ĥ l l ,k 2 } 2 + L l =1 L l =1 p l ,k a l l,k 2 E (ĥ l l ,k ) H e l l ,k 2 = L l =1 L l =1 p l ,k τ p |a l l,k | 2 ( l l ,k ) 2 ( l l ,k ) 2 tr Ψ Ψ Ψ 2 l ,k + L l =1 L l =1 p l ,k |a l l,k | 2 ( l l ,k ) 2 tr R l l ,k − τ p ( l l ,k ) 2 Ψ Ψ Ψ l ,k Ψ Ψ Ψ l ,k (92) = L l =1 L l =1 p l ,k |a l l,k | 2 ( l l ,k ) 2 tr R l l ,k Ψ Ψ Ψ l ,k = L l =1 L l =1 p l ,k |a l l,k | 2 c l ,k l ,k .(93)
By performing MMSE estimation separately for every element of a channel vector, it is straightforward to prove thatĥ l l ,k and h l l ,k are independent since the joint density function is the product of their respective marginal densities. Consequently, the variance of the noncoherent interference in the denominator of (19)
, L l =1 K k =1,k k E{|NI l ,k | 2 }, is computed as L l =1 K k =1 k k p l ,k E L l =1 (a l l,k ) * (ĥ l l ,k ) H h l l ,k 2 = L l =1 K k =1 k k p l ,k L k =1 |a l l,k | 2 E (ĥ l l ,k ) H h l l ,k 2 (94) = L l =1 K k =1 k k L l =1 p l ,k |a l l,k | 2 ( l l ,k ) 2 tr R l l ,k Ψ Ψ Ψ l ,k = L l =1 K k =1 k k L l =1 p l ,k |a l l,k | 2 c l ,k l ,k .(95)
The last expectation in the denominator of (19) is computed by using the fact that the noise and the channel estimate are independent, leading to
$$\mathrm{E}\{|\mathrm{AN}_{l,k}|^{2}\} = \sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\big(\ell^{l'}_{l',k}\big)^{2}\sigma^{2}\,\mathrm{tr}\big(\boldsymbol{\Psi}_{l',k}\big) = \sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}. \qquad (96)$$
Applying (88)–(96) in (19), we obtain the SINR expression in (33) for EW-MMSE estimation.
APPENDIX E
PROOF OF THEOREM 3
The whole system, with the aggregate effect of channel and decoding, can be viewed as a SISO channel with deterministic channel gain, whose SE is the equivalent of (32), namely:
$$y_{l,k} = \sqrt{p_{l,k}}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k}\, s_{l,k} + \sum_{\substack{l'=1\\ l'\neq l}}^{L}\sqrt{p_{l',k}}\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\, s_{l',k} + \sqrt{\sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L} p_{l',k'}\big|a^{l''}_{l,k}\big|^{2}\, c^{l',k'}_{l'',k}}\; z_{l,k} + \sqrt{\sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}}\; n_{l,k}, \qquad (97)$$
where $z_{l,k}\sim\mathcal{CN}(0,1)$ and $n_{l,k}\sim\mathcal{CN}(0,1)$. The desired signal $s_{l,k}$ is decoded by using a beamforming coefficient $u_{l,k}\in\mathbb{C}$ as
$$\hat{s}_{l,k} = u_{l,k}\, y_{l,k} = \sqrt{p_{l,k}}\,u_{l,k}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k}\, s_{l,k} + \sum_{\substack{l'=1\\ l'\neq l}}^{L}\sqrt{p_{l',k}}\,u_{l,k}\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\, s_{l',k} + u_{l,k}\sqrt{\sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L} p_{l',k'}\big|a^{l''}_{l,k}\big|^{2}\, c^{l',k'}_{l'',k}}\; z_{l,k} + u_{l,k}\sqrt{\sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}}\; n_{l,k}. \qquad (98)$$
We now compute the mean-square error as
$$e_{l,k} = \mathrm{E}\{|\hat{s}_{l,k} - s_{l,k}|^{2}\} = \Big|\sqrt{p_{l,k}}\,u_{l,k}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k} - 1\Big|^{2} + \sum_{\substack{l'=1\\ l'\neq l}}^{L} p_{l',k}|u_{l,k}|^{2}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2} + |u_{l,k}|^{2}\sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L} p_{l',k'}\big|a^{l''}_{l,k}\big|^{2}\, c^{l',k'}_{l'',k} + |u_{l,k}|^{2}\sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}. \qquad (99)$$
After some algebra, we obtain $e_{l,k}$ as shown in (47). The optimal solution of $u_{l,k}$ is computed by equating the first derivative of $e_{l,k}$ with respect to $u_{l,k}$ to zero, leading to
$$u^{*}_{l,k}\bigg(\sum_{l'=1}^{L} p_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L} p_{l',k'}\big|a^{l''}_{l,k}\big|^{2}\, c^{l',k'}_{l'',k} + \sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}\bigg) - \sqrt{p_{l,k}}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k} = 0. \qquad (100)$$
Therefore, the optimal solution $u^{\mathrm{opt}}_{l,k}$ for a given set $\{\mathbf{a}_{l,k}, w_{l,k}, \rho_{l,k}\}$ is given by (101).
The optimal solution w opt l,k is obtained by taking the first derivative of the objective function of the optimization problem (46) with respect to w l,k and equating to zero:
$$w^{\mathrm{opt}}_{l,k} = e^{-1}_{l,k}. \qquad (102)$$
Using (101) and (102) in (46), the objective becomes a constant minus the sum SE, which establishes the equivalence stated in the theorem.
APPENDIX F
PROOF OF THEOREM 4
For the sake of simplicity, we omit the iteration index in the proof. The optimal solutions of $u_{l,k}$ and $w_{l,k}$ are easily computed from (101) and (102) by noting that $\rho_{l,k} = \sqrt{p_{l,k}}$. We can find the optimal solution to $\mathbf{a}_{l,k}$ for a given set of $\{u_{l,k}, w_{l,k}, \rho_{l,k}\}$ from the optimization problem
$$\underset{\{\mathbf{a}_{l,k}\}}{\text{minimize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K} w_{l,k}\,\tilde{e}_{l,k}, \qquad (104)$$
where $\tilde{e}_{l,k}$ in (104) depends on $\{\mathbf{a}_{l,k}\}$ and is defined as
$$\tilde{e}_{l,k} = |u_{l,k}|^{2}\bigg(\sum_{l'=1}^{L}\rho^{2}_{l',k}\Big|\sum_{l''=1}^{L}(a^{l''}_{l,k})^{*}\, b^{l''}_{l',k}\Big|^{2} + \sum_{l'=1}^{L}\sum_{k'=1}^{K}\sum_{l''=1}^{L}\rho^{2}_{l',k'}\big|a^{l''}_{l,k}\big|^{2}\, c^{l',k'}_{l'',k} + \sum_{l'=1}^{L}\big|a^{l'}_{l,k}\big|^{2}\, d_{l',k}\bigg) - \rho_{l,k}\, u_{l,k}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k} - \rho_{l,k}\, u^{*}_{l,k}\sum_{l'=1}^{L} a^{l'}_{l,k}\,(b^{l'}_{l,k})^{*}. \qquad (105)$$
By denoting $f(\mathbf{a}_{l,k}) = \sum_{l=1}^{L}\sum_{k=1}^{K} w_{l,k}\tilde{e}_{l,k}$ and using the expression of $\tilde{e}_{l,k}$ in (105), we can write $f(\mathbf{a}_{l,k})$ as
$$f(\mathbf{a}_{l,k}) = \sum_{l=1}^{L}\sum_{k=1}^{K} w_{l,k}\Big(|u_{l,k}|^{2}\,\mathbf{a}^{H}_{l,k}\mathbf{C}_{l,k}\mathbf{a}_{l,k} - u_{l,k}\rho_{l,k}\,\mathbf{a}^{H}_{l,k}\mathbf{b}_{l,k} - u^{*}_{l,k}\rho_{l,k}\,\mathbf{b}^{H}_{l,k}\mathbf{a}_{l,k}\Big). \qquad (106)$$
Taking the first derivative of $f(\mathbf{a}_{l,k})$ with respect to $\mathbf{a}_{l,k}$, we obtain
$$\nabla f = 2 w_{l,k}|u_{l,k}|^{2}\,\mathbf{C}_{l,k}\mathbf{a}_{l,k} - 2 w_{l,k}\, u_{l,k}\rho_{l,k}\,\mathbf{b}_{l,k}. \qquad (107)$$
Therefore, the solution is
$$\mathbf{a}^{\mathrm{opt}}_{l,k} = \frac{\rho_{l,k}}{u^{*}_{l,k}}\,\mathbf{C}^{-1}_{l,k}\mathbf{b}_{l,k}. \qquad (108)$$
After removing $\rho_{l,k}$ in both the numerator and denominator of the fraction in (108) and doing some algebra, the optimal solution to $\mathbf{a}_{l,k}$ is expressed as in (54). We now compute the optimal solution for $\rho_{l,k}$ for a given set of optimization variables $\{\mathbf{a}_{l,k}, w_{l,k}, u_{l,k}\}$. In this case, (48) simplifies to
$$\underset{\{\rho_{l,k}\ge 0\}}{\text{minimize}}\;\; \sum_{l=1}^{L}\sum_{k=1}^{K} w_{l,k}\, e_{l,k} \qquad \text{subject to}\;\; \rho^{2}_{l,k}\le P_{\max,l,k},\;\forall l,k. \qquad (109)$$
The Lagrangian function of the optimization (109) is
$$\mathcal{L}\big(\{\rho_{l,k}\},\{\lambda_{l,k}\}\big) = \sum_{l=1}^{L}\sum_{k=1}^{K} w_{l,k}\, e_{l,k} + \sum_{l=1}^{L}\sum_{k=1}^{K}\lambda_{l,k}\big(\rho^{2}_{l,k} - P_{\max,l,k}\big), \qquad (110)$$
where $\lambda_{l,k}\ge 0$ is the Lagrange multiplier associated with the constraint $\rho^{2}_{l,k}\le P_{\max,l,k}$. Taking the first derivative of $\mathcal{L}\big(\{\rho_{l,k}\},\{\lambda_{l,k}\}\big)$ with respect to $\rho_{l,k}$, we obtain
$$\frac{\partial\mathcal{L}}{\partial\rho_{l,k}} = 2\rho_{l,k}\sum_{l'=1}^{L} w_{l',k}|u_{l',k}|^{2}\Big|\sum_{l''=1}^{L}(a^{l''}_{l',k})^{*}\, b^{l''}_{l,k}\Big|^{2} + 2\rho_{l,k}\sum_{l'=1}^{L}\sum_{k'=1}^{K} w_{l',k'}|u_{l',k'}|^{2}\sum_{l''=1}^{L}\big|a^{l''}_{l',k'}\big|^{2}\, c^{l,k}_{l'',k'} - w_{l,k}\, u_{l,k}\sum_{l'=1}^{L}(a^{l'}_{l,k})^{*}\, b^{l'}_{l,k} - w_{l,k}\, u^{*}_{l,k}\sum_{l'=1}^{L} a^{l'}_{l,k}\,(b^{l'}_{l,k})^{*} + 2\lambda_{l,k}\rho_{l,k}. \qquad (111)$$
By equating the above derivative to zero, the stationary point is obtained as shown in (112). The Lagrange multiplier $\lambda_{l,k}$ must satisfy the complementary slackness condition [28]
$$\lambda_{l,k}\big(\rho^{2}_{l,k} - P_{\max,l,k}\big) = 0. \qquad (113)$$
Therefore, we obtain the solution to $\rho_{l,k}$ as
$$\rho_{l,k} = \begin{cases}\min\big(\tilde{\rho}_{l,k}, \sqrt{P_{\max,l,k}}\big), & \text{if } \lambda_{l,k} = 0,\\[2pt] \sqrt{P_{\max,l,k}}, & \text{if } \lambda_{l,k}\neq 0,\end{cases} \qquad (114)$$
where $\tilde{\rho}_{l,k}$ is obtained from (112) by setting $\lambda_{l,k} = 0$. From (114), the optimal solution to $\rho_{l,k}$ is derived as shown in (56). We now prove that Algorithm 1 converges to a stationary point, as defined in Definition 1.
The optimization problem (48) has the Lagrangian
$$g\big(\{u_{l,k}\},\{w_{l,k}\},\{\mathbf{a}_{l,k}\},\{\rho_{l,k}\}\big) = \sum_{l=1}^{L}\sum_{k=1}^{K}\big(w_{l,k}\, e_{l,k} - \ln(w_{l,k})\big) + \sum_{l=1}^{L}\sum_{k=1}^{K}\lambda_{l,k}\big(\rho^{2}_{l,k} - P_{\max,l,k}\big).$$
Since, in every iteration, each subproblem is convex and has a unique optimal solution, we can construct a chain of inequalities of the form $\cdots \ge g\big(\{u^{\mathrm{opt},(n-1)}_{l,k}\},\{w^{\mathrm{opt},(n-1)}_{l,k}\},\{\mathbf{a}^{\mathrm{opt},(n-1)}_{l,k}\},\{\rho^{\mathrm{opt},(n-1)}_{l,k}\}\big) \ge \cdots$, where the superscript "opt, $(n-1)$" denotes the optimal solution to a subproblem at iteration $n-1$. The objective value is therefore non-increasing over the iterations and bounded from below, hence convergent.
These properties mean that the limit point is a stationary point to (48).
We now prove that the optimal solution $\{\mathbf{a}^{\mathrm{opt}}_{l,k}\}, \{(\rho^{\mathrm{opt}}_{l,k})^{2}\}$ forms a stationary point of (45). In fact, the optimization problem (45) is equivalent to
$$\underset{\{\rho_{l,k}\ge 0\},\{\mathbf{a}_{l,k}\}}{\text{maximize}}\;\; h\big(\{\mathbf{a}_{l,k}\},\{\rho_{l,k}\}\big),$$
where the objective function is
$$h\big(\{\mathbf{a}_{l,k}\},\{\rho_{l,k}\}\big) \triangleq \sum_{l=1}^{L}\sum_{k=1}^{K}\log_2\big(1+\mathrm{SINR}_{l,k}\big) + \sum_{l=1}^{L}\sum_{k=1}^{K}\lambda_{l,k}\big(\rho^{2}_{l,k} - P_{\max,l,k}\big).$$
Here, the SINR value has a similar expression as in (33), but with $p_{l,k} = \rho^{2}_{l,k}$. For given $w_{l,k} = w^{\mathrm{opt}}_{l,k}$ and $u_{l,k} = u^{\mathrm{opt}}_{l,k}$, for all $l,k$, it is sufficient to prove the following equalities:
$$\frac{\partial g}{\partial\rho_{l',k'}} = \frac{1}{\log_2(e)}\frac{\partial h}{\partial\rho_{l',k'}} + 2\lambda_{l',k'}\rho_{l',k'}, \quad \forall l',k', \qquad (127)$$
$$\nabla g(\mathbf{a}_{l',k'}) = \frac{1}{\log_2(e)}\nabla h(\mathbf{a}_{l',k'}), \quad \forall l',k'. \qquad (128)$$
where e opt l,k = 1 + SINR l,k −1 is derived by using (101) in (60) and some algebra. It leads to ∂g ∂ ρ l ,k = L l=1 K t=1 1 + SINR l,k ∂ 1 + SINR l,k −1 ∂ ρ l ,k + 2λ l ,k ρ l ,k = L l=1 K k=1 1 + SINR l,k −1 ∂SINR l,k ∂ ρ l ,k + 2λ l ,k ρ l ,k = 1 log 2 (e) ∂h ∂ ρ l ,k + 2λ l ,k ρ l ,k .
The proof of (128) is similar to that of (127).
Lemma 2. If BS $l$ uses EW-MMSE estimation and the diagonal elements of the spatial correlation matrix of the channel are equal, the channel estimate between user $k$ in cell $l$ and BS $l$ is $\hat{\mathbf{h}}^{l}_{l,k} = \ell^{l}_{l,k}\,\tilde{\mathbf{y}}_{l,k}$,
Corollary 1. When the diagonal elements of the spatial correlation matrix of the channel are equal, the two EW-MMSE estimates $\hat{\mathbf{h}}^{l}_{l,k}$ and $\hat{\mathbf{h}}^{l}_{l',k}$ of the channels of users $k$ in cells $l$ and $l'$ that are computed at BS $l$ are related as:
Fig. 1. Desired signals are detected by the two-layer decoding technique.
This section first derives a general SE expression for each user k in each cell l and a closed-form expression when using MRC. These expressions are then used to obtain the LSFD vectors that maximize the SE. The use-and-then-forget capacity bounding technique [5, Chapter 2.3.4], [7, Section 4.3] allows us to compute a lower bound on the uplink ergodic capacity (i.e., an achievable SE) as follows.
Lemma 3. A lower bound on the uplink ergodic SE is
The limit point $\{\mathbf{a}^{\mathrm{opt}}_{l,k}\}, \{(\rho^{\mathrm{opt}}_{l,k})^{2}\}$ is also a stationary point to the problem (45).
With the LSFD vector fixed, the SE for each user in the network is a function only of the data power coefficients, and it saturates when the number of BS antennas is increased without bound. The SE can thus only be improved through data power control. For this communication scenario, the problem (48) becomes the minimization of $\sum_{l=1}^{L}\sum_{k=1}^{K}\big(w_{l,k}\, e_{l,k} - \ln(w_{l,k})\big)$ over $\{\rho_{l,k}\ge 0\},\{w_{l,k}\ge 0\},\{u_{l,k}\}$.
After initializing the data power coefficients to a point in the feasible set, Corollary 3 provides closed-form expressions to update each variable in the optimization (59) iteratively. This benchmark only treats the data powers as optimization variables, so it is a simplification of Algorithm 1.

V. NUMERICAL RESULTS

To demonstrate the effectiveness of the proposed algorithms, we consider a wrapped-around cellular network with four cells, as illustrated in Fig. 2. The distance between user $k$ in cell $l$ and BS $l'$ is denoted by $d^{l'}_{l,k}$. The users in each cell are uniformly distributed over the cell area at least 35 m away from the BS, i.e., $d^{l'}_{l,k} \ge 35$ m. Monte-Carlo simulations are done over 300 random sets of user locations. We model the system parameters and large-scale fading similarly to the 3GPP LTE specifications [24]. The system uses 20 MHz of bandwidth, the noise variance is −96 dBm, and the noise figure is 5 dB. The large-scale fading coefficient $\beta^{l'}_{l,k}$ in decibel is computed as
Fig. 2. A wrapped-around cellular network used for simulation.
Fig. 3. Convergence of the proposed sum SE optimization with M = 200, K = 5, and ς = 0.8.
(i) Single-layer decoding system with fixed data power: Each BS uses MRC for data decoding for the users in its own cell, and all users transmit data symbols with the same power 200 mW.
Fig. 3 shows the convergence of the proposed methods for sum SE optimization in Theorem 4 and Corollary 3, for both MMSE and EW-MMSE estimation. Starting from initial data powers in the feasible set, updating the optimization variables improves the sum SE in every iteration. For a system that uses MMSE estimation and LSFD, the sum SE per cell is about 22.2% better at the stationary point than at the initial point.
Fig. 4. Sum SE per cell [b/s/Hz] versus the correlation magnitude. The network uses MMSE estimation, M = 200, and K = 5.
Fig. 5. Sum SE per cell [b/s/Hz] versus the correlation magnitude. The network uses EW-MMSE estimation, M = 200, and K = 5.
Figs. 4 and 5 show the sum SE per cell as a function of the channel correlation magnitude ς for a multi-cell Massive MIMO system using either MMSE or EW-MMSE estimation.
Fig. 6. Sum SE per cell [b/s/Hz] versus the number of BS antennas. The network uses MMSE estimation, K = 5, and ς = 0.5.
Fig. 7. Sum SE per cell [b/s/Hz] versus the number of BS antennas. The network uses EW-MMSE estimation, K = 5, and ς = 0.5.

Interestingly, Figs. 4 and 5 indicate that it is sufficient to use only the large-scale fading coefficients when constructing the LSFD vectors in many scenarios. Specifically, in the system with EW-MMSE estimation, the approximate LSFD vectors yield almost the same sum SE as the optimal ones. Meanwhile, in the case of MMSE estimation, the loss from the approximation of the LSFD vectors, which are based only on the diagonal values of the channel correlation matrices, grows up to 6.7% at a correlation magnitude of 0.8. Moreover, the performance is greatly improved when the data powers are optimized. The gain varies from 17.9% to 20.7%, and the gap becomes larger as the channel correlation magnitude increases. This shows the importance of joint data power control and LSFD optimization in Massive MIMO systems with spatially correlated channels.

C. Impact of Number of Antennas and Users

Figs. 6 and 7 show the sum SE per cell as a function of the number of BS antennas with MMSE and EW-MMSE estimation, respectively. Two-layer decoding gives improvements in all the cases. In the case of MMSE estimation, by increasing the number of BS antennas from
Figs. 8 and 9 show the sum SE per cell as a function of the number of users per cell when using MMSE and EW-MMSE estimation, respectively. The figures demonstrate how the gain from power control increases with the number of users. The gain grows from 5.2% for two

Fig. 8. Sum SE per cell [b/s/Hz] versus the number of users per cell. The network uses MMSE estimation, M = 200, and ς = 0.5.
Fig. 9. Sum SE per cell [b/s/Hz] versus the number of users per cell. The network uses EW-MMSE estimation, M = 200, and ς = 0.5.

users to 35.8% for ten users. The approximated version of LSFD detection works properly in all tested scenarios, in the sense that the maximum loss in SE is only up to 2.9%.

VI. CONCLUSION

This paper has investigated the performance of the LSFD design in mitigating mutual interference for multi-cell Massive MIMO systems with correlated Rayleigh fading. This decoding design is deployed as a second decoding layer to mitigate the interference that remains after classical linear decoding. Numerical results demonstrate the effectiveness of the LSFD in reducing pilot contamination, with an improvement of the sum SE for each cell of up to about 10% in the tested scenarios. We have also investigated joint data power control and LSFD design, which efficiently improves the sum SE of the network. Even though the sum SE optimization is a non-convex and NP-hard problem, we proposed an iterative approach to obtain a stationary point with low computational complexity. Numerical results showed improvements of the sum SE for each cell of more than 20% with a limited number of BS antennas.

APPENDIX A
USEFUL LEMMA AND DEFINITION

Lemma 4 (Lemma 2 in [26]). Let a random vector be distributed as u ∼ CN(0, Λ) and consider an arbitrary, deterministic matrix M. It holds that

E{|u^H M u|^2} = |tr(ΛM)|^2 + tr(ΛMΛM^H).
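Lemma 4 above is straightforward to check by Monte Carlo. The sketch below (arbitrary small dimensions and random matrices, chosen here for illustration) draws u ∼ CN(0, Λ) via a Cholesky factor and compares the sample fourth moment with the trace formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Random Hermitian positive-definite covariance Lambda and deterministic M.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Lam = A @ A.conj().T / n
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Right-hand side of Lemma 4.
analytic = (abs(np.trace(Lam @ M))**2
            + np.trace(Lam @ M @ Lam @ M.conj().T)).real

# Monte Carlo: u = Lc z with Lc the Cholesky factor of Lambda, z ~ CN(0, I).
Lc = np.linalg.cholesky(Lam)
N = 400_000
z = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
u = z @ Lc.T                                   # each row is a sample of u
q = np.einsum('ni,ij,nj->n', u.conj(), M, u)   # u^H M u per sample
mc = np.mean(np.abs(q)**2)

print(analytic, mc)   # should agree to Monte Carlo accuracy
```

The 1/sqrt(2) scaling makes each entry of z circularly symmetric with unit variance, so E{u u^H} = Lc Lc^H = Λ as the lemma requires.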
Substituting (79), (85), (86), and (87) in (33), we obtain the closed-form expression for the SE as shown in Theorem 2.

APPENDIX D
PROOF OF THEOREM 2 IN THE CASE OF EW-MMSE

The main steps to prove the results for the case of EW-MMSE are similar to those of MMSE, but the distributions of the estimates and estimation errors are different (and not independent). However, we can use the relationship in Corollary 1 between the channels of the users sending non-orthogonal pilot signals to perform the derivation. The main steps of the proof are summarized as follows: The numerator of (19) is computed based on the relationship between the estimates of the channels to BS l from users k in cells l and l′; we use the relationship between the channel estimates of users k in cells l and l′ to compute the variance of the pilot contamination term in the denominator of (
Substituting (89), (93), (95), and (96) in (18), we obtain the closed-form expression of the SE as shown in Theorem 2.
p_{l′,k} |a^l_{l,k}|^2 c_{l′,k} + Σ_{l′=1}^{L} |a^l_{l,k}|^2 d_{l′,k} + · · ·

in (46), we obtain the optimization problem

minimize over {p_{l,k} ≥ 0}, {a_{l,k}}:  SINR_{l,k}
subject to  p_{l,k} ≤ P_{max,l,k}, ∀ l, k.
is first converted to the following equivalent unconstrained problem:

minimize over {ρ_{l,k} ≥ 0}, {a_{l,k}}, {w_{l,k} ≥ 0}, {u_{l,k}}:  g({u_{l,k}}, {w_{l,k}}, {a_{l,k}}, {ρ_{l,k}})   (116)

where the objective function g({u_{l,k}}, {w_{l,k}}, {a_{l,k}}, {ρ_{l,k}}) is defined as
{ρ_{l,k}}^{opt,(n−1)} ≥ · · · (n − 2).

The chain of inequalities (118) means that the objective function of the optimization problem (116) is monotonically decreasing. Additionally, this function is lower bounded by zero, so Algorithm 1 must converge to a limit point, attained by a solution that we call {ρ^opt_{l,k}}. Note that g({u_{l,k}}, {w_{l,k}}, {a_{l,k}}, {ρ_{l,k}}) is convex in each optimization variable when the other variables are fixed, and the optimal solution to each sub-problem is computed from the first derivative of the cost function. By applying the standard trick in [29, Remark 2.2] to decompose a complex number into its real and imaginary parts, the following properties are obtained:
l ,k + 2λ l ,k ρ l ,k ,
where the expectation I_1 is computed by applying the property in Lemma 4, and the expectation I_2 is computed as
[1] E. Björnson, E. G. Larsson, and T. L. Marzetta, "Massive MIMO: 10 myths and one critical question," IEEE Commun. Mag., vol. 54, no. 2, pp. 114-123, 2016.
[2] T. L. Marzetta, "Noncooperative cellular wireless with unlimited numbers of base station antennas," IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3590-3600, Nov. 2010.
[3] A. Gupta and R. K. Jha, "A survey of 5G network: Architecture and emerging technologies," IEEE Access, vol. 3, pp. 1206-1232, 2015.
[4] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and J. C. Zhang, "What will 5G be?" IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1065-1082, 2014.
[5] T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo, Fundamentals of Massive MIMO. Cambridge University Press, 2016.
[6] J. Jose, A. Ashikhmin, T. L. Marzetta, and S. Vishwanath, "Pilot contamination and precoding in multi-cell TDD systems," IEEE Trans. Wireless Commun., vol. 10, no. 8, pp. 2640-2651, 2011.
[7] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO has unlimited capacity," IEEE Trans. Wireless Commun., vol. 17, no. 1, pp. 574-590, 2018.
[8] E. Björnson, E. G. Larsson, and M. Debbah, "Massive MIMO for maximal spectral efficiency: How many users and pilots should be allocated?" IEEE Trans. Wireless Commun., vol. 15, no. 2, pp. 1293-1308, 2016.
[9] S. Jin, M. Li, Y. Huang, Y. Du, and X. Gao, "Pilot scheduling schemes for multi-cell massive multiple-input-multiple-output transmission," IET Communications, vol. 9, no. 5, pp. 689-700, 2015.
[10] X. Zhu, Z. Wang, L. Dai, and C. Qian, "Smart pilot assignment for Massive MIMO," IEEE Commun. Letters, vol. 19, no. 9, pp. 1644-1647, 2015.
[11] A. Ashikhmin and T. Marzetta, "Pilot contamination precoding in multi-cell large scale antenna systems," in Proc. IEEE ISIT, 2012.
[12] A. Ashikhmin, L. Li, and T. L. Marzetta, "Interference reduction in multi-cell Massive MIMO systems with large-scale fading precoding," IEEE Trans. Inf. Theory, 2018, early access.
[13] A. Adhikary, A. Ashikhmin, and T. L. Marzetta, "Uplink interference reduction in large-scale antenna systems," IEEE Trans. Commun., vol. 65, no. 5, pp. 2194-2206, 2017.
[14] A. Adhikary and A. Ashikhmin, "Uplink Massive MIMO for channels with spatial correlation," 2018. [Online].
[15] S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, 1993.
[16] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO networks: Spectral, energy, and hardware efficiency," Foundations and Trends in Signal Processing, vol. 11, no. 3-4, pp. 154-655, 2017.
[17] M. Biguesh and A. Gershman, "Training-based MIMO channel estimation: a study of estimator tradeoffs and optimal training signals," IEEE Trans. Signal Process., vol. 54, no. 3, pp. 884-893, 2006.
[18] E. Nayebi, A. Ashikhmin, T. L. Marzetta, and B. D. Rao, "Performance of cell-free Massive MIMO systems with MMSE and LSFD receivers," in Proc. ASILOMAR, 2016, pp. 203-207.
[19] V. Annapureddy and V. Veeravalli, "Sum capacity of MIMO interference channels in the low interference regime," IEEE Trans. Inf. Theory, vol. 57, no. 5, pp. 2565-2581, 2011.
[20] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, "An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel," IEEE Trans. Signal Process., vol. 59, no. 9, pp. 4331-4340, 2011.
[21] P. Weeraddana, M. Codreanu, M. Latva-aho, A. Ephremides, and C. Fischione, "Weighted sum-rate maximization in wireless networks: A review," Foundations and Trends in Networking, vol. 6, no. 1-2, pp. 1-163, 2012.
[22] S. Christensen, R. Agarwal, E. Carvalho, and J. Cioffi, "Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design," IEEE Trans. Wireless Commun., vol. 7, no. 12, pp. 4792-4799, 2008.
[23] R. Brandt, E. Björnson, and M. Bengtsson, "Weighted sum rate optimization for multicell MIMO systems with hardware-impaired transceivers," in Proc. ICASSP, 2014, pp. 479-483.
[24] Further advancements for E-UTRA physical layer aspects (Release 9). 3GPP TS 36.814, Mar. 2010.
[25] S. Loyka, "Channel capacity of MIMO architecture using the exponential correlation matrix," IEEE Commun. Lett., vol. 5, no. 9, pp. 369-371, 2001.
[26] E. Björnson, M. Matthaiou, and M. Debbah, "Massive MIMO with non-ideal arbitrary arrays: Hardware scaling laws and circuit-aware design," IEEE Trans. Wireless Commun., vol. 14, no. 8, pp. 4353-4368, 2015.
[27] Y. Yang and M. Pesavento, "A unified successive pseudoconvex approximation framework," IEEE Trans. Signal Process., vol. 65, no. 13, pp. 3313-3328, 2017.
[28] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[29] E. Björnson and E. Jorswieck, "Optimal resource allocation in coordinated multi-cell systems," Foundations and Trends in Communications and Information Theory, vol. 9, no. 2-3, pp. 113-381, 2013.
| [] |
[
"Gravitational Self-force in a Radiation Gauge",
"Gravitational Self-force in a Radiation Gauge"
] | [
"Tobias S Keidl \nDepartment of Physics\nUniversity of Wisconsin-Washington County\n400 University Dr53095West BendWIUSA\n",
"Abhay G Shah \nCenter for Gravitation and Cosmology\nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nP.O. Box 41353201MilwaukeeWisconsinUSA\n",
"John L Friedman \nCenter for Gravitation and Cosmology\nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nP.O. Box 41353201MilwaukeeWisconsinUSA\n",
"Dong-Hoon Kim \nMax-Planck-Institut fr Gravitationsphysik\nAm Mhlenberg 1D-14476GolmGermany\n\nDivision of Physics, Mathematics, and Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n\nInstitute for the Early Universe\nDepartment of Physics\nEwha Womans University\n120-750SeoulSouth Korea\n",
"Larry R Price \nCenter for Gravitation and Cosmology\nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nP.O. Box 41353201MilwaukeeWisconsinUSA\n"
] | [
"Department of Physics\nUniversity of Wisconsin-Washington County\n400 University Dr53095West BendWIUSA",
"Center for Gravitation and Cosmology\nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nP.O. Box 41353201MilwaukeeWisconsinUSA",
"Center for Gravitation and Cosmology\nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nP.O. Box 41353201MilwaukeeWisconsinUSA",
"Max-Planck-Institut fr Gravitationsphysik\nAm Mhlenberg 1D-14476GolmGermany",
"Division of Physics, Mathematics, and Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Institute for the Early Universe\nDepartment of Physics\nEwha Womans University\n120-750SeoulSouth Korea",
"Center for Gravitation and Cosmology\nDepartment of Physics\nUniversity of Wisconsin-Milwaukee\nP.O. Box 41353201MilwaukeeWisconsinUSA"
] | [] | In this, the first of two companion papers, we present a method for finding the gravitational self-force in a modified radiation gauge for a particle moving on a geodesic in a Schwarzschild or Kerr spacetime. An extension of an earlier result by Wald is used to show the spin-weight ±2 perturbed Weyl scalar (ψ0 or ψ4) determines the metric perturbation outside the particle up to a gauge transformation and an infinitesimal change in mass and angular momentum. A Hertz potential is used to construct the part of the retarded metric perturbation that involves no change in mass or angular momentum from ψ0 in a radiation gauge. The metric perturbation is completed by adding changes in the mass and angular momentum of the background spacetime outside the radial coordinate r0 of the particle in any convenient gauge. The resulting metric perturbation is singular only on the trajectory of the particle. A mode-sum method is then used to renormalize the self-force. Gralla shows that the renormalized self-force can be used to find the correction to a geodesic orbit in a gauge for which the leading, O(ρ −1 ), term in the metric perturbation has spatial components even under a parity transformation orthogonal to the particle trajectory, and we verify that the metric perturbation in a radiation gauge satisfies that condition.We show that the singular behavior of the metric perturbation and the expression for the bare self-force have the same power-law behavior in L = ℓ + 1/2 as in a Lorenz gauge (with different coefficients). We explicitly compute the singular Weyl scalar and its mode-sum decomposition to subleading order in L for a particle in circular orbit in a Schwarzschild geometry and obtain the renormalized field. Because the singular field can be defined as this mode sum, the coefficients of each angular harmonic in the sum must agree with the large L limit of the corresponding coefficients of the retarded field. 
One may therefore compute the singular field by numerically matching the retarded field to a power series in L. To check the accuracy of the numerical method, we analytically compute leading and subleading terms in the singular expansion of ψ0 and compare the numerical and analytic values of the renormalization constants, finding agreement to high precision. Details of the numerical computation of the perturbed metric, the self-force, and the quantity h αβ u α u β (gauge invariant under helically symmetric gauge transformations) are presented for this test case in the companion paper. | 10.1103/physrevd.82.124012 | [
"https://arxiv.org/pdf/1004.2276v4.pdf"
] | 119,183,444 | 1004.2276 | 0835d4eaed1210b91767ac77bd84c422a9af77aa |
Gravitational Self-force in a Radiation Gauge
24 Oct 2010 (Dated: October 26, 2010)
Tobias S Keidl
Department of Physics
University of Wisconsin-Washington County
400 University Dr53095West BendWIUSA
Abhay G Shah
Center for Gravitation and Cosmology
Department of Physics
University of Wisconsin-Milwaukee
P.O. Box 41353201MilwaukeeWisconsinUSA
John L Friedman
Center for Gravitation and Cosmology
Department of Physics
University of Wisconsin-Milwaukee
P.O. Box 41353201MilwaukeeWisconsinUSA
Dong-Hoon Kim
Max-Planck-Institut fr Gravitationsphysik
Am Mhlenberg 1D-14476GolmGermany
Division of Physics, Mathematics, and Astronomy
California Institute of Technology
91125PasadenaCAUSA
Institute for the Early Universe
Department of Physics
Ewha Womans University
120-750SeoulSouth Korea
Larry R Price
Center for Gravitation and Cosmology
Department of Physics
University of Wisconsin-Milwaukee
P.O. Box 41353201MilwaukeeWisconsinUSA
Gravitational Self-force in a Radiation Gauge
24 Oct 2010 (Dated: October 26, 2010)
PACS numbers: 04.30.Db, 04.25.Nx, 04.70.Bw
In this, the first of two companion papers, we present a method for finding the gravitational self-force in a modified radiation gauge for a particle moving on a geodesic in a Schwarzschild or Kerr spacetime. An extension of an earlier result by Wald is used to show the spin-weight ±2 perturbed Weyl scalar (ψ0 or ψ4) determines the metric perturbation outside the particle up to a gauge transformation and an infinitesimal change in mass and angular momentum. A Hertz potential is used to construct the part of the retarded metric perturbation that involves no change in mass or angular momentum from ψ0 in a radiation gauge. The metric perturbation is completed by adding changes in the mass and angular momentum of the background spacetime outside the radial coordinate r0 of the particle in any convenient gauge. The resulting metric perturbation is singular only on the trajectory of the particle. A mode-sum method is then used to renormalize the self-force. Gralla shows that the renormalized self-force can be used to find the correction to a geodesic orbit in a gauge for which the leading, O(ρ^{-1}), term in the metric perturbation has spatial components even under a parity transformation orthogonal to the particle trajectory, and we verify that the metric perturbation in a radiation gauge satisfies that condition. We show that the singular behavior of the metric perturbation and the expression for the bare self-force have the same power-law behavior in L = ℓ + 1/2 as in a Lorenz gauge (with different coefficients). We explicitly compute the singular Weyl scalar and its mode-sum decomposition to subleading order in L for a particle in circular orbit in a Schwarzschild geometry and obtain the renormalized field. Because the singular field can be defined as this mode sum, the coefficients of each angular harmonic in the sum must agree with the large L limit of the corresponding coefficients of the retarded field.
One may therefore compute the singular field by numerically matching the retarded field to a power series in L. To check the accuracy of the numerical method, we analytically compute leading and subleading terms in the singular expansion of ψ0 and compare the numerical and analytic values of the renormalization constants, finding agreement to high precision. Details of the numerical computation of the perturbed metric, the self-force, and the quantity h αβ u α u β (gauge invariant under helically symmetric gauge transformations) are presented for this test case in the companion paper.
I. INTRODUCTION
Among the most important sources for LISA are extreme mass ratio inspirals (EMRIs) of solar mass compact objects into supermassive black holes. LISA could potentially measure hundreds of EMRI events [1] whose wide range of astrophysical and fundamental implications [2,3] include determination of the Hubble constant [4,5]; of luminosity distance, mass and spin of galactic black holes; and measurements of the deviation from a Kerr geometry of the central object [6]. Such measurements depend on accurate parameter estimation, for which it is essential to have accurate gravitational waveforms available; these require an accurate calculation of the gravitational self-force experienced by the particle.
The gravitational self-force contains both dissipative and conservative parts. The dissipative part is simply the familiar contribution from the half-retarded minus half-advanced Green's function, smooth at the position of the particle. The more difficult portion of calculating the gravitational self-force is the determination of the conservative part of the field. Estimates of its effect on the phase of the waveform are given, for example, in [7][8][9][10][11]. The focus of our work is on this latter problem. A number of authors have worked on this problem in recent years. Poisson's Living Review [12] is a comprehensive self-contained introduction to the subject; Barack [13] gives an extensive recent review; and Detweiler [14] provides a more concise summary.
A complicating feature of the conservative part of the self-force is a combination of its inherent non-locality in curved space, due to scatter off curvature [15][16][17], and the singular behavior of its expression near the particle. The calculation can be made tractable through the observation that the field consists of both a regular part that is entirely responsible for the self-force and a singular part that contributes nothing to the self-force. The other complicating feature of self-force computations is the choice of gauge. The Lorenz gauge is particularly useful for sorting out formal issues but does not lead to separable, decoupled equations in the Kerr spacetime. For this reason we choose to work in a radiation gauge that exploits the separability of the Teukolsky equation [18,19] for the perturbed Weyl scalar ψ0 in the Kerr background. Using the Hertz potential formalism developed for gravity by Kegeles and Cohen [20,21], Chrzanowski [22], Stewart [23] and Wald [24] (which we will call the CCK formalism), it is possible to construct a metric perturbation from source-free solutions to the Teukolsky equation. Explicit forms of the reconstruction are given for perturbations of Schwarzschild and Kerr spacetimes, respectively, by Lousto and Whiting [25] and Ori [26]. With ψ0 written as a sum of angular harmonics (in Kerr, a sum of oblate spheroidal harmonics), the reconstruction yields the part of the perturbed metric that has no change in the mass or angular momentum of the spacetime. There is a radiation-gauge form of a change in mass (an ℓ = 0 perturbation in the case of a Schwarzschild background) that arises from a Hertz potential in the CCK formalism, but it is singular on a ray through the particle [27,28]. One can find a nonsingular form for the metric of a mass perturbation in a radiation gauge [28], but we know of no advantage to using it.
To obtain a gauge in which the perturbed metric is singular only on the particle's trajectory, we add the change in mass and angular momentum in a gauge for which that part of the metric perturbation is continuous.
An expression for the self-force is then found as a mode-sum in terms of the retarded metric. To renormalize it, one must subtract off a singular part that does not contribute to the self-force. We consider two alternatives, involving either an analytic or numerical determination of the singular part of the expression for the self-force f_α as a power series in the integer ℓ that labels the angular eigenvalues (more precisely, a series involving L := ℓ + 1/2). Renormalization relies on subtracting from the bare self-force (the expression for the self-force in terms of the retarded field) a singular part that does not contribute to the self-force. This can be checked by showing that the singular vector field f^s_α that one subtracts has vanishing angle-average over a small sphere about the particle. In the case of a circular orbit about a Schwarzschild black hole, the conservative part of the self-force is axisymmetric about a radial ray through the particle. We numerically compute the axisymmetric part of f^s_α and show that (as in a Lorenz gauge) it coincides with the axisymmetric part of −m∇_α(1/ρ), with ρ the geodesic distance to the trajectory. Because the angle-average of ∇_α(1/ρ) over a sphere of radius ρ about the particle vanishes as ρ → 0, the renormalized self-force gives the first-order correction to geodesic motion in our modified radiation gauge.
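The numerical alternative described above amounts to matching the large-L tail of the mode coefficients to a power series and subtracting it mode by mode. The sketch below uses made-up coefficients A, B, C (not values from this paper) purely to illustrate the mechanics:

```python
import numpy as np

# Synthetic "bare" mode coefficients with an assumed singular structure
#   f_l = A*L + B + C / L**2,  where L = l + 1/2,
# mimicking the power-series-in-L behavior described above.
A_true, B_true, C_true = 2.0, 3.0, 5.0
ells = np.arange(100)
L = ells + 0.5
f_bare = A_true * L + B_true + C_true / L**2

# Numerically match the large-L tail to the power series A*L + B.
tail = ells >= 50
design = np.column_stack([L[tail], np.ones(tail.sum())])
(A_fit, B_fit), *_ = np.linalg.lstsq(design, f_bare[tail], rcond=None)

# Subtract the fitted singular part mode by mode and sum the remainder.
f_ren = np.sum(f_bare - A_fit * L - B_fit)
print(A_fit, B_fit, f_ren)
```

The fit recovers A and B, and as more modes are kept the remainder approaches C Σ_ℓ L^{-2} = C π²/2, since Σ_{ℓ≥0} (ℓ + 1/2)^{-2} = π²/2.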
The paper is organized as follows: In Sec. II, we introduce the self-force equations and a criterion for their use in a generic gauge; we briefly review features of mode-sum renormalization in a Lorenz gauge; and we review relevant parts of the Newman-Penrose [29,30] formalism and spin-weighted harmonics. In Sec. III, we begin with a list of the steps involved in computing the self-force in a modified radiation gauge. In the subsections that follow, we obtain a simple analytic expression for the singular part of each of the Weyl scalars ψ0 and ψ4; we relate the small-distance behavior of the Weyl scalars to their large-ℓ behavior, and observe that the singular behavior in ℓ of the expression for the self-force has the same behavior (involves the same powers of ℓ) in a radiation gauge as in a Lorenz gauge; and we show that the perturbed metric obtained from ψ0 is unique up to gauge transformations and the addition of metric perturbations corresponding to infinitesimal changes in mass and spin. We conclude the section by studying the parity of the radiation-gauge part of the perturbed metric; in particular, we show that the spatial part of the metric (in the frame of the particle) is even under parity to leading order in the geodesic distance ρ to the trajectory. In Sec. IV, we specialize to a particle in a circular orbit around a Schwarzschild black hole, finding ψ^ret_0 and ψ^s_0, the retarded and singular forms of ψ0, and comparing the analytic and numeric methods of renormalizing ψ0. The substantial analytical work involved in the mode-sum expression for the leading and subleading terms of ψ^s_0 is detailed in an appendix. Finally, in Sec. V, we briefly discuss the results.
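The renormalization argument above uses the fact that the angle average of ∇_α(1/ρ) over a small sphere about the particle vanishes. In the locally flat limit ∇(1/ρ) = −n̂/ρ², so the claim reduces to the vanishing of the angle average of the unit normal, which a simple quadrature confirms:

```python
import numpy as np

# grad(1/rho) = -nhat / rho^2 locally, so its average over a sphere S_rho
# is -(1/rho^2) <nhat>, and the angle average of the unit normal is zero.
nth, nph = 40, 80
th = (np.arange(nth) + 0.5) * np.pi / nth        # midpoint rule in theta
ph = (np.arange(nph) + 0.5) * 2 * np.pi / nph    # midpoint rule in phi
TH, PH = np.meshgrid(th, ph, indexing='ij')
w = np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)   # dOmega weights

nhat = np.stack([np.sin(TH) * np.cos(PH),
                 np.sin(TH) * np.sin(PH),
                 np.cos(TH)])
avg = (nhat * w).sum(axis=(1, 2)) / (4 * np.pi)
print(avg)   # each Cartesian component vanishes by parity
```

The symmetric midpoint grids make the cancellation exact up to floating-point roundoff; only the parity-even part of a field survives such an average, which is why the parity condition on the metric perturbation matters for Eq. (4) below.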
Conventions
Greek letters early in the alphabet α, β, . . . will be abstract spacetime indices; letters µ, ν, . . . will be concrete spacetime indices, labeling components in Schwarzschild or Boyer-Lindquist coordinates. Bold-face Greek indices µ, ν will label components along the null Newman-Penrose (NP) tetrad defined in Eq. (18) below. We adopt the + − −− signature of Newman and Penrose and use the standard names l α , n α , m α ,m α for the null NP tetrad and NP notation for the spin coefficients.
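Under these conventions a unit timelike vector satisfies u · u = +1, so the operator g^{αδ} − u^α u^δ appearing in the self-force equation of Sec. II A projects orthogonally to u^α. A minimal flat-space numerical check (illustrative only, with an arbitrary covector standing in for the gradient terms):

```python
import numpy as np

# Minkowski stand-in for the background metric, in the (+,-,-,-)
# signature used here, so a unit timelike u has u.u = +1.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
u_up = gamma * np.array([1.0, v, 0.0, 0.0])   # four-velocity u^mu
u_dn = eta @ u_up                             # u_mu

assert abs(u_dn @ u_up - 1.0) < 1e-12

# P^{mu nu} = g^{mu nu} - u^mu u^nu annihilates u_nu, so any
# acceleration of the form a^mu = -P^{mu nu} w_nu is orthogonal to u.
P = np.linalg.inv(eta) - np.outer(u_up, u_up)
w = np.array([0.3, -1.2, 0.7, 2.0])           # arbitrary covector w_nu
a = -P @ w

assert np.allclose(P @ u_dn, 0.0)
assert abs(u_dn @ a) < 1e-12
print(a)
```

This is the structural reason the self-force is purely spatial in the particle's instantaneous rest frame: the projector removes any component along the four-velocity.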
II. REVIEW OF SELF-FORCE AND OF BLACK-HOLE PERTURBATIONS IN AN NP FRAMEWORK.
A. Gravitational self-force
We work in linear perturbation theory, for which the metric perturbation is a solution with point-particle source to the Einstein field equation linearized about a Kerr or Schwarzschild background. With the particle's mass, trajectory and velocity denoted by m, z(τ ) and u α , respectively, the source is given by
T αβ (x) = m ∫ u α u β δ 4 (x, z(τ )) dτ.(1)
Here u α = u α (x) when x lies on the particle's trajectory, and the δ-function is normalized by ∫ δ 4 (x, z) √ |g| d 4 x = 1. We denote by h ret αβ the retarded solution to the perturbed Einstein equation with this source. As noted by Quinn and Wald and by Detweiler and Whiting [31], in the MiSaTaQuWa prescription for finding the self-force [15,16], a particle follows a geodesic of the metric g αβ + h ren αβ , where h ren αβ is given by
h ren αβ = h ret αβ − h s αβ ,(2)
with h s αβ a locally defined singular field, chosen to cancel the singular part of h ret αβ and to give no contribution to the self-force. The perturbed geodesic equation has the form
a α := u · ∇u α = −(g αδ − u α u δ ) ∇ β h ren γδ − 1 2 ∇ δ h ren βγ u β u γ .(3)
Here a α is the acceleration with respect to the background metric, and the self-force is, by definition, f α = ma α . We will denote by a ret α the expression on the right side of Eq. (3), with h ren αβ replaced by h ret αβ . Work by Gralla [32], following a careful justification of the self-force equations by Gralla and Wald [33], gives the following characterization of a α , based on the vanishing of the angle-averaged singular part of the expression for the self-force: Let ρ be geodesic distance to the particle trajectory. Let h ret αβ be the retarded metric perturbation in a gauge for which its spatial part near the trajectory is O(ρ −1 ) and has even parity to that order. Then the self-force is given in local inertial coordinates about P by
a ren µ = lim ρ→0 ∫ Sρ a ret µ dΩ(4)
where S ρ is a sphere of constant ρ in a geodesic surface orthogonal to the worldline. That is, the first-order perturbative correction to the geodesic equation is
u β ∇ β u α = a ren α ,(5)
with a ren α given by Eq. (4).
One can thus identify the singular part of the self-force with any vector field ma s α near the particle trajectory for which a ret α − a s α is continuous and lim ρ→0 ∫ Sρ a s µ dΩ = 0.(6)
Then a ren α (P ) = lim P ′ →P {a ret α (P ′ ) − a α [h s ](P ′ )}.(7)
A free particle in flat space has no self-force, and the form of its linearized field can be used to obtain a s α in a curved spacetime. Its linearized gravitational field in a Lorenz gauge is the Schwarzschild solution in isotropic coordinates, linearized about flat space
h µν = − 2m ρ δ µν = 2m ρ (η µν − 2u µ u ν ).(8)
In a Lorenz gauge for a particle in geodesic motion in a curved background, the singular part h s αβ of the metric perturbation takes this form in local inertial coordinates T, X, Y, Z centered at any point P along the trajectory, with ∂ T = u:
h s,Lor µν = − 2m ρ δ µν = 2m ρ (g µν − 2u µ u ν )| P .(9)
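The equality of the two forms in Eq. (9) is immediate for a static particle in the + − −− signature; the following sympy sketch (variable names are ours) makes the check explicit:

```python
import sympy as sp

m, rho = sp.symbols('m rho', positive=True)

eta = sp.diag(1, -1, -1, -1)     # + - - - signature
u = sp.Matrix([1, 0, 0, 0])      # static particle: u_mu = (1, 0, 0, 0)

lhs = -(2*m/rho)*sp.eye(4)       # -(2m/rho) delta_{mu nu}
rhs = (2*m/rho)*(eta - 2*u*u.T)  # (2m/rho)(eta_{mu nu} - 2 u_mu u_nu)

print(lhs - rhs)                 # zero matrix
```

Since η µν − 2u µ u ν = diag(−1, −1, −1, −1) for u along ∂ T, both sides are −(2m/ρ)δ µν.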
The corresponding singular part of the self-force is given by
f s,Lor µ = −m∇ µ 1 ρ .(10)
As in flat space, the angle-average of f µ over a sphere of constant ρ vanishes in the small-ρ limit. In particular, although the leading correction to the flat-space coordinate expression x i /ρ 3 can be a term of order ρ 0 , the term has the form c jk x i x j x k /ρ 3 ; because the term is odd in
x i , we have lim ρ→0 ∫ Sρ ∇ µ 1 ρ dΩ = 0.(11)
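The vanishing of these angle averages can be checked by direct numerical quadrature. In the numpy sketch below, the coefficients c jk are arbitrary, chosen only to be generic; both the leading term of ∇ i (1/ρ), proportional to the unit normal n i , and an O(ρ 0 ) correction of the form c jk n i n j n k are odd under n → −n and average to zero on the sphere:

```python
import numpy as np

# Product quadrature on the sphere: Gauss-Legendre in cos(theta), uniform phi
nct, nphi = 40, 80
x, w = np.polynomial.legendre.leggauss(nct)       # nodes x = cos(theta)
phi = np.arange(nphi)*2*np.pi/nphi
dphi = 2*np.pi/nphi
ct, ph = np.meshgrid(x, phi, indexing='ij')
st = np.sqrt(1.0 - ct**2)
n = np.stack([st*np.cos(ph), st*np.sin(ph), ct])  # unit normal on S_rho

# grad_i(1/rho) = -n_i/rho^2 is odd under n -> -n: its angle average vanishes
avg_leading = (n*w[:, None]).sum(axis=(1, 2))*dphi
print(avg_leading)      # ~ (0, 0, 0)

# so does the O(rho^0) correction c_jk n_i n_j n_k (arbitrary generic c_jk)
c = np.arange(9.0).reshape(3, 3)
avg_sub = (np.einsum('jk,iab,jab,kab->iab', c, n, n, n)
           * w[:, None]).sum(axis=(1, 2))*dphi
print(avg_sub)          # ~ (0, 0, 0)
```

The quadrature is exact for these low-order integrands, so both averages vanish to machine precision.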
Because the self-force involves only the values of ∇ γ h ren αβ on the particle's trajectory, two tensors h s αβ and h̃ s αβ give the same self-force if ∇ γ (h s αβ − h̃ s αβ ) vanishes on the particle's trajectory. In particular, Detweiler and Whiting [31] show that there is a choice h S αβ of the singular field that is a locally defined solution to the perturbed field equations with the same point-particle source. One can choose local inertial coordinates (THZ coordinates, for example) for which the Detweiler-Whiting singular solution differs from the form (9) by terms of order ρ 2 [34]. Following the notation in Detweiler-Whiting, we denote their form of the singular field by an upper-case S.
B. Mode-sum renormalization
A review of mode-sum renormalization is given in Ref. [13]. We recall some of its main features in a Lorenz gauge; many of these continue to hold in our (modified) radiation gauge.
In mode-sum renormalization, the retarded field h ret αβ and the corresponding expression a ret α are written as sums over angular harmonics, labeled by ℓ, m. In a Schwarzschild background, these are unambiguously associated with the spherical symmetry of the background. In Kerr, one can use the spherical coordinates of a Boyer-Lindquist chart to define the decomposition. When the retarded field is written as a superposition of angular harmonics, its short-distance singular behavior (9) is replaced by a large-L divergence of the mode sum at the position of the particle. (Appendix B relates the large-L behavior of a function on a sphere to its singular behavior for small ρ.)
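A toy example, assuming nothing beyond the Legendre expansion of the flat-space potential, illustrates this correspondence: the ℓ-modes of 1/|x − x′| (summed over m) at the angular position of the source are r < ℓ /r > ℓ+1 . At r = r 0 every mode equals 1/r 0 , so the short-distance singularity shows up as a non-decaying large-ℓ tail and a divergent mode sum; slightly off r 0 the tail decays and the sum is finite. A numpy sketch, with r 0 an arbitrary illustrative radius:

```python
import numpy as np

r0 = 10.0                     # particle radius (illustrative)
l = np.arange(4000)

# l-modes of 1/|x - x'| at the source's angular position: r_<^l / r_>^{l+1}.
# At r = r0 every mode equals 1/r0: a non-decaying tail, divergent sum.
modes_on = (r0/r0)**l / r0
print(modes_on[:4])           # all 0.1

# Off the particle radius the tail decays and the sum is finite.
r = 1.1*r0
modes_off = (r0/r)**l / r
print(modes_off.sum(), 1.0/(r - r0))   # both ~ 1.0
```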
For a particle at coordinate radius r 0 , the angular harmonics have finite limits as r → r 0 from r < r 0 or r > r 0 . Denoting by h ret µνℓ the sum over m of all harmonics belonging to a given ℓ, one has
h ret± µνℓ (P ) = A µν + B µν /L + O(L −3 ).(12)
Similarly, with a ret α ℓ the contribution to a ret α from h ret µνℓ , the large-L behavior of a ret α ℓ is given by
a ret µ ℓ (P ) = A ±µ L + B µ + O(L −2 ).(13)
Explicit expressions for the renormalization coefficients A ± µ and B µ have been found for generic orbits in a Kerr background by Barack [13], who shows that the first two terms in this expansion reproduce the singular part of the acceleration, −∇ α 1 ρ , up to terms that vanish at the particle:
a s µ ℓ (P ) = A ±µ L + B µ .(14)
The fact that the term of order L −1 vanishes for each component a s µ is related to the behavior of a short-distance expansion in which terms of order ρ k with k even occur with an odd number of factors of x i . The retarded acceleration, expressed as a mode-sum that diverges on the particle trajectory, is regularized by a cutoff ℓ max in ℓ,
a reg µ = ∑ ℓ=0 ℓmax (a ret µ ℓ − a s µ ℓ ),(15)
and the renormalized acceleration is given by
a ren µ = lim ℓmax→∞ ∑ ℓ=0 ℓmax (a ret µ ℓ − a s µ ℓ ).(16)
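The structure of Eqs. (14)-(16) can be illustrated with synthetic data. In the sketch below the coefficients A, B, E 2 are arbitrary illustrative numbers, and the convergent remainder is given the standard shape 1/[(2ℓ − 1)(2ℓ + 3)], whose full sum over ℓ vanishes; subtracting A L + B mode by mode leaves partial sums that converge (here to zero):

```python
import numpy as np

# Arbitrary illustrative renormalization coefficients
A, B, E2 = 0.7, -1.3, 2.9

l = np.arange(40000)
L = l + 0.5

# Model mode contributions, Eq. (13): singular part A*L + B plus a
# convergent tail; the shape 1/((2l-1)(2l+3)) sums to zero over all l
a_ret = A*L + B + E2/((2*l - 1.0)*(2*l + 3.0))

# Eq. (15): subtract the singular modes and accumulate partial sums
a_reg = np.cumsum(a_ret - (A*L + B))
print(a_reg[-1])    # -> ~0: the regularized sum converges
```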
C. Black-hole perturbations in an NP framework
The present method obtains the metric perturbation from components of the Weyl tensor along basis vectors of an NP tetrad e α 1 := l α , e α 2 := n α , e α 3 := m α , e α 4 :=m α ,
whose two real null vectors l α and n α are along the principal null directions of the Kerr geometry. In particular, the Kinnersley tetrad has in Boyer-Lindquist coordinates the components
(l µ ) = ( r 2 + a 2 ∆ , 1, 0, a ∆ ), (n µ ) = 1 2(r 2 + a 2 cos 2 θ) (r 2 +a 2 , −∆, 0, a), (m µ ) = 1 √ 2(r + ia cos θ) (ia sin θ, 0, 1, i/ sin θ),(18)
where ∆ = r 2 − 2M r + a 2 . We denote the associated derivative operators by
D = l µ ∂ µ , ∆ = n µ ∂ µ , δ = m µ ∂ µ ,δ =m µ ∂ µ ,(19)
where boldface distinguishes these operators from subsequently defined scalars. The nonzero spin coefficients are
̺ = − 1 r − ia cos θ , β = −̺ cot θ 2 √ 2 , π = i √ 2 a̺ 2 sin θ, τ = − i √ 2 a̺̺ sin θ, µ = 1 2 ̺ 2̺ ∆, γ = µ + 1 2 ̺̺(r − M ), α = π −β,(20)
where we distinguish ̺ from ρ introduced before Eq. (4). The spin-weight s = ±2 components, ψ 0 and ψ 4 , of the perturbed Weyl tensor are given by
ψ 0 = −C αβγδ l α m β l γ m δ ,(21)ψ 4 = −C αβγδ n αmβ n γmδ .(22)
Each satisfies the decoupled Teukolsky equation appropriate to its spin weight:
T s ψ s := (r 2 + a 2 ) 2 ∆ − a 2 sin 2 θ ∂ 2 ∂t 2 − 2s M (r 2 − a 2 ) ∆ − r − ia cos θ ∂ ∂t + 4M ar ∆ ∂ 2 ∂t∂φ − ∆ −s ∂ ∂r ∆ s+1 ∂ ∂r − 1 sin θ ∂ ∂θ sin θ ∂ ∂θ − 2s a(r − M ) ∆ + i cos θ sin 2 θ ∂ ∂φ + a 2 ∆ − 1 sin 2 θ ∂ 2 ∂φ 2 + (s 2 cot 2 θ − s) ψ s = 4π(r 2 + a 2 cos 2 θ)T s ,(23)
where
ψ s=2 = ψ 0 , T s=2 = 2(δ +π −ᾱ − 3β − 4τ )[(D − 2ǫ − 2̺)T 13 − (δ +π − 2ᾱ − 2β)T 11 ] +2(D − 3ǫ +ǭ − 4̺ −̺)[(δ + 2π − 2β)T 13 − (D − 2ǫ + 2ǭ −̺)T 33 ],(24a)ψ s=−2 = ̺ −4 ψ 4 , T s=−2 = 2̺ −4 (∆ + 3γ −γ + 4µ +μ)[(δ − 2τ + 2α)T 24 − (∆ + 2γ − 2γ +μ)T 44 ] +2̺ −4 (δ −τ +β + 3α + 4π)[(∆ + 2γ + 2μ)T 24 − (δ −τ + 2β + 2α)T 22 ].(24b)
The scalars ψ ret 0 and ̺ −4 ψ ret 4 can be decomposed into time and angular harmonics,
ψ ret 0 ℓmω = 2 R ℓmω 2 S ℓmω e i(mφ−ωt) ,(25a)̺ −4 ψ ret 4 ℓmω = −2 R ℓmω −2 S ℓmω e i(mφ−ωt) ,(25b)
where the s S ℓmω are oblate spheroidal harmonics and where s R ℓmω satisfies the radial equation,
∆ −s d dr ∆ s+1 dR dr + K 2 − 2is(r − M )K ∆ + 4isωr − λ R = −4πT sℓmω ,(26)
with K = (r 2 + a 2 )ω − ma. The source, a distribution involving δ(r − r 0 ) and its first two derivatives, is obtained from the source term in Eq. (23) using the completeness and orthogonality of the spin-weighted spheroidal harmonics. Each angular eigenvalue λ is a continuous function of a. For a Schwarzschild background, λ has the value (ℓ − s)(ℓ + s + 1), and for Kerr it is labeled by its value of ℓ at a = 0.
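The normalization of the Kinnersley tetrad can be spot-checked numerically. The numpy sketch below (sample values of M, a, r, θ are ours) builds the Boyer-Lindquist components of the Kerr metric in the + − −− signature and verifies l · n = 1, m · m̄ = −1, with all other tetrad inner products vanishing:

```python
import numpy as np

M, a, r, th = 1.0, 0.7, 5.0, 1.0       # sample parameter values
Sig = r**2 + (a*np.cos(th))**2
Dlt = r**2 - 2*M*r + a**2

# Boyer-Lindquist Kerr metric, + - - - signature, coordinates (t, r, theta, phi)
g = np.zeros((4, 4))
g[0, 0] = 1 - 2*M*r/Sig
g[1, 1] = -Sig/Dlt
g[2, 2] = -Sig
g[3, 3] = -(r**2 + a**2 + 2*M*a**2*r*np.sin(th)**2/Sig)*np.sin(th)**2
g[0, 3] = g[3, 0] = 2*M*a*r*np.sin(th)**2/Sig

# Kinnersley tetrad components, Eq. (18)
lv = np.array([(r**2 + a**2)/Dlt, 1.0, 0.0, a/Dlt])
nv = np.array([r**2 + a**2, -Dlt, 0.0, a])/(2*Sig)
mv = np.array([1j*a*np.sin(th), 0.0, 1.0, 1j/np.sin(th)]) \
     / (np.sqrt(2)*(r + 1j*a*np.cos(th)))

dot = lambda u, v: u @ g @ v
print(dot(lv, lv), dot(nv, nv), dot(mv, mv))   # all ~ 0
print(dot(lv, nv), dot(mv, np.conj(mv)))       # 1 and -1
```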
For perturbations of Schwarzschild, tensor components with s 1 indices along m α and s 2 indices along m̄ α have angular behavior given by spin-weighted spherical harmonics s Y ℓm (θ, φ) with spin-weight s = s 1 − s 2 , where
s Y ℓm = [(ℓ − s)!/(ℓ + s)!] 1/2 ð s Y ℓm , 0 ≤ s ≤ ℓ, (−1) s [(ℓ + s)!/(ℓ − s)!] 1/2ð −s Y ℓm , −ℓ ≤ s ≤ 0,(27)
with
ðη = − (∂ θ + i csc θ∂ φ − s cot θ) η, ðη = − (∂ θ − i csc θ∂ φ + s cot θ) η.(28)
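Equations (27) and (28) can be exercised symbolically. The sympy sketch below builds 2 Y 22 from the ordinary harmonic Y 22 by two applications of ð and checks its normalization on the sphere (the overall sign of the result is convention-dependent; the norm is not):

```python
import sympy as sp

th, ph = sp.symbols('theta phi', real=True)

def eth(eta, s):
    # raising operator of Eq. (28), acting on a quantity of spin weight s
    return -(sp.diff(eta, th) + sp.I*sp.diff(eta, ph)/sp.sin(th)
             - s*sp.cos(th)*eta/sp.sin(th))

# ordinary harmonic Y_22, then Eq. (27) with s = 2: two applications of eth
Y22 = sp.Rational(1, 4)*sp.sqrt(sp.Rational(15, 2)/sp.pi) \
      * sp.sin(th)**2*sp.exp(2*sp.I*ph)
sY22 = sp.sqrt(sp.Rational(1, 24))*eth(eth(Y22, 0), 1)   # sqrt(0!/4!)
sY22 = sp.simplify(sY22)

norm = sp.integrate(sp.integrate(sY22*sp.conjugate(sY22)*sp.sin(th),
                                 (th, 0, sp.pi)), (ph, 0, 2*sp.pi))
print(sp.simplify(norm))     # 1
```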
D. Reconstruction of the metric perturbation from ψ0 or ψ4
The CCK procedure for obtaining metric perturbations from perturbed Weyl scalars was developed by Chrzanowski and by Cohen and Kegeles [20][21][22] (see also Stewart [23]), and a simpler derivation is given in Wald [24]. Discussions in the context of the self-force problem are found in [25,26] and [28].
The procedure gives the metric perturbation in a radiation gauge, a gauge characterized by the conditions
l β h αβ = g αβ h αβ = 0,(29)
or by the corresponding conditions
n β h αβ = g αβ h αβ = 0.(30)
Price, Shankar and Whiting [35,36] show that a radiation gauge exists locally for vacuum perturbations of any type D vacuum spacetime. The CCK construction has two parts. Given a solution ψ 0 or ψ 4 to the source-free s = ±2 Teukolsky equation, one first finds a Hertz potential, a function Ψ that again satisfies a source-free Teukolsky equation. Then, by taking derivatives of the Hertz potential, one constructs a metric perturbation for which ψ 0 and ψ 4 are the perturbed Weyl scalars.
There is a striking difference between the asymptotic behavior produced by the CCK procedure and the asymptotic behavior of a metric perturbation in a Lorenz gauge that approximately satisfies Eq. (29) or (30). The difference is related to the terminology used in the literature for the two families of radiation gauges, in which the gauge satisfying l β h αβ = 0 is called the IRG or ingoing radiation gauge and the gauge satisfying n β h αβ = 0 is called the ORG or outgoing radiation gauge.
Outgoing solutions in a Lorenz gauge (for example modes for which the metric perturbation has asymptotic behavior e −iωu /r + O(r −2 )), however, satisfy the IRG conditions (29) to leading order: Because l α is along ∇ α u, we have
0 = ∇ β h αβ = l β h αβ + O(r −2 ).(31)
As one might expect, an asymptotically vanishing gauge vector can take one from a metric perturbation in a Lorenz gauge in which the IRG condition is approximately satisfied to a metric perturbation that exactly satisfies the condition: We exhibit in Appendix A an explicit, asymptotically vanishing gauge vector taking a generic outgoing solution h Lor αβ in a Lorenz gauge to an asymptotically flat metric perturbation in an IRG. An analogous gauge transformation leads for incoming radiation to an asymptotically flat metric perturbation satisfying n β h αβ = 0.
Curiously, however, the CCK procedure yields metric perturbations in the two gauges that are asymptotically flat for the opposite cases: In the IRG, with l β h αβ = 0, the CCK procedure yields an asymptotically flat metric for ingoing radiation. For outgoing radiation the CCK procedure gives a metric whose dominant components are asymptotically of order r. Similarly, in the ORG gauge, with n β h αβ = 0, the CCK procedure yields an asymptotically flat metric for outgoing radiation. For ingoing radiation the CCK procedure gives a metric whose dominant components are asymptotically of order r. This then is the justification for the terms "outgoing" and "ingoing radiation gauge" introduced by Chrzanowski and used in the subsequent literature.
For the gauge (30), a Hertz potential Ψ is related to ψ 0 by four angular derivatives and to ψ 4 by four radial derivatives; both of these relations are listed below. In subsequent sections, we will be concerned only with the specialization of these results to the Schwarzschild spacetime. In this case, given a Ψ that satisfies the Teukolsky equation for ψ 0 , a metric perturbation in the ORG is given by
h αβ = ̺ −4 { n α n β (δ − 3α −β + 5π)(δ − 4α + π) +m αmβ (∆ + 5µ − 3γ +γ)(∆ + µ − 4γ) −n (αmβ) (δ − 3α +β + 5π +τ )(∆ + µ − 4γ) + (∆ + 5µ −μ − 3γ −γ)(δ − 4α + π) }Ψ + c.c.,(32)
which we take as the starting point for the discussion that follows. Note that the sign in this equation is appropriate for a + − −− signature and is opposite to that in, for example, Wald [24]. In the ORG, Ψ is related to ψ 0 through four angular derivatives according to
ψ 0 = 1 8 [L 4Ψ + 12M ∂ t Ψ],(33)
where L = ð − ia sin θ∂ t . Equivalently,
L 4 = L 1 L 0 L −1 L −2 , with L s = −[∂ θ − s cot θ + i csc θ∂ φ ] − ia sin θ∂ t .
There is a corresponding equation involving four radial derivatives, namely 1
̺ −4 ψ 4 = 1 32 ∆ 2 D 4 ∆ 2Ψ ,(34)
where D is proportional to ∆, renormalized to make it the radial derivative along the ingoing principal null geodesics (the t → −t, φ → −φ version of D):
D := −(2(r 2 + a 2 cos 2 θ)/∆) ∆ = −((r 2 + a 2 )/∆) ∂ t + ∂ r − (a/∆) ∂ φ .(35)
The corresponding equations for the (different) Hertz potential in the IRG are listed in the second line of the summary table below, with L :=ð + ia sin θ∂ t .
Gauge | Gauge conditions | Radial equation | Angular equation
ORG | n β h αβ = 0, h = 0 | ̺ −4 ψ 4 = 1 32 ∆ 2 D 4 ∆ 2Ψ | ψ 0 = 1 8 [L 4Ψ + 12M ∂ t Ψ]
IRG | l β h αβ = 0, h = 0 | ψ 0 = 1 2 D 4Ψ | ̺ −4 ψ 4 = 1 8 [ L 4Ψ − 12M ∂ t Ψ]

The fact that ̺ −4 ψ 4 and ψ 0 satisfy the vacuum Teukolsky equation for spin-weights ∓2 when Ψ ORG or Ψ IRG satisfy the spin-weight ±2 Teukolsky equations follows from the relations in the table together with the commutators

T 2 D 4 = D 4T −2 , T −2L 4 =L 4T −2 , T −2 ∆ 2D4 ∆ 2 = ∆ 2D4 ∆ 2T 2 , T 2 L 4 = L 4T 2 .(36)
1 D is Chandrasekhar's D † 0 and Ori's D † . Lousto and Whiting's Eq. (28) is an incorrect version of Eq. (34), with ∆ ( ∆ in their notation) instead of D/2. This is corrected by Whiting and Price [35], in which ∆ is defined as the GHP prime [37] of D. The Ori and Lousto-Whiting papers also have an incorrect factor of two in each of the equations for Ψ that is inherited from an error in Kegeles-Cohen [21].
These are equivalent to Eqs. (40) and (56) in Chap. 8 of Chandrasekhar [38] and their adjoint relations as defined there.
Explicit solutions to the equations for the Hertz potentials have been presented for a Schwarzschild background by Lousto and Whiting [25] and for Kerr by Ori [26]. Ori shows that the CCK procedure gives a unique Hertz potential in each gauge for each angular harmonic of Ψ. That is, for each harmonic, there is a unique Ψ that satisfies both the angular equation and the sourcefree Teukolsky equation; there is a unique Ψ that satisfies both the radial equation and the sourcefree Teukolsky equation; and the two solutions coincide. For ψ 0 proportional to the harmonic 2 S ℓmω ,
Ψ ORG is proportional to −2 S ℓ,−m,−ω .
Note that, although Ori's metric reconstruction is done mode-by-mode, his statement of uniqueness of solutions does not explicitly restrict it to uniqueness of each angular harmonic. This is, however, a necessary restriction: As shown in Keidl et al. [28], if one requires only that Ψ satisfy one of the radial or angular equations of the table in Sec. II D, together with the appropriate Teukolsky equation, additional freedom remains. This freedom in the Hertz potential corresponds to the addition to the metric perturbation of type D solutions (changes of mass and spin and addition of a perturbed C-metric solution) and to gauge transformations of the perturbed metric.
With ψ 0 and the ORG Ψ decomposed in time and angular harmonics, Eq. (33) can be inverted algebraically at any r outside the radial range of the particle's orbit, that is, for r < r min or r > r max , where r min and r max are the perihelion and aphelion values of r. The harmonics of Ψ and ψ 0 each have the form
ψ 0ℓmω = 2 R ℓmω 2 S ℓmω e i(mφ−ωt) ,(37a)Ψ ℓmω = 2Rℓmω 2 S ℓmω e i(mφ−ωt) ,(37b)
where R and S are solutions to the radial and angular Teukolsky equations, respectively, and the radial functionR of the Hertz potential is to be determined. Using the identity s S̄ ℓmω = (−1) m+s −s S ℓ −m −ω , we can write the harmonic decomposition of Ψ̄ in the form
Ψ = ∑ ℓ,m,ω 2Rℓmω 2 S ℓmω e i(mφ−ωt) = ∑ ℓ,m,ω (−1) m 2Rℓ−m−ω −2 S ℓmω e i(mφ−ωt) .(38)
The Teukolsky-Starobinsky identity (Eqs. (9.59) and (9.61) of Ref. [38]) has the form
L 4 2 S ℓmω = D −2 S ℓmω ,(39)
where
D 2 = λ 2 CH (λ CH + 2) 2 + 8aω(m − aω)λ CH (5λ CH + 6) + 48a 2 ω 2 [2λ CH + 3(m − aω) 2 ]
and λ CH , the angular eigenvalue used by Chandrasekhar [38], is related to the separation constant λ of Eq. (4.9) of [19] by λ CH = λ + s + 2. Because Eq. (33) mixes Ψ and Ψ̄, its inversion for each angular harmonic involves a linear combination of ψ 0ℓmω and ψ̄ 0ℓ −m −ω . We find that the algebraic inversion gives the ORG Hertz potential in the form
Ψ ℓmω = 8 (−1) m Dψ 0ℓ −m −ω + 12iM ωψ 0ℓmω D 2 + 144M 2 ω 2 .(40)
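The constant D entering this inversion simplifies in the Schwarzschild limit aω = 0, where only the first term of D 2 survives: with λ = (ℓ − 2)(ℓ + 3) one finds λ CH = (ℓ − 1)(ℓ + 2) and λ CH + 2 = ℓ(ℓ + 1), so D reduces to the familiar constant (ℓ − 1)ℓ(ℓ + 1)(ℓ + 2). A one-line symbolic check (sympy sketch):

```python
import sympy as sp

ell = sp.symbols('ell', integer=True, positive=True)
s = 2
lam = (ell - s)*(ell + s + 1)      # Schwarzschild angular eigenvalue (a = 0)
lam_CH = lam + s + 2               # Chandrasekhar's eigenvalue

D2 = lam_CH**2*(lam_CH + 2)**2     # D^2 with a*omega = 0
target = ((ell - 1)*ell*(ell + 1)*(ell + 2))**2
print(sp.expand(D2 - target))      # 0
```

For ℓ = 2 this gives D = 24, the standard Teukolsky-Starobinsky constant for the quadrupole.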
We use this inversion for circular orbits in a Schwarzschild and Kerr background. For generic orbits, the individual harmonics ψ 0ℓmω do not satisfy the sourcefree Teukolsky equation in the region r min < r < r max , where r min and r max are the values of r at perihelion and aphelion; and the presence of a source invalidates the algebraic angular inversion. To find Ψ, one must then integrate one of the radial equations of the table in Sec. II D. With the Kinnersley tetrad, the IRG radial equation for Ψ has the simplest form: In Kerr coordinates u, r, θ, φ̃, where
u = t − r * , with dr * /dr = (r 2 + a 2 )/∆, and φ̃ = φ + a ∫ r ∞ dr ′ /∆(r ′ ),(41)

we have Df (u, r, θ, φ̃) = ∂ r f (u, r, θ, φ̃). The radial equation for Ψ IRG is then ∂ 4 rΨ IRG = 2ψ 0 , with solution

Ψ IRG = 2 ∫ r ∞ dr 4 ∫ r 4 ∞ dr 3 ∫ r 3 ∞ dr 2 ∫ r 2 ∞ dr 1 ψ 0 (u, r 1 , θ, φ̃),(42)
satisfying the vacuum Teukolsky equation for ingoing radiation when the outgoing radial null ray does not intersect the particle. To find Ψ at points on a t = constant surface, one can use Eq. (42) outside the radial coordinate r 0 of the particle and a corresponding integral from the horizon to r for r < r 0 .
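The quadruple-integral inversion in Eq. (42) is easy to verify for a simple radial profile. The sympy sketch below uses an illustrative falloff ψ 0 ∝ r −6 (not an actual Teukolsky solution, just a convergent integrand) and confirms that four radial derivatives of the nested integral return 2ψ 0 :

```python
import sympy as sp

r, rp = sp.symbols('r r_p', positive=True)

# Illustrative radial profile; chosen only so the integrals from r to
# infinity converge
psi0 = rp**-6

f = psi0
for _ in range(4):                       # four nested integrals, r to infinity
    f = sp.integrate(f, (rp, r, sp.oo)).subs(r, rp)
Psi = 2*f.subs(rp, r)                    # Eq. (42) with D -> d/dr

print(sp.simplify(sp.diff(Psi, r, 4) - 2*r**-6))   # 0: d^4 Psi/dr^4 = 2 psi0
```

The four minus signs from differentiating the lower limits cancel, which is why ∂ r 4 Ψ = +2ψ 0 .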
E. Gauge transformations of the self-force
In the form (3), the perturbed geodesic is parameterized so that its tangent is normalized to 1 with respect to the background metric. A gauge transformation of this equation was obtained by Barack and Ori [27], and Appendix A gives an alternative, covariant derivation in terms of an infinitesimal diffeo of the metric and a family of unperturbed geodesics. With the same normalization, changing a background geodesic by a gauge transformation generated by ξ α changes its tangent vector by
δ ξ u α = (δ α β − u α u β )£ ξ u β ;(43)
and u α + δ ξ u α satisfies a geodesic equation of the metric perturbed by £ ξ g αβ . With the acceleration defined by
δ ξ a α := δ ξ u β ∇ β u α + u β ∇ β δ ξ u α , we have δ ξ a α = −(δ α β − u α u β )(u · ∇) 2 ξ β + R α βγδ u β u γ ξ δ ,(44)
and a α u α = 0. Note that the right side vanishes if ξ α happens to drag a geodesic of the background spacetime to another geodesic of the background spacetime: This is the equation of geodesic deviation governing the connecting vector joining two neighboring geodesics of g αβ . For general ξ α , the right side of Eq. (44) then measures the failure of an infinitesimal diffeo generated by ξ α to produce a geodesic of the background metric. A particle in circular orbit has 4-velocity u α = u t k α , with k α = t α + Ωφ α a Killing vector. The perturbed spacetime with a particle in circular orbit is helically symmetric, symmetric with respect to k α . We show in Appendix A that, for a gauge transformation that preserves helical symmetry (for £ k ξ α = 0), the gauge-transformed self-force is given by
f α = f α + 1 2 m(u t ) 2 ξ β ∇ β ∇ α (k γ k γ ).(45)
III. METHOD FOR COMPUTING THE SELF-FORCE ON AN ORBITING MASS
A. Overview
In this section we outline the method for computing the self-force in a radiation gauge. Subsequent sections will elaborate on the details of each step of the calculation. The method is a revision of that initially suggested in Refs. [28] and [39]. In broad terms, the steps involved in computing the self-force in a modified radiation gauge are:
A. Compute the retarded Weyl scalar, ψ ret 0 (or ψ ret 4 ).

B. Use the retarded Weyl scalar to construct a Hertz potential Ψ ret as a sum of angular harmonics for r ≠ r 0 .
C. Using the CCK formalism described above, reconstruct the retarded metric perturbation in a radiation gauge, and find the perturbation in mass and angular momentum in an arbitrary gauge.
D. Find the expression for the self force as a mode sum involving the retarded field.
E. Find the renormalization coefficients for the singular part f s α of the self-force and compute f ren α .
In this approach, all fields are written as a sum of time and angular harmonics. The angular harmonics (25a) of ψ ret 0 have finite one-sided limits as r → r 0 :
ψ ret± 0ℓmω := lim r→r ± 0 ψ ret 0ℓmω (t 0 , r).(46)
For a Schwarzschild background the harmonics are spin-weight 2 spherical harmonics, while for Kerr they are oblate spheroidal harmonics whose form depends on the Kerr parameter a. For a Kerr as well as a Schwarzschild background, however, one can write the singular field in terms of spin-weighted spherical harmonics [40][41][42], where each spin-weighted spheroidal harmonic is a sum of the form
s S ℓmω e imφ = ∑ ℓ ′ =ℓmin ∞ b ℓ ′ ω s Y ℓ ′ m , ℓ min = max(|s|, |m|).(47)
The computation of ψ ret 0 is straightforward, involving an integration of the radial Teukolsky equation (26) and the computation of spin-weighted spherical harmonics. In computing a Hertz potential Ψ ret from ψ ret 0 , our choice of radiation gauge is dictated by requiring that the CCK-constructed Ψ ret vanish asymptotically: We use the ORG and find Ψ ret as the solution of Eq. (40) to the angular equation. The tetrad components of the metric perturbation h ret αβ are then obtained from Eq. (32) and are used to compute the expression for the bare self-force in terms of h ret αβ ,
f α [h ret ]/m = a ret α = −(g αδ − u α u δ ) ∇ β h ret γδ − 1 2 ∇ δ h ret βγ u β u γ .(48)
The remainder of the problem is the mode-sum renormalization of the self-force and the recovery of the part of the metric perturbation that does not arise from ψ 0 . We discuss them in turn in the next two subsections.
B. Mode-sum renormalization in a radiation gauge
One can carry out a mode sum renormalization by finding the radiation-gauge version of the power series (13) that expresses the singular behavior of a ret for r > r 0 and r < r 0 near the position of the particle. We consider two ways to proceed:

1. One can find an analytic expression for a s α as a sum in powers of L, starting from an expression we derive below for ψ s 0 , the singular part of ψ ret 0 . This analytic way follows the steps just listed in describing the path from ψ ret 0 to a ret α , to successively obtain expressions for Ψ s , h s αβ and a s α .

2. The second method is significantly simpler: One simply numerically matches a power series in L to a ret µ ℓ (P ) for successive values of ℓ.
We begin with a discussion of the analytic method. We find an explicit expression for ψ s 0 to subleading order in the distance to the particle's trajectory in terms of components of the tetrad vectors along and orthogonal to the trajectory. We then characterize the powers of L that appear in the power series for a s α . Finally, we turn to a description of renormalization by numerical matching. In Sec. IV below, we test the numerical method in a relatively simple case by comparing analytic and numerical renormalizations of ψ 0 for a particle in circular orbit in a Schwarzschild background.
Note that, in a mode-sum renormalization, the singular parts of the perturbation in the metric and the self-force are determined by the large-L behavior of the retarded fields. They are, in particular, independent of the choice of gauge in which one describes perturbations of mass and angular momentum.
Analytic method
The analytic method begins by finding an analytic expression for ψ s 0 or ψ s 4 . The decomposition of the metric perturbation h ret αβ in a Lorenz gauge,
h ret,Lor αβ = h ren,Lor αβ + h s,Lor αβ ,(49)
gives a corresponding gauge-invariant decomposition of the perturbed Weyl scalars. From the expression for the perturbed Weyl (or Riemann) tensor in terms of the perturbed metric, we have
ψ ret 0 = O αβ 0 h ret αβ , ψ s 0 = O αβ 0 h s αβ ,(50a)ψ ret 4 = O αβ 4 h ret αβ , ψ s 4 = O αβ 4 h s αβ ,(50b)
where
O αβ 0 = 1 2 m α m β l γ l δ + l α l β m γ m δ − l α m β m γ l δ − m α l β l γ m δ ∇ γ ∇ δ ,(51a)O αβ 4 = 1 2 m αmβ n γ n δ + n α n βmγmδ − n αmβmγ n δ −m α n β n γmδ ∇ γ ∇ δ .(51b)
Although we will not need the Detweiler-Whiting form of the singular field, it is worth noting that using it would yield a smooth version of ψ ren 0 (or ψ ren 4 ) that satisfies the sourcefree Teukolsky equation in a neighborhood of the particle's trajectory. That is, because h S αβ satisfies the perturbed field equation with the same source as h ret αβ , the corresponding field ψ S 0 is a local solution to the s = 2 Teukolsky equation with the same source as ψ ret 0 , implying that ψ ret 0 − ψ S 0 satisfies the corresponding homogeneous equation.
Denote by ǫ := √ T 2 + X 2 + Y 2 + Z 2 the distance with respect to the Euclidean metric dT 2 + dX 2 + dY 2 + dZ 2 of the local inertial coordinates introduced before Eq. (9). We will argue below that the self-force in our modified radiation gauge, like that in a Lorenz gauge, has dominant singular behavior of order ǫ −2 and can be regularized by subtracting leading and subleading terms in ǫ. These arise from the leading and subleading terms in ǫ in ψ s 0 . Instead of directly using Eq. (50a) to compute ψ s 0 , it is easier simply to observe that O µν 0 and h s µν have to subleading order in ǫ their flat space form. It follows that the value of ψ s 0 has at subleading order in ǫ its form for a perturbation of flat space. That is, in terms of T and ρ, ψ s 0 is the linearized Schwarzschild field of a static particle:
C s γδ αβ = − 4m ρ 3 δ [γ [α δ δ] β] + 3δ [γ [α ∇ β] ρ∇ δ] ρ − 3δ [γ [α ∇ β] T ∇ δ] T − 6∇ [α ρ ∇ β] T ∇ [γ ρ ∇ δ] T .(52)
From this we obtain
ψ s 0 = − 6m ρ 3 l T m ρ − l ρ m T 2 ,(53)ψ s 4 = − 6m ρ 3 n Tmρ − n ρmT 2 ,(54)where l T := l α ∇ α T , l ρ := l α ∇ α ρ, m T := m α ∇ α T , m ρ := m α ∇ α ρ.
One can, of course, also obtain (53) and (54) directly from Eqs. (50).

For each of the quantities ψ 0 , Ψ, h αβ , and a α , we will use the subscript ℓ to denote the sum over m of the contributions from angular harmonics associated with ℓ, m, and we suppress the index ω. The equations that relate ψ 0 and Ψ do not mix spin-weighted spheroidal harmonics, and Ψ s is most simply found as a sum of these harmonics. The equations, (32) and (3), that relate Ψ to h s αβ and h s αβ to a α , however, mix spheroidal harmonics on a Kerr background. The large-L expansions of their components can be found in terms of spin-weighted spheroidal harmonics, spin-weighted spherical harmonics, or spherical harmonics. Different choices lead to different definitions of the subleading contributions, because of the mixing of different values of ℓ in relating, for example, spin-weighted spherical harmonics to spin-weighted spheroidal harmonics. What matters is only that the same convention is used for the angular harmonics of the retarded and singular fields.
Because ψ s 0 involves two derivatives of h s αβ and can be computed from the Lorenz singular field, its large-L behavior is greater by two powers of L than the large-L behavior at r = r 0 of h s,Lor αβ of Eq. (12):
ψ s± 0ℓ := ∑ m ψ s± 0ℓm (t 0 ) 2 Y ℓm (θ 0 , φ 0 ) = Â ± L 2 + B̂ ± L + O(L −1 ).(55)
In Sec. IV and Appendix C, we find the large-L expansion of ψ 0 for a particle in circular orbit in a Schwarzschild background (restricting consideration to the part of ψ 0 axisymmetric about the position of the particle). Because the Hertz potential Ψ ret involves four integrals of ψ 0 , its singular part has to subleading order in L the form
Ψ s ℓ = A ± Ψ /L 2 + B ± Ψ /L 3 .(56)
This behavior also follows from the explicit form (40) of Ψ ℓmω , together with the fact that, for large ℓ, the spheroidal eigenvalues λ approach their spherical values, λ/ℓ 2 → 1. The leading term in the expansion is then immediate from the leading term in (55), while subleading terms involve the expansion of λ (found analytically or numerically) in terms of L.
The metric perturbation involves two derivatives of Ψ, implying that the singular and retarded fields in a radiation gauge have the same leading power of L as their Lorenz counterparts,
h s,RG± ℓµν = A ± µν + B ± µν /L + O(L −2 ).(57)
Finally, the self-force f ren,RGα is computed from ∇ γ h ren,RG αβ , using Eq. (3). As in the Lorenz gauge, the additional derivative gives the behavior a s RGµ
ℓ = A µ± L + B µ± + O(L −2 ).(58)
The renormalized acceleration at the position of the particle is then given by
a ren RGµ = lim ℓmax→∞ ∑ ℓ=0 ℓmax (a ret RGµ ℓ − a s RGµ ℓ ).(59)
As noted in the introduction, for a particle in circular orbit in a Schwarzschild background, we find that the large L expansion of a s RGα agrees through O(L 0 ) with
a s RGα = −∇ α 1 ρ ,(60)
differing only by a constant from its form in a Lorenz gauge. This form of a s RGα does not contribute to the self-force, because Eq. (6) is satisfied -the small-ρ angle average of a s RGα vanishes. Thus in this case, we can identify the singular field with its leading and subleading terms as a power series in L,
a s RGµ ℓ = A µ± L + B µ± .(61)
For a CCK radiation gauge, there is as yet no general proof that a s RGα is given by its leading and subleading terms. This is true in a Lorenz gauge, and we expect it to hold for a radiation gauge as well, with an argument based on the common property that a sα can be expressed (for r > r 0 or r < r 0 ) as a power series that begins at O(ǫ −2 ) and involves positive powers of the coordinate differences x µ − x µ 0 and odd powers of ρ. Without a proof, one must check that the computed a sα satisfies Eq. (6). In the Schwarzschild example below, the coordinate expression for ψ s 0 to subleading order is a sum of more than 25 terms, and to find its large-L expansion we computed the large-L expansion of each term. Finding the corresponding large-L expansion for a s RGµ involves finding large-L expansions of all combinations of three derivatives of each of these terms for each component of a s RGµ . Without significant insight, this would mean finding the large-L expansion of about 300 terms. Given the simple form that a sα takes for the Schwarzschild circular orbit, there may be a similar form in the generic case and a much simpler way to find it. In its absence, the numerical method is much easier to use.
Numerical method
The renormalization coefficients occurring in Eq. (58) for a s RGµ ℓ are coefficients in the large-L expansion of a ret RGµ evaluated at r = r ± 0 . Consequently, once one has found the numerical values of a ret RGµ , it is not in principle necessary to carry out an additional analytic computation of a s RGµ ℓ . Instead, one can match to the sequence of values a ret RGµ ℓ a power series in L of the form
a^{\rm ret}_{\mu\,\ell} = A_{\mu\pm}\, L + B_{\mu\pm} + \sum_{k=2,\,\rm even}^{n} \frac{E_{\mu k}}{L^{k}}\,, \qquad (62)
finding the E µ k that yield a best fit. A numerical check is that subtraction of the order L −k term reduces the order of the series by one power of L. And one should check that the reduction of order holds for values of L larger than those used to obtain the coefficients in the matching. Finally, one checks numerically that f reg,µ converges to a value f ren,µ as the cutoff ℓ max increases. The disadvantage of the numerical matching method is that, for a given desired accuracy in a renα , one must compute a retα to higher values of ℓ than is required when one or more renormalization coefficients are known analytically.
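The matching and tail-resummation steps can be sketched concretely. The fragment below fits synthetic mode data to the form of Eq. (62) by least squares, checks that subtracting the fitted A L + B lowers the order of the residual in L, and forms the accelerated sum of Eq. (63) using the closed-form tails Σ(ℓ+1/2)^{-2} = π²/2 and Σ(ℓ+1/2)^{-4} = π⁴/6. All coefficient values and the ℓ range are invented for the demonstration; in practice the a_ret values would come from the integrated retarded field.

```python
import numpy as np

# Synthetic mode data built from assumed coefficients (illustration only).
A_true, B_true, E2_true, E4_true = 1.7, -0.35, 0.9, -2.1
L = np.arange(5, 60) + 0.5
a_ret = A_true * L + B_true + E2_true / L**2 + E4_true / L**4

# Least-squares match to the form of Eq. (62): A*L + B + E2/L^2 + E4/L^4.
design = np.column_stack([L, np.ones_like(L), L**-2.0, L**-4.0])
A, B, E2, E4 = np.linalg.lstsq(design, a_ret, rcond=None)[0]

# Subtracting the fitted A*L + B must leave a residual falling off as 1/L^2.
resid = a_ret - (A * L + B)

# Accelerated mode sum in the spirit of Eq. (63): subtract the E_k/L^k terms
# mode by mode and add back the closed-form tails
#   sum_{l>=0} (l+1/2)^{-2} = pi^2/2,   sum_{l>=0} (l+1/2)^{-4} = pi^4/6.
lmax = 40
Lf = np.arange(lmax + 1) + 0.5
a_full = A_true * Lf + B_true + E2_true / Lf**2 + E4_true / Lf**4
naive = np.sum(a_full - A_true * Lf - B_true)          # error ~ 1/lmax
accel = np.sum(a_full - A_true * Lf - B_true
               - E2_true / Lf**2 - E4_true / Lf**4) \
        + E2_true * np.pi**2 / 2 + E4_true * np.pi**4 / 6
exact = E2_true * np.pi**2 / 2 + E4_true * np.pi**4 / 6
print(abs(naive - exact), abs(accel - exact))
```

With exact model data the fitted coefficients reproduce the inputs, and the accelerated sum removes the O(1/ℓ_max) truncation error of the naive sum.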
To identify a s α ℓ with A µ± L + B µ± , one must again show that a sα satisfies Eq. (6). This can be done if one can find for each component a µ an expression in terms of ρ and the coordinate differences x µ − x µ 0 whose large-L expansion is A µ± L + B µ± , and if this expression satisfies (6). If the limiting angle-average has a finite value, that finite value must be added back, in accordance with Eq. (4), to find the self-force.
Equivalently, if the first two terms in this L-expansion correspond to the terms of order ǫ −2 and ǫ −1 in the positionspace expansion, so that the terms involving E k correspond to terms of order ǫ and higher, then these latter terms sum to zero. We suspect this is the case for a radiation gauge, because the behavior of the singular field as a power series in L implies behavior in ǫ that is no more singular than in a Lorenz gauge and corresponds for r > r 0 and r < r 0 to a power series in ǫ. This is consistent with our numerical construction of the singular part of the self-force for a particle in circular orbit in Schwarzschild.
With an analytic knowledge of the first term or terms in the expansion, the numerical method is still useful in finding subsequent terms, and this has been done to speed convergence in the self-force computations involving mode-sum renormalization in a Lorenz gauge. In particular, once one knows that a s µ ℓ can be identified with the leading and subleading terms A µ± L + B µ± , the renormalized self-acceleration a ren µ is given by
a^{\rm ren}_{\mu} = \sum_{\ell=0}^{\ell_{\rm max}} \left( a^{\rm ret}_{\mu\,\ell} - A_{\mu\pm}\, L - B_{\mu\pm} - \sum_{k=2,\,\rm even}^{n} \frac{E_{\mu k}}{L^{k}} \right) + \sum_{k=2,\,\rm even}^{n} E_{\mu k} \sum_{\ell=0}^{\infty} L^{-k}\,, \qquad (63)
with an error of order ℓ −(n+1) max . To justify the numerical method, we present in Sec. IV below a comparison of the analytic and numerical determination of renormalization coefficients for the axisymmetric part of ψ 0 and their contribution to the self-force. The companion paper presents the full numerical computation of the self-force for a particle in circular orbit in a Schwarzschild background. Checks of the work include a numerical computation of a quantity h αβ u α u β that is invariant under helically symmetric gauge transformations and has previously been computed in Lorenz and Regge-Wheeler gauges.

To complete the metric reconstruction one needs to add the contributions from a change in the mass and angular momentum of the spacetime outside r = r 0 . Satisfying the perturbed field equation also requires a discontinuous gauge transformation associated with a change in the system's center of mass. That nothing further is needed follows from a minor extension of a theorem by Wald that implies that a perturbed vacuum metric is determined up to gauge transformations and the addition of a Petrov type D perturbation of the black-hole geometry [43]. There are four kinds: (i) an infinitesimal change in the black hole's mass and (ii) in its spin; (iii) the perturbative version of the C-metric; and (iv) the perturbative version of the Kerr-NUT solution. The type D perturbations are all stationary and axisymmetric, and only the mass and angular momentum perturbations are smooth in the region exterior to the black hole: the C and Kerr-NUT perturbations are each singular on their axis of symmetry, coinciding, for a Kerr background, with the axis of symmetry of the Kerr geometry. If we were dealing with a source-free perturbation, regularity at the horizon and at infinity would rule out the addition of Kerr-NUT and C-metric perturbations, and, after a choice of gauge, we would be left with changes in mass and angular momentum (and gauge transformations).
To extend the argument to h ret αβ in our case, we note that smoothness of each time-harmonic of ψ ret 0 for r ≠ r 0 , together with the explicit form (40), implies that Ψ ret is smooth for r ≠ r 0 . That is, smoothness of ψ ret 0 implies that the coefficients of each angular harmonic in its decomposition fall off faster than any power of L. Eq. (40) implies that the coefficients of the angular harmonics of Ψ fall off still faster in L. Thus each time harmonic of Ψ is smooth and has no contribution from a C or Kerr-NUT perturbation.
The contributions to the retarded field of mass and spin were examined by Detweiler and Poisson [44] and by Price [45]. With the Hertz potential restricted to ℓ ≥ 2 (for a Schwarzschild or Kerr background), there is no local contribution to mass and angular momentum. For r < r 0 , this is the appropriate solution for the retarded field. For r > r 0 , one has to determine in any gauge four parameters corresponding to changes in mass and spin (two correspond to a change in the spin direction and are gauge transformations); and three parameters associated with gauge transformations for r > r 0 that eliminate an asymptotic dipole. The change in mass and in the magnitude of angular momentum along its direction in the background Kerr spacetime can be found from the integrals
\delta M = \int_S \left(2T^{\alpha}{}_{\beta} - \delta^{\alpha}_{\beta}\, T\right) t^{\beta}\, dS_\alpha\,, \qquad (64a)
\delta J = -\int_S T^{\alpha}{}_{\beta}\, \phi^{\beta}\, dS_\alpha \qquad (64b)
over any hypersurface intersecting the trajectory. (For a Schwarzschild background, all components of δJ are determined in this way.) The remaining parameters are determined by jump conditions in the field equations across r = r 0 . Although the metric perturbations corresponding to changes in mass and spin can be written in a radiation gauge, we do not see a good reason to do so. In particular, the radiation gauge form of a mass perturbation that arises from a Hertz potential has a singularity on the radial ray through the particle. (There is an alternative radiation-gauge form of a mass perturbation that is nonsingular on the axis of symmetry [39], but it has no obvious advantage over a mass perturbation in another gauge.) Instead we use for h ret αβ a radiation gauge for ℓ ≥ 2, together with an arbitrary convenient gauge for ℓ < 2.
D. Leading order parity of ψ0, Ψ and h αβ
We now consider the parity of the perturbed metric in a radiation gauge. Gralla's criterion is that the projection of the perturbed metric to a surface orthogonal to the particle's 4-velocity is even under parity to leading order in ρ. Parity here means the locally defined diffeo that exchanges points at the same geodesic distance on opposite sides of each geodesic orthogonal to the particle trajectory; and the associated hypersurface spanned by geodesics orthogonal to a point P of the trajectory is the surface onto which h αβ is projected. Because the invariance under parity is only to leading order, we can define parity in terms of local inertial coordinates T, X, Y, Z as the diffeo P : (T, X, Y, Z) → (T, −X, −Y, −Z). The Weyl tensor has to leading order in ρ its flat-space form, implying that it is even (invariant) under both parity and time-reversal, where time reversal, T , exchanges points with coordinates ±T, X, Y, Z. (Its behavior under both parity and time-reversal will be needed in the argument below. ) We first show that these symmetries are retained by the singular form of ψ 0 , given by Eq. (53). Because the tetrad vector fields l α and m α (and n α ) are smooth, they are constant near a point P on the trajectory to leading order in ρ (as long as the particle is not on the θ = 0, π axis, where m α is not defined). The corresponding scalars l α ∇ α T and m α ∇ α T are then even to leading order under parity. Because the cartesian components of ∇ α ρ have opposite signs at diametrically opposite points (T, X, Y, Z) and (T, −X, −Y, −Z), the scalars l α ∇ α ρ and m α ∇ α ρ are odd under parity. Then (l T m ρ − l ρ m T ) 2 is even and hence ψ 0 is even under P to leading order in ρ.
Similarly, the scalars l α ∇ α T and m α ∇ α T are odd to leading order under time-reversal; and l α ∇ α ρ and m α ∇ α ρ are even, implying that (l T m ρ − l ρ m T ) 2 and ψ 0 are even to leading order under time reversal.
To show that Ψ is even under parity to leading order in ρ requires additional steps. First, because Ψ is obtained from ψ 0 as a sum of angular harmonics, we use the fact that the leading order parts of ψ 0 and Ψ in ρ for small ρ are associated by the transform to angular harmonics with the leading order in L part of its angular harmonics, for large L, as described in Appendix B. In particular the leading terms in ρ of ψ 0 and Ψ are, respectively, O(ρ −3 ) and O(ρ), and they correspond to the large-L terms of order O(L 2 ) and O(L −2 ) in the angular harmonics. Second, Eq. (40), expressing Ψ in terms of ψ 0 , involves angular harmonics of ψ 0 on a surface of constant t, a surface that is not perpendicular to the particle trajectory; because of this, the plane tangent to the surface is not invariant under parity. It is, however, invariant under PT , implying that the restriction of ψ 0 to a constant t surface is invariant to leading order under PT . (Restricted to the constant t surface, PT is a parity transformation about P .) It follows that Ψ s and h s αβ are also invariant to leading order under PT . We give the argument in terms of the angular equation (33) for Ψ. The right side of this equation is dominated for large L by its first term, having to leading order the form
\psi_0 = \frac{1}{8}\, L^4\, \bar\Psi\,; \qquad (65)
that is, because L 4 is, for large L, quartic in λ and ω, it dominates the second term. (One cannot look at the large L limit with ω fixed, because ω is not independent of L: For a circular orbit, for example, ω = mΩ.) For the particle at θ 0 = 0, π, L has to leading order in ρ the form L = ∂ θ + i csc θ 0 ∂ φ − ia sin θ 0 ∂ t . In Boyer-Lindquist coordinates, PT is given to leading order in ρ by (t, r, θ, φ) → (2t 0 − t, 2r 0 − r, 2θ 0 − θ, 2φ 0 − φ), implying that each term in this leading form of L is odd under PT . Then L 4 is even to leading order:
L 4 PT = PT L 4 .(66)
The parts of Ψ that are even and odd under PT then have sources that differ by one order in L:
\frac{1}{2}(1 + PT )\left[L^4(\bar\Psi_{\rm odd} + \bar\Psi_{\rm even}) + 12M\,\partial_t\Psi\right] = L^4\,\bar\Psi_{\rm even}\,[1 + O(L^{-1})] = 8\,\psi_0^{\rm even}\,, \qquad (67a)
\frac{1}{2}(1 - PT )\left[L^4(\bar\Psi_{\rm odd} + \bar\Psi_{\rm even}) + 12M\,\partial_t\Psi\right] = L^4\,\bar\Psi_{\rm odd} + O(\bar\Psi_{\rm even}\times L^3) = 8\,\psi_0^{\rm odd}\,. \qquad (67b)
With a source smaller by one order in L, the algebraic inversion (40) then gives Ψ odd smaller than Ψ even by one power of L. Next note that, because ρ is independent of T and the tetrad vectors are smooth (and hence constant to lowest order in ρ), ψ 0 is independent of T to lowest order in ρ. That is, translating ψ 0 by ∆T changes it only by a term of order ψ 0 ∆T , from the O(∆T ) change in the tetrad vectors. Now time-translating Ψ from a point on the t = 0 surface to a point on the relatively boosted T = 0 surface through the same point P of the trajectory involves a translation by a time ∆T proportional to ρ and hence changes Ψ only to subleading order in ρ. Thus Ψ is invariant under P to leading order in ρ.
Finally, a tensor T αβ is invariant under P if T αβ = P * T αβ , where the pullback P * T αβ has components in coordinates {x µ } given by (P * T ) µν (Q) := ∂ µ P σ ∂ ν P τ T στ [P(Q)]. (We have used P −1 = P.) In terms of the coordinates (T, X, Y, Z), the requirement that the spatial projection of h αβ is invariant under parity is equivalent to the condition that each spatial component h ij is even under parity
h ij (T, X, Y, Z) = (Ph) ij (T, X, Y, Z) = h ij (T, −X, −Y, −Z).(68)
That this condition is satisfied follows from Eq. (32) for h αβ in terms of Ψ and the fact that the leading, O(ρ −1 ), part of h αβ comes entirely from terms quadratic in the derivative operators ∆, δ, and δ̄. Because Ψ is independent of T to leading order in ρ, each derivative operator along a tetrad vector involves only the spatial (X i ) components of the vector, implying that the quadratic derivatives ∆ 2 Ψ, . . . , δ̄ 2 Ψ all have even parity to leading order. Finally, the lowest-order constancy of the tetrad vectors implies that the products of their components n i n j , . . . , m̄ (i m j) in local inertial coordinates are even to leading order in ρ under parity. We conclude that the projection of h ret,RG ij orthogonal to the 4-velocity is even under parity at leading order in ρ: to O(ρ −1 ).
IV. PARTICLE IN CIRCULAR ORBIT IN A SCHWARZSCHILD GEOMETRY
As a simplest explicit example of the method, we consider a particle of mass m in circular orbit at radial coordinate r 0 about a Schwarzschild black hole of mass M . In this section we compare the numerical and analytic renormalization methods by looking at the mode-sum renormalization of the axisymmetric part of ψ 0 . We first compute the retarded field; we then find an analytic expression for the singular field to subleading order; finally, we obtain the renormalization coefficients of the singular field numerically, finding agreement to high accuracy with their analytic values. The numerical computation of the self-force is described in the companion paper.
We work in Schwarzschild coordinates and adopt the notation
ds 2 = f dt 2 − f −1 dr 2 − r 2 (dθ 2 + sin 2 θdφ 2 ),(69)
where f = 1 − 2M/r; the null tetrad is the Kinnersley tetrad, whose components in the rotated angular coordinates used below are given in Eq. (105). With this choice of tetrad the nonzero spin coefficients are
\varrho = -\frac{1}{r}\,, \quad \beta = -\alpha = \frac{\cot\theta}{2\sqrt{2}\,r}\,, \quad \gamma = \frac{M}{2r^2}\,, \quad \mu = -\frac{1}{2r}\left(1 - \frac{2M}{r}\right), \qquad (72)
with corresponding Christoffel symbols
Γ 1 12 = −Γ 2 22 = 2γ (73a) Γ 1 43 = Γ 1 34 = Γ 4 24 = Γ 3 23 = µ (73b) Γ 3 13 = Γ 2 43 = Γ 2 34 = Γ 4 14 = −̺ (73c) Γ 3 33 = Γ 4 44 = −Γ 4 43 = −Γ 3 34 = 2β.(73d)
The only nonzero component of the background Weyl tensor is
\Psi_2 = -\frac{M}{r^3}\,. \qquad (74)
The particle's 4-velocity is
u α = u t (t α + Ωφ α ),(75)
with t α and φ α timelike and rotational Killing vectors and with u^t = (1 − 3M/r_0)^{−1/2}. Its energy and angular momentum per unit mass, E := −u α t α and J := u α φ α , are given by
E = \frac{r_0 - 2M}{\sqrt{r_0^2 - 3M r_0}}\,, \qquad J^2 = \frac{r_0^2\, M}{r_0 - 3M}\,. \qquad (76)
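The circular-orbit relations above are simple to evaluate numerically. The sketch below (a minimal illustration, with r0 and M chosen arbitrarily) computes E, J, Ω, and u^t and checks the normalization u^α u_α = 1 in the (+, −, −, −) signature of Eq. (69).

```python
import math

def circular_orbit(r0, M=1.0):
    """Energy and angular momentum per unit mass, orbital frequency, and u^t
    for a circular geodesic at Schwarzschild radius r0 (Eqs. (75)-(76))."""
    ut = 1.0 / math.sqrt(1.0 - 3.0 * M / r0)          # u^t = (1 - 3M/r0)^(-1/2)
    E = (r0 - 2.0 * M) / math.sqrt(r0 * (r0 - 3.0 * M))
    J = r0 * math.sqrt(M / (r0 - 3.0 * M))
    Omega = math.sqrt(M / r0**3)
    return E, J, Omega, ut

# Consistency check: for an equatorial circular orbit in the signature of
# Eq. (69), u.u = f (u^t)^2 - r0^2 Omega^2 (u^t)^2, which should equal 1.
E, J, Omega, ut = circular_orbit(r0=10.0)
f0 = 1.0 - 2.0 / 10.0
norm = f0 * ut**2 - 10.0**2 * Omega**2 * ut**2
print(E, J, norm)
```

The normalization check reduces algebraically to (f0 − M/r0)/(1 − 3M/r0) = 1, so it holds for any r0 > 3M.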
From Eq. (32), the nonzero components of the metric perturbation are
h 11 = − r 2 2 (ð 2 Ψ + ð 2Ψ ),(77)h 33 = −r 4 1 4 ∂ 2 t − 2f ∂ t ∂ r + f 2 ∂ 2 r − 3(r − M ) 2r 2 ∂ t + f 3r − 2M 2r 2 ∂ r + r 2 − 2M 2 r 4 Ψ,(78)h 13 = r 3 2 √ 2 ∂ t − f ∂ r − 2 r ð Ψ.(79)
To compute the self-acceleration (3) in terms of these tetrad components, we use the relations
t α = 1 2 f l α + n α , φ α = −ir √ 2 (m α −m α ), ∇ α r = − 1 2 f l α + n α ,(80)
to write
a r = (u t ) 2 1 2 f 0 l α − n α 1 2 f 0 l β + n β − i Ωr 0 √ 2 (m β −m β ) 1 2 f 0 l γ + n γ − i Ωr 0 √ 2 (m γ −m γ ) (∇ β h αβ − 1 2 ∇ α h βγ ).
(81) Then, expanding the covariant derivatives and using Eqs. (72) and (73), we find
a r = (u t ) 2 f 2 0 1 16 f 0 D + 3 8 ∆ + i 8 Ω(ð −ð) − 1 2 M r 2 0 h 11 f 0 1 8 M r 0 D − 1 4 M r 0 f 0 ∆ + 1 2 M r 2 0 h 33 + − i √ 2 Ωr 0 ∆ − 1 4 √ 2 M r 2 0 (ð −ð) + i 2 √ 2 Ω h 13 + c.c. .(82)
A. The retarded field
The retarded fields ψ ret 4 and ψ ret 0 are simplest to compute in coordinates for which the unperturbed orbit lies in the θ = π/2 plane. The particle's trajectory is then given by φ = Ωt, where Ω = (M/r_0^3)^{1/2}. From Eq. (1), its stress-energy tensor is
T αβ = m u t r 2 0 u α u β δ(r − r 0 )δ(cos(θ))δ(φ − Ωt) = ℓ,m m u t r 2 0 u α u β δ(r − r 0 ) s Y ℓm (θ, φ) sȲℓm π 2 , Ωt .(83)
Because the source and the background metric are both helically symmetric, Lie derived by t α + Ωφ α , the retarded fields -metric and Weyl tensor -will also be helically symmetric. Because the tetrad vectors (71) are also helically symmetric, the symmetry is shared by the scalars ψ ret 0 and ψ ret 4 , which therefore only involve φ and t in the combination φ − Ωt. In the harmonic decomposition, φ and t then occur only in the combination e im(φ−Ωt) , and the frequency associated with each value of m is ω = mΩ.
The scalars ψ ret 4 and ψ ret 0 satisfy the Bardeen-Press equation [46], the a = 0 form of the Teukolsky equation, namely
\mathcal{T}_s\,\psi := \left[\frac{r^4}{\Delta}\,\partial_t^2 - 2s\left(\frac{M r^2}{\Delta} - r\right)\partial_t - \Delta^{-s}\,\frac{\partial}{\partial r}\left(\Delta^{s+1}\frac{\partial}{\partial r}\right) - \eth\bar\eth\right]\psi = 4\pi r^2\, T_s\,. \qquad (84)
We will work with ψ ret 0 , whose source, from Eq. (24a), has the form
T s=2 = −2(δ − 2β)δT 11 + 4(D − 4̺)(δ − 2β)T 13 − 2(D − 5̺)(D − ̺)T 33 =: T (0) + T (1) + T (2) ,(85)
where the superscripts indicate the maximum number of radial derivatives in each term. From Eq. (83), these terms have the explicit forms
T (0) = − ℓ,m mu t r 4 0 δ(r − r 0 )[(ℓ − 1)ℓ(ℓ + 1)(ℓ + 2)] 1/2 2 Y ℓm (θ, φ)Ȳ ℓm π 2 , Ωt ,(86)T (1) = 2 ℓ,m mΩu t r 2 0 iδ ′ (r − r 0 ) + mΩ f 0 + 4i r 0 δ(r − r 0 ) [(ℓ − 1)(ℓ + 2)] 1/2 2 Y ℓm (θ, φ) 1Ȳℓm π 2 , Ωt ,(87)T (2) = ℓ,m mΩ 2 u t δ ′′ (r − r 0 ) + 6 r 0 − 2imΩ f 0 δ ′ (r − r 0 ) − m 2 Ω 2 f 2 0 + 6imΩ r 0 f 0 − 2imM Ω r 2 0 f 2 0 − 4 r 2 0 δ(r − r 0 ) 2 Y ℓm (θ, φ) 2Ȳℓm π 2 , Ωt .(88)
Each mode of ψ 0 or ψ 4 ,
ψ = e −iωt R(r) s Y ℓm (θ, φ),(89)
has radial eigenfunction R 0 or R 4 satisfying the radial equation corresponding to its spin-weight:
∆R ′′ 0 + 6(r − M )R ′ 0 + ω 2 r 4 ∆ + 4iωr 2 (r − 3M ) ∆ − (ℓ − 2)(ℓ + 3) R 0 = 0,(90)∆R ′′ 4 − 2(r − M )R ′ 4 + ω 2 r 4 ∆ − 4iωr 2 (r − 3M ) ∆ − (ℓ − 1)(ℓ + 2) R 4 = 0,(91)
where the prime denotes a derivative with respect to the radial coordinate r. Solutions to these equations are related by
R_0 = \frac{\bar R_4}{r^4 f^2}\,. \qquad (92)
To compute ψ ret 0 , it is helpful to define a Green's functionĜ(r, r ′ ) as the solution to
\Delta\hat G'' + 6(r - M)\hat G' + \left[\frac{\omega^2 r^4}{\Delta} + \frac{4i\omega r^2(r - 3M)}{\Delta} - (\ell - 2)(\ell + 3)\right]\hat G = \frac{\Delta^{1/2}}{r^3}\,\delta(r - r')\,, \qquad (93)

namely

\hat G(r, r') = -A_{\ell m}\,\frac{[\Delta(r')]^{5/2}}{r'^3}\, R_H(r_<)\, R_\infty(r_>)\,, \qquad (94)
where R H and R ∞ are solutions to the radial equation for ψ 0 that are regular at the horizon and at infinity, respectively, and the quantity
A_{\ell m} := \frac{1}{\Delta^3\left(R_H R'_\infty - R_\infty R'_H\right)} \qquad (95)
is a constant, independent of r. The full spatial Green's function G(x, x ′ ) ≡ G(r, θ, φ; r ′ , θ ′ , φ ′ ) is then given by
G(x, x') = -\sum_{\ell,m} A_{\ell m}\,\frac{[\Delta(r')]^{5/2}}{r'^3}\, R_H(r_<)\, R_\infty(r_>)\; {}_2Y_{\ell m}(\theta, \phi)\; {}_2\bar Y_{\ell m}(\theta', \phi')\,. \qquad (96)
The Weyl scalar ψ 0 is defined and smooth everywhere except on the trajectory of the particle. It is given in terms of the source and the Green's function G(x, x ′ ) by
ψ 0 = 4π T (t, x ′ )G(x, x ′ )r ′2 dV ′ = 4π T (0) 0 + T (1) 0 + T (2) 0 G(x, x ′ )r ′2 dV ′ =: ψ (0) 0 + ψ (1) 0 + ψ (2) 0 ,(97)
where the superscripts on ψ 0 correspond to the three terms in the Teukolsky source function defined in Eq. (85). The three terms in Eq. (97) have, outside the particle trajectory, the form
ψ (0) 0 = 4πmu t ∆ 2 0 r 2 0 ℓm A ℓm [(ℓ − 1)l(ℓ + 1)(ℓ + 2)] 1/2 R H (r < )R ∞ (r > ) 2 Y ℓm (θ, φ)Ȳ ℓm π 2 , Ωt ,(98)ψ (1) 0 = 8πimΩu t ∆ 0 ℓm A ℓm [(ℓ − 1)(ℓ + 2)] 1/2 2 Y ℓm (θ, φ) 1Ȳℓm π 2 , Ωt × (99) [imΩr 2 0 + 2r 0 ]R H (r < )R ∞ (r > ) + ∆ 0 [R ′ H (r 0 )R ∞ (r)θ(r − r 0 ) +R H (r)R ′ ∞ (r 0 )θ(r 0 − r)] , ψ (2) 0 = −4πmΩ 2 u t ℓm A ℓm2 Y ℓm (θ, φ) 2Ȳℓm π 2 , Ωt × (100) [30r 4 0 − 80M r 3 0 + 48M 2 r 2 0 − m 2 Ω 2 r 6 0 − 2∆ 2 0 − 24∆ 0 r 0 (r 0 − M ) + 6imΩr 4 0 (r 0 − M )]R H (r < )R ∞ (r > ) +2(6r 5 0 − 20M r 4 0 + 16M 2 r 3 0 − 3r 0 ∆ 2 0 + imΩ∆ 0 r 4 0 )[R ′ H (r 0 )R ∞ (r)θ(r − r 0 ) + R ′ ∞ (r 0 )R H (r)θ(r 0 − r)] +r 2 0 ∆ 2 0 [R ′′ H (r 0 )R ∞ (r)θ(r − r 0 ) + R ′′ ∞ (r 0 )R H (r)θ(r 0 − r)] .
B. The singular field
Because the conservative part of the self-force is radial, it is axisymmetric about a radial ray through the particle. We will compare the analytic to the numerical determination of the singular part of a Weyl scalar by looking at the axisymmetric part of ψ 0 and (in Appendix D) its contribution to the self-force. We outline the calculation of the leading and subleading terms in the axisymmetric part of the singular field ψ s 0 as a sum of angular harmonics 2 Y ℓ0 (Θ) whose coefficients are polynomials in ℓ, with angular coordinates Θ and Φ chosen so that the Θ = 0 line (at fixed t) is the radial line through the particle. Details of the conversion from a small distance expansion to a large L expansion are left to Appendix C.
The analytic expression for the resulting renormalization coefficients is then compared to a numerical determination by matching the retarded field to a power series in L. Remarkably, although the subleading part of ψ s 0 is a lengthy expression, Eq. (C1), we will see that its axisymmetric part, written as a sum over angular harmonics, vanishes. Because the angular harmonics 2 Y ℓm are complete in L 2 (S 2 ), this means that, as a distribution, the subleading part of ψ s 0 has support at Θ = 0, where 2 Y ℓ0 (Θ) = 0.
The expression for the retarded field is simplest for coordinates in which the orbit is in the θ = π/2 plane. Expressing the singular field ψ s 0 of Eq. (53) as a sum of angular harmonics is simplest if angular coordinates Θ and Φ are chosen with the particle at Θ = 0, as we have just described. To compute the difference ψ ren 0 = ψ ret 0 − ψ s 0 , one must rotate ψ ret 0 to the coordinates of ψ s 0 or vice versa. Following the conventions of Detweiler et al. [47] (henceforth DMW), Θ and Φ are related to θ and φ by a rotation of the form sin θ cos φ = cos Θ, sin θ sin φ = sin Θ cos Φ, cos θ = sin Θ sin Φ.
With the usual association of Cartesian coordinates x, y, z to r, θ, φ and of x̂, ŷ, ẑ to r, Θ, Φ, the map is x = ẑ, y = x̂, z = ŷ. Eq. (53) for ψ s 0 involves the components l T = l α ∇ α T and l ρ = l α ∇ α ρ. We obtain these to subleading order in terms of Schwarzschild coordinates: That is, with ǫ the distance from the particle's position P with respect to the positive-definite metric g αβ + 2u α u β , the leading and subleading terms in T and ρ are O(ǫ) and O(ǫ 2 ), respectively. The corresponding leading and subleading terms of l T and l ρ are then O(1) and O(ǫ).
Expansions of ρ and of the local inertial coordinates T, X, Y, Z about a point P in terms of Schwarzschild coordinates are given, for example, in Ref. [47]. To subleading order, T has the form
T = (E(t − t 0 ) − J sin Θ cos Φ) + EM r 2 0 f 0 (t − t 0 )(r − r 0 ) − J r 0 (r − r 0 ) sin Θ cos Φ .(102)
It is convenient to work with ρ 2 instead of ρ; to subleading order, we have ρ 2 = ρ (2) + ρ (3) , where the order ǫ 2 and ǫ 3 contributions are, respectively,
ρ (2) = (r − r 0 ) 2 f 0 + (r 2 0 + J 2 cos 2 Φ) sin 2 Θ − 2EJ sin Θ cos Φ(t − t 0 ) + J 2 f 0 r 2 0 (t − t 0 ) 2 ,(103)ρ (3) = M r 2 0 1 + 2J 2 r 2 0 (t − t 0 ) 2 (r − r 0 ) + 2JE(M − r 0 ) f 0 r 2 0 (t − t 0 )(r − r 0 ) sin Θ cos Φ − M r 2 0 f 2 0 (r − r 0 ) 3 +r 0 sin 2 Θ(r − r 0 ) + 2r 0 sin 2 Θ sin 2 Φ(r − r 0 ) + 2E 2 r 0 f 0 sin 2 Θ cos 2 Φ(r − r 0 ).(104)
We use Eq. (53). This expression omits δ-functions with support at the position of the particle. These do not contribute to the renormalization of the retarded field if the renormalization is done by subtracting the singular field from the retarded field in a neighborhood of the particle, averaging over a sphere surrounding the particle, and then taking the limit as the radius of the sphere shrinks to zero (the Quinn-Wald prescription [16]). In a mode-sum regularization, we discard δ-functions with support at the particle in both the singular and the retarded field.
The background Kinnersley tetrad written in terms of Θ and Φ is
l^\alpha = \left(\frac{1}{f},\, 1,\, 0,\, 0\right), \qquad m^\alpha = \frac{1}{\sqrt{2}\,r}\left(0,\, 0,\, 1,\, \frac{i}{\sin\Theta}\right), \qquad \text{where again } f = 1 - \frac{2M}{r}\,. \qquad (105)
We now expand the needed tetrad components to subleading (quadratic) order in Schwarzschild coordinates, about their values at P , using superscripts (0) and (1) as above to denote orders in ǫ and writing l α = l (0)α + l (1)α , m α = m (0)α + m (1)α . Using Eqs. (102), (103), (104), and (105), we have
l (0)T = E f 0 ,(106)l (1)T = − EM r 2 0 f 2 0 (r − r 0 ) + EM r 2 0 f 0 (t − t 0 ) − J r 0 sin Θ cos Φ, m (0)T = − J √ 2r 0 e −iΦ , m (1)T = 0, l (1)ρ = J 2 r 2 0 ρ (t − t 0 ) − JE f 0 ρ sin Θ cos Φ + 1 f 0 ρ (r − r 0 ), l (2)ρ = M r 2 0 f 0 ρ 1 + 2J 2 r 2 0 (t − t 0 )(r − r 0 ) + JE(M − r 0 ) f 2 0 r 2 0 ρ (r − r 0 ) sin Θ cos Φ + M (r 2 0 + 2J 2 ) 2r 4 0 ρ (t − t 0 ) 2 + JE(M − r 0 ) f 0 r 2 0 ρ (t − t 0 ) sin Θ cos Φ − 3M 2r 2 0 f 2 0 ρ (r − r 0 ) 2 − r 0 2ρ sin 2 Θ + r 0 ρ sin 2 Θ sin 2 Φ + E 2 r 0 f 0 ρ sin 2 Θ cos 2 Φ − 2M J 2 r 4 0 f 0 ρ (r − r 0 )(t − t 0 ) − M JE f 2 0 r 2 0 ρ (r − r 0 ) sin Θ cos Φ, m (1)ρ = √ 2 2r 0 ρ r 2 0 + J 2 cos Φe −iΦ sin Θ − √ 2JE 2r 0 ρ (t − t 0 )e −iΦ , m (2)ρ = √ 2J 2 e −iΦ cos Φ 2r 2 0 ρ (r − r 0 ) sin Θ − √ 2JEM e −iΦ 2r 3 0 f 0 ρ (t − t 0 )(r − r 0 ).
The terms in parentheses in Eq. (53) are then given to subleading order by
l T m ρ − m T l ρ 2 = l (0)T m (1)ρ − m (0)T l (1)ρ 2 +2 l (0)T m (1)ρ − m (0)T l (1)ρ l (0)T m (2)ρ − m (0)T l (2)ρ + l (1)T m (1)ρ − m (1)T l (1)ρ ;(107)
and ρ −5 is given by
\frac{1}{\rho^5} = \frac{1}{\big[\rho^{(2)}\big]^{5/2}} - \frac{5}{2}\,\frac{\rho^{(3)}}{\big[\rho^{(2)}\big]^{7/2}}\,, \qquad (108)
implying
\psi^s_0 = -\frac{3}{2}\,\frac{m}{\big[\rho^{(2)}\big]^{3/2}}\left(l^{(0)T} m^{(1)\rho} - m^{(0)T} l^{(1)\rho}\right)^2 + \frac{15}{4}\,\frac{m}{\big[\rho^{(2)}\big]^{5/2}}\,\rho^{(3)}\left(l^{(0)T} m^{(1)\rho} - m^{(0)T} l^{(1)\rho}\right)^2 - 3\,\frac{m}{\big[\rho^{(2)}\big]^{3/2}}\left(l^{(0)T} m^{(1)\rho} - m^{(0)T} l^{(1)\rho}\right)\left(l^{(0)T} m^{(2)\rho} - m^{(0)T} l^{(2)\rho} + l^{(1)T} m^{(1)\rho} - m^{(1)T} l^{(1)\rho}\right) + O(\epsilon^{-1})\,. \qquad (109)
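The inverse-power expansion (108) entering this result is the binomial expansion of (ρ^(2) + ρ^(3))^{−5/2} to first order in the smaller term; a one-line symbolic check:

```python
import sympy as sp

# Check of Eq. (108): with rho^2 = rho2 + rho3 (rho3 the smaller term),
#   1/rho^5 = rho2^(-5/2) - (5/2) rho3 rho2^(-7/2) + ...
a, x = sp.symbols('rho2 rho3', positive=True)
expansion = ((a + x) ** sp.Rational(-5, 2)).series(x, 0, 2).removeO()
expected = a ** sp.Rational(-5, 2) - sp.Rational(5, 2) * x * a ** sp.Rational(-7, 2)
print(sp.simplify(expansion - expected))  # 0
```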
The expression for ψ s 0 as a mode sum is obtained from the value of this expression at t = t 0 . The full expression, including terms involving t − t 0 is given in Appendix (C). We denote by ψ s−L 0 and ψ s−SL 0 the leading and subleading parts of the singular field, respectively. Using Eqs. (103), (104) and (106), we obtain for ψ s 0 to subleading order the explicit form
ψ s-L 0 = − 3mE 2 r 2 0 f 2 0 sin 2 Θ ρ 5 + − 3mJ 2 e −2iΦ f 2 0 r 2 0 (r − r 0 ) 2 ρ 5 − 3mEJe −iΦ f 2 0 (r − r 0 ) sin Θ ρ 5 ,(110)ψ s-SL 0 = − 15mJ 2 M e −2iΦ 2f 4 0 r 4 0 (r − r 0 ) 5 ρ 7 − 15mJM Ee −iΦ f 4 0 r 2 0 sin Θ(r − r 0 ) 4 ρ 7 + 15me −2iΦ J 2 (−i sin Φe iΦ ) + J 2 r 2 0 cos 2 Φ r 0 f 2 0 (r − r 0 ) 3 sin 2 Θ ρ 7 + 15mJEe −iΦ (r 2 0 + 2J 2 cos 2 Φ) f 2 0 r 0 (r − r 0 ) 2 sin 3 Θ ρ 7 + 15mr 0 E 2 J 2 + r 2 0 + J 2 cos 2Φ 2f 2 0 (r − r 0 ) sin 4 Θ ρ 7 + 9m e −2iΦ J 2 M f 3 0 r 4 0 (r − r 0 ) 3 ρ 5 + 3me iΦ Jr 0 E f 0 sin 3 Θ ρ 5 + 15me −iΦ M JE r 3 0 f 3 0 (r − r 0 ) 2 sin Θ ρ 5 + 9mJ 2 r 0 f 0 (r − r 0 ) sin 2 Θ ρ 5(111)
The axisymmetric part of each of these terms is proportional to an expression of the form
\frac{\delta^{k_1}\,\sin^{k_2}\!\Theta}{\left(\delta^2 + 1 - \cos\Theta\right)^{k+1/2}}\,, \qquad (112)
where k 1 , k 2 and k are positive integers, and δ, given by Eq. (C6), is proportional to r − r 0 . In Appendix C, following DMW, we use the generating function for Legendre polynomials,
\frac{1}{\left(e^{T} + e^{-T} - 2u\right)^{1/2}} = \sum_{\ell} e^{-(\ell+1/2)|T|}\, P_\ell(u)\,, \qquad T \neq 0, \qquad (113)
and its derivatives to express each term as a sum of Legendre polynomials and their derivatives. We then use a relation between the spin-weighted harmonics s Y ℓm and Legendre polynomials to write the series in terms of the harmonics 2 Y ℓ0 . The leading order part of ψ s 0 then has the form
\overline{\psi^s_0}\Big|_{r_0}(\Theta) = -\frac{m\,(r_0 - 3M)^{3/2}}{r_0^2\,(r_0 - 2M)^{5/2}}\;\frac{1}{\chi^{5/2}} \sum_{\ell=2}^{\infty} \left[\frac{4\pi\,(\ell+2)!}{(\ell-2)!\,(2\ell+1)}\right]^{1/2} {}_2Y_{\ell 0}(\Theta, 0)\,, \qquad (114)
where \overline{\psi^s_0} denotes the axisymmetric part of ψ s 0 .
Finally, each subleading term is proportional as a distribution to the sum \sum_{\ell=0}^{\infty}(\ell + 1/2)\, P_\ell(\cos\Theta). That sum is a δ-function with support at Θ = 0 [48], and its projection along 2 Y ℓ0 therefore vanishes for all ℓ. Eq. (114) thus gives the axisymmetric part of ψ s 0 to subleading order in L.
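The generating function (113), which underlies the conversion to a sum over harmonics, is easy to verify numerically; the sketch below compares a truncated Legendre series against the closed form at arbitrary sample values of T and u.

```python
import numpy as np
from numpy.polynomial import legendre

# Check of Eq. (113): 1/sqrt(e^T + e^-T - 2u) = sum_l e^{-(l+1/2)|T|} P_l(u).
T, u = 0.4, 0.3          # arbitrary sample point with T != 0, |u| < 1
lmax = 200
l = np.arange(lmax + 1)
coeffs = np.exp(-(l + 0.5) * abs(T))          # coefficients of P_l(u)
series = legendre.legval(u, coeffs)           # evaluates sum_l c_l P_l(u)
closed = 1.0 / np.sqrt(np.exp(T) + np.exp(-T) - 2.0 * u)
print(series, closed)
```

The series converges geometrically (ratio e^{−|T|}), so a few hundred terms suffice except very near T = 0, where the sum degenerates to the δ-function discussed above.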
C. Comparison with numerical determination of ψ s 0

We complete this section with a comparison of the analytic form of ψ s 0 with its numerical value obtained by matching ψ ret 0 to a power series in L. A comparison of the numerically and analytically determined contributions to the self-force from the axisymmetric part of ψ 0 is given in Appendix D.
The retarded Weyl scalar ψ ret 0,l is computed by integrating the radial Teukolsky equation for each value of ℓ. The coefficients of 2 Y ℓ0 are matched to a power-series in L of the form
\psi^{\rm ret}_{0,\ell} = \left[\frac{4\pi\,(\ell+2)!}{(2\ell+1)\,(\ell-2)!}\right]^{1/2}\left(A + \frac{B}{L} + \frac{C}{L^2} + \cdots\right). \qquad (115)
Shown in the table below are the fractional error in the renormalization coefficient A and the error in B, found numerically and compared to the analytic form given by Eq. (114), for a particle in circular orbit in a Schwarzschild background at radius r0. ∆A and ∆B are the differences between the coefficients obtained numerically and those from the analytic expression (C37); the analytic value of B is zero. (The table entries are not reproduced here.) Details of the numerical methods and checks of numerical accuracy are given in the companion paper.
V. BRIEF DISCUSSION
The methods discussed in this paper for finding the self-force in a radiation gauge have been used to find the self-force on a particle in circular orbit in a Schwarzschild spacetime, and work on orbits in a Kerr background is now underway. The advantage of a radiation gauge is the ease with which the retarded field can be computed. A disadvantage is the difficulty in analytically computing the singular field h s αβ from ψ s 0 . We have avoided this difficulty by using a numerical matching procedure to find the singular field, and the companion paper shows that the numerical matching reproduces the renormalization coefficients for gauge-invariant quantities to machine accuracy, for the Schwarzschild example.
We have shown that the perturbed metric in a radiation gauge generically has even parity to leading order in geodesic distance ρ to the particle trajectory. Using the renormalized field to compute the perturbed geodesic then relies on showing that the singular field gives no contribution to the self-force. The companion paper checks this for a particle in circular orbit in a Schwarzschild spacetime, in which the self-force is symmetric about a radial line through the particle. We numerically compute the axisymmetric part of the singular field and find that to subleading order it coincides with the axisymmetric part of −m∇ρ −1 ; it thus gives no contribution to the self-force at order ρ 0 .
This regular behavior of the singular part of the self-force may seem remarkable, given the line singularity in a radiation gauge that arises when one includes the perturbed mass in a radiation-gauge metric obtained from a Hertz potential. It is less surprising, however, if one considers a particle at rest in flat space. When principal null directions are chosen to have an origin not on the particle trajectory, ψ 0 is nonzero, and the ℓ ≥ 2 part of the metric can be reconstructed in closed form in a radiation gauge using the CCK procedure. The self-force of course vanishes, with the contribution from the singular field coinciding with that from the retarded field; but the contribution from each is nonzero and gauge invariant under time-independent gauge transformations. This implies that the singular part of the expression for the self-force in a radiation gauge has its Lorenz form −m∇ α ρ −1 . For a circular orbit, the result is implied by the fact that the gauge transformation of the self-force can be written in a form, Eq. (A24), that involves no derivatives of the gauge vector ξ α , together with the fact that ξ α is O(ρ 0 ). For generic orbits, the gauge transformation involves (u · ∇) 2 ξ α , and it remains to be seen whether the singular part of the self-force can again be identified with −m∇ α ρ −1 .
Programme funded by the German Federal Ministry of Education and Research and by WCU (World Class University) program of NRF/MEST (R32-2009-000-10130-0).
A result of Price et al. [36] shows, for a vacuum perturbation satisfying the condition l β h αβ = 0, that the remaining gauge condition, h = 0, is also satisfied. That is, when h 1µ = 0, we have h = −2h 34 ; and from Eq. (16) of that paper, the perturbed Einstein equation δ(G 11 − 8πT 11 ) = 0 implies
h_{34} = a_0\,\frac{\varrho\bar\varrho}{\varrho + \bar\varrho} + b_0\,(\varrho + \bar\varrho)\,, \qquad (A9)

\xi_1 = k_1\,, \qquad (A10a)
\xi_2 = -(i\omega + M/r)\, k_1 + k_2\,, \qquad (A10b)
\xi_3 = -[\ell(\ell+1)/2]^{1/2}\, k_1 + k_3\, r\,. \qquad (A10c)
For each harmonic we write

\xi_1 = \tilde\xi_1(r, \theta)\, e^{i(m\phi - \omega u)}\,, \quad \xi_2 = \tilde\xi_2(r, \theta)\, e^{i(m\phi - \omega u)}\,, \quad \xi_3 = \tilde\xi_3(r, \theta)\, e^{i(m\phi - \omega u)}\,. \qquad (A11)
The corresponding harmonics for the metric perturbation are
h^{\rm ret}_{\mu\nu} = \tilde h_{\mu\nu}(r, \theta)\, e^{i(m\phi - \omega u)}\,. \qquad (A12)
The gauge transformation for a Kerr background is governed by the equations
Dξ 1 = − 1 2 h ret 11 ,(A13a)Dξ 2 + (∆ − γ −γ)ξ 1 + (τ − π)ξ 3 + (τ −π)ξ 4 = −h ret 12 , (A13b) (δ − 2π)ξ 1 + (D +̺)ξ 3 = −h ret 13 ,(A13c)
The components ξ_µ are given successively by

ξ_1 = (1/2) ∫_r^∞ dr′ h̃_11, (A14a)
ξ_3 = −(1/̺) ∫_r^∞ dr′ ̺ h̃_13 − (δ − 2π) ξ_1, (A14b)
ξ_2 = ∫_r^∞ dr′ [h̃_12 + (∆ − γ − γ̄) ξ_1 + (π − τ̄) ξ_3 + (π̄ − τ) ξ_4]. (A14c)
Asymptotic regularity follows from the asymptotic behavior (A1) of components along l α of outgoing waves in a Lorenz gauge. And the Price et al. result again implies h = 0.
IRG perturbed metric for ingoing radiation
The Hertz potential construction yields an asymptotically flat IRG form for each ingoing asymptotically flat metric perturbation. We find the gauge transformation from a Lorenz gauge to this asymptotically flat IRG. For simplicity, we restrict consideration to a Schwarzschild background.
In Eqs. (A4) and (A5) the outgoing null coordinate u is replaced by the ingoing null coordinate v = t + r_*. In Eqs. (A7) ∂_r is replaced by e^{−2iωr_*} ∂_r e^{2iωr_*}, and the solution has the form
ξ_1 = (1/2) e^{−2iωr_*} ∫_r^∞ dr′ e^{2iωr′_*} h̃_11(r′), (A15a)
ξ_2 = e^{−2iωr_*} ∫_r^∞ dr′ e^{2iωr′_*} [h̃_12(r′) + (1/4) f(r′) h̃_11(r′) − (iω + M/r′²) ξ_1], (A15b)
ξ_3 = r ∫_r^∞ dr′ [(1/r′) h̃_13 − (1/r′²)(ℓ(ℓ + 1)/2)^{1/2} ξ̃_1]. (A15c)
Now, however, because the radiation is ingoing, h̃_11 ∼ e^{iωr_*}/r when ω ≠ 0. Asymptotic flatness follows from the relation
Gauge transformations of the self force
A gauge transformation of the self-force was obtained by Barack and Ori [27]. We give an alternate, covariant derivation, mention a second kind of gauge freedom, and obtain a simpler form of a gauge transformation of the self-force for a particle in circular orbit.
A gauge transformation is an infinitesimal diffeomorphism that drags an unperturbed geodesic of the background metric to a neighboring curve that is a geodesic of the dragged-along metric. This can be stated precisely in terms of a congruence of timelike geodesics through a neighborhood of a point P . The dragged metric at P differs from the original metric by h αβ = £ ξ g αβ , and the perturbed geodesic through P has 4-velocity altered by δu α = £ ξ u α . The perturbed geodesic equation associated with a perturbation that is pure gauge has the form
δ(u^β ∇_β u^α) = £_ξ(u^β ∇_β u^α) = 0. (A16)

Writing

£_ξ(u^β ∇_β u^α) = (£_ξ u^β) ∇_β u^α + u^β ∇_β £_ξ u^α + u^β [£_ξ, ∇_β] u^α (A17)

and

[£_ξ, ∇_β] u^α = u^γ ∇_β ∇_γ ξ^α − R^α_{γβδ} u^γ ξ^δ, (A18)
and with the perturbed acceleration defined by δ̃a^α := δ̃u^β ∇_β u^α + u^β ∇_β δ̃u^α, we have

δ̃a^α = −(u · ∇)² ξ^α + R^α_{βγδ} u^β u^γ ξ^δ. (A19)
In this form, u^α + δ̃u^α is normalized to −1 with respect to the perturbed metric g_αβ + h_αβ, and the geodesic equation is affinely parameterized with respect to the perturbed metric. If the geodesic is parameterized so that its tangent is normalized to −1 with respect to the background metric, and we denote by δ_ξ u^α the change in u^α with that normalization, then
δ_ξ u^α = (δ^α_β − u^α u_β) δu^β = (δ^α_β − u^α u_β) £_ξ u^β. (A20)

With δ_ξ a^α := δ_ξ u^β ∇_β u^α − u^β ∇_β δ_ξ u^α, Eq. (A19) implies

δ_ξ a^α = −(δ^α_β − u^α u_β)(u · ∇)² ξ^β + R^α_{βγδ} u^β u^γ ξ^δ, (A21)
and δ_ξ a^α u_α = 0. Note that the right side vanishes if ξ^α happens to drag a geodesic of the background spacetime to another geodesic of the background spacetime: This is the equation of geodesic deviation governing the connecting vector joining two neighboring geodesics of g_αβ. For general ξ^α, the right side of Eq. (A21) then measures the failure of ξ^α to produce a geodesic of the background metric. The effect of a gauge transformation on the perturbed geodesic equation (3) is to replace δu^α by δu^α + δ_ξ u^α and a^α by a^α + δ_ξ a^α.
There is a second kind of gauge freedom, an infinitesimal change in the background geodesic to which one compares a geodesic in the perturbed spacetime. The perturbed geodesic at an initial point of the trajectory can then be changed from u α to u α + δu α , with a α = 0. This allows one to regard the right side of Eq. (A21) as the change in the perturbed geodesic equation for a geodesic through an initial point P with the same initial tangent vector as that in the original gauge.
For a particle in circular orbit, the gauge transformation of the self-force takes a simpler form for a gauge vector that is helically symmetric. Writing u α = u t k α , with k α the Killing vector t α +Ωφ α , and using the relations £ k ξ α = 0, £ k (k β ∇ β ξ α ) = 0, and ∇ β ∇ γ k α = R α γβδ k δ , we obtain
−(k · ∇)² ξ^α = (1/2) ξ^β ∇_β ∇^α (k^γ k_γ) − R^α_{βγδ} k^β k^γ ξ^δ. (A22)
The last term cancels the last term in Eq. (A21) to give
δ_ξ a^α = (1/2)(u^t)² ξ^β ∇_β ∇^α (k^γ k_γ), (A23)
with corresponding gauge-transformed self-force
f̃^α = f^α + (1/2) m (u^t)² ξ^β ∇_β ∇^α (k^γ k_γ). (A24)
For a particle in circular orbit in a Schwarzschild background, Eq. (A23) takes the form [49]

δ_ξ a^r = [3Ω²/(1 − 3M/r)] ξ^r. (A25)
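The reduction of Eq. (A23) to Eq. (A25) can be checked symbolically. The sketch below is an addition, not part of the paper: it evaluates (1/2)(u^t)² ξ^β ∇_β ∇_α (k^γ k_γ) for a radial gauge vector in a Schwarzschild background, holding Ω fixed under differentiation and imposing the circular-geodesic value Ω² = M/r³ only at the orbit; the lower radial index on the result is an assumption of this check.

```python
import sympy as sp

r, M, xi, Om = sp.symbols('r M xi Omega', positive=True)
f = 1 - 2*M/r                          # Schwarzschild lapse function
F = -f + Om**2 * r**2                  # k^γ k_γ on the equator, Ω held fixed
Gamma_r_rr = -sp.diff(f, r) / (2*f)    # Christoffel symbol Γ^r_rr
# covariant second derivative ∇_r ∇_r F of the scalar F
DDF = sp.diff(F, r, 2) - Gamma_r_rr * sp.diff(F, r)
# substitute the circular-geodesic frequency Ω² = M/r³ at the orbit radius
DDF_orbit = DDF.subs(Om**2, M/r**3)
ut2 = 1 / (1 - 3*M/r)                  # (u^t)² for a circular orbit
da_r = sp.simplify(sp.Rational(1, 2) * ut2 * xi * DDF_orbit)
expected = sp.simplify(3 * (M/r**3) / (1 - 3*M/r) * xi)
assert sp.simplify(da_r - expected) == 0
```

The key simplification is that the first derivative of k^γ k_γ vanishes on a circular geodesic once Ω² = M/r³ is imposed, leaving only the second-derivative term.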
Appendix B: Singularity and large-ℓ behavior
Mode-sum renormalization involves relations, like the formal harmonic decomposition

(1 − cos θ)^{−1/2} = Σ_{ℓ=0}^∞ [4π/(ℓ + 1/2)]^{1/2} Y_ℓ0(θ), (B1)
between the short-distance (ultraviolet) singular behavior of a function and the large-ℓ behavior of its harmonics. The angle θ is geodesic distance on the unit 2-sphere S to the origin θ = 0, and in this example, a function that behaves like θ −1 has angular harmonics that behave like L −1/2 , where, as in the body of the paper, L = ℓ + 1/2. In this appendix, we review how one characterizes the large ℓ behavior of functions whose explicit angular harmonics are not known; and we give a precise meaning, in terms of distributions on the 2-sphere, to formal expressions like the divergent right side of Eq. (B1) and to the angular harmonics of functions like f = θ −n for which the integral dΩfȲ ℓm diverges. We initially restrict the discussion to ordinary spherical harmonics and then generalize it to spin-weighted harmonics. The angular harmonics of the retarded fields are limits as r → r 0 of angular harmonics of expressions that are nonsingular on spheres of radius r = r 0 . One therefore defines the angular harmonics of the singular field in the same way; and we end by showing that our formalism reproduces the harmonic decomposition defined in this way.
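As an added numerical illustration of this correspondence (not part of the paper), the harmonics f_ℓ0 = ∫dΩ (1 − cos θ)^{−1/2} Ȳ_ℓ0 can be computed by quadrature and compared with the closed-form coefficients [4π/(ℓ + 1/2)]^{1/2} of Eq. (B1), confirming the L^{−1/2} falloff:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def f_l0(l):
    """Harmonic f_l0 = ∫ dΩ (1 - cosθ)^(-1/2) Y_l0 of the singular function."""
    # ∫_{-1}^{1} P_l(u) (1-u)^(-1/2) du, using quad's algebraic weight to
    # handle the integrable endpoint singularity at u = 1
    val, _ = quad(lambda u: eval_legendre(l, u), -1.0, 1.0,
                  weight='alg', wvar=(0.0, -0.5))
    return 2.0 * np.pi * np.sqrt((2*l + 1) / (4*np.pi)) * val

for l in range(8):
    assert abs(f_l0(l) - np.sqrt(4*np.pi / (l + 0.5))) < 1e-8
```

The check uses the identity ∫ P_ℓ(u)(1 − u)^{−1/2} du = 2√2/(2ℓ + 1), which is exactly what the quadrature reproduces.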
The general relation between small-angle and large-ℓ behavior is essentially identical to the relation between a short-distance singularity in a function f(x) and the behavior of its Fourier transform f̂(k) at large k. In one dimension, for example, the nth derivative of a function f(x) is square integrable if and only if kⁿ f̂(k) is square integrable, because the transform of f^{(n)} is (ik)ⁿ f̂:

∫dx |dⁿf/dxⁿ|² < ∞ ⇐⇒ ∫dk |kⁿ f̂(k)|² < ∞. (B2)
Similarly, the n th derivative of a function f (θ, φ) = ℓm f ℓm Y ℓm on S is square integrable if and only if ℓ n f ℓm is square summable: With D a the covariant derivative operator of the metric ds 2 = dθ 2 + sin 2 θdφ 2 on S, the angular Laplacian is D 2 := D a D a , and we have
∫dΩ (D^{a₁} ··· D^{aₙ} f̄)(D_{a₁} ··· D_{aₙ} f) = ∫dΩ f̄ (−D²)ⁿ f < ∞ (B3)
⇐⇒ Σ_{ℓ>0,m} [ℓ(ℓ + 1)]ⁿ |f_ℓm|² < ∞. (B4)
One extends this relation to functions like (1 − cos θ) −3/2 that are not square integrable by regarding them as distributions obtained by taking derivatives of functions like (1 − cos θ) 1/2 that are square integrable. If f is any distribution on S, f ℓm = dΩfȲ ℓm exists, because Y ℓm is smooth. Thus, for example, writing
[(2/α²) D^aD_a + (α + 2)/(2α)] (1 − cos θ)^{α/2} = (1 − cos θ)^{α/2−1} (B5)
gives f = (1 − cos θ) −3/2 as a distribution with
f ℓ0 = dΩ(1 − cos θ) −1/2 (−2D a D a + 1/2)Y ℓ0 (already well defined), or f ℓ0 = dΩ(1 − cos θ) 1/2 (2D a D a + 3/2)(−2D a D a + 1/2)Y ℓ0 .(B6)
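The operator identity (B5) underlying this construction can be verified symbolically. The sketch below is an addition, not part of the paper; it uses the axisymmetric form of the sphere Laplacian, D_aD^a f(θ) = (sin θ)^{−1} ∂_θ(sin θ ∂_θ f), and checks the identity at a few numerical points:

```python
import sympy as sp

theta, alpha = sp.symbols('theta alpha', positive=True)
f = (1 - sp.cos(theta))**(alpha/2)
# axisymmetric Laplacian on the unit sphere
lap = sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
lhs = 2*lap/alpha**2 + (alpha + 2)/(2*alpha) * f       # left side of Eq. (B5)
rhs = (1 - sp.cos(theta))**(alpha/2 - 1)
# numerical spot checks (robust against symbolic-simplification quirks)
for t, a in [(0.3, 1.5), (1.1, 0.5), (2.0, 3.0)]:
    assert abs(float(((lhs - rhs).subs({theta: t, alpha: a})).evalf())) < 1e-10
```

Expanding the Laplacian by hand gives the bracket (1 + cos θ)/2 + (1 − cos θ)/2 = 1 multiplying (1 − cos θ)^{α/2−1}, which is the content of (B5).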
For any distribution f , the right side of Eq. (B4) is finite for some negative n, with increasingly singular distributions corresponding to increasingly negative values of n. To obtain a theorem that relates in this way the short-distance and large-ℓ behavior of distributions, we define the standard spaces of functions whose first n derivatives are square integrable, the Sobolev spaces H n , and then extend the definition to spaces H r of distributions, where r can be negative (and need not be an integer).
Recall that the Hilbert space L²(S) is the completion of smooth (C^∞) functions in the norm ‖f‖ defined by ‖f‖² = ∫dΩ |f|².
The action of the operator D on a distribution f is given by
Df = ℓm Lf ℓm Y ℓm .(B9)
Equivalently, Df is defined by its action on smooth functions g,
Df(g) = f(Dg), or ∫dΩ ḡ Df = ∫dΩ (Dḡ) f = Σ_{ℓm} L ḡ_ℓm f_ℓm. (B10)
We can now define H r (S) for all real r: Definition. The Sobolev space H r (S) is the completion of smooth functions in the norm · r defined by
‖f‖_r = ‖D^r f‖. (B11)
With inner product defined by
g|f r = dΩD rḡ D r f,(B12)
the space H r is a Hilbert space. The relation
f 2 r = ℓm L 2r |f ℓm | 2 ,(B13)
implies that a distribution f is in H r if and only if the sequence (f ℓm ) of its angular harmonics has finite norm
‖(f_ℓm)‖~_r² := Σ_{ℓm} L^{2r} |f_ℓm|². (B14)
Formally, for each r, one can turn the set of sequences (f_ℓm) of complex numbers into a Hilbert space⁴ with norm ‖(f_ℓm)‖~_r. A function f is smooth if and only if it is in H^r for all r, implying that the elements of a sequence (f_ℓm) are the angular harmonics of a smooth function if and only if f_ℓm falls off faster than any power of ℓ: lim_{ℓ→∞} ℓⁿ |f_ℓm| = 0. A second immediate consequence of the correspondence is the fact that D maps H^r to H^{r−1} and that the Laplacian maps H^r to H^{r−2}. (In fact, these maps are isomorphisms.)
The function θ^{−1} is nearly square-integrable, with the integral ∫dΩ θ^{−2} diverging only logarithmically and ∫dΩ θ^{−2+ǫ} finite for all ǫ > 0. This suggests that θ^{−1} ∈ H^{−ǫ} for all ǫ > 0, and that is in fact the case: From Eq. (B1), the function f = (1 − cos θ)^{−1/2} satisfies

‖(f_ℓm)‖~²_{−ǫ} = Σ_{ℓm} |f_ℓm|² L^{−2ǫ} = 4π Σ_ℓ L^{−1−2ǫ} < ∞, all ǫ > 0. (B15)

Finally, a formal sum f = Σ_{ℓm} f_ℓm Y_ℓm(θ, φ), like the right side of Eq. (B1), has the meaning f(g) = Σ ḡ_ℓm f_ℓm, for smooth g.
Finally we must relate this formalism to angular harmonics of functions f (r, θ, φ) that are singular at r = r 0 and smooth for r = r 0 , when those harmonics are found as
lim_{r→r₀} ∫dΩ f(r, θ, φ) ₛY_ℓm. (B16)
We suppose that f can be written in the form D r F , where F (r, θ, φ) is continuous everywhere and smooth for r = r 0 and where D is an operator for which D and D † have domains that include C ∞ (S). Then, for g smooth,
∫dΩ (DⁿF) g = ∫dΩ F D^{†n} g =⇒ lim_{r→r₀} ∫dΩ (DⁿF) g = lim_{r→r₀} ∫dΩ F D^{†n} g = ∫dΩ (lim_{r→r₀} F) D^{†n} g = ∫dΩ F(r₀) D^{†n} g(r₀).

Footnote 4: The inner product is Σ L^{2r} ḡ_ℓm f_ℓm, and the Hilbert space is the completion in the norm ‖·‖~_r of sequences for which lim_{ℓ→∞} |f_ℓm| ℓⁿ = 0 (these are the sequences corresponding to smooth functions).
Because this last expression is, by definition, the action of the distribution f (r 0 ) = D n F (r 0 ) on g, we have the claimed equivalence lim r→r0 dΩf g = dΩf (r 0 )g(r 0 ). (B17)
We repeat the form at t = t 0 given in the text, in order to label each term with a subscript for later reference.
ψ₀^{s-L} = [−(3mE²r₀²/f₀²) sin²Θ/ρ⁵]₁ + [−(3mJ²e^{−2iΦ}/(f₀²r₀²)) (r − r₀)²/ρ⁵]₂ + [−(3mEJe^{−iΦ}/f₀²) (r − r₀) sinΘ/ρ⁵]₃, (C2)

ψ₀^{s-SL} = [−(15mJ²M e^{−2iΦ}/(2f₀⁴r₀⁴)) (r − r₀)⁵/ρ⁷]₁ + [−(15mJME e^{−iΦ}/(f₀⁴r₀²)) sinΘ (r − r₀)⁴/ρ⁷]₂
+ [(15m e^{−2iΦ} (J²(−i sinΦ e^{iΦ}) + J²r₀² cos²Φ)/(r₀f₀²)) (r − r₀)³ sin²Θ/ρ⁷]₃
+ [(15mJE e^{−iΦ} (r₀² + 2J²cos²Φ)/(f₀²r₀)) (r − r₀)² sin³Θ/ρ⁷]₄
+ [(15m r₀ E² (J² + r₀² + J²cos 2Φ)/(2f₀²)) (r − r₀) sin⁴Θ/ρ⁷]₅
+ [(9m e^{−2iΦ} J²M/(f₀³r₀⁴)) (r − r₀)³/ρ⁵]₆ + [(3m e^{iΦ} J r₀ E/f₀) sin³Θ/ρ⁵]₇
+ [(15m e^{−iΦ} MJE/(r₀³f₀³)) (r − r₀)² sinΘ/ρ⁵]₈ + [(9mJ²/(r₀f₀)) (r − r₀) sin²Θ/ρ⁵]₉ (C3)
The axisymmetric part of each of these terms is to be written as a sum over ₂Y_ℓ0(Θ, 0) at r = r₀. Each term in the singular field involves the leading part of ρ², namely

ρ̃² := A(r − r₀)² + B(1 − cos Θ) = ρ²|_{t=t₀} + O(Θ⁴), (C4)
where
A = r₀/(r₀ − 2M), B = [2r₀²(r₀ − 2M)/(r₀ − 3M)] χ(Φ), χ(Φ) = 1 − M sin²Φ/(r₀ − 2M). (C5)
We follow the notation of DMW, writing

ρ² := B(δ² + 1 − cos Θ), δ² := A(r − r₀)²/B. (C6)
Then the leading term in ψ s 0 is given by
ψ₀^{s-L} = −(3mE²r₀²/f₀²) sin²Θ/[B^{5/2}(δ² + 1 − cos Θ)^{5/2}] − [3mJ²e^{−2iΦ}/(f₀²r₀²AB^{3/2})] δ²/(δ² + 1 − cos Θ)^{5/2} − [3mEJe^{−iΦ}/(f₀²B²√A)] δ sinΘ/(δ² + 1 − cos Θ)^{5/2}. (C7)
The axisymmetric part of the above expression is achieved by angle averaging over Φ. Substituting the values of A, B, E and J in the above expression, we obtain
where f̄(Φ) = (2π)^{−1} ∫₀^{2π} f(Φ) dΦ. We start with a form of the generating function of the Legendre polynomials given by

1/(e^T + e^{−T} − 2u)^{1/2} = Σ_ℓ e^{−(ℓ+1/2)|T|} P_ℓ(u), T ≠ 0, (C9)
and set u = cos Θ, |T | = √ 2δ. In the limit T → 0 ± , the sum does not converge, but it is well-defined as a distribution:
lim_{T→0} 1/(e^T + e^{−T} − 2u)^{1/2} ≐ Σ_ℓ P_ℓ(u), (C10)
where the symbol ≐ means equality of both sides as distributions on the sphere. In particular, the regularization proceeds by imposing a cutoff ℓ_max on the singular field and on the retarded field; the projection P of the distribution (C9) onto the subspace ℓ ≤ ℓ_max is the smooth function Σ_{ℓ≤ℓ_max} P_ℓ(u). Taking successive derivatives gives

1/(e^T + e^{−T} − 2u)^{k+1/2} ≐ Σ_ℓ [(2ℓ + 1)/(2(2k − 1) T^{2k−1})] P_ℓ(u), T → 0. (C12)
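The generating-function relation (C9) is easy to check numerically at nonzero T. The following sketch is an addition, not from the paper; the cutoff ℓ_max = 400 is an arbitrary choice, ample because the sum converges geometrically for T ≠ 0:

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_sum(T, u, lmax=400):
    """Partial sum of e^{-(l+1/2)|T|} P_l(u); converges geometrically for T != 0."""
    l = np.arange(lmax + 1)
    return np.sum(np.exp(-(l + 0.5) * abs(T)) * eval_legendre(l, u))

for T in (0.5, 0.2):
    for u in (-0.3, 0.0, 0.9):
        closed_form = 1.0 / np.sqrt(np.exp(T) + np.exp(-T) - 2*u)
        assert abs(legendre_sum(T, u) - closed_form) < 1e-10
```

The relation follows from the ordinary generating function Σ P_ℓ(u) t^ℓ = (1 − 2ut + t²)^{−1/2} with t = e^{−|T|}, multiplied by e^{−|T|/2}.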
Consider now the expression (C7) for the leading term in ψ₀^s. The first term is proportional to sin²Θ/(δ² + 1 − cos Θ)^{5/2}, where δ is proportional to T. We express this term as a sum of Legendre polynomials by differentiating (C9) twice with respect to u to obtain

1/(e^T + e^{−T} − 2u)^{5/2} = (1/3) Σ_{ℓ=0}^∞ e^{−(ℓ+1/2)T} P″_ℓ(u).
The second term is proportional to
and solving for α and β gives α = β = 1 2+u . Then
The axisymmetric part of the third term (its angle average over Φ) vanishes.
We next use the same techniques to express the subleading terms in ψ₀^s as power series in P_ℓ with coefficients polynomial in ℓ. The axisymmetric parts of the second, fourth, seventh and eighth terms vanish. In each of the remaining terms, Eq. (C12) is used to expand an expression involving a power of δ² + 1 − cos Θ, and in each case we find that the term is proportional as a distribution to the sum Σ_{ℓ=0}^∞ (ℓ + 1/2) P_ℓ(cos θ); that sum is a δ-function with support at Θ = 0 [48]:

Σ_{ℓ=0}^∞ (ℓ + 1/2) P_ℓ(cos θ) = δ(1 − cos θ). (C17)

Because ₂Y_ℓ0 vanishes at Θ = 0, the expansion of the singular field in terms of ₂Y_ℓ0 has no subleading contribution. We verify the claimed form of each of the subleading terms in Eq. (C3) as follows: The first term is proportional to δ⁵/(δ² + 1 − cos Θ)^{7/2}, and Eq. (C12) gives

lim_{δ→0+} δ⁵/(δ² + 1 − cos Θ)^{7/2} ≐ (2/5) Σ_ℓ (ℓ + 1/2) P_ℓ(cos Θ).
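The distributional identity (C17) can be illustrated numerically (an added sanity check, not part of the paper's derivation): integrating the partial sums of Σ(ℓ + 1/2)P_ℓ against a smooth test function g(u) reproduces g(1), the value at Θ = 0. Here g(u) = e^u, chosen arbitrarily:

```python
import numpy as np
from scipy.special import eval_legendre

# Gauss-Legendre nodes/weights for integrals over u = cosΘ on [-1, 1]
u, w = np.polynomial.legendre.leggauss(200)
g = np.exp(u)  # smooth test function

def delta_action(lmax):
    """∫ g(u) Σ_{l<=lmax} (l + 1/2) P_l(u) du, which should tend to g(1) = e."""
    total = 0.0
    for l in range(lmax + 1):
        total += (l + 0.5) * np.sum(w * g * eval_legendre(l, u))
    return total

assert abs(delta_action(40) - np.e) < 1e-10
```

Each term (ℓ + 1/2)∫g P_ℓ du is the ℓth Legendre coefficient of g, and since P_ℓ(1) = 1 the partial sums converge to g(1); for an analytic g the convergence is extremely fast.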
The third term is proportional to

δ³ sin²Θ/(δ² + 1 − cos Θ)^{7/2} = 2δ³/(δ² + 1 − cos Θ)^{5/2} − 2δ⁵/(δ² + 1 − cos Θ)^{7/2} + O(ǫ^{−1}),
and Eq. (C12) gives

lim_{δ→0+} δ³ sin²Θ/(δ² + 1 − cos Θ)^{7/2} ≐ (8/15) Σ_ℓ (ℓ + 1/2) P_ℓ(cos Θ). (C20)
The fifth term is proportional to

δ sin⁴Θ/(δ² + 1 − cos Θ)^{7/2} = 4δ/(δ² + 1 − cos Θ)^{3/2} − 4δ⁵/(δ² + 1 − cos Θ)^{7/2} − 4δ³ sin²Θ/(δ² + 1 − cos Θ)^{7/2},
We have obtained the leading terms as series of Legendre polynomials, and we now convert them to series involving ₂Y_ℓ0. We begin with P^(2)_ℓ. We use the relation between ₛY_ℓm and D^ℓ_{−s,m}(Θ, Φ, 0), the representation matrix for the rotation group [50], to write ₂Y_ℓ0 = Y_ℓ2. Then, using P^m_ℓ = [4π/(2ℓ + 1) · (ℓ + m)!/(ℓ − m)!]^{1/2} Y_ℓm, we have
P^(2)_ℓ(cos Θ) = Σ_{ℓ′} C_{ℓℓ′} ₂Y_{ℓ′0}(Θ, 0), (C26)

where

C_{ℓℓ′} = [4π(ℓ − 1)ℓ(ℓ + 1)(ℓ + 2)/(2ℓ + 1)]^{1/2} δ_{ℓℓ′}. (C27)
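The identity ₂Y_ℓ0 = Y_ℓ2 together with the coefficient (C27) can be spot-checked numerically. The snippet below is an addition; it assumes SciPy's conventions, where sph_harm takes (m, l, azimuth, polar) and lpmv includes the Condon–Shortley phase (which is +1 for m = 2), and verifies P^(2)_ℓ(cos Θ) = C_ℓℓ Y_ℓ2(Θ, 0):

```python
import numpy as np
from scipy.special import lpmv, sph_harm

def C(l):
    # C_{ll} = sqrt(4π (l-1) l (l+1) (l+2) / (2l+1)) from Eq. (C27)
    return np.sqrt(4*np.pi * (l - 1)*l*(l + 1)*(l + 2) / (2*l + 1))

theta = 0.7  # an arbitrary polar angle
for l in (2, 3, 5, 8):
    P2 = lpmv(2, l, np.cos(theta))          # associated Legendre P_l^2
    Y = sph_harm(2, l, 0.0, theta).real     # Y_l2 at azimuth 0 (real there)
    assert abs(P2 - C(l) * Y) < 1e-12
```

Note that (ℓ − 1)ℓ(ℓ + 1)(ℓ + 2) is just (ℓ + 2)!/(ℓ − 2)!, so (C27) is the inverse of the standard normalization relating Y_ℓ2 to P^2_ℓ.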
Regarding P_ℓ(cos Θ) and P′_ℓ(cos Θ) as elements of L²(S²), we find⁵

P_ℓ(cos Θ) = Σ_{n=2}^∞ A_ℓn P^(2)_n(cos Θ), (C28)
P′_ℓ(cos Θ) = Σ_{n=2}^∞ B_ℓn P^(2)_n(cos Θ), (C29)

Footnote 5: That is, the coefficients A_ℓn and B_ℓn are obtained from the inner products of P^(2)_n with P_ℓ(cos Θ) and P′_ℓ(cos Θ), and the relations are implied by L² completeness of ₂Y_ℓm.
where

A_ℓn := [(2n + 1)(n − 2)!/(2(n + 2)!)] [−2ℓ(ℓ − 1)/(2ℓ + 1) δ_ℓn + 4 d_ℓn], (C30)
B_ℓn := [(2n + 1)(n − 2)!/(2(n + 2)!)] 2ℓ(ℓ + 1) d_{ℓ−1,n}, (C31)

with d_ℓn = 1 if n − ℓ is a positive even integer, and 0 otherwise.
The form of these coefficients was found by numerical experiment for ℓ, n < 10 and then checked as rational numbers for larger values. The second term in the expression for the leading part of the singular field is proportional to the right side of Eq. (C16); and we now show for each ℓ that the bracketed expression vanishes as an element of L 2 . We have
P′_ℓ + (ℓ + 1/2)² P_ℓ = Σ_{ℓ′=0}^∞ [B_{ℓℓ′} + (ℓ + 1/2)² A_{ℓℓ′}] C_{ℓ′ℓ′} ₂Y_{ℓ′0}. (C32)
For ℓ′ even, the sum over ℓ in Eq. (C16) is proportional to

A_{ℓℓ′} + (ℓ + 1/2)² B_{ℓℓ′} = 0, for ℓ′ even,

[2(ℓ′ + 2)!/((2ℓ′ + 1)(ℓ′ − 2)!)] Σ_{ℓ=0}^∞ [A_{ℓℓ′} + (ℓ + 1/2)² B_{ℓℓ′}] = Σ_{ℓ=0}^∞ −(ℓ + 1/2) ℓ(ℓ − 1) δ_{ℓℓ′} + …
follows. A similar manipulation yields the same identity for ℓ′ odd. We conclude that the projection of the distribution Σ_{ℓ=0}^{ℓ_max} [P′_ℓ + (ℓ + 1/2)² P_ℓ] along ₂Y_ℓ0 vanishes. Then, of the three terms in Eq. (C7) for ψ₀^s, only the term proportional to sin²Θ/ρ⁵ is nonzero when written as a sum over ₂Y_ℓ0; and the axisymmetric part of ψ₀^s has, to subleading order, the form …. In using Eq. (40) to calculate A and B, we ignore the term involving ∂_tΨ ∝ mΩΨ, because it is smaller than the leading term by three powers of ℓ.
We numerically calculate a^{r,ret}[h_11]. Shown below is a table of the fractional error in A and B when found numerically for the self-force's contribution from the axisymmetric part of h_11.
PACS numbers: 04.30.Db, 04.25.Nx, 04.70.Bw
by applying the operators O αβ 0 and O αβ 0 of Eqs. (50a) and (50b) to the singular metric (9). Eqs. (53) and (54) are valid in general Petrov type D spacetimes.
C. Mass and Spin: The Remaining MetricThe CCK reconstruction of the metric perturbation from ψ 0 gives a perturbed metric for which there is no change in the mass and angular momentum. For a Schwarzschild background, this is immediate from the fact that ψ ret 0 involves only values of ℓ with ℓ ≥ 2, because the construction of the perturbed metric preserves the value of ℓ.For a Kerr background, both ψ ret 0 and Ψ ret are a sum of spheroidal harmonics with spin-weight 2, and their expression (47) in terms of spin-weighted spherical harmonics involves only harmonics with ℓ ≥ 2. A mass perturbation requires an ℓ = 0 part of the asymptotic perturbed metric at O(r −1 ). The operator in Eq. (32) mixes different values of ℓ, but it differs from its Schwarzschild form by terms smaller by O(r −1 ) than its leading terms. That is, both ψ ret 0 and Ψ ret are O(r −5 ), the spherically symmetric part of the operator (32) is O(r 4 ), and terms that mix different values of ℓ are O(r 3 ), implying h ret,RG αβ has no ℓ = 0 part at O(r −1 ). A perturbation of angular momentum requires an ℓ = 1 contribution at O(r −2 ) in h 13 . To see that it also vanishes, we show that the correction in a Kerr background to the Schwarzschild expression for h ret 13 is asymptotically O(r −3 ). The operator that relates h ret 13 to Ψ in Eq. (32) is (δ − 3α +β + 5π +τ)(∆ + µ − 4γ) + (∆ + 5µ −μ− 3γ −γ)(δ − 4α + π). Now D is a derivative along an outgoing null ray. In the Kerr coordinates of Eq. (41), Df (u, r, θ,φ) = ∂ r f (u, r, θ,φ). Because each time harmonic of Φ has the form Φ = S(θ,φ)e −iωu r −5 [1 + O(r −1 )], DΦ = S(θ,φ)e −iωu O(r −6 ); because the spin coefficients are O(r −1 ) or smaller, h ret 13 falls off like r −2 , and the correction to its Schwarzschild behavior is at O(r −3 ).
(l^µ) = (1/f(r), 1, 0, 0), (n^µ) = (1/2)(1, −f(r), 0, 0), (m^µ) = (1/(√2 r))(0, 0, 1, i/sin θ).
… 843 × 10^-13  1.067 × 10^-14
35  -0.00002455304484596332  -0.00002455304484594233  8.549 × 10^-13  8.341 × 10^-15
40  -0.00001634080095822354  -0.00001634080095821053  7.960 × 10^-13  5.325 × 10^-15
45  -0.00001141847437793787  -0.00001141847437792919  7.605 × 10^-13  3.673 × 10^-15
50  -0.000008290448479296679  -0.000008290448479290278  7.722 × 10^-13  2.807 × 10^-15
55  -0.000006208226467936966  -0.000006208226467932644  6.961 × 10^-13  1.912 × 10^-15
60  -0.000004768831841073202  -0.000004768831841069988  6.739 × 10^-13  1.433 × 10^-15
70  -0.000002990259098529844  -0.000002990259098527988  6.209 × 10^-13  8.488 × 10^-16
80  -0.000001996831417921701  -0.000001996831417920439  6.316 × 10^-13  5.893 × 10^-16
where a_0 and b_0 are functions of u, θ, φ. Now h̃_34 = ∇_3ξ_4 + ∇_4ξ_3, and Eq. (A8) implies ξ_3 = O(r^{−1}), whence h̃_34 = O(r^{−2}). Since the right side of Eq. (A9) is O(r^{−2}) only if a_0 = b_0 = 0, we have h_34 = 0. For each harmonic, it is not difficult to show that Eqs. (A8) give the unique solution to Eqs. (A7) for which the components h^{IRG}_µν vanish asymptotically for ω ≠ 0 and vanish faster than r^{−1} for ω = 0: Any other gauge transformation differs from the solution (A8) by a solution to the homogeneous equations, to Eqs. (A7) with h_µν = 0. Their general solution is given by Eqs. (A10).
For ω nonzero, h_22 vanishes asymptotically only if k_1 = 0 and k_2 = 0; and h_33 vanishes asymptotically only if k_3 = 0. Similarly, for ω = 0, h_22 = o(r^{−1}) only if k_1 = k_2 = 0; and h_33 = o(r^{−1}) only if k_3 = 0. For a Kerr background, we similarly find the harmonics of the gauge vector in Kerr coordinates u, r, θ, φ̃ of Eq. (41). Harmonics of the gauge vector have the form
When ω = 0, asymptotic flatness follows from the asymptotic conditions (A1). Again, because ξ_3 = O(r^{−1}), we have ∇_3 ξ_4 = O(r^{−2}), and the Price et al. relation then implies h = −2h_34 = 0.
The operator −D² + 1/4 is positive definite with eigenvalues L². Definition. For positive integers n, the Sobolev space H^n(S) is the completion of smooth functions in the norm ‖·‖_n defined by ‖f‖_n² = ∫dΩ f̄ (−D² + 1/4)^n f. Because the right-hand side is a sum of terms of the form C ∫dΩ f̄ D^{2k} f, k = 0, . . . , n, a function has finite norm if and only if the function and its first n derivatives are square integrable. In particular, H⁰(S) = L²(S). Because the operator −D² + 1/4 is positive definite, it has a well-defined square root, and we can write the definition of the norm in the more concise form ‖f‖_n = ‖D^n f‖, with D := (−D² + 1/4)^{1/2}.
(Again, for ǫ = 0, the sum diverges logarithmically.) Thus (1 − cos θ)^{−1/2} ∈ H^{−ǫ}. Because [2(1 − cos θ)]^{−1/2} differs from θ^{−1} by O(θ²), the function θ^{−1} and any other function with the same singular behavior belong to H^{−ǫ}. From Eq. (B5), successive applications of D² imply θ^{−n} ∈ H^{1−n−ǫ}. If a function has singular behavior θ^{−n} for integer n and if f_ℓm has singular behavior L^s, for some s, it follows that s = n − 3/2.
subsequent functions obtained by taking derivatives can be regarded as projections of the corresponding derivatives of the distribution (C10). In particular, taking successive derivatives with respect to T gives the relation (Eq. (D11) of DMW) lim T →0
The second term is proportional to δ²/(δ² + 1 − cos Θ)^{5/2}. The fact that it is O(ǫ^{−3}) suggests that it can be written as a linear combination of derivatives of (e^T + e^{−T} − 2u)^{−1/2}:

T²/(e^T + e^{−T} − 2u)^{5/2} = α ∂_u (e^T + e^{−T} − 2u)^{−1/2} + β ∂²_T (e^T + e^{−T} − 2u)^{−1/2},
The subscript S-SL refers to singular subleading. From Eqs. (40), (32) and (D1), we find a[h 11 ] = AL + B + O(L 3M ) 1/2 (−20c 3 M + 2c 4 (7M − 2r 0 ) + 5c 6 (r 3M
[h 11 ] by matching it to a series in L of the form a r,ret h11,0 = AL + B +
TABLE I: Relations between the gauge-invariant Weyl scalars and the Hertz potentials in the two radiation gauges.
TABLE II:
In fact, the argument shows that each component hµν , regarded as a scalar, has even parity at leading order in ρ.
Note that the formal integral of the Green's function also gives a δ-function contribution with support on the trajectory, namely −4πmΩ 2 u t f −1 0 δ(r − r 0 )δ(cos θ)δ(φ − Ωt).
Acknowledgments

For a number of helpful discussions, we thank Leor Barack, Steven Detweiler, Samuel Gralla, Scott Hughes, Eric Poisson, Robert Wald, Bernard Whiting, and Alan Wiseman. Both Barack and an anonymous referee provided corrections and suggested improvements to an earlier version of this paper. This work was supported in part by NSF Grant PHY 0503366. D.H.K.'s work was supported by the Alexander von Humboldt Foundation's Sofja Kovalevskaja Programme.

Appendix A: Gauge transformations

Gauge transformation from Lorenz to radiation gauge

We consider here the perturbed radiation-gauge metrics that describe asymptotically flat vacuum metric perturbations involving no linear change in mass or angular momentum. For harmonic time dependence, outgoing perturbations of this kind have, in a Lorenz (transverse-tracefree) gauge, the asymptotic behavior h_αβ = ĥ_αβ e^{−iωu}, with ĥ_αβ satisfying the conditions (A1), and they therefore satisfy the IRG condition to O(r^{−2}).

We first find a corresponding asymptotically flat radiation-gauge metric perturbation by exhibiting a gauge transformation from the given Lorenz-gauge metric to an asymptotically flat metric satisfying the exact IRG conditions. We then show that there is an asymptotically vanishing gauge transformation to an asymptotically flat ORG metric perturbation. Uniqueness of the gauge transformation implies that this coincides with the asymptotically flat ORG metric obtained from the ORG Hertz potential of Eq. (40). Here, for asymptotic flatness, we are requiring only that the tetrad components h_µν have, for harmonic time dependence, the form e^{−iωu} O(r^{−1}); and for a time-independent perturbation involving no change in mass, angular momentum, or asymptotic dipole moment, h_µν = o(r^{−1}).

IRG metric for outgoing radiation. We obtain as follows the gauge transformation from a Lorenz gauge to an IRG metric perturbation. We begin with the transformation for a Schwarzschild background and then generalize it to Kerr.
The transformation is described most simply in coordinates u, r, θ, φ, with u the outgoing null coordinate. The harmonics of ξ_α and h^ret_αβ are then given by Eqs. (A4) and (A5), with the corresponding spin-weighted harmonics for the tetrad components of the metric perturbation. The radiation gauge condition l^β h_αβ = 0 has components which, in explicit form, are Eqs. (A7), with solution Eqs. (A8).

Appendix D: Comparison of analytic and numerical computation of self-force

To show how accurately one can recover the leading and the subleading terms in L in the mode sum expression for a^s_r by numerically matching a power series in L to the values of a^ret_ℓ, we will present an example where we know A and B analytically: the contribution to the self-force from the part of h_11 that is axisymmetric about a radial line through a particle in circular orbit in a Schwarzschild background. The contribution to the self-acceleration from h_11 is given in Eq. (D1).

To compute the leading and subleading terms analytically requires us to find the leading and subleading terms of the radial and time derivatives of ψ^S_0. From Eq. (C1), we have …, with only two terms surviving the angle-average over Φ. The first term is proportional to …. The second term is proportional to …. A similar calculation for the radial derivative of the leading singular field gives us …. Here the superscript S-L refers to the singular leading. The dot and prime represent time and radial derivatives, respectively.

To find the t and r derivatives of the subleading terms, we need to assume the following result, directly verified only for small n:

lim_{δ→0+} δ^{2n}/(δ² + 1 − cos Θ)^{n+3/2} ≐ 0, n ≥ 1. (D7)
[1] J. R. Gair et al., Class. Quantum Grav. 21, S1595 (2004), arXiv:gr-qc/0405137.
[2] T. A. P. (for members of the LISA International Science Team) (2009), arXiv:0903.0103.
[3] C. C. B. F. Schutz, J. Centrella, and S. A. Hughes (2009), arXiv:0903.0100.
[4] B. F. Schutz, Nature 323, 310 (1986).
[5] C. L. MacLeod and C. J. Hogan, Phys. Rev. D 77, 043512 (2008), arXiv:0712.0618.
[6] L. Barack and C. Cutler, Phys. Rev. D 69, 082005 (2004), arXiv:gr-qc/0310125.
[7] L. M. Burko, Phys. Rev. D 67, 084001 (2003).
[8] S. Drasco, É. É. Flanagan, and S. A. Hughes, Class. Quantum Grav. 22, 801 (2005), arXiv:gr-qc/0505075.
[9] T. Tanaka, Prog. Theor. Phys. Suppl. 163, 120 (2006), arXiv:gr-qc/0508114.
[10] É. É. Flanagan and T. Hinderer, Phys. Rev. D 75, 124007 (2007), arXiv:0704.0389.
[11] A. Pound and E. Poisson, Phys. Rev. D 77, 044012 (2008).
[12] E. Poisson, Living Rev. Relativity 7, 6 (2004), arXiv:gr-qc/0306052.
[13] L. Barack (2009), arXiv:0908.1664 [gr-qc].
[14] S. Detweiler, Class. Quantum Grav. 22, S681 (2005), arXiv:gr-qc/0501004.
[15] Y. Mino, M. Sasaki, and T. Tanaka, Phys. Rev. D 55, 3457 (1997), arXiv:gr-qc/9606018.
[16] T. C. Quinn and R. M. Wald, Phys. Rev. D 56, 3381 (1997), arXiv:gr-qc/9610053.
[17] S. Detweiler and B. F. Whiting, Phys. Rev. D 67, 024025 (2003), arXiv:gr-qc/0202086.
[18] S. A. Teukolsky, Phys. Rev. Lett. 29, 1114 (1972).
TABLE III: The table compares values of regularization parameters calculated analytically to values obtained numerically by matching the retarded field to a series in (ℓ + 1/2); the quantity that is regularized is a^{r,ret}[h_11], as described in the text. The first column lists orbital radius in units of Schwarzschild mass; the second and the fifth columns list the analytically computed leading and the subleading regularization parameters A and B; the third and the sixth columns list the numerically obtained values of A and B; and the fourth and the seventh columns list fractional differences between the analytic and numerical values.
| [] |
[
"Evaluating 0-0 Energies with Theoretical Tools: a Short Review"
] | [
"Pierre-François Loos \nLaboratoire de Chimie et Physique Quantiques\nUniversité de Toulouse\nCNRS\nUPS\nFrance\n",
"Denis Jacquemin \nLaboratoire CEISAM -UMR CNRS 6230\nUniversité de Nantes\n2 Rue de la Houssinière, BP 9220844322, Cedex 3NantesFrance\n"
] | [
"Laboratoire de Chimie et Physique Quantiques\nUniversité de Toulouse\nCNRS\nUPS\nFrance",
"Laboratoire CEISAM -UMR CNRS 6230\nUniversité de Nantes\n2 Rue de la Houssinière, BP 9220844322, Cedex 3NantesFrance"
] | [] | For a given electronic excited state, the 0-0 energy (T 0 or T 00 ) is the simplest property allowing straightforward and physically-sound comparisons between theory and (accurate) experiment. However, the computation of 0-0 energies with ab initio approaches requires determining both the structure and the vibrational frequencies of the excited state, which limits the quality of the theoretical models that can be considered in practice. This explains why only a rather limited, yet constantly increasing, number of works have been devoted to the determination of this property. In this contribution, we review these efforts with a focus on benchmark studies carried out for both gas-phase and solvated compounds. Over the years, not only has the size of the molecules increased, but the refinement of the theoretical tools has followed the same trend. Though the results obtained in these benchmarks significantly depend on both the details of the protocol and the nature of the excited states, one can now roughly estimate, in the case of valence transitions, the overall accuracy of theoretical schemes as follows: 1 eV for CIS, 0.2-0.3 eV for CIS(D), 0.2-0.4 eV for TD-DFT when one employs hybrid functionals, 0.1-0.2 eV for ADC(2) and CC2, and 0.04 eV for CC3, the latter approach being the only one delivering chemical accuracy on a near-systematic basis. | 10.1002/cptc.201900070 | [
"https://arxiv.org/pdf/1903.02450v1.pdf"
] | 118,963,318 | 1903.02450 | 660735c89a430e7ca6f9827aa3589bfc88ff3a31 |
Evaluating 0-0 Energies with Theoretical Tools: a Short Review
Pierre-François Loos
Laboratoire de Chimie et Physique Quantiques
Université de Toulouse
CNRS
UPS
France
Denis Jacquemin
Laboratoire CEISAM -UMR CNRS 6230
Université de Nantes
2 Rue de la Houssinière, BP 9220844322, Cedex 3NantesFrance
Evaluating 0-0 Energies with Theoretical Tools: a Short Review
For a given electronic excited state, the 0-0 energy (T 0 or T 00 ) is the simplest property allowing straightforward and physically-sound comparisons between theory and (accurate) experiment. However, the computation of 0-0 energies with ab initio approaches requires determining both the structure and the vibrational frequencies of the excited state, which limits the quality of the theoretical models that can be considered in practice. This explains why only a rather limited, yet constantly increasing, number of works have been devoted to the determination of this property. In this contribution, we review these efforts with a focus on benchmark studies carried out for both gas-phase and solvated compounds. Over the years, not only has the size of the molecules increased, but the refinement of the theoretical tools has followed the same trend. Though the results obtained in these benchmarks significantly depend on both the details of the protocol and the nature of the excited states, one can now roughly estimate, in the case of valence transitions, the overall accuracy of theoretical schemes as follows: 1 eV for CIS, 0.2-0.3 eV for CIS(D), 0.2-0.4 eV for TD-DFT when one employs hybrid functionals, 0.1-0.2 eV for ADC(2) and CC2, and 0.04 eV for CC3, the latter approach being the only one delivering chemical accuracy on a near-systematic basis.
I. INTRODUCTION
Most theoretical works investigating the photophysical or photochemical properties of molecules and materials intend to provide insights supplementing experimental measurements. To this end, it is most often necessary to apply first-principle approaches allowing to model electronic excited states (ES). A wide array of such approaches is now available to theoretical chemists. Probably, the two most prominent ES methods are i) time-dependent density-functional theory (TD-DFT), 1 which was originally proposed by Runge and Gross, 2 but became very popular under the efficient linear-response (LR) formalism developed by Casida in 1995, 3 and ii) multi-configuration/complete active space self-consistent field (MCSCF/CASSCF) theories, 4 which are inherently adapted to model photochemical events. However, both approaches suffer from significant drawbacks. As TD-DFT has been applied for modeling thousands of molecules, the deficiencies of its common adiabatic approximation are now well known, and one can cite important difficulties in accurately modeling charge-transfer states, [5][6][7][8] Rydberg states, [9][10][11][12] singlet-triplet gaps, [13][14][15][16] as well as ES characterized by a significant double excitation character. 10,17,18 In addition, even for "well-behaved" low-lying valence ES, TD-DFT presents a rather significant dependency on the exchange-correlation functional (XCF), 19 and choosing an appropriate XCF remains a difficult task. Similarly, there is also no unambiguous way to select an active space in CASSCF calculations, a method that additionally yields too large transition energies as it does not account for dynamical correlation effects. Beyond these two very popular theories, there exist many alternatives.
In the case of single-determinant methods, let us cite i) the Bethe-Salpeter formalism applied on top of the GW approximation (BSE@GW), which can be considered as a beyond-TD-DFT approach and has shown some encouraging performances for chemical systems, 20 ii) the configuration interaction singles with a perturbative double correction [CIS(D)], 21,22 the simplest post-Hartree-Fock (HF) method providing reasonably accurate transition energies, iii) the algebraic diagrammatic construction (ADC) approach, 23 whose second-order approximation, ADC(2), enjoys a very favorable accuracy/cost ratio, and iv) coupled cluster (CC) schemes, which allow for a systematic theoretical improvement via an increase of the expansion order (e.g., comparing CC2, 24 CCSD, 25,26 CC3, 24 etc. results), though such a strategy comes with a quick inflation of the computational cost. It is also possible to improve CASSCF results by including dynamical correlation effects, typically by applying a second-order perturbative (PT2) correction such as in CASPT2 27,28 or in second-order n-electron valence state perturbation theory (NEVPT2). 29 Both theories greatly improve the quality of the transition energies, but become impractically demanding for medium and large systems. Alternatively, one can also compute very high quality transition energies for various types of excited states using selected configuration interaction (sCI) methods, [30][31][32] which have recently demonstrated their ability to reach near full CI (FCI) quality energies for small molecules. [33][34][35][36][37][38][39] The idea behind such methods is to avoid the exponential increase of the size of the CI expansion by retaining only the most energetically relevant determinants, thanks to the use of a second-order energetic criterion to select determinants perturbatively in the FCI space.
a) Electronic mail: [email protected]
40,41 However, although the "exponential wall" is pushed back, this type of method is only applicable to molecules with a small number of heavy atoms and relatively compact basis sets.
Beyond these important methodological aspects, another issue is that most ab initio calculations of ES properties do not offer direct comparisons with experiment. This is in sharp contrast with ground state (GS) properties for which such comparisons are often straightforward. For instance, "experimental" ES dipole moments are often determined by indirect procedures, such as the measurement of solvatofluorochromic effects, so that rather large error bars are not uncommon. Another example comes with geometries: while there exists an almost infinite number of GS geometries obtained through X-ray diffraction techniques for molecules of any size and nature, the experimental determination of ES geometrical parameters remains tortuous, as it typically originates from an analysis of highly-excited vibronic bands. As a consequence, experimental ES structures are available only for a handful of small compounds, prohibiting comparisons between theory and experiment for non-trivial structures. Although, for both ES dipole moments and geometries, theoretical approaches have therefore a clear edge over their experimental counterparts, such calculations nevertheless require access to ES energy gradients, which limits the number of methods that can be applied to non-trivial compounds. Besides, the most commonly reported theoretical ES data, that is, vertical absorption energies, have no experimental counterpart, as they correspond to vibrationless differences between total ES and GS energies at the GS geometry (E vert abs in Figure 1). As a consequence, they can be used to compare trends in a homologous series of compounds, 42 but are rather useless when one aims for quantitative theory-experiment comparisons. Therefore, the simplest ES properties that are well-defined both theoretically and experimentally are the 0-0 energies (E 0-0 , sometimes denoted T 0 or T 00 ).
For a given ES, the 0-0 energy corresponds to the difference between the ES and GS energies at their respective geometrical minima, the adiabatic energy E adia (sometimes denoted T e ), corrected by the difference of zero-point vibrational energies between these two states (∆E ZPVE ). For gas-phase molecules with a well-resolved vibronic spectrum, E 0-0 can be directly measured with uncertainties of the order of 1 cm −1 . In other words, extremely accurate experimental data are available. In solution, E 0-0 is generally defined as the crossing point between the measured (normalized) absorption and emission spectra. On the theory side, whilst E 0-0 is a well-defined quantity, its calculation is no cakewalk, notably due to the ∆E ZPVE term, which necessitates the estimation of the ES vibrational frequencies.
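To make these definitions concrete, here is a minimal sketch (the adiabatic energy and harmonic frequencies below are hypothetical numbers, not taken from any benchmark discussed here) assembling E 0-0 = E adia + ∆E ZPVE, with the ZPVE of each state computed from its harmonic frequencies:

```python
# Minimal sketch: E_0-0 = E_adia + Delta E_ZPVE, with
# Delta E_ZPVE = ZPVE(ES) - ZPVE(GS) and ZPVE = (1/2) * sum(h*nu).
# Frequencies are given in cm^-1 and converted to eV.

CM1_TO_EV = 1.0 / 8065.54429  # 1 eV corresponds to ~8065.54 cm^-1


def zpve_ev(freqs_cm1):
    """Harmonic zero-point vibrational energy (eV) from frequencies in cm^-1."""
    return 0.5 * sum(freqs_cm1) * CM1_TO_EV


def e00(e_adia_ev, freqs_gs_cm1, freqs_es_cm1):
    """0-0 energy (eV) from the adiabatic energy and the two sets of frequencies."""
    return e_adia_ev + zpve_ev(freqs_es_cm1) - zpve_ev(freqs_gs_cm1)


# Hypothetical diatomic: GS frequency 1800 cm^-1, ES frequency 1500 cm^-1,
# E_adia = 4.00 eV. The ES is usually floppier, so Delta E_ZPVE < 0 here.
print(f"E_0-0 = {e00(4.00, [1800.0], [1500.0]):.4f} eV")  # E_0-0 = 3.9814 eV
```

Note the sign convention: since ES potential wells are typically shallower than the GS one, ∆E ZPVE is usually negative, consistent with the ca. −0.12 eV average reported later in this review.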
In the present mini-review, we will consider previous works dealing with theory-experiment comparisons for E adia or E 0-0 energies. As expected, over the years, the methods available to compute E 0-0 have dramatically improved, and so has the accuracy. Here, we focus on benchmark studies tackling a significant number of diverse molecules with first-principle methods. We do not intend to provide an exhaustive list of the works considering only one or two compounds and their comparison with experiment, or a specific chemical family of compounds. For the second category, the interested reader can find several works devoted to, e.g., fluoroborate derivatives, [43][44][45] biological chromophores, 46,47 DNA bases, 48 cyanines, 49 coumarins, 50 as well as many other works focussed on band shapes rather than E 0-0 energies. [51][52][53][54][55][56][57][58][59]

FIG. 1. Representation of transition energies between two potential energy surfaces. E vert abs (blue) and E vert fluo (red) are the (vertical) absorption and fluorescence energies, whereas E GS reorg and E ES reorg (orange) are the (geometrical) reorganization energies of the GS and ES states, respectively. E vert abs and E 0-0 , our main interests here, are defined in green and purple, respectively.

II. 0-0 ENERGIES COMPUTED IN GAS PHASE

In this Section, we review the theoretical investigations relying on gas-phase calculations to obtain E adia or E 0-0 . Though there is no universal classification for molecule sizes, we first discuss works focussing on small compounds, that is, sets of compounds largely dominated by di- and tri-atomic molecules, before turning to medium (e.g., benzene) and large (e.g., real-life dyes) molecules in the second subsection. The main information associated with the various studies discussed below is summarized in Table I.

TABLE I: Statistical analysis of the results obtained in various benchmarks comparing gas-phase E adia or E 0-0 computations to experimental data.
MSE and MAE are the mean signed and mean absolute errors, and are given in eV. When a different method was used to compute E adia and to obtain the structures (and ZPVE corrections), this is indicated using the usual "//" notation.

A. Small compounds

To the best of our knowledge, one of the first investigations of adiabatic energies is due to Stanton and coworkers, 64 who compared the performances of CIS, CIS(D), and CCSD for the computation of E adia in six diatomic molecules (H 2 , BH, CO, N 2 , BF, and C 2 ) in 1995. For such small molecules, it is possible to analyze the spectroscopic data 82 to obtain directly experimental E adia rather than E 0-0 . 83 Three atomic basis sets were considered, namely 6-31G(d), aug-cc-pVDZ, and aug-cc-pVTZ; we report only the results obtained with the largest basis in Table I. It is crystal clear that the CIS method is very far from experiment even for these quite simple molecules, with errors ranging from +0.99 eV (N 2 ) to −2.34 eV (C 2 ). The inclusion of the perturbative doubles vastly improves the estimates, with a mean absolute error (MAE) of 0.27 eV. Nonetheless, CIS(D) systematically overshoots the experimental values for this particular set. CCSD further reduces the absolute error but underestimates E adia in each case. We note that such an error sign is rather unusual for CCSD. Indeed, this approach generally delivers, for valence ES, too large transition energies. 84,85 The trend obtained in this early study is therefore most probably related to the size of the considered molecules. 39

A second key investigation is due to Furche and Ahlrichs (FA), 65,86 who benefited from pioneering developments and an efficient implementation of TD-DFT energy gradients. 87 Using this approach, they investigated around thirty small-size compounds (except for glyoxal, pyridine, benzene, and porphyrin) using a quite large basis set and several XCF.
As can be seen in Table I, the two HF-based approaches, CIS and TD-HF, deliver very large errors, with a positive MSE, as expected for methods neglecting dynamical correlation. All the XCF tested within TD-DFT give a MAE in the 0.25-0.32 eV range, with no clear-cut advantage for hybrids over semi-local functionals, an outcome probably related to the size of the molecules. Small subsets of the original FA set were considered by Chiba et al. 88 and Nguyen et al. 73 for the testing of their own implementations of TD-DFT gradients for range-separated hybrids (not shown in Table I). In 2003, Köhn and Hättig (KH) estimated transition energies for a similar set as FA with their own implementation of CC2 gradients. 61 These authors considered several atomic basis sets, and we report in Table I the data computed with the quadruple-ζ basis, though the deviations with respect to the triple-ζ basis are rather insignificant.
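The MSE and MAE statistics reported throughout Table I are simply signed and absolute averages of the theory-minus-experiment deviations. A minimal sketch, using hypothetical E 0-0 values rather than any data from the benchmarks above:

```python
# MSE = mean(theory - experiment); MAE = mean(|theory - experiment|); both in eV.
# The numbers below are hypothetical, for illustration only.

def mse_mae(theory_ev, exp_ev):
    """Return (MSE, MAE) for two equal-length lists of energies in eV."""
    errors = [t - e for t, e in zip(theory_ev, exp_ev)]
    n = len(errors)
    return sum(errors) / n, sum(abs(x) for x in errors) / n


theory = [4.05, 3.62, 5.10]      # hypothetical computed E_0-0 (eV)
experiment = [4.00, 3.70, 5.00]  # hypothetical measured E_0-0 (eV)
mse, mae = mse_mae(theory, experiment)
print(f"MSE = {mse:+.3f} eV, MAE = {mae:.3f} eV")
```

A near-zero MSE with a sizable MAE signals error compensation (scatter without systematic bias), which is why both statistics are listed side by side in Table I.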
(Table I, fragment: method for E adia // method for structures and ZPVE, then MSE and MAE in eV)

DSD-PBEP86/aug-cc-pVTZ//B3LYP/def2-TZVP       0.03   0.08
PBE0-2/aug-cc-pVTZ//B3LYP/def2-TZVP           0.19   0.21
PBE0-DH/aug-cc-pVTZ//B3LYP/def2-TZVP          0.25   0.28
B2PLYP/aug-cc-pVTZ//SCS-CC2/def2-TZVPP       −0.01   0.10
B2GPPLYP/aug-cc-pVTZ//SCS-CC2/def2-TZVPP      0.07   0.10
DSD-BLYP/aug-cc-pVTZ//SCS-CC2/def2-TZVPP      0.02   0.06
DSD-PBEP86/aug-cc-pVTZ//SCS-CC2/def2-TZVPP   −0.02   0.06
PBE0-2/aug-cc-pVTZ//SCS-CC2/def2-TZVPP        0.15   0.17
PBE0-DH/aug-cc-pVTZ//SCS-CC2/def2-TZVPP       0.25   0.28
Ref. 80 (2018), 35 ES, 31 molecules (medium-size organic):
CC3/aug-cc-pVTZ//CCSDR(3)/def2-TZVPP         −0.01   0.02
CCSDR(3)/aug-cc-pVTZ//CCSDR(3)/def2-TZVPP     0.04   0.05
CCSD/aug-cc-pVTZ//CCSDR(3)/def2-TZVPP         0.21   0.21
CC2/aug-cc-pVTZ//CCSDR(3)/def2-TZVPP          0.
As can be seen, the CC2 MAE (0.17 eV) is significantly smaller than its TD-DFT counterparts. For a work carried out more than 15 years ago, it is remarkable that a CC2 estimate of E 0-0 could be computed for a quite large molecule such as azobenzene. The KH set was employed twice in the following years. First, by Rhee, Casanova, and Head-Gordon in 2009, when they proposed the SOS-CIS(D 0 ) method, which gives a MAE of 0.26 eV. 71 Second, by Liu et al. in 2010, who found that both TD-DFT and its Tamm-Dancoff approximation (TDA) deliver similar average deviations while considering B3LYP and ωB97 as XCF. Indeed, the differences between the TD-DFT and TDA results (average errors of 0.12 and 0.14 eV with B3LYP and ωB97, respectively) are significantly smaller than the discrepancies with respect to experiment. In addition, Hättig's group also considered a similar set of compounds in 2008 to investigate spin-scaled variants of CC2. They found that the average deviations were not significantly altered compared to conventional CC2, and that the spin-scaled versions improved the overall consistency (correlation) compared to experiment. 70 In 2005, Hättig evaluated the performances of various single-reference wavefunction approaches using 19 ES (11 singlet and 8 triplet) determined on four diatomic molecules (N 2 , CO, CF, and BH) using a huge basis set allowing one to approach the complete basis set limit. 68 As can be deduced from Table I, the convergence with respect to the expansion order in the CC series (CIS, CC2, CCSD, CCSDR(3), CC3) is rather erratic. In addition, all approaches (partially) including contributions from the doubles, i.e., CIS(D), ADC(2), CC2, and CCSD, provide similar results, with MAE of ca. 0.2 eV. In contrast, the inclusion of triples, either perturbatively or iteratively, leads to average deviations smaller than 0.10 eV.
To our knowledge, this work was the first demonstration that "chemically accurate" E adia (errors smaller than 1 kcal/mol, or 0.043 eV) could potentially be attained with theoretical methods on an almost systematic basis.
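The chemical-accuracy threshold quoted here is a simple unit conversion; as a quick check (the conversion factors below are standard values, assumed rather than taken from this review):

```python
# Unit-conversion check for the "chemical accuracy" threshold of 1 kcal/mol.
# Assumed standard constants: 1 kcal = 4184 J (thermochemical calorie), and
# 1 eV per particle corresponds to 96485.332 J/mol (Faraday constant).

KCALMOL_TO_EV = 4184.0 / 96485.332

print(f"1 kcal/mol = {KCALMOL_TO_EV:.4f} eV")  # 1 kcal/mol = 0.0434 eV
```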
B. Medium and large compounds
The first studies considering the computation of E 0-0 in larger, "real-life" structures are due to Grimme and his collaborators in 2004. 60,66,67 In the first work of their series, 66 they investigated the vibronic shapes of seven π-conjugated molecules (anthracene, azulene, octatetraene, pentacene, phenoxyl radical, pyrene, and styrene) with TD-B3LYP. The reproduction of the experimental band shapes is generally excellent, but the error in E 0-0 compared to experiment (ranging from −0.69 eV to +0.86 eV) is rather large, leading to the conclusion that the quality of the TD-DFT transition energies has to be blamed rather than the structures, at least for these rigid aromatic molecules. 66 In their second paper, 67 the number of transitions was significantly increased, as they studied 30 singlet-singlet transitions and 13 doublet-doublet transitions in π-conjugated compounds. The calculations were performed with TD-DFT in gas phase with three XCF (BP86, B3LYP, and BHHLYP), and the solvent effects were accounted for by applying an empirical +0.15 eV shift to the experimental 0-0 energies measured in condensed phase. Dierksen and Grimme noted a smooth evolution of the computed E 0-0 energies with the amount of exact exchange included in the functional for the π → π* singlet-singlet transitions, BHHLYP leading to the smallest MAE. 67 Eventually, in Ref. 60, a third test set including 20 π → π* and 12 n → π* transitions, the GI set, was designed to compare the performances of TD-DFT, CIS(D), and one of its spin-scaled variants, namely SCS-CIS(D). For this set, the CIS(D) approach clearly outperforms TD-B3LYP, whereas SCS-CIS(D) does not improve the overall MAE but delivers a more balanced description of the two families of ES. Indeed, CIS(D) yields a significantly smaller MAE (0.10 eV) for the n → π* subset than for its π → π* counterpart (0.25 eV).
The GI set was also used in 2008 to evaluate the performances of several CC2 variants, which all provided MAE around 0.15 eV. 70 Though most wavefunction calculations were performed on TD-DFT geometries, Hellweg et al. also tested the impact of performing CC2 optimizations. Interestingly, they noted almost no major difference for the π → π* states, whereas for the n → π* transitions, CC2 structures significantly redshifted the excitation energies as compared to those obtained with TD-DFT geometries.
The GI set was also used twice by Head-Gordon and coworkers 69,71 to evaluate the performances of spin-scaled variants of the CIS(D) approach. In their first work, the calculations were made on CIS structures, and the SCS-CIS(D) and SOS-CIS(D) approaches both exhibit very good performances (MAE of 0.12 eV for both approaches), a result probably partially due to error compensation. 69 In the second work, the focus was set on the performances of SOS-CIS(D 0 ). 71 In the most refined calculations, a double-ζ basis set was applied to obtain the geometries and ZPVE corrections, whereas E adia was determined with aug-cc-pVTZ. The accuracy of SOS-CIS(D 0 ) is significantly better for the GI set (containing medium-sized compounds) than for the KH set (gathering di-/tri-atomics), indicating that the size of the molecules has a significant influence on the methodological conclusions. In addition, the MSE for the π → π* (+0.14 eV) and n → π* (−0.11 eV) subsets differ with SOS-CIS(D 0 ), further stressing that reaching a balanced description of ES of different natures is difficult.
A decade ago, Nguyen, Day, and Pachter compared TD-DFT/6-311+G(d,p) and experimental adiabatic energies for seven substituted coumarins and two stilbene derivatives exhibiting transitions with a significant charge-transfer character. 73 Unsurprisingly, 8,89 range-separated hybrids clearly deliver more accurate results for this set, the B3LYP E 0-0 being systematically too small.
In 2011, Furche's group came up with another popular set (SKF) of 109 E 0-0 energies obtained in 91 very diverse compounds encompassing small, medium, and large structures for which experimental gas-phase E 0-0 values are available. Special care was taken in order to include diverse compounds (organic/inorganic, aliphatic/aromatic, etc.) and ES (86 singlets, 12 triplets, and 11 spin-unrestricted transitions). 62 The majority of the results were obtained on B3LYP/def2-TZVP structures and ∆E ZPVE , using E adia determined with various XCF and the same def2-TZVP basis set. As detailed below, several protocols were tested. For this diverse set, there is a significant superiority of the hybrid XCF (B3LYP and PBE0) compared to the local and semi-local XCF (Table I), which contrasts with the FA set (containing smaller compounds) discussed above. In Ref. 62, the authors also show that using a (non-augmented) polarized triple-ζ basis provides E 0-0 within ca. 0.03 eV of the basis set limit at the TD-DFT level and that, consistently with Grimme's conclusions, the error on the transition energies must be blamed for the major part of this deviation, the variations of the structural parameters when changing XCF having a minor impact. From this larger set, Furche and coworkers also extracted a subset of 15 representative ES, and performed ADC(2) and CC2 calculations. These two methods were found to behave similarly, and the addition of diffuse functions was found mandatory (in contrast to TD-DFT). For this subset, the MAE is 0.17 eV with CC2, a value consistent with the CC2 MAE obtained for the previously discussed sets. A year later, the same group extended their analysis to variants of the TPSS XCF. 74 They found that the current-dependent formalism for TPSS and TPSSh (cTPSS and cTPSSh) yields larger deviations than the standard formalism. In 2014, Fang, Oruganti, and Durbeej considered a larger number of XCF on a set encompassing all the singlet and triplet transitions of the SKF set.
75 Overall, the most accurate results are attained with CC2, whereas the "standard" global and range-separated hybrids (B3LYP, PBE0, CAM-B3LYP, and ωB97X-D) yield errors around 0.25 eV. Unsurprisingly, CIS and XCF including 100% of exact exchange (M06-HF) substantially overestimate the experimental reference, whereas BP86 gives the opposite error sign. In addition, the authors investigated the errors in 9 chemically-intuitive subsets. For the organic compounds, CC2 was systematically found to outperform TD-DFT in terms of average error, whereas this does not hold for small inorganic compounds. In an effort to come up with a computationally effective protocol, the authors also studied methodological effects on two quantities. First, ∆E 0-0 = E 0-0 − E adia , that is, the ∆E ZPVE correction, which was found to be centered on −0.12 eV, with a very small methodological dependence: the standard deviation determined across the various tested methods was as small as 0.02 eV, and in the 0.01-0.05 eV range for the nine subsets.
This clearly indicates that ∆E ZPVE is rather insensitive to the level of theory, confirming previous studies performed in the same research group 47 and others. 90 Second, they studied ∆E adia = E adia − E vert abs , that is, the ES reorganization energy, E ES reorg . The methodological standard deviation was only 0.10 eV for E ES reorg , as compared to the much larger spread for E vert abs (0.39 eV), indicating that E ES reorg is also much less dependent on the level of theory than the vertical energies, in line with previous observations (see above). 66 Nevertheless, in contrast to ∆E ZPVE , the E ES reorg values cover a broad range depending on the molecule (−0.37 ± 0.30 eV). Later, Furche's 2011 set was also selected to assess semi-empirical approaches (see below for details). 77 Two years later, Hättig and collaborators compared theoretical E 0-0 values to highly accurate gas-phase experimental references for a 66-singlet set strongly dominated by π → π* transitions (63 out of 66) in aromatic organic molecules (substituted phenyls and larger compounds), leading to the WGLH set. 63 They relied on the aug-cc-pVTZ basis set for determining E adia , and the def2-TZVPP basis set for obtaining structures and vibrations. As can be seen in Figure 2, the second-order wavefunction approaches, i.e., ADC(2), CC2, SCS-CC2, and SOS-CC2, performed beautifully, with a tight distribution around the experimental reference and very small average deviations, all below the 0.10 eV threshold. This success is probably partially related to the rather uniform nature of the ES considered in this particular study, as compared to the SKF set. Obviously, TD-B3LYP is clearly less accurate than the wavefunction schemes, though the MAE remains in line with other TD-DFT works. 19 Two simplifications were tested as well: i) removing the diffuse functions for the calculation of the adiabatic energies, which yields slight increases of the MSE by ca.
0.04 eV, but has rather negligible effects on the MAE; ii) using a ∆E ZPVE term obtained at the B3LYP/def2-TZVP level, which only yields a degradation of the MAE by ca. 0.02 eV, confirming the previously reported conclusion that this term can be safely estimated with a lower level of theory. 63 In 2016, Oruganti, Fang, and Durbeej 78 considered the WGLH set with the same philosophy as their 2014 work, 75 i.e., finding simplified protocols delivering accurate 0-0 energies. First, they showed that none of the tested XCF could deliver the same accuracy as CC2, the smallest MAE being obtained with B3LYP (0.20 eV), whereas BP86 and M06-2X E 0-0 deviate much more significantly from experiment (MAE of 0.40 and 0.36 eV, respectively). By using ZPVE corrections computed at the TD-DFT level, the changes in the CC2 E 0-0 values are rather minor (roughly 0.04 eV), whereas using CC2 to obtain E vert abs and TD-DFT to determine both E ES reorg and ∆E ZPVE led to variations ranging from 0.06 to 0.12 eV depending on the XCF, the hybrid functionals clearly outperforming BP86 (and CIS). 78 They concluded: "In fact, for a clear majority of the 66 states CC2-quality E 0-0 can be calculated by employing CC2 only for the vertical term". The WGLH set was also chosen in 2017 by Schwabe and Goerigk in their investigation of spin-scaling effects on the transition energies obtained with double-hybrid XCF. 79 Using the SCS-CC2 geometries of the original paper, they found that both fitted and non-fitted variants of the double hybrids behaved similarly. Using DSD-PBEP86/aug-cc-pVTZ to determine E adia , they reached a MSE of −0.02 eV and a MAE of 0.06 eV, 79 both values being very similar to those reported for the SCS-CC2 method. 63 In 2014, Barnes
While the usual CIS overestimation is extremely large (typically > 1 eV), the performance of CASPT2 is quite remarkable with a MSE of −0.02 eV and a MAE of 0.12 eV. At the TD-DFT level, the authors determined that the most valuable results are obtained with B3LYP, M06-2X, ωB97X-D, and CAM-B3LYP for these open-shell systems. In contrast to other studies, no signi cant di erence was noticed when separately considering the small (di-and tri-atomics) and the medium-sized compounds.
In 2016, Tuna, Thiel, and coworkers proposed an extended benchmark of their OMx/MRCI methods, including calculations of E 0-0 . 77 For 12 cases, they could compare the OM2/MRCI and B3LYP ∆E ZPVE , and an average deviation of 0.04 eV was found, a rather large value for this property, highlighting that the semi-empirical approach is not yet optimal for determining the ZPVE of ESs. As a consequence, they relied on TD-B3LYP ∆E ZPVE in their benchmark study. They investigated compounds of both Furche's 2011 and Hättig's 2013 sets, discarding cases for which the OMx approaches were not parametrized. For the SKF set, the average errors are quite similar to TD-B3LYP (Table I), which is certainly a success. However, the authors noted that OM2 and OM3 yield different error signs for the π → π* (underestimation) and n → π* (overestimation) transitions, whereas TD-B3LYP consistently underestimates the 0-0 energies of both families of transitions. For the WGLH set, which is strongly dominated by π → π* transitions in aromatic organic molecules, the average errors are substantially larger, with MAE of 0.35 eV for both OM2/MRCI and OM3/MRCI, and a clear trend to undershoot E 0-0 .
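The MSE and MAE statistics quoted throughout these comparisons are simple averages of the signed and absolute deviations between theoretical and experimental E 0-0 values. A minimal sketch in Python (the numerical values below are purely illustrative and are not taken from any of the cited benchmarks):

```python
def mse(theory, experiment):
    """Mean signed error: average of (theory - experiment)."""
    return sum(t - e for t, e in zip(theory, experiment)) / len(theory)

def mae(theory, experiment):
    """Mean absolute error: average of |theory - experiment|."""
    return sum(abs(t - e) for t, e in zip(theory, experiment)) / len(theory)

# Hypothetical 0-0 energies (in eV) for four transitions:
e_theory = [3.58, 4.12, 2.95, 5.01]
e_exp = [3.55, 4.20, 2.90, 5.10]

print(f"MSE = {mse(e_theory, e_exp):+.4f} eV")  # a negative MSE signals underestimation on average
print(f"MAE = {mae(e_theory, e_exp):.4f} eV")
```

A small |MSE| combined with a larger MAE, as in several entries of Table I, indicates errors of both signs that partially cancel in the signed average.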
Recently, we have put efforts into reaching very accurate E 0-0 values for non-trivial molecular systems. 80,81 In our first contribution, we have considered singlet ESs determined on molecules containing between 4 and 12 atoms for a set encompassing more n → π* (25) than π → π* (10) transitions. Using CC3 E adia , CCSDR(3) geometries, and B3LYP ∆E ZPVE , not only is the MAE very small (0.02 eV), but chemical accuracy is achieved on an almost systematic basis (ca. 90% success rate). The results for this set are illustrated in Figure 3. As one can see, carbonyl fluoride yields a significant deviation (−0.18 eV), but it has been determined that this case is an outlier, to be removed from the statistics, at the 99% confidence level according to a Dixon Q-test. 80 Data from Table I clearly demonstrate that using lower levels of theory than CC3 to determine E adia significantly degrades the results, with MAE of 0.05, 0.21, and 0.08 eV with CCSDR(3), CCSD, and CC2, respectively. Interestingly, the CC2 MAE is similar to the one obtained on the WGLH set, whereas CCSD tends to exaggerate the transition energies, an observation consistent with other works. 39,84 In addition, using a quadruple-ζ basis set or including anharmonic corrections in the ∆E ZPVE term yields trifling variations for the data of Figure 3. 80 In our most recent work, we have significantly increased both the size and the variety of the considered transitions (69 singlet, 30 triplet, 20 open-shell), with a focus set on the impact of the geometries on the computed E 0-0 . 81 First, the CC3 vertical and adiabatic energies determined on CC3, CCSDR(3), CCSD, CC2, and ADC(2) structures have been compared for a set of 31 singlet transitions. Interestingly, while the level of theory considered to optimize the GS and ES geometries has a very strong impact on the vertical values, it has a very small influence on the adiabatic energies.
For instance, taking the CC3//CC3 values as references, the MAE obtained with the CC3//CCSD method is 0.07 eV for E vert abs , 0.17 eV for E vert fluo , but 0.01 eV for E adia . Therefore, there is a clear error compensation mechanism taking place between the vertical and the reorganization energies in the following expression
$$E_{\mathrm{adia}} = \frac{E^{\mathrm{vert}}_{\mathrm{abs}} + E^{\mathrm{vert}}_{\mathrm{fluo}}}{2} + \frac{E^{\mathrm{GS}}_{\mathrm{reorg}} - E^{\mathrm{ES}}_{\mathrm{reorg}}}{2}. \qquad (1)$$
This has been illustrated for the case of formaldehyde (see Figure 4). On the CC3 geometry, E adia = 3.580 eV, a value dominated by the first term of the previous equation (3.385 eV), the second contributing +0.195 eV. When going to other geometry optimization schemes, one notes significant changes of both terms, with values of 3.385, 3.405, 3.533, 3.350, and 3.364 eV for the former, and 0.195, 0.175, 0.057, 0.244, and 0.278 eV for the latter, when using CC3, CCSDR(3), CCSD, CC2, and ADC(2) geometries, respectively. Nevertheless, their sum (E adia ) is remarkably stable, as seen in Figure 4. In addition, by comparing the experimental and theoretical 0-0 energies produced by combining i) CC3 E vert abs , ii) CCSD geometries, and iii) B3LYP ∆E ZPVE corrections, a trifling MSE of −0.01 eV and a MAE of 0.03 eV are obtained for the set of 119 transitions considered. 81 Concomitantly, this means that, if E adia is determined at a high level of theory, one can obtain very accurate E 0-0 even on geometries that cannot be considered as highly accurate. This could explain why some of the previous works 62,63,66 noted small statistical fluctuations when going from, e.g., CC2 to B3LYP geometries.
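The error compensation described above can be checked numerically with the formaldehyde values quoted in this paragraph: the vertical term varies by almost 0.2 eV across the five geometries, whereas its sum with the reorganization term, E adia, stays within ca. 0.06 eV. A short Python check:

```python
# CC3/aug-cc-pVTZ values for formaldehyde quoted in the text (in eV):
# first term (E_abs + E_fluo)/2 and second term (E_GS_reorg - E_ES_reorg)/2,
# computed on CC3, CCSDR(3), CCSD, CC2, and ADC(2) geometries, respectively.
half_sum_vert = [3.385, 3.405, 3.533, 3.350, 3.364]
half_diff_reorg = [0.195, 0.175, 0.057, 0.244, 0.278]

# Adiabatic energies as the sum of the two terms of the expression above.
e_adia = [v + r for v, r in zip(half_sum_vert, half_diff_reorg)]

def spread(values):
    """Max-min spread of a list of energies."""
    return max(values) - min(values)

print([round(e, 3) for e in e_adia])    # [3.58, 3.58, 3.59, 3.594, 3.642]
print(round(spread(half_sum_vert), 3))  # 0.183 eV spread of the vertical term
print(round(spread(e_adia), 3))         # 0.062 eV, much tighter spread of E_adia
```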
III. 0-0 ENERGIES IN SOLUTION
Performing comparisons between theoretical and experimental E 0-0 energies determined in solution allows one to tackle large compounds for which gas-phase measurements are beyond reach, but obviously entails further approximations on the modeling side to account for environmental effects. In solution, experimental E 0-0 values are generally taken as the absorption-fluorescence crossing point (AFCP) or the foot of the absorption spectra. The second choice is a cruder approximation in most cases, while the former limits the reference data to fluorescent compounds, that is, rather rigid derivatives. As noticed below, most published benchmark works use the polarizable continuum model (PCM) to describe solvation effects, 91 applying either its linear-response (LR), 92,93 corrected linear-response (cLR), 94 or Improta's state-specific (IBSF, from the authors' names) 95 forms. The results obtained in published benchmarks are summarized in Table II. As stated in the previous Section, in their 2004 investigation Dierksen and Grimme applied an empirical correction to the experimental E 0-0 measured in solution to obtain gas-phase reference values. 67 In two more recent investigations, the same group proposed to transform experimental AFCP into solvent-free vertical estimates for, first, five 96 and, next, twelve 97 dyes, by applying a series of additive theoretical corrections to the measured AFCP energies: i) solvation effects on E vert abs are determined at the LR-PCM/PBE0/6-31G(d) level, ii) zero-point vibrational corrections (∆E ZPVE ) are computed at the PBE/TZVP level, and iii) reorganization effects (the difference between E vert abs and E adia ) are calculated at the same PBE/TZVP level. Such a procedure allows one to benchmark many levels of theory, as one only needs to compute gas-phase E vert abs .
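The bookkeeping of this back-correction protocol can be sketched in a few lines. The function name, sign convention, and numerical values below are hypothetical illustrations and not a reimplementation of the published procedure:

```python
def back_corrected_vertical(afcp_exp, d_solv, d_zpve, d_reorg):
    """Estimate a solvent-free vertical absorption energy from a measured
    AFCP by adding theoretical corrections for solvation, zero-point
    vibrations, and geometrical reorganization (all in eV). Each correction
    is assumed here to be defined with the sign convention 'gas-phase
    vertical contribution minus AFCP contribution', so the terms are
    simply summed."""
    return afcp_exp + d_solv + d_zpve + d_reorg

# Illustrative numbers only (not taken from Refs. 96 and 97):
reference = back_corrected_vertical(2.80, d_solv=0.10, d_zpve=0.08, d_reorg=0.15)
print(f"back-corrected vertical reference = {reference:.2f} eV")
```

Plain gas-phase E vert abs calculations at any level of theory can then be benchmarked directly against the back-corrected reference value.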
In this way, Goerigk and Grimme could obtain a MAE in the 0.16-0.20 eV range for many approaches (see Table II), 97 including CC2, several spin-scaled versions of CIS(D), two double-hybrid functionals (B2PLYP and B2GPLYP), as well as some hybrid functionals (BMK, PBE38, and CAM-B3LYP). In contrast to the results obtained for the WGLH set, 63 both CC2 and SCS-CC2 do not significantly outclass TD-DFT in the Goerigk-Grimme set. It is unclear whether this unusual observation originates from the nature of the molecules included in their set or from the theoretical protocol itself.
In 2012, another set of 40 medium and large fluorophores was developed (JPAM set), 90 and TD-DFT calculations of E 0-0 were performed with a series of global and range-separated hybrid functionals using a fully coherent approach, i.e., the structures and ZPVE were consistently obtained with each functional used to compute E adia . In Ref. 90, the authors note that there is an inherent difficulty when explicitly accounting for solvation effects during the calculations. Indeed, while E adia and E 0-0 are equilibrium properties, as they correspond to minimum-to-minimum energy differences, the absorption and fluorescence transitions are very fast processes and, in terms of solvation effects, should be viewed as non-equilibrium processes, meaning that only the solvent's electrons have time to adapt to the changes in the solute's electron density. 91,92 Consistently, the AFCP is a non-equilibrium property as well.
To resolve this apparent contradiction, an extra correction needs to be applied to the theoretical E 0-0 values in order to allow a fairer comparison with experimental AFCP values. Using this protocol, a series of twelve hybrid functionals have been tested over the years on the JPAM set, 90,99,100 including optimally-tuned 102,103 versions of PBE (LC-PBE*) and PBE0 (LC-PBE0*). As can be deduced from Table II, the majority of the functionals lead to MAE in the 0.2-0.3 eV range, the smallest deviations being obtained with PBE0 (0.22 eV) and LC-PBE* (0.20 eV). The functionals including a rather large amount of exact exchange, e.g., M06-2X and CAM-B3LYP, significantly overestimate the experimental values, but they provide more consistent (in terms of correlation with experiment) AFCP energies than "standard" hybrid functionals like B3LYP and PBE0. The LC-PBE* functional allows one to obtain both a small MAE and a high correlation, but at the cost of tuning the range-separation parameter for each compound. 99 Consistently with the gas-phase results discussed above, it was also shown that the band shapes are rather insensitive to the selected functional, 100 so that the choice of the functional can be driven by the accuracy in modeling E 0-0 . A subset of the JPAM set was also used in 2013 in a comparison between TDA and TD-DFT E 0-0 and band shapes. 98 With the B3LYP functional, the results were found to be substantially improved with TDA, but the authors warned that "using other exchange-correlation functionals might well lead to larger theory-experiment deviations with TDA than TD-DFT."
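In practice, the extra non-equilibrium correction mentioned above amounts to one more additive term on top of the adiabatic and ZPVE contributions. A schematic sketch, where the function, the sign convention, and the numbers are hypothetical rather than the actual Ref. 90 implementation:

```python
def theoretical_afcp(e_adia_solution, d_zpve, d_neq):
    """Combine an (equilibrium) adiabatic energy computed in solution with
    the ZPVE correction and a non-equilibrium solvation correction, so the
    result can be compared with the experimental AFCP, itself a
    non-equilibrium quantity. All terms are in eV and are defined here so
    that they are simply summed."""
    return e_adia_solution + d_zpve + d_neq

# Illustrative values only: a typical negative ZPVE correction and a small
# positive non-equilibrium shift.
print(f"theoretical AFCP = {theoretical_afcp(2.95, d_zpve=-0.08, d_neq=0.05):.2f} eV")
```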
In 2015, an even more extended set of fluorescent compounds (JDB set) was assessed using a protocol in which i) the structural and vibrational parameters are determined in gas phase at the M06-2X/6-31+G(d) level, ii) the solvation effects are calculated as the difference between E adia computed in gas phase and in solution using LR-PCM or cLR-PCM, and iii) gas-phase E adia are determined using several wavefunction approaches in combination with the aug-cc-pVTZ atomic basis set. 101 As can be seen in Table II, the selected solvent model has a large impact on the statistics, the LR-PCM E 0-0 energies being almost systematically smaller than their cLR-PCM counterparts. 101 With the latter solvent model, the MAE are 0.13, 0.14, 0.15, and 0.24 eV with CC2, ADC(2), BSE/evGW, and TD-M06-2X, respectively, the two former wavefunction methods providing higher determination coefficients as compared to experiment, as illustrated in Figure 5. 101 Given that the CC2 MAE obtained in gas phase on accurate geometries tend to be smaller (0.08 eV in Ref. 80, 0.11 eV in Ref. 78, and 0.07 eV in Ref. 63), part of the 0.13 eV error in this 80-compound set is probably due to the limits of the PCM models. Consistently with the results obtained on the WGLH set, 63,78 the analysis of the data from the JDB set shows that: i) ADC(2) and CC2 yield very similar estimates, ii) spin-scaling (SCS-CC2 and SOS-CC2) improves the correlation with the experimental data but does not yield smaller MAE, and iii) the ∆E ZPVE term has a rather tight distribution around ca. −0.09 eV. With BSE/evGW, the improvement with respect to TD-DFT is particularly significant for CT transitions, an expected trend for a theory explicitly accounting for the electron-hole interaction. 20 The E vert abs , E vert fluo , and E adia data of the JDB set were also used by Adamo and coworkers to evaluate the performances of numerous double-hybrid functionals.
104,105 In their second work, these authors found three subsets of the original JDB set able to reproduce the statistical errors of the complete set. Their most "advanced" subset (EX7-1) is composed of small molecules only, and it therefore allows rapid benchmarking, as computations on only seven small compounds are needed to obtain relevant statistical results. Results obtained for the three families of transition energies with a wide range of double-hybrid functionals are given in Figure 6. Note that we did not include these results in Table II, as Adamo and coworkers did not select experimental data, but rather CC2 values, as references.
IV. SUMMARY
We have reviewed the generic benchmark studies devoted to adiabatic and 0-0 energies performed in the last two decades. Over the years, there has been a gradual shift from small to large molecules and from gas phase to solvents. Additionally, the level of theory has gradually increased. This can be illustrated by the works benchmarking CC2: whilst Hättig's 2003 contribution was mainly devoted to di- and tri-atomics, 61 his group tackled much larger organic compounds only a decade later. 63 Likewise, the first CC3 benchmark that appeared in 2005 only encompassed 19 states in four diatomics, 68 whereas more than 110 transitions in a diverse set of molecules (from 3 to 16 atoms) have been tackled recently. 81 The results obtained in all these benchmarks, as measured by statistical deviations with respect to experimental measurements, are far from uniform, a logical consequence of the various protocols and molecular sets considered over the years. Nevertheless, some generic conclusions can be drawn:
1. It is challenging to get a balanced description of various kinds of states (n → π* versus π → π*, singlet-singlet versus doublet-doublet…) and/or various families of compounds (small versus large, organic versus inorganic…). Therefore, we believe that benchmark results focusing solely on a specific category of transitions/compounds should not be generalized.
2. In TD-DFT, for example, pure functionals, which do not include exact exchange, perform reasonably well for very compact compounds, but tend to provide significantly too low transition energies for medium and large derivatives, for which hybrid functionals clearly have the edge.
3. CC2 and ADC(2) yield similar accuracies, generally significantly outperforming CIS(D). Globally, TD-DFT gives larger deviations than CC2 or ADC(2), except for double hybrids, which are as accurate as these two approaches for a computational cost similar to CIS(D). These new functionals therefore represent a good compromise between accuracy and computational cost.
4. Spin-scaling approaches, e.g., SOS-CIS(D) and SCS-CC2, tend to provide more consistent data with respect to experiment but do not deliver smaller average deviations.
5. The total errors obtained for E 0-0 are mainly driven by the errors on the transition energies, the level of theory used to obtain the structures having a rather minor impact on the results. This outcome can be explained by an error compensation mechanism between the vertical and reorganization energies.
6. The ∆E ZPVE correction, the most costly contribution to 0-0 energies, is particularly insensitive to the methodological choice and is roughly equal to −0.08 eV for low-lying singlet-singlet transitions. One can therefore select a low level of theory to compute it without significant loss of accuracy.
7. Given the two previous points, several simplified protocols can be used to compute E 0-0 more quickly. It is noteworthy that very compact test sets providing almost the same statistical values have been developed recently.
8. The details of the approach employed to model solvation effects have a significant impact on the transition energies, hence on the statistical results. At this stage, this conclusion holds for TD-DFT only, as wavefunction-based benchmarks accounting for solvation effects have yet to appear.
Given that calculations of theoretical E 0-0 offer well-grounded comparisons with highly refined experiments, the vast majority of the error comes from theory, and one can therefore provide a rough estimate of the accuracy of various theoretical models, i.e., 1 eV for CIS, 0.2-0.3 eV for CIS(D), 0.2-0.4 eV for TD-DFT when using hybrid functionals, 0.1-0.2 eV for ADC(2) and CC2, and 0.04 eV for CC3. Interestingly, rather similar error ranges have been obtained for CIS(D), ADC(2),
FIG. 2. Error distribution pattern for E 0-0 in the WGLH set of compounds. The values are in eV. Reproduced from Ref. 63 with permission from the PCCP owner societies.
FIG. 3. Deviation (in eV) from the experimental E 0-0 of the theoretical E 0-0 determined at the CC3//CCSDR(3) level. Reproduced from Ref. 80 with permission of the American Chemical Society. Copyright 2018 American Chemical Society.

Ref. Year No. of ESs No. of molecules Method Solvent MSE MAE
97 a 2010 12 12 (organic dyes) CIS/def2-TZVPP//PBE/TZVP LR-PCM 0.77 0.77
CIS(D)/def2-TZVPP//PBE/TZVP LR-PCM 0.25 0.25
SCS-CIS(D)'/def2-TZVPP//PBE/TZVP LR-PCM 0.33 0.33
SCS-CIS(D) λ=0 /def2-TZVPP//PBE/TZVP LR-PCM 0.13 0.20
SCS-CIS(D) λ=1 /def2-TZVPP//PBE/TZVP LR-PCM 0.03 0.19
SOS-CIS(D)/def2-TZVPP//PBE/TZVP LR-PCM
CAM-B3LYP/def2-TZVPP//PBE/TZVP LR-PCM 0.11 0.18
B2PLYP/def2-TZVPP//PBE/TZVP LR-PCM −0.11 0.20
B2GPLYP/def2-TZVPP//PBE/TZVP LR-PCM −0.01 0.16
90 2012 40 40 (organic dyes) B3LYP/6-311++G(2df,2p)//6-31+G(d) cLR-PCM −0.14 0.27
PBE0/6-311++G(2df,2p)//6-31+G(d) cLR-PCM −0.03 0.22
Continued on next page
FIG. 4. CC3/aug-cc-pVTZ transition energies for formaldehyde computed with, from left to right, the CC3, CCSDR(3), CCSD, CC2, and ADC(2) geometries. The absorption, fluorescence, adiabatic, and reorganization energies are represented in blue, red, green, and orange, respectively. On the horizontal axis, we provide the optimal C--O bond lengths for these five geometries. Reproduced from Ref. 81 with permission of the American Chemical Society. Copyright 2019 American Chemical Society.

Ref. Year No. of ESs No. of molecules Method Solvent MSE MAE
PCM −0.26 0.30
TDA-B3LYP/6-31+G(d) IBSF-PCM −0.04 0.13
99 b 2014 40 40 (organic dyes) SOGGA11-X/6-311++G(2df,2p)//6-31+G(d) cLR-PCM 0.21 0.24
ωB97X-D/6-311++G(2df,2p)//6-31+G(d) cLR-PCM 0.30 0.30
LC-PBE*/6-311++G(2df,2p)//6-31+G(d) cLR-PCM 0.12 0.20
Continued on next page
BSE/evGW/aug-cc-pVTZ//M06-2X/6-31+G(d) LR-PCM −0.14 0.19
cLR-PCM 0.02 0.15
a Extends the previous work by the same group, 96 see the text for details of the procedure; b (Sub)set of the JPAM set proposed in Ref. 90; c Structures and ZPVE obtained in gas phase with M06-2X/6-31+G(d);
FIG. 5. Correlation plots between experimental AFCP energies and theoretical E 0-0 obtained for the JDB set applying the cLR-PCM solvent model. The central line indicates a perfect theory-experiment match. Adapted from Figure 6 of Ref. 101 with permission of the American Chemical Society. Copyright 2015 American Chemical Society.

FIG. 6. MAE (in eV) for E vert abs (red), E vert fluo (blue), and E adia (green) computed with double-hybrid functionals for the EX7-1 subset of the JDB set using CC2 results as references. Reproduced from Figure 10 of Ref. 105 with permission of the American Chemical Society. Copyright 2017 American Chemical Society.
Ref. Year No. of ESs No. of molecules Method
MSE
MAE
64 a 1995
6
6 (diatomics)
CIS/aug-cc-pVTZ
−0.06
0.73
CIS(D)/aug-cc-pVTZ
0.27
0.27
CCSD/aug-cc-pVTZ
−0.19
0.19
65 b 2002
34
28 (mostly di/triatomics) CIS/aug-TZVPP
0.32
0.66
TD-HF/aug-TZVPP
0.23
0.63
LDA/aug-TZVPP
−0.18
0.25
BLYP/aug-TZVPP
−0.27
0.32
Continued on next page
Ref. Year No. of ESs No. of molecules
Method
MSE
MAE
BP86/aug-TZVPP
−0.22
0.31
PBE/aug-TZVPP
−0.24
0.30
B3LYP/aug-TZVPP
−0.13
0.28
PBE0/aug-TZVPP
−0.08
0.30
61 b 2003
20
29 (mostly di/triatomics) CC2/aug-cc-pVQZ
−0.05
0.17
66 c 2004
9
7 (aromatics)
B3LYP/TZVP
−0.13
0.43
67 d 2004
43
41 (π-conjugated)
BP86/TZVP
−0.56
0.57
B3LYP/TZVP
−0.33
0.35
BHHLYP/TZVP
−0.01
0.18
60 2004
32
22 (diverse)
B3LYP/TZV(d,p)
−0.11
0.28
CIS(D)/aug-cc-pVTZ//B3LYP/TZV(d,p)
0.16
0.19
SCS-CIS(D)/aug-cc-pVTZ//B3LYP/TZV(d,p)
0.23
0.23
68 a 2005
19
4 (diatomics)
CIS/aug-cc-pwCVQZ
0.03
0.57
CIS(D)/aug-cc-pwCVQZ
0.29
0.26
ADC(2)/aug-cc-pwCVQZ
0.18
0.21
CC2/aug-cc-pwCVQZ
0.10
0.16
CCSD/aug-cc-pwCVQZ
0.20
0.20
CCSDR(3)/aug-cc-pwCVQZ
0.07
0.07
CC3/aug-cc-pwCVQZ
0.01
0.04
69 2007
32
22 (diverse) e
CIS/aug-cc-pVTZ//CIS/6-311G(d,p)
0.63
0.71
CIS(D)/aug-cc-pVTZ//CIS/6-311G(d,p)
0.19
0.22
SCS-CIS(D)/aug-cc-pVTZ//CIS/6-311G(d,p)
0.02
0.12
SOS-CIS(D)/aug-cc-pVTZ//CIS/6-311G(d,p)
0.02
0.12
70 a 2008
26
19 (di/triatomics)
CC2/cc-pVQZ
0.01
0.17
SCS-CC2/cc-pVQZ
0.09
0.16
SOS-CC2/cc-pVQZ
0.13
0.17
32
22 (diverse) e
B3LYP/aug-cc-pVTZ//B3LYP/TZVP
−0.13
0.29
CC2/aug-cc-pVTZ//B3LYP/TZVP
−0.02
0.14
SCS-CC2/aug-cc-pVTZ//B3LYP/TZVP
0.08
0.14
SOS-CC2/aug-cc-pVTZ//B3LYP/TZVP
0.13
0.17
71 b 2009
20
29 (mostly di/triatomics) f CIS/aug-cc-pVTZ
0.19
0.58
SOS-CIS(D 0 )/aug-cc-pVTZ
0.12
0.26
CC2/aug-cc-pVTZ
−0.08
0.18
32
22 (diverse) e
SOS-CIS(D 0 )/aug-cc-pVTZ
0.05
0.17
72 b 2010
20
29 (mostly di/triatomics) f B3LYP/aug-cc-pVTZ
−0.25
0.30
TDA-B3LYP/aug-cc-pVTZ
−0.12
0.26
ωB97/aug-cc-pVTZ
−0.05
0.25
TDA-ωB97/aug-cc-pVTZ
0.08
0.25
73 g 2010
9
7 (charge-transfer)
B3LYP/6-311+G(d,p)
−0.36
0.36
LC-BOP/6-311+G(d,p)
0.16
0.24
CAM-B3LYP/6-311+G(d,p)
0.07
0.22
MCAM-B3LYP/6-311+G(d,p)
−0.06
0.06
62 2011
91
109 (diverse)
CIS/def2-TZVP//B3LYP/def2-TZVP
0.90
0.98
LSDA/def2-TZVP//B3LYP/def2-TZVP
−0.21
0.49
PBE/def2-TZVP//B3LYP/def2-TZVP
−0.33
0.40
BP86/def2-TZVP//B3LYP/def2-TZVP
−0.32
0.39
TPSS/def2-TZVP//B3LYP/def2-TZVP
−0.20
0.32
B3LYP/def2-TZVP
−0.08
0.21
PBE0/def2-TZVP//B3LYP/def2-TZVP
−0.08
0.25
15
15 (subset of previous)
CC2/def2-TZVPD//B3LYP/def2-TZVP
0.10
0.17
74 2012
91
109 (various) h
cTPSS/def2-TZVP//B3LYP/def2-TZVP
−0.26
0.34
TPSSh/def2-TZVP//B3LYP/def2-TZVP
−0.08
0.26
cTPPSh/def2-TZVP//B3LYP/def2-TZVP
−0.13
0.27
63 2013
66
46 (aromatics) i
B3LYP/aug-cc-pVTZ//B3LYP/def2-TZVP
0.00
0.19
ADC(2)/aug-cc-pVTZ//ADC(2)/def2-TZVPP
−0.03
0.08
CC2/aug-cc-pVTZ//CC2/def2-TZVPP
0.00
0.07
SCS-CC2/aug-cc-pVTZ//SCS-CC2/def2-TZVPP
0.01
0.05
SOS-CC2/aug-cc-pVTZ//SOS-CC2/def2-TZVPP
−0.01
0.06
75 2014
79
96 (various) h
CIS/cc-pVDZ
0.78
0.88
CC2/cc-pVDZ
0.11
0.19
BP86/cc-pVDZ
−0.38
0.42
Continued on next page
Ref. Year No. of ESs No. of molecules
Method
MSE
MAE
B3LYP/cc-pVDZ
−0.11
0.24
PBE0/cc-pVDZ
−0.03
0.26
M06-2X/cc-pVDZ
0.05
0.30
M06-HF/cc-pVDZ
−0.01
0.50
CAM-B3LYP/cc-pVDZ
0.09
0.27
ωB97X-D/cc-pVDZ
0.10
0.27
76 2014
29
15 (small radicals)
CIS/6-311++G(d,p)
1.66
1.75
BLYP/6-311++G(d,p)
−0.22
0.32
PBE/6-311++G(d,p)
−0.13
0.29
VSXC/6-311++G(d,p)
−0.07
0.26
M06-L/6-311++G(d,p)
0.17
0.36
B3LYP/6-311++G(d,p)
−0.05
0.18
PBE0/6-311++G(d,p)
0.05
0.25
M06/6-311++G(d,p)
−0.10
0.25
BHandH/6-311++G(d,p)
0.16
0.32
BHandHLYP/6-311++G(d,p)
0.11
0.35
M06-2X/6-311++G(d,p)
−0.04
0.24
CAM-B3LYP/6-311++G(d,p)
0.08
0.23
ωB97X-D/6-311++G(d,p)
0.08
0.22
LC-BLYP/6-311++G(d,p)
0.18
0.38
LC-PBE/6-311++G(d,p)
0.28
0.45
LC-M06-L/6-311++G(d,p)
0.33
0.39
HSE06/6-311++G(d,p)
0.08
0.22
HISS/6-311++G(d,p)
0.29
0.38
CASPT2/6-311++G(d,p)
−0.02
0.12
77 g 2016
68
59 (organic) h
OM2/MRCI
−0.01
0.26
OM3/MRCI
−0.03
0.27
B3LYP/def2-TZVP
−0.11
0.24
65
45 (aromatics) j
OM2/MRCI
−0.22
0.35
OM3/MRCI
−0.23
0.35
78 2016
66
46 (aromatics) j
CIS/def2-TZVP
1.08
1.08
BP86/def2-TZVP
−0.39
0.40
B3LYP/def2-TZVP
0.05
0.20
PBE0/def2-TZVP
0.16
0.24
M06-2X/def2-TZVP
0.33
0.36
M06-HF/def2-TZVP
0.55
0.57
CAM-B3LYP/def2-TZVP
0.30
0.33
ωB97X-D/def2-TZVP
0.30
0.32
CC2/def2-TZVP
0.09
0.11
79 k 2017
66
46 (aromatics) j
B2PLYP/aug-cc-pVTZ//B3LYP/def2-TZVP
0.01
0.11
B2GPPLYP/aug-cc-pVTZ//B3LYP/def2-TZVP
0.21
0.24
DSD-BLYP/aug-cc-pVTZ//B3LYP/def2-TZVP
0.05
0.10
a E adia values were considered; b Depending on the molecule, E adia or E 0-0 values were considered; c Some of the experiments were made in solution or in a matrix, but the gas-phase theoretical calculations were uncorrected; d Solvent effects empirically corrected; e Same set (GI) as in Ref. 60; f Same set (KH) as in Ref. 61; g ∆E ZPVE at the B3LYP level; h (Sub)set (SKF) of the one considered in Ref. 62; i More than one conformer of the same molecule is investigated in several cases; j Same set (WGLH) as in Ref. 63; k Variant "A" of the spin-scaling parameters, the so-called "original" values.

A. Small compounds

04
0.08
81 g 2019
119
109 (diverse)
CC3/aug-cc-pVTZ//CCSD/def2-TZVPP
−0.01
0.03
Continued on next page
Ref. Year No. of ESs No. of molecules
Method
MSE
MAE
TABLE II: Statistical analysis of the results obtained in various benchmarks comparing E 0-0 computations in solution to experimental data (AFCP). See caption of Table I for more details.
CC2, and CC3 in recent comparisons with FCI data for small compounds, 80 whereas the TD-DFT accuracy is globally the one found in comparisons with CC3 or CASPT2. 85
Ullrich, C. Time-Dependent Density-Functional Theory: Concepts and Applications; Oxford Graduate Texts; Oxford University Press: New York, 2012.
Runge, E.; Gross, E. K. U. Density-Functional Theory for Time-Dependent Systems. Phys. Rev. Lett. 1984, 52, 997-1000.
Casida, M. E. Time-Dependent Density-Functional Response Theory for Molecules. In Recent Advances in Density Functional Methods; Chong, D. P., Ed.; World Scientific: Singapore, 1995; Vol. 1; pp 155-192.
Roos, B. O.; Andersson, K.; Fülscher, M. P.; Malmqvist, P.-Å.; Serrano-Andrés, L. In Adv. Chem. Phys.; Prigogine, I., Rice, S. A., Eds.; Wiley: New York, 1996.
Tozer, D. J.; Amos, R. D.; Handy, N. C.; Roos, B. O.; Serrano-Andrés, L. Does density functional theory contribute to the understanding of excited states of unsaturated organic compounds? Mol. Phys. 1999, 97, 859-868.
Dreuw, A.; Weisman, J. L.; Head-Gordon, M. Long-range charge-transfer excited states in time-dependent density functional theory require non-local exchange. J. Chem. Phys. 2003, 119, 2943-2946.
Sobolewski, A. L.; Domcke, W. Ab Initio Study of the Excited-State Coupled Electron-Proton-Transfer Process in the 2-Aminopyridine Dimer. Chem. Phys. 2003, 294, 73-83.
Dreuw, A.; Head-Gordon, M. Failure of Time-Dependent Density Functional Theory for Long-Range Charge-Transfer Excited States: the Zincbacteriochlorin-Bacteriochlorin and Bacteriochlorophyll-Spheroidene Complexes. J. Am. Chem. Soc. 2004, 126, 4007-4016.
Tozer, D. J.; Handy, N. C. Improving Virtual Kohn-Sham Orbitals and Eigenvalues: Application to Excitation Energies and Static Polarizabilities. J. Chem. Phys. 1998, 109, 10180-10189.
Tozer, D. J.; Handy, N. C. On the determination of excitation energies using density functional theory. Phys. Chem. Chem. Phys. 2000, 2, 2117-2121.
Casida, M. E.; Jamorski, C.; Casida, K. C.; Salahub, D. R. Molecular excitation energies to high-lying bound states from time-dependent density-functional response theory: Characterization and correction of the time-dependent local density approximation ionization threshold. J. Chem. Phys. 1998, 108, 4439-4449.
Casida, M. E.; Salahub, D. R. Asymptotic Correction Approach to Improving Approximate Exchange-Correlation Potentials: Time-Dependent Density-Functional Theory Calculations of Molecular Excitation Spectra. J. Chem. Phys. 2000, 113, 8918-8935.
Peach, M. J. G.; Williamson, M. J.; Tozer, D. J. Influence of Triplet Instabilities in TDDFT. J. Chem. Theory Comput. 2011, 7, 3578-3585.
Peach, M. J. G.; Tozer, D. J. Overcoming Low Orbital Overlap and Triplet Instability Problems in TDDFT. J. Phys. Chem. A 2012, 116, 9783-9789.
Peach, M. J. G.; Warner, N.; Tozer, D. J. On the Triplet Instability in TDDFT. Mol. Phys. 2013, 111, 1271-1274.
Sun, H.; Zhong, C.; Brédas, J.-L. Reliable Prediction with Tuned Range-Separated Functionals of the Singlet-Triplet Gap in Organic Emitters for Thermally Activated Delayed Fluorescence. J. Chem. Theory Comput. 2015, 11, 3851-3858.
Levine, B. G.; Ko, C.; Quenneville, J.; Martínez, T. J. Conical Intersections and Double Excitations in Time-Dependent Density Functional Theory. Mol. Phys. 2006, 104, 1039-1051.
Elliott, P.; Goldson, S.; Canahui, C.; Maitra, N. T. Perspectives on Double-Excitations in TDDFT. Chem. Phys. 2011, 391, 110-119.
Laurent, A. D.; Jacquemin, D. TD-DFT Benchmarks: A Review. Int. J. Quantum Chem. 2013, 113, 2019-2039.
Blase, X.; Duchemin, I.; Jacquemin, D. The Bethe-Salpeter Equation in Chemistry: Relations with TD-DFT, Applications and Challenges. Chem. Soc. Rev. 2018, 47, 1022-1043.
Head-Gordon, M.; Rico, R. J.; Oumi, M.; Lee, T. J. A Doubles Correction to Electronic Excited States From Configuration Interaction in the Space of Single Substitutions. Chem. Phys. Lett. 1994, 219, 21-29.
Head-Gordon, M.; Maurice, D.; Oumi, M. A Perturbative Correction to Restricted Open-Shell Configuration-Interaction with Single Substitutions for Excited-States of Radicals. Chem. Phys. Lett. 1995, 246, 114-121.
Dreuw, A.; Wormit, M. The Algebraic Diagrammatic Construction Scheme for the Polarization Propagator for the Calculation of Excited States. WIREs Comput. Mol. Sci. 2015, 5, 82-95.
Christiansen, O.; Koch, H.; Jørgensen, P. The Second-Order Approximate Coupled Cluster Singles and Doubles Model CC2. Chem. Phys. Lett. 1995, 243, 409-418.
Koch, H.; Jørgensen, P. Coupled Cluster Response Functions. J. Chem. Phys. 1990, 93, 3333-3344.
Stanton, J. F.; Bartlett, R. J. The Equation of Motion Coupled-Cluster Method - A Systematic Biorthogonal Approach to Molecular Excitation Energies, Transition-Probabilities, and Excited-State Properties. J. Chem. Phys. 1993, 98, 7029-7039.
Andersson, K.; Malmqvist, P. Å.; Roos, B. O.; Sadlej, A. J.; Wolinski, K. Second-Order Perturbation Theory With a CASSCF Reference Function. J. Phys. Chem. 1990, 94, 5483-5488.
Andersson, K.; Malmqvist, P.-Å.; Roos, B. O. Second-Order Perturbation Theory With a Complete Active Space Self-Consistent Field Reference Function. J. Chem. Phys. 1992, 96, 1218-1226.
Angeli, C.; Cimiraglia, R.; Evangelisti, S.; Leininger, T.; Malrieu, J.-P. Introduction of n-Electron Valence States for Multireference Perturbation Theory. J. Chem. Phys. 2001, 114, 10252-10264.
Bender, C. F.; Davidson, E. R. Studies in Configuration Interaction: The First-Row Diatomic Hydrides. Phys. Rev. 1969, 183, 23-30.
Con guration Interaction Studies of Ground and Excited States of Polyatomic Molecules. I. e CI Formulation and Studies of Formaldehyde. J L Whi En, M Hackmeyer, J. Chem. Phys. 51Whi en, J. L.; Hackmeyer, M. Con guration Interaction Studies of Ground and Excited States of Polyatomic Molecules. I. e CI Formulation and Studies of Formaldehyde. J. Chem. Phys. 1969, 51, 5584-5596.
Iterative Perturbation Calculations of Ground and Excited State Energies from Multicon gurational Zeroth-Order Wavefunctions. B Huron, J P Malrieu, P Rancurel, J. Chem. Phys. 58Huron, B.; Malrieu, J. P.; Rancurel, P. Iterative Perturbation Calculations of Ground and Excited State Energies from Multicon gurational Zeroth- Order Wavefunctions. J. Chem. Phys. 1973, 58, 5745-5759.
Heat-Bath Con guration Interaction: An E cient Selected Con guration Interaction Algorithm Inspired by Heat-Bath Sampling. A A Holmes, N M Tubman, C J Umrigar, J. Chem. eory Comput. 12Holmes, A. A.; Tubman, N. M.; Umrigar, C. J. Heat-Bath Con guration Interaction: An E cient Selected Con guration Interaction Algorithm Inspired by Heat-Bath Sampling. J. Chem. eory Comput. 2016, 12, 3674- 3680.
Semistochastic Heat-Bath Con guration Interaction Method: Selected Con guration Interaction with Semistochastic Perturbation eory. S Sharma, A A Holmes, G Jeanmairet, A Alavi, C J Umrigar, J. Chem. eory Comput. 13Sharma, S.; Holmes, A. A.; Jeanmairet, G.; Alavi, A.; Umrigar, C. J. Semis- tochastic Heat-Bath Con guration Interaction Method: Selected Con g- uration Interaction with Semistochastic Perturbation eory. J. Chem. eory Comput. 2017, 13, 1595-1604.
Alternative De nition of Excitation Amplitudes in Multi-Reference State-Speci c Coupled Cluster. Y Garniron, E Giner, J.-P Malrieu, A Scemama, J. Chem. Phys. 154107Garniron, Y.; Giner, E.; Malrieu, J.-P.; Scemama, A. Alternative De nition of Excitation Amplitudes in Multi-Reference State-Speci c Coupled Cluster. J. Chem. Phys. 2017, 146, 154107.
Selected Con guration Interaction Dressed by Perturbation. Y Garniron, A Scemama, E Giner, M Ca Arel, P.-F Loos, J. Chem. Phys. 64103Garniron, Y.; Scemama, A.; Giner, E.; Ca arel, M.; Loos, P.-F. Selected Con guration Interaction Dressed by Perturbation. J. Chem. Phys. 2018, 149, 064103.
Excited States of Methylene, Polyenes, and Ozone from Heat-Bath Con guration Interaction. A D Chien, A A Holmes, M ; O En, C J Umrigar, S Sharma, P M Zimmerman, J. Phys. Chem. A. 122Chien, A. D.; Holmes, A. A.; O en, M.; Umrigar, C. J.; Sharma, S.; Zim- merman, P. M. Excited States of Methylene, Polyenes, and Ozone from Heat-Bath Con guration Interaction. J. Phys. Chem. A 2018, 122, 2714- 2722.
Deterministic Construction of Nodal Surfaces Within antum Monte Carlo: e Case of FeS. A Scemama, Y Garniron, M Ca Arel, P F Loos, Scemama, A.; Garniron, Y.; Ca arel, M.; Loos, P. F. Deterministic Construc- tion of Nodal Surfaces Within antum Monte Carlo: e Case of FeS. J.
. Chem. eory Comput. 14Chem. eory Comput. 2018, 14, 1395-1402.
A Mountaineering Strategy to Excited States: Highly-Accurate Reference Energies and Benchmarks. P.-F Loos, A Scemama, A Blondel, Y Garniron, M Ca Arel, D Jacquemin, J. Chem. eory Comput. 14Loos, P.-F.; Scemama, A.; Blondel, A.; Garniron, Y.; Ca arel, M.; Jacquemin, D. A Mountaineering Strategy to Excited States: Highly- Accurate Reference Energies and Benchmarks. J. Chem. eory Comput. 2018, 14, 4360-4379.
Using Perturbatively Selected Con guration Interaction in antum Monte Carlo Calculations. E Giner, A Scemama, M Ca Arel, Can. J. Chem. 91Giner, E.; Scemama, A.; Ca arel, M. Using Perturbatively Selected Con g- uration Interaction in antum Monte Carlo Calculations. Can. J. Chem. 2013, 91, 879-885.
Ca arel, M. Fixed-Node Di usion Monte Carlo Potential Energy Curve of the Fluorine Molecule F 2 Using Selected Con guration Interaction Trial Wavefunctions. E Giner, A Scemama, J. Chem. Phys. 44115Giner, E.; Scemama, A.; Ca arel, M. Fixed-Node Di usion Monte Carlo Potential Energy Curve of the Fluorine Molecule F 2 Using Selected Con g- uration Interaction Trial Wavefunctions. J. Chem. Phys. 2015, 142, 044115.
Dye Chemistry with Time-Dependent Density Functional eory. A D Laurent, C Adamo, D Jacquemin, Phys. Chem. Chem. Phys. 16Laurent, A. D.; Adamo, C.; Jacquemin, D. Dye Chemistry with Time- Dependent Density Functional eory. Phys. Chem. Chem. Phys. 2014, 16, 14334-14356.
Revisiting the Optical Signatures of BODIPY with Ab Initio Tools. S Chibani, B Le Guennic, A Charaf-Eddin, A D Laurent, D Jacquemin, Chem. Sci. 4Chibani, S.; Le Guennic, B.; Charaf-Eddin, A.; Laurent, A. D.; Jacquemin, D. Revisiting the Optical Signatures of BODIPY with Ab Initio Tools. Chem. Sci. 2013, 4, 1950-1963.
Boranil and Related NBO Dyes: Insights From eory. S Chibani, A Charaf-Eddin, B Le Guennic, D Jacquemin, J. Chem. eory Comput. 9Chibani, S.; Charaf-Eddin, A.; Le Guennic, B.; Jacquemin, D. Boranil and Related NBO Dyes: Insights From eory. J. Chem. eory Comput. 2013, 9, 3127-3135.
Improving the Accuracy of Excited State Simulations of BODIPY and aza-BODIPY Dyes with a Joint SOS-CIS(D) and TD-DFT Approach. S Chibani, A D Laurent, B Le Guennic, D Jacquemin, J. Chem. eory Comput. 10Chibani, S.; Laurent, A. D.; Le Guennic, B.; Jacquemin, D. Improving the Accuracy of Excited State Simulations of BODIPY and aza-BODIPY Dyes with a Joint SOS-CIS(D) and TD-DFT Approach. J. Chem. eory Comput. 2014, 10, 4574-4582.
Non-Condon E ects in the One-and Two-Photon Absorption Spectra of the Green Fluorescent Protein. E Kamarchik, A I Krylov, J. Phys. Chem. Le. 2Kamarchik, E.; Krylov, A. I. Non-Condon E ects in the One-and Two- Photon Absorption Spectra of the Green Fluorescent Protein. J. Phys. Chem. Le . 2011, 2, 488-492.
antum Chemical Comparison of Vertical, Adiabatic, and 0-0 Excitation Energies e PYP and GFP chromophores. M Uppsten, B Durbeej, J. Comput. Chem. 33Uppsten, M.; Durbeej, B. antum Chemical Comparison of Vertical, Adi- abatic, and 0-0 Excitation Energies e PYP and GFP chromophores. J. Comput. Chem. 2012, 33, 1892-1901.
Coupled-cluster and density functional theory studies of the electronic 0-0 transitions of the DNA bases. V A Ovchinnikov, D Sundholm, Phys. Chem. Chem. Phys. 16Ovchinnikov, V. A.; Sundholm, D. Coupled-cluster and density functional theory studies of the electronic 0-0 transitions of the DNA bases. Phys. Chem. Chem. Phys. 2014, 16, 6931-6941.
Rationalisation of the optical signatures of nor-dihydroxanthene-hemicyanine fused near-infrared uorophores by rst-principle tools. C Azarias, M Ponce-Vargas, I Navizet, P Fleurat-Lessard, A Romieu, B Le Guennic, J.-A Richard, D Jacquemin, Phys. Chem. Chem. Phys. 20Azarias, C.; Ponce-Vargas, M.; Navizet, I.; Fleurat-Lessard, P.; Romieu, A.; Le Guennic, B.; Richard, J.-A.; Jacquemin, D. Rationalisation of the opti- cal signatures of nor-dihydroxanthene-hemicyanine fused near-infrared uorophores by rst-principle tools. Phys. Chem. Chem. Phys. 2018, 20, 12120-12128.
Benchmarking TD-DFT against Vibrationally Resolved Absorption Spectra at Room Temperature: 7-Aminocoumarins as Test Cases. F Muniz-Miranda, A Pedone, G Ba Istelli, M Montalti, J Bloino, V Barone, J. Chem. eory Comput. 11Muniz-Miranda, F.; Pedone, A.; Ba istelli, G.; Montalti, M.; Bloino, J.; Barone, V. Benchmarking TD-DFT against Vibrationally Resolved Absorp- tion Spectra at Room Temperature: 7-Aminocoumarins as Test Cases. J. Chem. eory Comput. 2015, 11, 5371-5384.
E ective Method to Compute Franck-Condon Integrals for Optical Spectra of Large Molecules in Solution. F Santoro, R Improta, A Lami, J Bloino, V Barone, J. Chem. Phys. 84509Santoro, F.; Improta, R.; Lami, A.; Bloino, J.; Barone, V. E ective Method to Compute Franck-Condon Integrals for Optical Spectra of Large Molecules in Solution. J. Chem. Phys. 2007, 126, 084509.
Analysis and Prediction of Absorption Bandshapes, Fluorescence Bandshapes, Resonance Raman Intensities and Excitation Pro les using the Time Dependent eory of Electronic Spectroscopy. T Petrenko, F Neese, J. Chem. Phys. 164319Petrenko, T.; Neese, F. Analysis and Prediction of Absorption Bandshapes, Fluorescence Bandshapes, Resonance Raman Intensities and Excitation Pro les using the Time Dependent eory of Electronic Spectroscopy. J. Chem. Phys. 2007, 127, 164319.
E ective Method for the Computation of Optical Spectra of Large Molecules at Finite Temperature Including the Duschinsky and Herzberg-Teller E ect: e Qx Band of Porphyrin as a Case Study. F Santoro, A Lami, R Improta, J Bloino, V Barone, J. Chem. Phys. 224311Santoro, F.; Lami, A.; Improta, R.; Bloino, J.; Barone, V. E ective Method for the Computation of Optical Spectra of Large Molecules at Finite Tem- perature Including the Duschinsky and Herzberg-Teller E ect: e Qx Band of Porphyrin as a Case Study. J. Chem. Phys. 2008, 128, 224311.
Insights for an Accurate Comparison of Computational Data to Experimental Absorption and Emission Spectra: Beyond the Vertical Transition Approximation. Avila Ferrer, F J Cerezo, J Stendardo, E Improta, R Santoro, F , J. Chem. eory Comput. 9Avila Ferrer, F. J.; Cerezo, J.; Stendardo, E.; Improta, R.; Santoro, F. Insights for an Accurate Comparison of Computational Data to Experimental Ab- sorption and Emission Spectra: Beyond the Vertical Transition Approxi- mation. J. Chem. eory Comput. 2013, 9, 2072-2082.
General Time Dependent Approach to Vibronic Spectroscopy Including Franck-Condon, Herzberg-Teller, and Duschinsky E ects. A Baiardi, J Bloino, V Barone, J. Chem. eory Comput. 9Baiardi, A.; Bloino, J.; Barone, V. General Time Dependent Approach to Vibronic Spectroscopy Including Franck-Condon, Herzberg-Teller, and Duschinsky E ects. J. Chem. eory Comput. 2013, 9, 4097-4115.
Herzberg-Teller, and Multiple Electronic Resonance Interferential E ects in Resonance Raman Spectra and Excitation Pro les. e Case of Pyrene. Avila Ferrer, F J Barone, V Cappelli, C Santoro, F Duschinsky, J. Chem. eory Comput. 9Avila Ferrer, F. J.; Barone, V.; Cappelli, C.; Santoro, F. Duschinsky, Herzberg- Teller, and Multiple Electronic Resonance Interferential E ects in Reso- nance Raman Spectra and Excitation Pro les. e Case of Pyrene. J. Chem. eory Comput. 2013, 9, 3597-3611.
A Multifrequency Virtual Spectrometer for Complex Bio-Organic Systems: Vibronic and Environmental E ects on the UV/Vis Spectrum of Chlorophyll a. V Barone, M Biczysko, M Borkowska-Panek, J Bloino, 15Barone, V.; Biczysko, M.; Borkowska-Panek, M.; Bloino, J. A Multifre- quency Virtual Spectrometer for Complex Bio-Organic Systems: Vibronic and Environmental E ects on the UV/Vis Spectrum of Chlorophyll a. ChemPhysChem 2014, 15, 3355-3364.
Vibronic-structure tracking: A shortcut for vibrationally resolved UV/Vis-spectra calculations. D Barton, C König, J Neugebauer, J. Chem. Phys. 164115Barton, D.; König, C.; Neugebauer, J. Vibronic-structure tracking: A short- cut for vibrationally resolved UV/Vis-spectra calculations. J. Chem. Phys. 2014, 141, 164115.
Going Beyond the Vertical Approximation with Time-Dependent Density Functional eory. F Santoro, D Jacquemin, WIREs Comput. Mol. Sci. 6Santoro, F.; Jacquemin, D. Going Beyond the Vertical Approximation with Time-Dependent Density Functional eory. WIREs Comput. Mol. Sci. 2016, 6, 460-486.
Calculation of 0-0 Excitation Energies of Organic Molecules by CIS(D) antum Chemical Methods. S Grimme, E I Izgorodina, Chem. Phys. 305Grimme, S.; Izgorodina, E. I. Calculation of 0-0 Excitation Energies of Organic Molecules by CIS(D) antum Chemical Methods. Chem. Phys. 2004, 305, 223-230.
Analytic Gradients for Excited States in the Coupled-Cluster Model CC2 Employing the Resolution-Of-e-Identity Approximation. A Köhn, C Hä Ig, J. Chem. Phys. 119Köhn, A.; Hä ig, C. Analytic Gradients for Excited States in the Coupled- Cluster Model CC2 Employing the Resolution-Of-e-Identity Approxi- mation. J. Chem. Phys. 2003, 119, 5021-5036.
Assessing Excited State Methods by Adiabatic Excitation Energies. R Send, M Kühn, F Furche, J. Chem. eory Comput. 7Send, R.; Kühn, M.; Furche, F. Assessing Excited State Methods by Adiabatic Excitation Energies. J. Chem. eory Comput. 2011, 7, 2376-2386.
Hä ig, C. Benchmarks for 0-0 transitions of aromatic organic molecules: DFT/B3LYP, ADC(2), CC2, SOS-CC2 and SCS-CC2 compared to high-resolution gas-phase data. N O C Winter, N K Graf, S Leutwyler, Phys. Chem. Chem. Phys. 15Winter, N. O. C.; Graf, N. K.; Leutwyler, S.; Hä ig, C. Benchmarks for 0-0 transitions of aromatic organic molecules: DFT/B3LYP, ADC(2), CC2, SOS-CC2 and SCS-CC2 compared to high-resolution gas-phase data. Phys. Chem. Chem. Phys. 2013, 15, 6623-6630.
A Comparison of Single Reference Methods for Characterizing Stationary Points of Excited State Potential Energy Surfaces. J F Stanton, J Gauss, N Ishikawa, M Head-Gordon, J. Chem. Phys. 103Stanton, J. F.; Gauss, J.; Ishikawa, N.; Head-Gordon, M. A Comparison of Single Reference Methods for Characterizing Stationary Points of Excited State Potential Energy Surfaces. J. Chem. Phys. 1995, 103, 4160-4174.
Adiabatic Time-Dependent Density Functional Methods for Excited States Properties. F Furche, R Ahlrichs, J. Chem. Phys. 117Furche, F.; Ahlrichs, R. Adiabatic Time-Dependent Density Functional Methods for Excited States Properties. J. Chem. Phys. 2002, 117, 7433- 7447.
A density functional calculation of the vibronic structure of electronic absorption spectra. M Dierksen, S Grimme, J. Chem. Phys. 120Dierksen, M.; Grimme, S. A density functional calculation of the vibronic structure of electronic absorption spectra. J. Chem. Phys. 2004, 120, 3544- 3554.
Structure of Electronic Absorption Spectra of Large Molecules: A Time-Dependent Density Functional Study on the In uence of Exact Hartree-Fock Exchange. M Dierksen, S Grimme, Vibronic, J. Phys. Chem. A. 108Dierksen, M.; Grimme, S. e Vibronic Structure of Electronic Absorption Spectra of Large Molecules: A Time-Dependent Density Functional Study on the In uence of Exact Hartree-Fock Exchange. J. Phys. Chem. A 2004, 108, 10225-10237.
C Hä Ig, Response eory and Molecular Properties (A Tribute to Jan Linderberg and Poul Jørgensen). Hä ig, C. In Response eory and Molecular Properties (A Tribute to Jan Linderberg and Poul Jørgensen);
Advances in antum Chemistry. H A Jensen, Ed, Academic Press50Jensen, H. A., Ed.; Advances in antum Chemistry; Academic Press, 2005; Vol. 50; pp 37-60.
Scaled Second-Order Perturbation Corrections to Con guration Interaction Singles: E cient and Reliable Excitation Energy Methods. Y M Rhee, M Head-Gordon, J. Phys. Chem. A. 111Rhee, Y. M.; Head-Gordon, M. Scaled Second-Order Perturbation Correc- tions to Con guration Interaction Singles: E cient and Reliable Excitation Energy Methods. J. Phys. Chem. A 2007, 111, 5314-5326.
Hä ig, C. Benchmarking the Performance of Spin-Component Scaled CC2 in Ground and Electronically Excited States. A Hellweg, S A Grün, Phys. Chem. Chem. Phys. 10Hellweg, A.; Grün, S. A.; Hä ig, C. Benchmarking the Performance of Spin-Component Scaled CC2 in Ground and Electronically Excited States. Phys. Chem. Chem. Phys. 2008, 10, 4119-4127.
Performance of asi-Degenerate Scaled Opposite Spin Perturbation Corrections to Single Excitation Con guration Interaction for Excited State Structures and Excitation Energies with Application to the Stokes Shi of 9-Methyl-9,10-dihydro-9-silaphenanthrene. Y M Rhee, D Casanova, M Head-Gordon, J. Phys. Chem. A. 113Rhee, Y. M.; Casanova, D.; Head-Gordon, M. Performance of asi- Degenerate Scaled Opposite Spin Perturbation Corrections to Single Exci- tation Con guration Interaction for Excited State Structures and Excitation Energies with Application to the Stokes Shi of 9-Methyl-9,10-dihydro-9- silaphenanthrene. J. Phys. Chem. A 2009, 113, 10564-10576.
A Parallel Implementation of the Analytic Nuclear Gradient for Time-Dependent Density Functional eory Within the Tamm-Danco Approximation. F Liu, Z Gan, Y Shao, C P Hsu, A Dreuw, M Head-Gordon, B T Miller, B R Brooks, J G Yu, T R Furlani, J Kong, Mol. Phys. 108Liu, F.; Gan, Z.; Shao, Y.; Hsu, C. P.; Dreuw, A.; Head-Gordon, M.; Miller, B. T.; Brooks, B. R.; Yu, J. G.; Furlani, T. R.; Kong, J. A Parallel Implementation of the Analytic Nuclear Gradient for Time-Dependent Density Functional eory Within the Tamm-Danco Approximation. Mol. Phys. 2010, 108, 2791-2800.
Analytical Energy Gradients of Coulomb-A enuated Time-Dependent Density Functional Methods for Excited States. K A Nguyen, P N Day, R Pachter, Int. J. antum Chem. 110Nguyen, K. A.; Day, P. N.; Pachter, R. Analytical Energy Gradients of Coulomb-A enuated Time-Dependent Density Functional Methods for Excited States. Int. J. antum Chem. 2010, 110, 2247-2255.
Harnessing the meta-generalized gradient approximation for time-dependent density functional theory. J E E Bates, F Furche, J. Chem. Phys. 137164105Bates, J. E. E.; Furche, F. Harnessing the meta-generalized gradient approx- imation for time-dependent density functional theory. J. Chem. Phys. 2012, 137, 164105.
How Method-Dependent Are Calculated Di erences Between Vertical, Adiabatic and 0-0 Excitation Energies?. C Fang, B Oruganti, B Durbeej, J. Phys. Chem. A. 118Fang, C.; Oruganti, B.; Durbeej, B. How Method-Dependent Are Calculated Di erences Between Vertical, Adiabatic and 0-0 Excitation Energies? J. Phys. Chem. A 2014, 118, 4157-4171.
TDDFT Assessment of Functionals for Optical 0-0 Transitions in Small Radicals. L Barnes, S Abdul-Al, A.-R Allouche, 25350349J. Phys. Chem. A. 118Barnes, L.; Abdul-Al, S.; Allouche, A.-R. TDDFT Assessment of Functionals for Optical 0-0 Transitions in Small Radicals. J. Phys. Chem. A 2014, 118, 11033-11046, PMID: 25350349.
Semiempirical antum-Chemical Orthogonalization-Corrected Methods: Benchmarks of Electronically Excited States. D Tuna, Y Lu, A Koslowski, W Iel, J. Chem. eory Comput. 12Tuna, D.; Lu, Y.; Koslowski, A.; iel, W. Semiempirical antum-Chemical Orthogonalization-Corrected Methods: Benchmarks of Electronically Ex- cited States. J. Chem. eory Comput. 2016, 12, 4400-4422.
Assessment of a Composite CC2/DFT Procedure for Calculating 0-0 Excitation Energies of Organic Molecules. B Oruganti, C Fang, B Durbeej, Mol. Phys. 114Oruganti, B.; Fang, C.; Durbeej, B. Assessment of a Composite CC2/DFT Procedure for Calculating 0-0 Excitation Energies of Organic Molecules. Mol. Phys. 2016, 114, 3448-3463.
Time-Dependent Double-Hybrid Density Functionals with Spin-Component and Spin-Opposite Scaling. T Schwabe, L Goerigk, J. Chem. eory Comput. 13Schwabe, T.; Goerigk, L. Time-Dependent Double-Hybrid Density Func- tionals with Spin-Component and Spin-Opposite Scaling. J. Chem. eory Comput. 2017, 13, 4307-4323.
Jacquemin, D. eoretical 0-0 Energies with Chemical Accuracy. P.-F Loos, N Galland, J. Phys. Chem. Le. 9Loos, P.-F.; Galland, N.; Jacquemin, D. eoretical 0-0 Energies with Chem- ical Accuracy. J. Phys. Chem. Le . 2018, 9, 4646-4651.
Chemically Accurate 0-0 Energies with not-so-Accurate Excited State Geometries. P.-F Loos, D Jacquemin, 10.1021/acs.jctc.8b01103J. Chem. eory Comput. In PressLoos, P.-F.; Jacquemin, D. Chemically Accurate 0-0 Energies with not-so- Accurate Excited State Geometries. J. Chem. eory Comput. 2019, In Press, doi: 10.1021/acs.jctc.8b01103.
Constants of Diatomic Molecules; Molecular Spectra and Molecular Structure. K P Huber, G Herzberg, Van Nostrand4PrincetonHuber, K. P.; Herzberg, G. Constants of Diatomic Molecules; Molecular Spectra and Molecular Structure; Van Nostrand: Princeton, 1979; Vol. 4.
Comparison Between Equation of Motion and Polarization Propagator Calculations. J Oddershede, N E Grūner, G H Diercksen, Chem. Phys. 97Oddershede, J.; Grūner, N. E.; Diercksen, G. H. Comparison Between Equa- tion of Motion and Polarization Propagator Calculations. Chem. Phys. 1985, 97, 303-310.
Benchmarks for Electronically Excited States: CASPT2, CC2, CCSD and CC3. M Schreiber, M R Silva-Junior, S P A Sauer, W Iel, J. Chem. Phys. 128134110Schreiber, M.; Silva-Junior, M. R.; Sauer, S. P. A.; iel, W. Benchmarks for Electronically Excited States: CASPT2, CC2, CCSD and CC3. J. Chem. Phys. 2008, 128, 134110.
Benchmarks for Electronically Excited States: Time-Dependent Density Functional eory and Density Functional eory Based Multireference Con guration Interaction. M R Silva-Junior, M Schreiber, S P A Sauer, W Iel, J. Chem. Phys. 104103Silva-Junior, M. R.; Schreiber, M.; Sauer, S. P. A.; iel, W. Benchmarks for Electronically Excited States: Time-Dependent Density Functional eory and Density Functional eory Based Multireference Con guration Interaction. J. Chem. Phys. 2008, 129, 104103.
Analytical time-dependent density functional derivative methods within the RI-J approximation, an approach to excited states of large molecules. D Rappoport, F Furche, J. Chem. Phys. 12264105Rappoport, D.; Furche, F. Analytical time-dependent density functional derivative methods within the RI-J approximation, an approach to excited states of large molecules. J. Chem. Phys. 2005, 122, 064105.
Geometric Derivatives of Excitation Energies Using SCF and DFT. C Van Caillie, R D Amos, Chem. Phys. Le. 308van Caillie, C.; Amos, R. D. Geometric Derivatives of Excitation Energies Using SCF and DFT. Chem. Phys. Le . 1999, 308, 249-255.
Excited State Geometry Optimizations by Analytical Energy Gradient of Long-Range Corrected Time-Dependent Density Functional eory. M Chiba, T Tsuneda, K Hirao, J. Chem. Phys. 124144106Chiba, M.; Tsuneda, T.; Hirao, K. Excited State Geometry Optimizations by Analytical Energy Gradient of Long-Range Corrected Time-Dependent Density Functional eory. J. Chem. Phys. 2006, 124, 144106.
Assessment of the Coulomb-a enuated exchange-correlation energy functional. M J G Peach, T Helgaker, P Salek, T W Keal, O B Lutnaes, D J Tozer, N C Handy, Phys. Chem. Chem. Phys. 8Peach, M. J. G.; Helgaker, T.; Salek, P.; Keal, T. W.; Lutnaes, O. B.; Tozer, D. J.; Handy, N. C. Assessment of the Coulomb-a enuated exchange-correlation energy functional. Phys. Chem. Chem. Phys. 2006, 8, 558-562.
Assessment of Functionals for Optical 0-0 Transitions in Solvated Dyes. D Jacquemin, A Planchat, C Adamo, B Mennucci, Td-Dft, J. Chem. eory Comput. 8Jacquemin, D.; Planchat, A.; Adamo, C.; Mennucci, B. A TD-DFT Assess- ment of Functionals for Optical 0-0 Transitions in Solvated Dyes. J. Chem. eory Comput. 2012, 8, 2359-2372.
antum Mechanical Continuum Solvation Models. J Tomasi, B Mennucci, R Cammi, 16092826Chem. Rev. 105Tomasi, J.; Mennucci, B.; Cammi, R. antum Mechanical Continuum Solvation Models. Chem. Rev. 2005, 105, 2999-3094, PMID: 16092826.
Linear response theory for the polarizable continuum model. R Cammi, B Mennucci, J. Chem. Phys. 110Cammi, R.; Mennucci, B. Linear response theory for the polarizable con- tinuum model. J. Chem. Phys. 1999, 110, 9877-9886.
Time-dependent density functional theory for molecules in liquid solutions. M Cossi, V Barone, J. Chem. Phys. 115Cossi, M.; Barone, V. Time-dependent density functional theory for molecules in liquid solutions. J. Chem. Phys. 2001, 115, 4708-4717.
Formation and relaxation of excited states in solution: A new time dependent polarizable continuum model based on time dependent density functional theory. M Caricato, B Mennucci, J Tomasi, F Ingrosso, R Cammi, S Corni, G Scalmani, J. Chem. Phys. 124520Caricato, M.; Mennucci, B.; Tomasi, J.; Ingrosso, F.; Cammi, R.; Corni, S.; Scalmani, G. Formation and relaxation of excited states in solution: A new time dependent polarizable continuum model based on time dependent density functional theory. J. Chem. Phys. 2006, 124, 124520.
A state-speci c polarizable continuum model time dependent density functional theory method for excited state calculations in solution. R Improta, V Barone, G Scalmani, M J Frisch, J. Chem. Phys. 54103Improta, R.; Barone, V.; Scalmani, G.; Frisch, M. J. A state-speci c polariz- able continuum model time dependent density functional theory method for excited state calculations in solution. J. Chem. Phys. 2006, 125, 054103.
Computation of Accurate Excitation Energies for Large Organic Molecules with Double-Hybrid Density Functionals. L Goerigk, J Moellmann, S Grimme, Phys. Chem. Chem. Phys. 11Goerigk, L.; Moellmann, J.; Grimme, S. Computation of Accurate Excita- tion Energies for Large Organic Molecules with Double-Hybrid Density Functionals. Phys. Chem. Chem. Phys. 2009, 11, 4611-4620.
Assessment of TD-DFT Methods and of Various Spin Scaled CIS n D and CC2 Versions for the Treatment of Low-Lying Valence Excitations of Large Organic Dyes. L Goerigk, S Grimme, J. Chem. Phys. 184103Goerigk, L.; Grimme, S. Assessment of TD-DFT Methods and of Various Spin Scaled CIS n D and CC2 Versions for the Treatment of Low-Lying Va- lence Excitations of Large Organic Dyes. J. Chem. Phys. 2010, 132, 184103.
Is the Tamm-Danco Approximation Reliable for the Calculation of Absorption and Fluorescence Band Shapes?. A Chantzis, A D Laurent, C Adamo, D Jacquemin, J. Chem. eory Comput. 9Chantzis, A.; Laurent, A. D.; Adamo, C.; Jacquemin, D. Is the Tamm-Danco Approximation Reliable for the Calculation of Absorption and Fluorescence Band Shapes? J. Chem. eory Comput. 2013, 9, 4517-4525.
Performance of an Optimally Tuned Range-Separated Hybrid Functional for 0-0 Electronic Excitation Energies. D Jacquemin, B Moore, A Planchat, C Adamo, J Autschbach, J. Chem. eory Comput. 10Jacquemin, D.; Moore, B.; Planchat, A.; Adamo, C.; Autschbach, J. Per- formance of an Optimally Tuned Range-Separated Hybrid Functional for 0-0 Electronic Excitation Energies. J. Chem. eory Comput. 2014, 10, 1677-1685.
Electronic Band Shapes Calculated with Optimally Tuned Range-Separated Hybrid Functionals. B Moore, A Charaf-Eddin, A Planchat, C Adamo, J Autschbach, D Jacquemin, J. Chem. eory Comput. 10Moore, B.; Charaf-Eddin, A.; Planchat, A.; Adamo, C.; Autschbach, J.; Jacquemin, D. Electronic Band Shapes Calculated with Optimally Tuned Range-Separated Hybrid Functionals. J. Chem. eory Comput. 2014, 10, 4599-4608.
ADC(2), CC2, and BSE/GW formalisms for 80 Real-Life Compounds. D Jacquemin, I Duchemin, X Blase, 0 Energies Using Hybrid Schemes: Benchmarks of TD-DFT, CIS(D). 11Jacquemin, D.; Duchemin, I.; Blase, X. 0-0 Energies Using Hybrid Schemes: Benchmarks of TD-DFT, CIS(D), ADC(2), CC2, and BSE/GW formalisms for 80 Real-Life Compounds. J. Chem. eory Comput. 2015, 11, 5340-5359.
Tuned Range-Separated Hybrids in Density Functional eory. R Baer, E Livshits, U Salzner, Ann. Rev. Phys. Chem. Baer, R.; Livshits, E.; Salzner, U. Tuned Range-Separated Hybrids in Density Functional eory. Ann. Rev. Phys. Chem. 2010, 61, 85-109.
Delocalization Error and "Functional Tuning" in Kohn-Sham Calculations of Molecular Properties. J Autschbach, M Srebro, Acc. Chem. Res. 47Autschbach, J.; Srebro, M. Delocalization Error and "Functional Tuning" in Kohn-Sham Calculations of Molecular Properties. Acc. Chem. Res. 2014, 47, 2592)-2602.
Nonempirical Double-Hybrid Functionals: An E ective Tool for Chemists. E Brémond, I Cio Ni, J C Sancho-García, C Adamo, Acc. Chem. Res. 49Brémond, E.; Cio ni, I.; Sancho-García, J. C.; Adamo, C. Nonempirical Double-Hybrid Functionals: An E ective Tool for Chemists. Acc. Chem. Res. 2016, 49, 1503-1513.
Speed-Up of the Excited-State Benchmarking: Double-Hybrid Density Functionals as Test Cases. E Brémond, M Savarese, A J Perez-Jimenez, J C Sancho-Garcia, C Adamo, J. Chem. eory Comput. 13Brémond, E.; Savarese, M.; Perez-Jimenez, A. J.; Sancho-Garcia, J. C.; Adamo, C. Speed-Up of the Excited-State Benchmarking: Double-Hybrid Density Functionals as Test Cases. J. Chem. eory Comput. 2017, 13, 5539-5551.
| [] |
DOI: 10.1016/j.nuclphysbps.2015.06.025
arXiv: 1412.3698 (https://arxiv.org/pdf/1412.3698v1.pdf)
11 Dec 2014
J. I. Crespo-Anadón
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), 28040 Madrid, Spain
Double Chooz: Latest results
Keywords: reactor; neutrino; oscillation; θ13
The latest results from the Double Chooz experiment on the neutrino mixing angle $\theta_{13}$ are presented. A detector located at an average distance of 1050 m from the two reactor cores of the Chooz nuclear power plant has accumulated a live time of 467.90 days, corresponding to an exposure of 66.5 GW-ton-year (reactor power × detector mass × live time). A revised analysis has boosted the signal efficiency and reduced the backgrounds and systematic uncertainties compared to previous publications, paving the way for the two-detector phase. The measured $\sin^2 2\theta_{13} = 0.090^{+0.032}_{-0.029}$ is extracted from a fit to the energy spectrum. A deviation from the prediction above a visible energy of 4 MeV is found, consistent with an unaccounted-for reactor flux effect, which does not affect the $\theta_{13}$ result. A consistent value of $\theta_{13}$ is measured in a rate-only fit to the number of observed candidates as a function of the reactor power, confirming the robustness of the result.
Introduction
Neutrino oscillations in the standard three-flavor framework are described by three mixing angles, three mass-squared differences (two of which are independent) and one CP-violating phase. Except for the phase, which remains unknown, all the other parameters have been measured [1]. θ 13 was the last to be measured, by short-baseline reactor and long-baseline accelerator experiments [2,3,4,5,6,7,8,9].
For the energies and distances relevant to Double Chooz, the oscillation probability is well approximated by the two-flavor case. Thus, the survival probability reads:
P(ν̄_e → ν̄_e) = 1 − sin²(2θ_13) sin²(1.27 Δm²_31[eV²] L[m] / E_ν[MeV])    (1)
So θ 13 can be measured from the deficit in the electron antineutrino flux emitted by the reactors. In this analysis, Δm²_31 = 2.44 +0.09 −0.10 × 10⁻³ eV², taken from [10], assuming normal hierarchy.
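As an illustration, Eq. (1) is straightforward to evaluate numerically. The helper below is a sketch (the function name and the example baseline/energy are ours), with defaults set to the best-fit sin²(2θ_13) and the Δm²_31 value quoted above:

```python
import math

def survival_probability(e_nu_mev, baseline_m,
                         sin2_2theta13=0.090, dm2_31_ev2=2.44e-3):
    """Two-flavor electron-antineutrino survival probability, Eq. (1).

    e_nu_mev: antineutrino energy in MeV; baseline_m: distance in metres.
    Defaults are the best-fit sin^2(2 theta_13) and the Delta m^2_31
    value used in the analysis (normal hierarchy assumed).
    """
    phase = 1.27 * dm2_31_ev2 * baseline_m / e_nu_mev
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# A 4 MeV antineutrino at the ~1050 m far-detector baseline survives
# with probability close to one, i.e. a few-percent deficit.
p_far = survival_probability(4.0, 1050.0)
```

For these values the deficit is of order 5%, which is the size of the effect the experiment must resolve.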
Antineutrinos are detected through the inverse beta-decay (IBD) process on protons, ν e + p → e + + n, which provides two signals: a prompt signal in the range of 1–10 MeV is given by the positron kinetic energy and the resulting γs from its annihilation. This visible energy is related to the ν e energy by E vis ≈ E ν − 0.8 MeV. A delayed signal is given by the γs released in the radiative capture of the neutron by a Gd or H nucleus. The results presented here correspond only to captures on Gd, which occur after a mean time of 31.1 µs and release a total energy of 8 MeV, far above natural radioactivity energies. The coincidence of these two signals grants the experiment a powerful background suppression.
The Double Chooz experiment
Double Chooz (DC) is a 2-detector experiment located in the surroundings of the Chooz nuclear power plant (France), which has two pressurized water reactor cores producing 4.25 GW_th each. The Near Detector (ND), placed at ∼ 400 m from the cores, has a 120 m.w.e. overburden and is currently being commissioned. The Far Detector (FD), placed at ∼ 1050 m from the cores, has a 300 m.w.e. overburden and its data are used here. The 2-detector concept allows θ 13 to be extracted with high precision from the relative comparison of the ν e flux at the two detectors. Because the detectors are built identical, all the uncertainties correlated between them cancel.
Since the ND was not yet operational for this analysis, an accurate reactor flux simulation was needed to obtain the ν e prediction. Électricité de France provides the instantaneous thermal power of each reactor, and the location and initial composition of the reactor fuel. The simulation of the evolution of the fission rates and the associated uncertainties is done with MURE [11,12], which has been benchmarked against another code [13]. The reference ν e spectra for 235 U, 239 Pu and 241 Pu are computed from their β spectrum [14,15,16], while [17] is used for 238 U for the first time. The short-baseline Bugey4 ν e rate measurement [18] is used to suppress the normalization uncertainty on the ν e prediction, correcting for the different fuel composition in the two experiments. The systematic uncertainty on the ν e rate amounts to 1.7%, dominated by the 1.4% of the Bugey4 measurement. Had the Bugey4 measurement not been included, the uncertainty would have been 2.8%. The DC detector is composed of four concentric cylindrical vessels (see figure 1). The innermost volume, the ν-target (NT), is an 8 mm thick acrylic vessel (UV to visible transparent) filled with 10.3 m³ of liquid scintillator loaded with Gd (1 g/l) to enhance neutron captures. The γ-catcher (GC), a 55 cm thick layer of Gd-free liquid scintillator enclosed in a 12 mm thick acrylic vessel, surrounds the NT to maximize the energy containment. Surrounding the GC is the buffer, a 105 cm thick layer of non-scintillating mineral oil contained in a stainless steel tank where 390 low-background 10-inch photomultiplier tubes (PMT) are installed; it shields from the radioactivity of the PMTs and the surrounding rock. The elements described so far constitute the inner detector (ID).
Enclosing the ID and optically separated from it, the inner veto (IV), a 50 cm thick layer of liquid scintillator, serves as a cosmic muon veto and as an active shield against incoming fast neutrons, viewed by 78 8-inch PMTs positioned on its walls. A 15 cm thick demagnetized steel shield protects the whole detector from external γ-rays. The outer veto (OV), two orthogonally aligned layers of plastic scintillator strips placed on top of the detector, allows a 2D reconstruction of impinging muons. An upper OV covers the chimney, which is used for filling the volumes and for the insertion of calibration sources (encapsulated radioactive sources of 137 Cs, 68 Ge, 60 Co and 252 Cf, and a laser). Attached to the ID and IV PMTs, a multi-wavelength LED-fiber light injection system is used to periodically calibrate the readout electronics.
Waveforms from all ID and IV PMTs are digitized and recorded by dead-time free flash-ADC electronics.
DC has pioneered the measurement of θ 13 using the ν e spectral information because of its exhaustive treatment of the energy scale, which is applied in parallel to the recorded data and the Monte Carlo (MC) simulation. A linearized photoelectron (PE) calibration produces a PE number in each PMT which has been corrected for dependencies on the gain non-linearity and time. A uniformity calibration corrects for the spatial dependence of the PE, equalizing the response within the detector. The conversion from PE to energy units is obtained from the analysis of neutron captures on H from a 252 Cf calibration source deployed at the center of the detector. A stability calibration is applied to the data to remove the remaining time variation by analyzing the evolution of the H capture peak from spallation neutrons, which is also cross-checked at different energies using the Gd capture peak and the α decays of 212 Po. Two further calibrations are applied to the MC to correct for the energy non-linearity relative to the data: the first is applied to every event and arises from the modeling of the readout systems and the charge integration algorithm; the second, which is only applied to positrons, is associated with the scintillator modeling. The total systematic uncertainty in the energy scale amounts to 0.74%, improving the previous one [2] by a factor of 1.5.
Neutrino selection
The minimum energy for a selected event is E vis > 0.4 MeV, where the trigger is already 100% efficient. Events with E vis > 20 MeV or E IV > 16 MeV are rejected and tagged as muons, and a 1 ms veto is imposed after them to also reject muon-induced events. Light noise is a background caused by spontaneous light emission from some PMT bases; it is avoided by requiring the selected events to satisfy all the following cuts: i) q_max, the maximum charge recorded by a single PMT, must be less than or equal to 12% of the total charge of the event; ii) (1/N) Σ_{i=0}^{N} (q_max − q_i)²/q_i < 3 × 10⁴ charge units, where N is the number of PMTs located at less than 1 m from the PMT with the maximum charge; iii) σ_t < 36 ns or σ_q > (464 − 8σ_t) charge units, where σ_t and σ_q are the standard deviations of the PMT hit-time and integrated-charge distributions, respectively. Events passing the previous cuts are used to search for coincidences, which must satisfy the following conditions: the prompt E vis must be in (0.5, 20) MeV, the delayed E vis in (4, 10) MeV, the correlation time between the signals must be in (0.5, 150) µs, and the distance between reconstructed vertex positions must be less than 1 m. In addition, only the delayed signal can be in a time window spanning 200 µs before and 600 µs after the prompt signal.
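The cuts above can be sketched as plain predicate functions. This is a simplified illustration (function names and argument conventions are ours): energies in MeV, times in ns or µs as noted, distances in metres, and charges in the detector's charge units.

```python
def passes_light_noise_cuts(qmax, qtot, q_near, sigma_t, sigma_q):
    """Light-noise rejection cuts i)-iii) of the selection (a sketch).

    qmax: maximum single-PMT charge; qtot: total event charge;
    q_near: charges of the PMTs within 1 m of the max-charge PMT;
    sigma_t, sigma_q: std. dev. of PMT hit times (ns) and charges.
    """
    # i) the max-charge PMT must carry at most 12% of the total charge
    if qmax > 0.12 * qtot:
        return False
    # ii) charge spread around the hottest PMT must stay below 3e4 units
    n = len(q_near)
    spread = sum((qmax - q) ** 2 / q for q in q_near) / n
    if spread >= 3e4:
        return False
    # iii) time/charge dispersion condition
    return sigma_t < 36.0 or sigma_q > (464.0 - 8.0 * sigma_t)

def is_ibd_coincidence(e_prompt, e_delayed, dt_us, dr_m):
    """Prompt/delayed IBD coincidence windows used in the selection."""
    return (0.5 < e_prompt < 20.0 and 4.0 < e_delayed < 10.0
            and 0.5 < dt_us < 150.0 and dr_m < 1.0)
```

The isolation requirement (no extra trigger in the 200 µs before / 600 µs after the prompt signal) would be applied on top of these predicates in a real event loop.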
Background measurement and vetoes
The backgrounds are non-neutrino processes which mimic the characteristic coincidence of the IBD.
Cosmogenic isotopes. Unstable isotopes are produced by spallation of nuclei inside the detector by cosmic muons. Products such as 9 Li and 8 He have a decay mode in which a neutron is emitted along with an electron, indistinguishable from an IBD interaction. Moreover, the lifetimes of 9 Li and 8 He are 257 ms and 172 ms, respectively, so the 1 ms after-muon veto is not effective. A cut on a likelihood based on the event distance to the muon track and the number of neutron candidates following the muon within 1 ms rejects 55% of the 9 Li and 8 He events. The 9 Li/ 8 He contamination is determined from fits to the time correlation between the IBD candidates and the previous muon. The estimated remaining 9 Li/ 8 He background in the IBD candidate sample is 0.97 +0.41 −0.16 events/day. The events vetoed by the likelihood cut are used to build the prompt energy spectrum (see figure 3), which also includes captures on H to enhance the statistics.
Fast neutrons and stopping muons. Fast neutrons originating from spallation by muons in the surrounding rock can enter the detector and reproduce the IBD signature by producing recoil protons (prompt) and being captured later (delayed). Stopping muons are muons which stop inside the detector, giving the prompt signal, and then decay, producing a Michel electron that fakes the delayed signal. In order to reject this background, events fulfilling at least one of the following conditions are discarded: (i) events with an OV trigger coincident with the prompt signal; (ii) events whose delayed signal is not consistent with a point-like vertex inside the detector; (iii) events in which the IV shows activity correlated with the prompt signal. The three vetoes together reject 90% of the events with a prompt E vis > 12 MeV, where this background is dominant. The veto (iii) is used to extract the fast neutron/stopping muon prompt energy spectrum, which is found to be flat. This shape is further confirmed by using the other vetoes. The rate of this background in the candidate sample is estimated from an IBD-like coincidence search in which the prompt signal has an energy in the (20, 30) MeV region, and it amounts to 0.604 ± 0.051 events/day. Accidental background. These are random coincidences of two triggers satisfying the selection criteria. Because of their random nature, their rate and spectrum (see figure 3) can be studied with great precision from the data by an off-time coincidence search, identical to the IBD selection except for the correlated time window, which is opened more than 1 s after the prompt signal. The use of multiple windows allows high statistics to be collected. The background rate is measured to be 0.0701 ± 0.0003 (stat) ± 0.026 (syst) events/day.
Other backgrounds, such as the 13 C(α, n) 16 O reaction or the 12 B decay, were considered but found to have a negligible rate. Table 1 summarizes the estimated background rates and the reduction with respect to the previous publication [2].
IBD detection efficiency
A dedicated effort was carried out to decrease the detection efficiency uncertainty. This signal normalization uncertainty is dominated by the neutron detection uncertainty, which has been reduced from 0.96% in [2] to the current 0.54% in [5]. This was achieved thanks to the reduction of the volume-wise selection systematic uncertainty by using two new methods to estimate the neutron detection efficiency in the full Target. The first one uses the neutrons produced by the IBD interactions, which are homogeneously distributed in the detector, to produce a direct measurement of the volume-wide efficiency. The second method exploits the symmetry shown by the neutron detection efficiency, in which the data from the 252 Cf source deployed along the vertical coordinate can be extrapolated to the radial coordinate. A further reduction was obtained in the uncertainty arising from the spill-in/spill-out currents (neutron migration into and out of the NT, respectively), which are sensitive to low-energy neutron physics. It was decreased by comparing the custom DC Geant4 simulation, which includes an analytical modeling of the impact of molecular bonds on low-energy neutrons, to Tripoli4, a MC code with an especially accurate model of low-energy neutron physics.
After accounting for the uncertainties introduced by the background vetoes and the scintillator proton number, the detection-related normalization uncertainty totals 0.6%.
Oscillation analyses
In a live-time of 460.67 days with at least one reactor running, 17351 IBD candidates were observed. The prediction, including backgrounds, in case of no oscillation was 18290 +370 −330. The deficit is understood as a consequence of neutrino oscillation. In addition, a live-time of 7.24 days with the two reactors off was collected [19], in which 7 IBD candidates were observed, whereas the prediction including the residual ν e was 12.9 +3.1 −1.4. The reactor-off measurement makes it possible to test the background model and to constrain the total background rate in the oscillation analysis. It is a unique advantage of DC, which has only two reactors.
The normalization uncertainties of the signal and the background are summarized in table 2, showing also the improvement with respect to the previous analysis [2].
Reactor rate modulation analysis
From the linear correlation between the observed and the expected candidate rates at different reactor conditions, a fit to a straight line simultaneously determines sin 2 2θ 13 (proportional to the slope) and the total background rate B (intercept) [4]. Including the prediction of the total background, B = 1.64 +0.41 −0.17 events/day, the best fit is found at sin 2 2θ 13 = 0.090 +0.034 −0.035 and B = 1.56 +0.18 −0.16 events/day (see figure 2). A background-model-independent measurement of θ 13 is possible when the background constraint is removed and B is treated as a free parameter. The best fit (χ 2 min /d.o.f. = 1.9/5) corresponds to sin 2 2θ 13 = 0.060 ± 0.039 and B = 0.93 +0.43 −0.36 events/day, consistent with the background-constrained fit.
The impact of the reactor-off data is tested by removing the reactor-off point (with the background rate still unconstrained). In this case, the best fit (χ 2 min /d.o.f. = 1.3/4) gives sin 2 2θ 13 = 0.089 ± 0.052 and B = 1.56 ± 0.86 events/day, which confirms the improvement granted by the reactor-off measurement.
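At its core the reactor rate modulation fit is an ordinary least-squares straight line through the (expected, observed) rate points. The sketch below uses made-up daily rates and a placeholder oscillation factor, so only the mechanics mirror the analysis: the intercept estimates the background rate B, and the rate deficit carries the oscillation amplitude.

```python
def straight_line_fit(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical daily candidate rates (events/day): expected under no
# oscillation vs observed; the first point mimics reactor-off data.
r_expected = [0.0, 10.0, 20.0, 30.0, 40.0]
r_observed = [1.5, 10.4, 19.3, 28.1, 37.0]

slope, background = straight_line_fit(r_expected, r_observed)

# Converting the rate deficit (1 - slope) into sin^2(2 theta_13)
# requires the baseline-averaged oscillation factor; the value used
# here is a placeholder, not taken from the paper.
avg_osc_factor = 0.7  # placeholder
sin2_2theta13 = (1.0 - slope) / avg_osc_factor
```

The real analysis additionally propagates the rate uncertainties point by point and, optionally, constrains the intercept with the predicted background rate, as described above.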
Rate + shape analysis
This analysis measures sin 2 2θ 13 by minimizing a χ 2 in which the prompt energy spectrum of the observed IBD candidates is compared with the prediction. A covariance matrix accounts for the statistical and systematic (reactor flux, MC normalization, 9 Li/ 8 He spectrum shape, accidental statistical) uncertainties in each bin and for the bin-to-bin correlations. A set of nuisance parameters accounts for the other uncertainty sources: ∆m 2 31, the number of residual ν e when the reactors are off (1.57 ± 0.47 events), the 9 Li/ 8 He and fast neutron/stopping muon rates, the systematic component of the uncertainty on the accidental background rate, and the energy scale. The best fit (χ 2 min /d.o.f. = 52.2/40) is found at sin 2 2θ 13 = 0.090 +0.032 −0.029 (see figures 3 and 4). In addition to the oscillation-induced deficit visible in the bottom panel of figure 4, a spectrum distortion is observed above 4 MeV. The excess has been found to be proportional to the reactor power, disfavoring a background origin. Considering only the IBD interaction, the structure is consistent with an unaccounted reactor ν e flux effect, which does not significantly affect θ 13. The good agreement with the shape-independent reactor rate modulation result demonstrates this. The existence of this distortion has later been confirmed by the Daya Bay and RENO reactor experiments. Figure 5 shows the projected sensitivity of the Rate + Shape analysis using the IBD neutrons captured in Gd. A 0.2% relative detection efficiency uncertainty is assumed, the expected remnant after the cancellation of the correlated detection uncertainties due to the use of identical detectors. The portion of the reactor flux uncertainty uncorrelated between detectors is 0.1% (thanks to the simple experimental setup with two reactors). Backgrounds in the Near Detector are scaled from the Far Detector accounting for the different muon flux. Comparing the curves from the previous [2] and the current analysis [5], the improvement gained with the new techniques is clear, and it is expected to improve further (e.g. the systematic uncertainty on the background rate is limited by statistics).
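Schematically, the quantity minimized is χ² = dᵀ C⁻¹ d for the binned data-minus-prediction vector d and covariance matrix C. The toy function below (ours, two bins only, with an explicit 2×2 matrix inverse) shows the mechanics; the real fit uses the full binned covariance plus pull terms for the nuisance parameters.

```python
def chi2_with_covariance(observed, predicted, cov):
    """Chi-square for two binned spectra with a full covariance matrix:
    chi2 = d^T C^{-1} d, written out for two bins.  A realistic fit
    would use many bins and a numerical matrix inverse instead."""
    d0 = observed[0] - predicted[0]
    d1 = observed[1] - predicted[1]
    a, b = cov[0][0], cov[0][1]
    c, e = cov[1][0], cov[1][1]
    det = a * e - b * c
    # inverse of [[a, b], [c, e]] is [[e, -b], [-c, a]] / det,
    # so d^T C^-1 d expands to the quadratic form below
    return (e * d0 * d0 - (b + c) * d0 * d1 + a * d1 * d1) / det
```

With a diagonal (uncorrelated) covariance this reduces to the familiar sum of squared pulls; off-diagonal terms encode the bin-to-bin correlations from the reactor flux and normalization systematics.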
Conclusion
Double Chooz has presented improved measurements of θ 13 corresponding to 467.90 days of live-time of a single detector using the neutrons captured in Gd. The most precise value is extracted from a fit to the observed positron energy spectrum: sin 2 2θ 13 = 0.090 +0.032 −0.029 . A consistent result is found by a fit to the observed candidate rates at different reactor powers: sin 2 2θ 13 = 0.090 +0.034 −0.035 . A distortion in the spectrum is observed above 4 MeV, with an excess correlated to the reactor power. It has no significant impact on the θ 13 result.
As a result of the improved analysis techniques, Double Chooz will reach a 15% precision on sin 2 2θ 13 in 3 years of data taking with two detectors, with the potential to improve to 10%.
Figure 1: Double Chooz far detector design.
Figure 2: Observed versus expected candidate daily rates for different reactor powers. The prediction under the null oscillation hypothesis (dotted line) and the best fit with the background rate constrained by its uncertainty (blue dashed line) are shown. The first point corresponds to the reactor-off data.
Figure 3: Measured prompt energy spectrum (black points with statistical error bars), superimposed on the no-oscillation prediction (blue dashed line) and on the best fit (red solid line), with the stacked best-fit backgrounds added.
Figure 4: Top: Measured prompt energy spectrum with best-fit backgrounds subtracted (black points with statistical error bars) superimposed on the no-oscillation prediction (blue dashed line) and on the best fit (red solid line). Bottom: Ratio of data to the no-oscillation prediction (black points with statistical error bars) superimposed on the best-fit ratio (red solid line). The gold band represents the systematic uncertainty on the best-fit prediction.
Figure 5: Double Chooz projected sensitivity using the IBD neutrons captured in Gd, as a function of total years of data taking. The previous analysis [2], with only the FD (black dashed line) and adding the ND (black solid line), and the current analysis [5], with only the FD (blue dashed line) and adding the ND (blue solid line), are shown. The shaded region represents the range of improvement expected by reducing the systematic uncertainty, bounded from below by considering only the reactor systematic uncertainty.
Table 1: Summary of background rate estimations. [5]/[2] shows the reduction of the background rate in [5] with respect to the previous publication [2], after correcting for the different prompt energy range.
Table 2: Signal and background normalization uncertainties relative to the signal prediction. [5]/[2] shows the reduction of the uncertainty with respect to the previous publication [2].
K. Olive, et al., Review of Particle Physics, Chin. Phys. C38 (2014) 090001. doi:10.1088/1674-1137/38/9/090001.
Y. Abe, et al., Reactor electron antineutrino disappearance in the Double Chooz experiment, Phys. Rev. D86 (2012) 052008. arXiv:1207.6632, doi:10.1103/PhysRevD.86.052008.
Y. Abe, et al., First Measurement of θ 13 from Delayed Neutron Capture on Hydrogen in the Double Chooz Experiment, Phys. Lett. B723 (2013) 66-70. arXiv:1301.2948, doi:10.1016/j.physletb.2013.04.050.
Y. Abe, et al., Background-independent measurement of θ 13 in Double Chooz, Phys. Lett. B735 (2014) 51-56. arXiv:1401.5981, doi:10.1016/j.physletb.2014.04.045.
Y. Abe, et al., Improved measurements of the neutrino mixing angle θ 13 with the Double Chooz detector, JHEP 1410 (2014) 86. arXiv:1406.7763, doi:10.1007/JHEP10(2014)086.
F. An, et al., Spectral measurement of electron antineutrino oscillation amplitude and frequency at Daya Bay, Phys. Rev. Lett. 112 (2014) 061801. arXiv:1310.6732, doi:10.1103/PhysRevLett.112.061801.
J. Ahn, et al., Observation of Reactor Electron Antineutrino Disappearance in the RENO Experiment, Phys. Rev. Lett. 108 (2012) 191802. arXiv:1204.0626, doi:10.1103/PhysRevLett.108.191802.
P. Adamson, et al., Electron neutrino and antineutrino appearance in the full MINOS data sample, Phys. Rev. Lett. 110 (2013) 171801. arXiv:1301.4581, doi:10.1103/PhysRevLett.110.171801.
K. Abe, et al., Observation of Electron Neutrino Appearance in a Muon Neutrino Beam, Phys. Rev. Lett. 112 (2014) 061802. arXiv:1311.4750, doi:10.1103/PhysRevLett.112.061802.
P. Adamson, et al., Combined analysis of ν µ disappearance and ν µ → ν e appearance in MINOS using accelerator and atmospheric neutrinos, Phys. Rev. Lett. 112 (2014) 191801. arXiv:1403.0867, doi:10.1103/PhysRevLett.112.191801.
O. Meplan, et al., in ENC 2005: European Nuclear Conference; Nuclear power for the XXIst century: from basic research to high-tech industry (2005).
C. Jones, et al., Reactor Simulation for Antineutrino Experiments using DRAGON and MURE, Phys. Rev. D86 (2012) 012001. arXiv:1109.5379, doi:10.1103/PhysRevD.86.012001.
F. Von Feilitzsch, A. Hahn, K. Schreckenbach, Experimental beta spectra from Pu-239 and U-235 thermal neutron fission products and their correlated anti-neutrinos spectra, Phys. Lett. B118 (1982) 162-166. doi:10.1016/0370-2693(82)90622-0.
K. Schreckenbach, G. Colvin, W. Gelletly, F. Von Feilitzsch, Determination of the anti-neutrino spectrum from U-235 thermal neutron fission products up to 9.5 MeV, Phys. Lett. B160 (1985) 325-330. doi:10.1016/0370-2693(85)91337-1.
A. Hahn, et al., Anti-neutrino Spectra From 241 Pu and 239 Pu Thermal Neutron Fission Products, Phys. Lett. B218 (1989) 365-368. doi:10.1016/0370-2693(89)91598-0.
N. Haag, et al., Experimental Determination of the Antineutrino Spectrum of the Fission Products of 238 U, Phys. Rev. Lett. 112 (2014) 122501. arXiv:1312.5601, doi:10.1103/PhysRevLett.112.122501.
Y. Declais, et al., Study of reactor anti-neutrino interaction with proton at Bugey nuclear power plant, Phys. Lett. B338 (1994) 383-389. doi:10.1016/0370-2693(94)91394-3.
Y. Abe, et al., Direct Measurement of Backgrounds using Reactor-Off Data in Double Chooz, Phys. Rev. D87 (2013) 011102. arXiv:1210.3748, doi:10.1103/PhysRevD.87.011102.
| [] |
[
"Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization"
] | [
"Erick J Canales-Rodríguez \nCentro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain\n",
"Alessandro Daducci \nSignal Processing Lab (LTS5)\nÉcole polytechnique fédérale de Lausanne (EPFL)\nLausanneSwitzerland\n\nUniversity Hospital Center (CHUV)\nUniversity of Lausanne (UNIL)\nLausanneSwitzerland\n",
"4☯Stamatios N Sotiropoulos \nCentre for Functional Magnetic Resonance Imaging of the Brain (FMRIB)\nUniversity of Oxford\nJohn Radcliffe Hospital\nOX39DUOxfordUnited Kingdom\n",
"5☯Emmanuel Caruyer \nCNRS-IRISA (UMR 6074\nInria\nVisAGeS Project-Team\nINSERM\nUniversité de Rennes 1\nCampus de BeaulieuU746, 35042Rennes CedexVisAGeSFrance\n",
"Santiago Aja-Fernández \nLaboratorio de Procesado de Imagen (LPI)\nETSI Telecomunicación\nUniversidad de Valladolid\nValladolidSpain\n",
"Joaquim Radua \nCentro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain\n\nDepartment of Psychosis Studies\nInstitute of Psychiatry, Psychology & Neuroscience\nKing's College London\nUnited Kingdom\n\nDepartment of Clinical Neuroscience\nKarolinska Institutet\nStockholmSweden\n",
"Jesús M Yurramendi Mendizabal \nDepartamento de Ciencia de la Computación e Inteligencial Artificial\nUniversidad del País Vasco\nEuskal Herriko Unibertsitatea\nSpain\n",
"Yasser Iturria-Medina \nMcConnell Brain Imaging Center\nMontreal Neurological Institute\nMcGill University\nMontrealQuebecCanada\n",
"Lester Melie-García \nDepartment of Clinical Neurosciences\nThe Neuroimaging Research Laboratory\nLaboratoire de Recherche en Neuroimagerie: LREN\nUniversity Hospital Center (CHUV)\nLausanneSwitzerland\n",
"Yasser Alemán-Gómez \nCentro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain\n\nDepartamento de Bioingeniería e Ingeniería Aeroespacial\nUniversidad Carlos III de Madrid\nMadridSpain\n\nInstituto de Investigación Sanitaria Gregorio Marañón\nMadridSpain\n",
"Jean-Philippe Thiran \nSignal Processing Lab (LTS5)\nÉcole polytechnique fédérale de Lausanne (EPFL)\nLausanneSwitzerland\n\nUniversity Hospital Center (CHUV)\nUniversity of Lausanne (UNIL)\nLausanneSwitzerland\n",
"Salvador Sarró \nCentro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain\n",
"Edith Pomarol-Clotet \nCentro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain\n",
"Raymond Salvador \nCentro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain\n",
"\nBarcelonaSpain\n"
] | [
"Centro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain",
"Signal Processing Lab (LTS5)\nÉcole polytechnique fédérale de Lausanne (EPFL)\nLausanneSwitzerland",
"University Hospital Center (CHUV)\nUniversity of Lausanne (UNIL)\nLausanneSwitzerland",
"Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB)\nUniversity of Oxford\nJohn Radcliffe Hospital\nOX39DUOxfordUnited Kingdom",
"CNRS-IRISA (UMR 6074\nInria\nVisAGeS Project-Team\nINSERM\nUniversité de Rennes 1\nCampus de BeaulieuU746, 35042Rennes CedexVisAGeSFrance",
"Laboratorio de Procesado de Imagen (LPI)\nETSI Telecomunicación\nUniversidad de Valladolid\nValladolidSpain",
"Centro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain",
"Department of Psychosis Studies\nInstitute of Psychiatry, Psychology & Neuroscience\nKing's College London\nUnited Kingdom",
"Department of Clinical Neuroscience\nKarolinska Institutet\nStockholmSweden",
"Departamento de Ciencia de la Computación e Inteligencial Artificial\nUniversidad del País Vasco\nEuskal Herriko Unibertsitatea\nSpain",
"McConnell Brain Imaging Center\nMontreal Neurological Institute\nMcGill University\nMontrealQuebecCanada",
"Department of Clinical Neurosciences\nThe Neuroimaging Research Laboratory\nLaboratoire de Recherche en Neuroimagerie: LREN\nUniversity Hospital Center (CHUV)\nLausanneSwitzerland",
"Centro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain",
"Departamento de Bioingeniería e Ingeniería Aeroespacial\nUniversidad Carlos III de Madrid\nMadridSpain",
"Instituto de Investigación Sanitaria Gregorio Marañón\nMadridSpain",
"Signal Processing Lab (LTS5)\nÉcole polytechnique fédérale de Lausanne (EPFL)\nLausanneSwitzerland",
"University Hospital Center (CHUV)\nUniversity of Lausanne (UNIL)\nLausanneSwitzerland",
"Centro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain",
"Centro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain",
"Centro de Investigación Biomédica en Red de Salud Mental\nCIBERSAM\nC/Dr Esquerdo, 4628007MadridSpain",
"BarcelonaSpain"
] | [] | Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors, such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both the RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques.
The above findings were also verified in human brain data. Fig 4. Main peaks in the 45-degrees phantom data. Main peaks extracted from the fiber ODFs estimated in the phantom data with inter-fiber angle equal to 45 degrees and Rician noise with a SNR = 15 are shown. Results are based on reconstructions using 200 iterations. Peaks are visualized as thin cylinders. | 10.1371/journal.pone.0138910 | null | 17,159,984 | 1410.6353 | 07f7d90641b6974652f60039619fb50a470f2d90 |
Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization
October 15, 2015 Published: October 15, 2015
Erick J. Canales-Rodríguez: Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), C/ Dr. Esquerdo, 46, 28007 Madrid, Spain
Alessandro Daducci: Signal Processing Lab (LTS5), École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland; University Hospital Center (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
Stamatios N. Sotiropoulos ☯: Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), University of Oxford, John Radcliffe Hospital, OX3 9DU, Oxford, United Kingdom
Emmanuel Caruyer ☯: CNRS-IRISA (UMR 6074), Inria, VisAGeS Project-Team, INSERM U746, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France
Santiago Aja-Fernández: Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
Joaquim Radua: CIBERSAM, C/ Dr. Esquerdo, 46, 28007 Madrid, Spain; Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, United Kingdom; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
Jesús M. Yurramendi Mendizabal: Departamento de Ciencia de la Computación e Inteligencia Artificial, Universidad del País Vasco / Euskal Herriko Unibertsitatea, Spain
Yasser Iturria-Medina: McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
Lester Melie-García: Department of Clinical Neurosciences, The Neuroimaging Research Laboratory (Laboratoire de Recherche en Neuroimagerie, LREN), University Hospital Center (CHUV), Lausanne, Switzerland
Yasser Alemán-Gómez: CIBERSAM, C/ Dr. Esquerdo, 46, 28007 Madrid, Spain; Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
Jean-Philippe Thiran: Signal Processing Lab (LTS5), École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland; University Hospital Center (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
Salvador Sarró: CIBERSAM, C/ Dr. Esquerdo, 46, 28007 Madrid, Spain
Edith Pomarol-Clotet: CIBERSAM, C/ Dr. Esquerdo, 46, 28007 Madrid, Spain
Raymond Salvador: CIBERSAM, C/ Dr. Esquerdo, 46, 28007 Madrid, Spain; Barcelona, Spain
RESEARCH ARTICLE
Citation: Canales-Rodríguez EJ, Daducci A, Sotiropoulos SN, Caruyer E, Aja-Fernández S, Radua J, et al. (2015) Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization. PLoS ONE 10(10): e0138910. doi: 10.1371/journal.pone.0138910
Received: May 13, 2015; Accepted: September 6, 2015; Published: October 15, 2015
Editor: Alexander Leemans, University Medical Center Utrecht, Netherlands
Affiliation 1: FIDMAG Germanes Hospitalàries, C/ Dr. Antoni Pujadas, 38, 08830, Sant Boi de Llobregat. ☯ These authors contributed equally to this work.
Data Availability Statement: All relevant data are within the paper and its Supporting Information files. The implementation of the new methodology is freely available as part of the 'HARDI tools' web: http://neuroimagen.es/webs/hardi_tools.
Fig 4. Main peaks in the 45-degrees phantom data. Main peaks extracted from the fiber ODFs estimated in the phantom data with inter-fiber angle equal to 45 degrees and Rician noise with a SNR = 15 are shown. Results are based on reconstructions using 200 iterations. Peaks are visualized as thin cylinders.
Introduction
After decades of developments in diffusion Magnetic Resonance Imaging (MRI), the successful implementation of a variety of advanced methods has shed light on the complex patterns of brain organization present at micro [1] and macroscopic scales [2][3][4]. Among these methods, Diffusion Tensor Imaging (DTI) [5] has become a classic in both clinical and research studies. DTI can deliver quantitative results, it may be easily implemented in any clinical MRI system and, thanks to its short acquisition time, it may be suitable for studying a wide range of brain diseases. Unfortunately, it is now well recognized that due to its simplistic assumptions, the DTI model does not adequately describe diffusion processes in areas of complex tissue organization, like in areas with kissing, branching or crossing fibers [6].
Such limitations in the DTI approach have prompted the recent development of numerous sampling protocols, diffusion models and reconstruction techniques (e.g., see [7][8][9] and references therein). While some of these techniques have been based on model-free methods, including q-ball imaging [10] and its extensions [11][12][13][14][15][16][17], diffusion orientation transforms [18,19], diffusion spectrum imaging [20] and related q-space techniques [21][22][23][24][25][26], other approaches have relied on parametric diffusion models using higher-order tensors [27][28][29] and multiple second-order diffusion tensors [6]. In the latter group, different numerical techniques involving gradient descent [6], Bayesian inference [30][31][32] and algorithms inspired by compressed sensing theory [33][34][35] have been applied to solve the resulting inverse problems.
Spherical Deconvolution (SD) is a class of multi-compartment reconstruction techniques that can be implemented using both parametric and nonparametric signal models [36][37][38][39][40][41][42][43][44][45][46][47][48][49]. SD methods have become very popular owing to their ability to resolve fiber crossings with small inter-fiber angles in datasets acquired within a clinically feasible scan time. This resolving power is driven by the fact that, as opposed to model-free techniques that estimate the diffusion Orientational Distribution Function (ODF), the output from SD is directly the fiber ODF itself.
Among the different SD algorithms, Constrained Spherical Deconvolution (CSD) [39,40] has been received with special interest due to its good performance and short computational time. In CSD, the average signal profile from white-matter regions of parallel fibers is first estimated, and afterwards, the fiber ODF is estimated by deconvolving the measured diffusion data in each voxel with this signal profile, which is also known as the single-fiber 'response function'.
More recently, and as an alternative to CSD, a new SD method based on a damped Richardson-Lucy algorithm adapted to Gaussian noise (dRL-SD) has been proposed [37,42]. An extensive evaluation of both CSD and dRL-SD algorithms has revealed that CSD has a superior ability to resolve low-anisotropy crossing fibers, whereas dRL-SD produces a lower percentage of spurious fiber orientations and shows a lower overall sensitivity to the selection of the response function [50]. This latter feature is of great relevance since the assumption of a common response function for all brain tracts is a clear over-simplification in both methods, the consequences of which are minimized by dRL-SD.
From an algorithmic perspective dRL-SD inherits the benefits of the standard RL deconvolution method applied with great success in diverse fields ranging from microscopy [51] to astronomy [52]. Remarkably, RL deconvolution is robust to the experimental noise and the obtained solution can be constrained to be non-negative without the need for including additional penalization functions in the estimation process. Moreover, from a modeling point of view, dRL-SD is implemented using an extended multi-compartment model that allows considering the partial volume effect in brain voxels with mixture of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF), a strategy that has been shown to be effective in reducing the occurrence of spurious fiber orientations [37].
However, in spite of the good properties of dRL-SD and other SD methods, some methodological issues remain. These methods, to some extent, assume additivity and zero-mean Gaussianity for the underlying noise and are potentially vulnerable to significant departures from such an assumption. Indeed, it is well known that MRI noise is non-Gaussian and depends on many factors, including the number of coils in the scanner and the multichannel image combination method. Real experiments have shown that noise follows Rician [53] and noncentral Chi (nc-χ) distributions [54], evidencing the inappropriateness of the Gaussian model. This issue is especially relevant in diffusion MRI data, where the high b-values required to enhance the angular contrast lead to extremely low signal-to-noise ratios (SNR). A recent study [55] has shown that different multichannel image combination methods can change the properties of the signal and can have an effect on fiber orientation estimation.
On the other hand, the standard reconstruction in SD, based on a voxel-by-voxel fiber ODF estimation, although reasonable, may not be optimal in a global sense as it does not take into account the underlying spatial continuity of the image. Recent research on the inclusion of spatial continuity into SD methods via regularization has yielded promising results [9,56,57]. Among these, spatially regularized SD methods based on Total Variation (TV) [9] are very appealing due to their outstanding ability to simultaneously smooth away noise in flat regions whilst preserving edges, and due to their robustness to high levels of noise [58].
This work has two main aims: (1) the study and quantification of the benefits of the adequate modelling of the noise distribution in the context of spherical deconvolution, and (2) the study and quantification of the effects of including a TV spatial regularization term in the proposed estimation framework.
To address the first objective we developed a new SD methodology which, following a more realistic view, deals with non-Gaussian noise models. Specifically, the estimation framework is based on a natural extension of the RL algorithm to Rician and nc-χ likelihood distributions. We chose the RL algorithm as a starting point for our work because this algorithm has proven to be highly efficient in diverse applications, and because the performance of the resulting method can be directly compared to the state-of-the-art dRL-SD method, which employs a nearly-equivalent SD estimation algorithm but based on a Gaussian noise model. The second aim was addressed by including TV regularization in the developed formulation. Moreover, for completeness we have also extended the dRL-SD method with the proposed spatially-regularized estimation.
To compare the relative performance between the SD methods based on Gaussian and non-Gaussian noise models, and their respective implementations including the TV regularization, the different algorithms were applied to several synthetic phantom datasets which had been contaminated with noise patterns mimicking the Rician and nc-χ noise distributions produced in multichannel scanners. To the best of our knowledge, this is the first evaluation of such methods in a scenario where Rician and nc-χ noise are explicitly created as a function of the number of coils, their spatial sensitivity maps, the correlation between coils, and the reconstruction methodology used to combine the multichannel signals. As a final analysis, the new method is also applied to real multichannel diffusion MRI data from a healthy subject.
Following this introduction there is a 'Theory' section providing an overview of the different topics relevant to the study and the derivation of the new SD reconstruction algorithm. A description of the computer simulations, image acquisition strategies and metrics designed to evaluate the performance of the reconstructions is provided in the Materials and Methods section. Relevant findings are succinctly described in the Results section. Finally, the main results, contributions and limitations of this work are addressed in the Discussion and Conclusions section.
Theory
This section contains a description of the forward/generative model used to relate the local diffusion process with the measured diffusion MRI data. It also provides a brief review of MRI noise models. Finally, the diffusion and MRI noise models are used to derive the new SD reconstruction algorithms.
Generative signal and fiber ODF model
The diffusion MRI signal measured for a given voxel can be expressed as the sum of the signals from each intra-voxel compartment. The term 'compartment' is defined as a homogeneous region in which the diffusion process possesses identical properties in magnitude and orientation throughout, and which is different to the diffusion processes occurring in other compartments. One example of this approach is the multi-tensor model that allows considering multiple WM parallel-fiber populations within the voxel. In this model the diffusion process taking place inside each compartment of parallel fibers is described by a second-order self-diffusion tensor [6].
In real brain data, in addition to the different WM compartments, voxels might also contain GM and CSF components. This issue was considered by [37], who extended the multi-tensor model by incorporating the possible contribution from these compartments. This is the generative multi-tissue signal model that will be used in the present study. In the absence of any source of noise, the resulting expression for the signal is:
$$
S_i = S_0\left[\sum_{j=1}^{M} f_j \exp\left(-b_i \mathbf{v}_i^T \mathbf{D}_j \mathbf{v}_i\right) + f_{GM}\exp\left(-b_i D_{GM}\right) + f_{CSF}\exp\left(-b_i D_{CSF}\right)\right], \tag{1}
$$

where $M$ is the total number of WM parallel fiber bundles; $f_j$ denotes the volume fraction of the $j$th fiber-bundle compartment; $f_{GM}$ and $f_{CSF}$ are the volume fractions of the GM and CSF compartments respectively, so that $\sum_{j=1}^{M} f_j + f_{GM} + f_{CSF} = 1$; $b_i$ is the diffusion-sensitization factor (i.e., b-value) used in the acquisition scheme to measure the diffusion signal $S_i$ along the diffusion-sensitizing gradient unit vector $\mathbf{v}_i$, $i = 1, \ldots, N$; $D_{GM}$ and $D_{CSF}$ are respectively the mean diffusivity coefficients in GM and CSF; $S_0$ is the signal amplitude in the absence of diffusion-sensitization gradients ($b_i = 0$); $\mathbf{D}_j = \mathbf{R}_j^T \mathbf{A} \mathbf{R}_j$ denotes the anisotropic diffusion tensor of the $j$th fiber bundle, where $\mathbf{R}_j$ is the rotation matrix that rotates a unit vector initially oriented along the x-axis toward the $j$th fiber orientation $(\theta_j, \phi_j)$, and $\mathbf{A}$ is a diagonal matrix containing information about the magnitude and anisotropy of the diffusion process inside that compartment:

$$
\mathbf{A} = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}, \tag{2}
$$

where $\lambda_1$ is the diffusivity along the $j$th fiber orientation, and $\lambda_2$ and $\lambda_3$ are the diffusivities in the plane perpendicular to it. It is assumed that $\lambda_1 > \lambda_2 \approx \lambda_3$.

At each voxel, the measured diffusion signals $S_i$ for $N$ different sampling parameters (i.e., $\mathbf{v}_i$ and $b_i$, $i \in [1, \ldots, N]$) can be recast in matrix form as:

$$
\mathbf{S} = \mathbf{H}\mathbf{f}, \tag{3}
$$

where the columns of $\mathbf{H}$ associated with the WM compartments contain the oriented basis signals $S_0 \exp(-b_i \mathbf{v}_i^T \mathbf{D}_j \mathbf{v}_i)$. Likewise, $\mathbf{H}_{ISO}$ is an N x 2 matrix containing the corresponding isotropic GM and CSF basis signals from Eq (1).

In the framework of model-based spherical deconvolution, $\mathbf{H}$ is created by specifying the diffusivities, which are chosen according to prior information, and by providing a dense discrete set of equidistant M-orientations $O = \{(\theta_j, \phi_j);\ j \in [1, \ldots, M]\}$ uniformly distributed on the unit sphere. Previous studies have used different sets of orientations, ranging from M = 129 [43] to M = 752 [42]. Then, the goal is to infer the volume fraction of all predefined oriented fibers, $\mathbf{f}$, from the vector of measurements $\mathbf{S}$ and the 'dictionary' $\mathbf{H}$ of oriented basis signals. Under this reconstruction model, $\mathbf{f}$ can be interpreted as the fiber ODF evaluated on the set $O$. Matrix $\mathbf{H}$ is also known as the 'diffusion basis functions' [43], or the 'point spread function' [37][38][39] that blurs the fiber ODF to produce the observed measurements.
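The dictionary construction just described can be sketched numerically. The following is an illustrative implementation, not the authors' code: the axially symmetric tensor ($\lambda_2 = \lambda_3$), the specific diffusivity values and all function and variable names are our own assumptions.

```python
import numpy as np

def build_dictionary(bvals, bvecs, fiber_dirs,
                     lambda_par=1.7e-3, lambda_perp=0.3e-3,
                     d_gm=0.7e-3, d_csf=3.0e-3, S0=1.0):
    """Dictionary H of Eq (3): one column per candidate fiber orientation
    plus two isotropic columns (GM, CSF). Units: mm^2/s and s/mm^2."""
    # For an axially symmetric tensor (lambda_2 = lambda_3):
    # v^T D_j v = lambda_perp + (lambda_par - lambda_perp) * (v . u_j)^2
    cos2 = (bvecs @ fiber_dirs.T) ** 2                      # (N, M)
    adc = lambda_perp + (lambda_par - lambda_perp) * cos2   # apparent diffusivity
    H_wm = S0 * np.exp(-bvals[:, None] * adc)               # WM columns
    H_iso = S0 * np.exp(-bvals[:, None] * np.array([d_gm, d_csf]))  # GM, CSF
    return np.hstack([H_wm, H_iso])                         # (N, M + 2)

# Noise-free forward model S = H f for a single-fiber voxel
rng = np.random.default_rng(0)
bvecs = rng.standard_normal((64, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(64, 2500.0)                                 # s/mm^2
dirs = np.eye(3)                                            # 3 candidate orientations
H = build_dictionary(bvals, bvecs, dirs)
f = np.array([0.8, 0.0, 0.0, 0.1, 0.1])                     # x-fiber + GM + CSF
S = H @ f
```

Note that a realistic dictionary would use a few hundred orientations on the sphere, as discussed above; three orthogonal directions are used here only to keep the example small.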
It should be noticed that solving the deconvolution problem given by Eq (3) is not simple because the resulting system of linear equations is ill-conditioned and ill-posed (i.e., there are more unknowns than measurements and some of the columns of $\mathbf{H}$ are highly correlated), which can lead to numerical instabilities and physically meaningless results (e.g., volume fractions with negative values). A common strategy to avoid such instabilities is to use robust algorithms that search for solutions compatible with the observed data but which also satisfy some additional constraints. Thus, in SD it is typical to estimate the fiber ODF by constraining it to be non-negative and symmetric around the origin (i.e., antipodal symmetry). As mentioned in the introduction, though, all these reconstruction algorithms may not be necessarily optimal when dealing with non-Gaussian noise, as is the case for MRI noise.
MRI noise models
In conventional MRI systems, the data are measured using a single quadrature detector (i.e., coil with two orthogonal elements) that gives two signals which, for convenience, are treated as the real and imaginary parts of a complex number. The magnitude of this complex number (i.e. the square root of the sum of their squares) is commonly used because it avoids different kinds of MRI artifacts [53]. Given that the noise in the real and imaginary components follows a Gaussian distribution, the magnitude signal S i will follow a Rician distribution [53] with a probability function given by
$$
P(S_i \mid \bar{S}_i, \sigma^2) = \frac{S_i}{\sigma^2} \exp\left\{-\frac{1}{2\sigma^2}\left[S_i^2 + \bar{S}_i^2\right]\right\} I_0\!\left(\frac{S_i \bar{S}_i}{\sigma^2}\right) u(S_i), \tag{4}
$$

where $\bar{S}_i$ denotes the true magnitude signal intensity in the absence of noise, $\sigma^2$ is the variance of the Gaussian noise in the real and imaginary components, $I_0$ is the modified Bessel function of first kind of order zero, and $u$ is the Heaviside step function that is equal to 0 for negative arguments and to 1 for non-negative arguments.
Modern clinical scanners are usually equipped with a set of 4 to 32 multiple phased-array coils, the signals of which can be combined following different strategies that, in turn, will give rise to different statistical properties for the noise [54]. One frequent strategy uses the spatial matched filter (SMF) approach linearly combining the complex signals of each coil and producing voxelwise complex signals [59]. Since the noise in the resulting real and imaginary components remains Gaussian a Rician distribution is expected in the final combined magnitude image. An alternative to the SMF is to create the composite magnitude image as the root of the sum-of-squares (SoS) of the complex signals of each coil. Under this approach the combined image follows a nc-χ distribution [60] given by,
$$
P(S_i \mid \bar{S}_i, \sigma^2, n) = \frac{\bar{S}_i}{\sigma^2}\left(\frac{S_i}{\bar{S}_i}\right)^{n} \exp\left\{-\frac{1}{2\sigma^2}\left[S_i^2 + \bar{S}_i^2\right]\right\} I_{n-1}\!\left(\frac{S_i \bar{S}_i}{\sigma^2}\right) u(S_i), \tag{5}
$$
where $n$ is the number of coils and $I_{n-1}$ is the modified Bessel function of first kind of order $n-1$. This expression is strictly valid when the different coils produce uncorrelated noise with equal variance; when noise correlation cannot be neglected it still provides a good approximation if effective values $n_{eff}$ and $\sigma^2_{eff}$ are considered [61], with $n_{eff}$ being a non-integer number lower than the real number of coils and $\sigma^2_{eff}$ being higher than the real noise variance in each coil. A related SoS image combination method that increases the validity of Eq (5) is the covariance-weighted SoS. This method is equivalent to pre-whitening (i.e., decorrelating) the measured signals before applying the standard SoS image combination. The covariance-weighted SoS approach requires the estimation of the noise covariance matrix of the system which, in practice, may be carried out by digitizing noise from the coils in the absence of excitations [62].
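The two combination schemes and their noise statistics can be reproduced by simulation. The sketch below is illustrative only; the coil count, sample size and noise level are arbitrary choices of ours, and the nc-χ case assumes uncorrelated coils with equal variance, as in Eq (5).

```python
import numpy as np

rng = np.random.default_rng(42)
n_coils, n_samp = 8, 200_000
s_true, sigma = 0.0, 1.0          # background voxel: no signal, unit noise

# Complex Gaussian coil noise: equal variance, uncorrelated across coils
noise = (rng.normal(0, sigma, (n_samp, n_coils))
         + 1j * rng.normal(0, sigma, (n_samp, n_coils)))

# SMF-like combination keeps a single complex signal -> Rician magnitude;
# in the background its mean is sigma*sqrt(pi/2) ~= 1.2533*sigma, not zero.
rician = np.abs(s_true + noise[:, 0])

# SoS combination -> nc-chi magnitude with n = n_coils; in the background
# its mean is sigma*sqrt(2)*Gamma(n + 1/2)/Gamma(n) ~= 3.938*sigma for n = 8.
ncchi = np.sqrt((np.abs(s_true + noise) ** 2).sum(axis=1))
```

The nonzero background means illustrate the signal-dependent bias ("noise floor") that a zero-mean Gaussian model cannot represent, and its growth with the number of coils under SoS combination.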
It is important to note that there are additional factors that can change the noise characteristics described above, including the use of accelerated techniques based on under-sampling approaches such as those used in parallel MRI (pMRI) and partial Fourier, certain reconstruction filters in k-space, and some of the preprocessing steps conducted after image reconstruction.
Empirical data suggest that some of these factors do not substantially change the type of distribution of the noise. On the one hand, [54] investigated the effects of the type of filter in k-space, the number of receiving channels and the use of pMRI reconstruction techniques, and found that noise distributions always followed Rician and nc-χ distributions with a reasonable accuracy, although their standard deviations and effective numbers of receiver channels were altered when fast pMRI and subsequent SoS reconstructions were used. On the other hand, [55] showed real diffusion MRI data noise to also follow Rician and nc-χ noise distributions after a preprocessing that included motion and eddy currents corrections. Unfortunately, the combined effect of all factors has not, to the best of our knowledge, been studied. In this regard, a complete evaluation should include the study of the effects of additional data manipulation processes routinely applied in many clinical research studies, such as B0-unwarping due to magnetic field inhomogeneity and partial Fourier reconstructions. Although the latter has been investigated in terms of signal-to-noise ratio, its influence on the shape of the noise distribution remains unknown.
However, while it is impossible to ensure that Rician and nc-χ distributions are the optimal noise models for all possible strategies used for sampling, reconstructing and preprocessing diffusion MRI data, such models are flexible enough to adapt to deviations from the initial theoretical assumptions. Their parameterization in terms of spatially-dependent effective parameters (i.e., $n_{eff}(x, y, z)$, $\sigma^2_{eff}(x, y, z)$, as in [61,63,64]) allows characterizing the spatially varying nature of the noise observed in accelerated MRI reconstructed data, as well as the spatial correlation introduced by reconstruction algorithms, whilst preserving the good theoretical properties of the models with standard parameters, i.e. the null probability of obtaining negative signals and the ability to characterize the signal-dependent non-linear bias of the data.
Spherical deconvolution of diffusion MRI data
Eq (5), based on either conventional (i.e., $n$, $\sigma^2$) or effective (i.e., $n_{eff}$, $\sigma^2_{eff}$) parameters, provides a very general MRI noise model, which includes the Rician distribution (given in Eq (4)) as a special case with $n = 1$. Consequently, if we derive the spherical deconvolution reconstruction corresponding to Eq (6), any particular solution of interest will become available.
Specifically, if we assume the linear model given by Eqs (1)-(3) the likelihood model for the vector of measurements S under a nc-χ distribution is
$$
P(\mathbf{S} \mid \mathbf{H}, \mathbf{f}, \sigma^2, n) = \prod_{i=1}^{N} \frac{\bar{S}_i}{\sigma^2}\left(\frac{S_i}{\bar{S}_i}\right)^{n} \exp\left\{-\frac{1}{2\sigma^2}\left[S_i^2 + \bar{S}_i^2\right]\right\} I_{n-1}\!\left(\frac{S_i \bar{S}_i}{\sigma^2}\right) u(S_i), \tag{6}
$$

where $S_i$ and $\bar{S}_i = (\mathbf{H}\mathbf{f})_i$ are the measured and expected signal intensities for the $i$th sampling parameters, respectively.

3.1 Unbiased and positive recovery: the multiplicative Richardson-Lucy algorithm for nc-χ noise

The maximum likelihood (ML) estimate in Eq (6) is obtained by differentiating its negative log-likelihood $J(\mathbf{f}) = -\log P(\mathbf{S} \mid \mathbf{H}, \mathbf{f}, \sigma^2, n)$ with respect to $\mathbf{f}$ and equating the derivative to zero, which after some algebraic manipulations becomes
$$
\mathbf{f} = \mathbf{f} \circ \frac{\mathbf{H}^T\left[\mathbf{S} \circ \dfrac{I_n(\mathbf{S} \circ \mathbf{H}\mathbf{f}/\sigma^2)}{I_{n-1}(\mathbf{S} \circ \mathbf{H}\mathbf{f}/\sigma^2)}\right]}{\mathbf{H}^T\mathbf{H}\mathbf{f}}, \tag{7}
$$
where '$\circ$' stands for the Hadamard component-wise product, and the division operators are applied component-wise to the vector's elements. Eq (7) is nonlinear in $\mathbf{f}$ and its solution can be obtained through a modified version of the expectation maximization technique, originally developed by Richardson and Lucy for a Poisson noise model [65,66] and known as the RL algorithm. When we applied this technique to nc-χ and Rician distributed noise it naturally led to the following iterative estimation formula:
$$
\mathbf{f}^{k+1} = \mathbf{f}^k \circ \frac{\mathbf{H}^T\left[\mathbf{S} \circ \dfrac{I_n(\mathbf{S} \circ \mathbf{H}\mathbf{f}^k/\sigma^2)}{I_{n-1}(\mathbf{S} \circ \mathbf{H}\mathbf{f}^k/\sigma^2)}\right]}{\mathbf{H}^T\mathbf{H}\mathbf{f}^k}, \tag{8}
$$
in which the solution calculated at the $k$th iteration step ($\mathbf{f}^k$) gradually improves (i.e. its likelihood increases after each step) until a final, stationary solution ($\mathbf{f}^{k+1}/\mathbf{f}^k = \mathbf{1}$, component-wise) is reached. As shown in Appendix A in S1 File, this formula can also be related to the RL algorithm for Gaussian noise, employed in the undamped RL-SD technique [42].
In the absence of any prior knowledge about $\mathbf{f}$, the initial estimate ($\mathbf{f}^0$) can be fixed to a non-negative constant density distribution [42]. In that case, the algorithm transforms a perfectly smooth initial estimate into sharper estimates, with sharpness increasing with the number of iterations. So, roughly speaking, the number of iterations can be considered as a regularization parameter controlling the angular smoothness of the final estimate. Notably, if $\mathbf{f}^0$ is non-negative, the successive estimates remain non-negative as well, and the algorithm always produces reconstructions with positive elements. Moreover, as in [37,42] the estimation does not involve any matrix inversion, thus avoiding related numerical instabilities.
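A minimal sketch of the update in Eq (8), assuming an overflow-safe evaluation of the Bessel ratio via SciPy's exponentially scaled `ive` (the tiny demo dictionary, the `1e-10` guard and all names are our own choices, not the paper's implementation):

```python
import numpy as np
from scipy.special import ive

def rumba_iterate(S, H, sigma2, n=1, n_iter=500):
    """Multiplicative Richardson-Lucy update of Eq (8) for nc-chi noise
    (n = 1 gives the Rician case). Starts from a flat non-negative ODF."""
    f = np.full(H.shape[1], 1.0 / H.shape[1])
    for _ in range(n_iter):
        Sbar = H @ f                                # expected signal H f
        z = S * Sbar / sigma2
        ratio = ive(n, z) / ive(n - 1, z)           # I_n/I_{n-1}, overflow-safe
        f = f * (H.T @ (S * ratio)) / (H.T @ Sbar + 1e-10)
    return f

# Toy single-fiber voxel: 3 candidate orientations (x, y, z axes),
# axially symmetric tensors, noiseless data generated from the first column.
rng = np.random.default_rng(1)
bvecs = rng.standard_normal((90, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
adc = 0.3e-3 + (1.7e-3 - 0.3e-3) * (bvecs @ np.eye(3)) ** 2
H = np.exp(-3000.0 * adc)                           # b = 3000 s/mm^2
S = H @ np.array([1.0, 0.0, 0.0])                   # true fiber along x
f_est = rumba_iterate(S, H, sigma2=1e-6)
```

As the text notes, the estimates stay non-negative by construction and no matrix inversion is involved; the number of iterations acts as the angular-smoothness regularizer.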
In order to evaluate Eq (8) an estimate $\hat{\sigma}^2$ of $\sigma^2$ is required. Although obtaining it from a region-of-interest (ROI) is feasible [67], its accuracy may be compromised by systematic experimental issues such as ghosting artifacts, signal suppression by the scanner outside the brain, zero padding and filters applied in the k-space. Moreover, with the use of fast parallel MRI sequences, where each coil records signals with partial coverage of the k-space, the properties of the noise become spatially heterogeneous (i.e. they change from voxel to voxel across the image). While some authors have proposed alternatives to overcome these limitations [68], here we have estimated the noise variance at each voxel from the same data used to infer the fiber ODF.
Specifically, by minimizing the negative log-likelihood with respect to σ 2 we have obtained an iterative scheme analogous to Eq (8):
$$
\alpha^{k+1} = \frac{1}{nN}\left\{\frac{\mathbf{S}^T\mathbf{S} + \mathbf{f}^T\mathbf{H}^T\mathbf{H}\mathbf{f}}{2} - \mathbf{1}_N^T\left[(\mathbf{S} \circ \mathbf{H}\mathbf{f}) \circ \frac{I_n(\mathbf{S} \circ \mathbf{H}\mathbf{f}/\alpha^k)}{I_{n-1}(\mathbf{S} \circ \mathbf{H}\mathbf{f}/\alpha^k)}\right]\right\}, \tag{9}
$$
where $\alpha^k$ is the estimate of $\sigma^2$ at the $k$th iteration (starting from an arbitrary initial estimate $\alpha^0$) and $\mathbf{1}_N$ is an N×1 vector of ones. The resulting algorithm based on Eqs (8) and (9) is termed RUMBA-SD, which is the abbreviation of 'Robust and Unbiased Model-Based Spherical Deconvolution'. The spatially-regularized extension to this algorithm is described in the following section.
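The fixed-point variance update of Eq (9) can be sketched in the same spirit (illustrative only; the known-signal setup, sample size and iteration count are ours):

```python
import numpy as np
from scipy.special import ive

def update_sigma2(S, Sbar, alpha, n=1):
    """One fixed-point step of Eq (9); Sbar = H f is the expected signal."""
    z = S * Sbar / alpha
    ratio = ive(n, z) / ive(n - 1, z)        # I_n/I_{n-1}, overflow-safe
    return ((S @ S + Sbar @ Sbar) / 2.0 - np.sum(S * Sbar * ratio)) / (n * S.size)

# Recover sigma^2 from synthetic Rician data with a known expected signal
rng = np.random.default_rng(7)
sigma = 0.05
Sbar = np.full(5000, 0.8)
S = np.abs(Sbar + rng.normal(0, sigma, 5000) + 1j * rng.normal(0, sigma, 5000))
alpha = 0.01                                  # arbitrary starting value
for _ in range(50):
    alpha = update_sigma2(S, Sbar, alpha)     # converges toward sigma**2 = 2.5e-3
```

At high SNR the Bessel ratio behaves as $1 - \alpha/(2 S_i \bar{S}_i)$, so the fixed point of the update lands near the true $\sigma^2$, which is what makes the per-voxel estimation described above workable.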
3.2 Towards a robust recovery: Total variation regularization. When considering the TV model [58] the maximum a posteriori (MAP) solution at voxel (x, y, z) is obtained by minimizing the augmented functional:
$$
J(\mathbf{f}) = -\log P(\mathbf{S} \mid \mathbf{H}, \mathbf{f}, \sigma^2, n) + \alpha_{TV}\, TV(\mathbf{f}), \tag{10}
$$
where the first term is the negative log-likelihood defined in previous sections and the second term is the TV energy, defined as the sum of the absolute values of the first-order spatial derivative (i.e., gradient '$\nabla$') of the fiber ODF components over the entire brain image, $TV(\mathbf{f}) = \sum_j \left|\nabla [f_{3D}]_j\right|$, evaluated at voxel $(x, y, z)$; $[f_{3D}]_j$ is a 3D image created in such a way that each voxel contains the element at position $j$ of its corresponding estimate vector $\mathbf{f}$, and $\alpha_{TV}$ is a parameter controlling the level of spatial regularization. Importantly, and in contrast to the previous ML estimate, now the solution at a given voxel is not independent from the solutions in other voxels. The spatial dependence introduced by the TV functional promotes smooth solutions in homogeneous regions (it discourages the solution from having oscillations), yet it does allow the solution to have sharp discontinuities [58]. This property is highly relevant for SD because, while it promotes continuity and smoothness along individual tracts, it prevents partial volume contamination from adjacent tracts. In this work, the MAP estimate from Eq (10) is obtained using an iterative scheme similar to that proposed in [51], where the estimate at each iteration is calculated by the multiplication of two terms: the standard ML estimate, and the regularization term derived from the TV functional
$$
\mathbf{f}^{k+1} = \mathbf{f}^k \circ \frac{\mathbf{H}^T\left[\mathbf{S} \circ \dfrac{I_n(\mathbf{S} \circ \mathbf{H}\mathbf{f}^k/\sigma^2)}{I_{n-1}(\mathbf{S} \circ \mathbf{H}\mathbf{f}^k/\sigma^2)}\right]}{\mathbf{H}^T\mathbf{H}\mathbf{f}^k} \circ \mathbf{R}^k, \tag{11}
$$
with the TV regularization vector R k at voxel (x, y, z), and at the k th iteration, computed element-by-element as
$$
(R^k)_j = \frac{1}{1 - \alpha_{TV}\, \mathrm{div}\!\left(\dfrac{\nabla [f^k_{3D}]_j}{\left|\nabla [f^k_{3D}]_j\right|}\right)\Bigg|_{(x,y,z)}}, \tag{12}
$$

where $(R^k)_j$ is the element $j$ of vector $\mathbf{R}^k$ and div is the divergence operator. In practice, to correct for potential singularities at $\left|\nabla [f^k_{3D}]_j\right| = 0$, the term $\left|\nabla [f^k_{3D}]_j\right|$ is replaced by its approximated value $\sqrt{\left|\nabla [f^k_{3D}]_j\right|^2 + \varepsilon}$, where $\varepsilon$ is a small positive constant. Moreover, any negative value in $\mathbf{R}^k$ is replaced by its absolute value to preserve the non-negativity of the estimated fiber ODF. Notice that by setting $\alpha_{TV} = 0$ the estimator in Eq (11) becomes equal to the unregularized version in Eq (8).
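A possible finite-difference evaluation of the TV factor in Eq (12) is sketched below (illustrative; the use of `numpy.gradient` central differences and the default ε are our choices):

```python
import numpy as np

def tv_factor(vol, alpha_tv, eps=1e-8):
    """Eq (12): R = |1 / (1 - alpha_TV * div(grad(vol)/|grad(vol)|))| for one
    fiber-ODF component volume; |grad| is regularized as sqrt(|grad|^2 + eps)
    and the absolute value preserves non-negativity, as described in the text."""
    g = np.array(np.gradient(vol))                 # (3, X, Y, Z) gradient field
    norm = np.sqrt((g ** 2).sum(axis=0) + eps)     # singularity-guarded magnitude
    div = sum(np.gradient(g[a] / norm, axis=a) for a in range(vol.ndim))
    return np.abs(1.0 / (1.0 - alpha_tv * div))

# On a spatially constant component the divergence vanishes and R = 1,
# so the regularized update in Eq (11) reduces to the unregularized Eq (8);
# across an edge R stays finite for moderate alpha_TV, preserving the boundary.
R_flat = tv_factor(np.ones((5, 5, 5)), 0.2)
step = np.where(np.arange(6)[:, None, None] >= 3, 1.0, 0.0) * np.ones((6, 6, 6))
R_edge = tv_factor(step, 0.01)
```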
In the current implementation the simultaneous estimation of all the parameters is carried out via an alternating iterative scheme summarized in Table 1. Briefly, it minimizes the functional in Eq (10) with respect to the fiber ODF while assuming that $\sigma^2$ is known and fixed, and then it updates the noise variance using the new fiber ODF estimate. While for SMF-based data all the equations are evaluated using $n = 1$, for SoS-based data $n$ is fixed to the real number of coils, or to the effective value $n_{eff}$ if provided (but see Appendix B in S1 File).
The regularization parameter α_TV is adaptively adjusted at each iteration following the discrepancy principle. Specifically, it is selected to match the estimated variance [69] using two alternative strategies: (i) assuming a constant mean parameter over the entire brain image, α_TV = E{α^{k+1}} (see Table 1), potentially increasing the precision and robustness of the estimator; or (ii) assuming a spatially dependent parameter, α_TV(x, y, z) = α^{k+1}(x, y, z), which may be more appropriate in situations where a differential variance across the image is expected, as in data from accelerated pMRI.
It should be remembered that the accuracy of the reconstruction for the SoS case depends on the variant used to combine the images. In this work it is assumed that the data is combined using the covariance-weighted SoS method. However, even if the available data were combined using the conventional SoS approach (i.e., without taking into account the noise correlation matrix among coil elements), the method could still provide a reasonable approximation (for more details see Appendix B in S1 File). The evaluation of the ratio of modified Bessel functions of the first kind involved in the updates of Eqs (11) and (9) is best computed by considering the ratio as a new composite function, and not by means of the simple evaluation of the ratio of the individual functions. Specifically, this ratio is computed here in terms of Perron's continued fraction [70]. All the details are provided in Appendix C in S1 File.
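A quick illustration of why the ratio must be treated as a single composite function: evaluating numerator and denominator separately overflows for moderately large arguments, whereas a stable scheme does not. The sketch below uses SciPy's exponentially scaled Bessel functions as an alternative stable evaluation (the paper itself uses Perron's continued fraction):

```python
import numpy as np
from scipy.special import iv, ive

def bessel_ratio_naive(n, x):
    # I_n and I_{n-1} individually overflow for x above ~700,
    # so their ratio degenerates to inf/inf = nan
    return iv(n, x) / iv(n - 1, x)

def bessel_ratio_scaled(n, x):
    # ive(n, x) = iv(n, x) * exp(-x); the scaling factor cancels in the ratio
    return ive(n, x) / ive(n - 1, x)

x = 1000.0
print(bessel_ratio_naive(1, x))    # nan (overflow)
print(bessel_ratio_scaled(1, x))   # ~0.9995, tends to 1 as x grows
```

For the arguments S·Hf/σ² arising at high SNR, a naive evaluation would silently produce NaNs in the update of Eq (11), which is why a composite evaluation is required.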
Following a similar estimation framework, the TV regularization was also included into the dRL-SD method. All the relevant equations are provided in Appendix D in S1 File.
Materials and Methods

Synthetic fiber bundles with different inter-fiber angles
In order to test the resolving power of the methods as a function of the underlying inter-fiber angle, various synthetic phantoms including two fiber bundles were generated. The inter-fiber angle was gradually varied from 1 to 90 degrees in one-degree increments, yielding 90 different phantoms with 50 x 50 x 50 voxels each. The volume fractions of the two fiber bundles were assumed to be equal (f 1 = f 2 = 0.5) in the fiber crossing region.
The intra-voxel diffusion MRI signal was generated via the multi-tensor model [6] using N = 70 sampling orientations with constant b = 3000 s/mm 2 plus one additional image with b = 0 (i.e., S 0 = 1 was assumed in all voxels). The diffusion tensor diffusivities of both fiber groups were assumed to be identical and equal to λ 1 = 1.7 10 −3 mm 2 /s and λ 2 = λ 3 = 0.3 10 −3 mm 2 /s respectively.
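For concreteness, the multi-tensor signal generation described above can be sketched as follows. This is a minimal NumPy illustration under the stated diffusivities and b-value; function names are our own:

```python
import numpy as np

def tensor_from_direction(v, lambdas=(1.7e-3, 0.3e-3)):
    # Axially symmetric diffusion tensor with principal axis v (unit vector),
    # eigenvalues lambda_1 along v and lambda_2 = lambda_3 across it
    l1, l2 = lambdas
    return l2 * np.eye(3) + (l1 - l2) * np.outer(v, v)

def multi_tensor_signal(bvecs, b, dirs, fracs, S0=1.0):
    # S(g) = S0 * sum_i f_i * exp(-b * g^T D_i g)  (multi-tensor model)
    S = np.zeros(len(bvecs))
    for v, f in zip(dirs, fracs):
        D = tensor_from_direction(np.asarray(v, float))
        S += f * np.exp(-b * np.einsum('ij,jk,ik->i', bvecs, D, bvecs))
    return S0 * S

# Two equal-fraction bundles crossing at 70 degrees, b = 3000 s/mm^2
a = np.deg2rad(70)
dirs = [np.array([0.0, 0.0, 1.0]), np.array([np.sin(a), 0.0, np.cos(a)])]
bvecs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
signal = multi_tensor_signal(bvecs, 3000, dirs, [0.5, 0.5])
```

In the actual phantoms the same generator would be evaluated over the full set of 70 sampling orientations plus the b = 0 image.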
Synthetic fiber bundles with different volume fractions
To test the ability of the different methods to detect non-dominant fibers, various synthetic phantoms containing two fiber bundles were generated. In these phantoms the inter-fiber angle was fixed to 70 degrees (an angle presumably detectable by all the methods) and the volume fraction of the non-dominant fiber bundle was gradually varied from 0.1 to 0.5 in 0.01 steps, generating 41 different phantoms with 50 x 50 x 50 voxels each. The intra-voxel diffusion MRI signal was created using the same generative multi-tensor model, b-value and sampling orientations as in the previous section.
Synthetic "HARDI reconstruction challenge 2013" phantom
The reconstruction algorithms were also tested on the synthetic diffusion MRI phantom developed for the "HARDI Reconstruction Challenge 2013" Workshop, within the IEEE International Symposium on Biomedical Imaging (ISBI 2013). This phantom comprises a set of 27 fiber bundles with fibers of varying radii and geometry which connect different areas of a 3D image with 50 x 50 x 50 voxels. It contains a wide range of configurations including branching, crossing and kissing fibers, together with the presence of isotropic compartments mimicking the CSF contamination effects occurring near ventricles in real brain images.
The intra-voxel diffusion MRI signal was generated using N = 64 sampling points on a sphere in q-space with constant b = 3000 s/mm 2 , plus one additional image with b = 0. For pure GM and CSF voxels, signals were generated using two mono-exponential models: exp(−D_GM b) and exp(−D_CSF b), with D_GM = 0.2 10 −3 mm 2 /s and D_CSF = 1.7 10 −3 mm 2 /s. In voxels belonging to single-fiber WM bundles, the signal measured along the q-space unit direction $\hat{q} = q/|q|$ was generated by a mixture of signals from intra- and extra-axonal compartments: $f_{int}\, S_{int}(\hat{q}, v, \tau, L, R) + f_{ext}\, S_{ext}(\hat{q}, v, b, \lambda_1, \lambda_2)$, where v denotes the local fiber orientation. The signal from the intra-axonal compartment, S_int, was created following the theoretical model of a restricted diffusion process inside a cylinder of length L = 5 mm and radius R = 5 μm at the diffusion time τ = 20.8 s [19,71]. The extra-axonal signal S_ext was generated using a diffusion tensor model with cylindrical symmetry (i.e., λ 1 = 1.7 10 −3 mm 2 /s, λ 2 = λ 3 = 0.2 10 −3 mm 2 /s). Mixture fractions were fixed to f_int = 0.6 and f_ext = 0.4. The noiseless dataset can be freely downloaded from the challenge website: http://hardi.epfl.ch/static/events/2013_ISBI/.
Multichannel noise generation
The synthetic diffusion images from the above phantoms were contaminated with noise mimicking the SoS and SMF strategies used in scanners in order to combine multiple-coils signals. To that aim, the noisy complex-valued image measured from the kth coil was assumed to be equal to
$$S_{k} = S\,C_{k} + \varepsilon^{R}_{k} + i\,\varepsilon^{I}_{k}, \qquad (13)$$
where C_k is the relative sensitivity map [72] of the kth coil, and $\varepsilon^{R}_{k} \sim N(0, \Sigma)$ and $\varepsilon^{I}_{k} \sim N(0, \Sigma)$ are two different Gaussian noise realizations simulating the noise in the real and imaginary components, with zero mean and covariance matrix Σ. For simplicity, Σ was assumed to be given by
$$\Sigma = \sigma^{2}
\begin{pmatrix}
1 & \rho & \cdots & \rho \\
\rho & 1 & \cdots & \rho \\
\vdots & \vdots & \ddots & \vdots \\
\rho & \rho & \cdots & 1
\end{pmatrix}, \qquad (14)$$
where σ 2 is the noise variance of each coil and ρ indicates the correlation coefficient between any two coils. For the SoS reconstruction, magnitude images were generated as:
$$S_{SoS} = \sqrt{\sum_{k=1}^{n} |S_{k}|^{2}}, \qquad (15)$$
where |S k | stands for the magnitude of S k . Notice that Eq (15) is the conventional SoS image combination and not the covariance-weighted variant. We have followed this approach in order to simulate the effect of any remaining residual correlation ρ present in real systems (we have assumed a ρ = 0.05) [63].
In the SMF reconstruction, magnitude images were generated as:
$$S_{SMF} = \sum_{k=1}^{n} S_{k}\, C_{k}, \qquad (16)$$
with simulated sensitivity maps depicted in Fig A in S2 File, satisfying the relationship $\sum_{k=1}^{n} C_{k}^{2} = 1$, which holds in practice when the relative sensitivity maps are calculated as C_k = |S_k|/S_SoS [72]. These sensitivity maps have been previously used in [73].
It should be noted that different scanner vendors can implement different SMF and SoS variants. In this work we have used the variants given in [55] for datasets acquired without undersampling in the k-space, i.e., R = 1, where R is the acceleration factor of the acquisition defined as the ratio of the total k-space phase-encoding lines over the number of k-space lines actually acquired. Notice that in the absence of noise S SOS = S SMF . Besides, for the particular case of a single coil with uniform sensitivity, i.e., n = 1 and C = 1, Eq (15) and Eq (16) become identical.
The 132 3D phantoms resulting from the procedures described in the three previous sections were contaminated with noise using a range of clinical signal-to-noise ratios (SNR) of 10, 15, 20 and 30, where SNR = S 0 /σ. In order to generate signals under equivalent conditions, for each value of σ the same noise realizations fe R k g and fe I k g were used to generate the final images S SOS and S SMF . All datasets were created simulating a scanner with 8 coils (see Figs A and B in S2 File).
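The noise pipeline of Eqs (13)-(16) can be sketched with a few NumPy lines. This is our own simplification: scalar per-coil sensitivities C_k = 1/sqrt(n) stand in for spatial maps, and the SMF output is taken as the magnitude of the matched-filter sum:

```python
import numpy as np

def simulate_coil_images(S, C, sigma, rho, rng):
    # Eq (13): S_k = S*C_k + eps_R + i*eps_I, with noise correlated
    # across coils according to the covariance matrix of Eq (14)
    n, V = len(C), len(S)
    Sigma = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
    L = np.linalg.cholesky(Sigma)               # correlate i.i.d. draws
    eR = L @ rng.standard_normal((n, V))
    eI = L @ rng.standard_normal((n, V))
    return C[:, None] * S[None, :] + eR + 1j * eI

def combine_sos(Sk):
    # Eq (15): conventional sum-of-squares magnitude image
    return np.sqrt(np.sum(np.abs(Sk)**2, axis=0))

def combine_smf(Sk, C):
    # Eq (16): spatial matched filter, assuming sum_k C_k^2 = 1
    return np.abs(np.sum(Sk * C[:, None], axis=0))
```

Applied to a constant unit signal with 8 coils, the SoS image shows the expected positive (noncentral Chi) bias, while the SMF image stays close to the true value.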
Evaluation metrics
The performance of the algorithms was quantified by comparing the obtained reconstructions against the ground-truth via three main criteria: (i) the angular error in the orientation of fiber populations, (ii) the proper estimation of the number of fiber populations present in every voxel and (iii) the volume fraction error.
For the analyses, local peaks of the reconstructed fiber ODFs were identified as those vertices in the grid with higher values than their adjacent neighbors, considering only cases where magnitudes exceeded at least one tenth of the amplitude of the highest peak (i.e., 0.1·f_max) [50]. Of all identified peaks, the four highest were finally retained.
Next, we adopted some of the evaluation metrics widely used in the literature [9]. Specifically, we used the angular error, defined as the average minimum angle between the extracted peaks and the true fiber directions [74]:
$$\theta = \frac{1}{M_{true}} \sum_{k=1}^{M_{true}} \min_{m}\left\{\arccos\!\left(\left|e_{m}^{T} v_{k}\right|\right)\right\}, \qquad (17)$$
where M_true is the true number of fiber populations, e_m is the unit vector along the mth detected fiber peak and v_k is the unit vector along the kth true fiber direction. The volume fraction error of the estimated fiber compartments was assessed by means of the average absolute error between the estimated and the actual peak amplitudes:
$$\Delta f = \frac{1}{M_{true}} \sum_{k=1}^{M_{true}} \left|f_{m} - f_{k}\right|, \qquad (18)$$
where f_m is the normalized height of the mth detected fiber peak and f_k is the volume fraction of the kth true fiber. As usual, the angular and volume fraction errors between each pair of fibers were measured by comparing each true fiber with the closest estimated fiber. Finally, the success rate (SR) was employed to quantify the estimation of the number of fiber compartments. The SR is defined as the proportion of voxels in which the algorithms estimate the right number of fiber compartments. To discriminate the different factors leading to an erroneous estimation, the mean numbers of over-estimated (n+) and under-estimated (n−) fiber populations were computed over the whole image [9].
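A minimal implementation of the two error metrics of Eqs (17) and (18) could read as follows (all direction vectors are assumed unit-normalized; function names are ours):

```python
import numpy as np

def angular_error_deg(est_dirs, true_dirs):
    # Eq (17): for each true fiber, the minimum angle to any estimated
    # peak (sign-invariant via |e^T v|), averaged over the true fibers
    errs = []
    for v in true_dirs:
        cosines = [abs(np.dot(e, v)) for e in est_dirs]
        errs.append(np.degrees(np.arccos(np.clip(max(cosines), -1.0, 1.0))))
    return float(np.mean(errs))

def volume_fraction_error(est_dirs, est_fracs, true_dirs, true_fracs):
    # Eq (18): each true fiber is compared with its closest estimated peak
    errs = []
    for v, fk in zip(true_dirs, true_fracs):
        m = int(np.argmax([abs(np.dot(e, v)) for e in est_dirs]))
        errs.append(abs(est_fracs[m] - fk))
    return float(np.mean(errs))
```

For instance, with estimated peaks along x and y and true fibers along x and z, the angular error averages 0 and 90 degrees to give 45 degrees.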
Settings for the evaluation algorithms
Both RUMBA-SD and dRL-SD methods were implemented using in-house developed Matlab code, applying the same dictionaries H created from the signal generative model given in Eqs (1)-(3). These used M = 724 fiber orientations distributed on the unit sphere, with a mean angular separation between adjacent neighbor vertices of 8.36 degrees, and a standard deviation of 1.18 degrees.
To assess the effect of using dictionaries with optimal and non-optimal diffusivities, two different dictionaries were created and applied to the datasets described in subsections 1 (i.e., fiber bundles with different inter-fiber angles) and 2 (i.e., fiber bundles with different volume fractions). The first dictionary was generated by using the same diffusivities employed in the synthetic data, whereas the second dictionary was created from diffusivities estimated in regions of parallel fibers (outside the fiber crossing area) by means of a standard diffusion tensor fitting on the noisy data (i.e., dtifit tool in FSL package).
Similarly, two dictionaries were created to test the reconstructions on the data described in subsection 3 (i.e., "HARDI Reconstruction Challenge 2013" phantom data). In this case the model diffusivities and the 'true' diffusivities were deliberately set to different values in order to consider the possibility of model misspecification. The first dictionary was created with tensor diffusivities equal to λ 1 = 1.4 10 −3 mm 2 /s and λ 2 = λ 3 = 0.4 10 −3 mm 2 /s. Two isotropic compartments with diffusivities equal to 0.2 10 −3 mm 2 /s and 1.4 10 −3 mm 2 /s were also included. In the second dictionary the diffusivities were assumed to be equal to λ 1 = 1.6 10 −3 mm 2 /s and λ 2 = λ 3 = 0.3 10 −3 mm 2 /s, and the isotropic diffusivities were equal to 0.2 10 −3 mm 2 /s and 1.6 10 −3 mm 2 /s respectively.
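A sketch of how such a dictionary H might be assembled from single-tensor fiber responses plus the two isotropic columns (using the first set of diffusivities above; the function and its argument names are our assumptions, not the authors' code):

```python
import numpy as np

def build_dictionary(bvecs, bvals, fiber_dirs,
                     lambdas=(1.4e-3, 0.4e-3),
                     iso_diffs=(0.2e-3, 1.4e-3), S0=1.0):
    # One column per fiber orientation (axially symmetric tensor response),
    # followed by one column per isotropic compartment: S0 * exp(-b * D_iso)
    l1, l2 = lambdas
    cols = []
    for v in fiber_dirs:                # assumed unit vectors
        D = l2 * np.eye(3) + (l1 - l2) * np.outer(v, v)
        cols.append(S0 * np.exp(-bvals *
                                np.einsum('ij,jk,ik->i', bvecs, D, bvecs)))
    for d in iso_diffs:
        cols.append(S0 * np.exp(-bvals * d))
    return np.stack(cols, axis=1)       # (N measurements) x (M + 2 compartments)
```

In the full setup, fiber_dirs would be the M = 724 orientations distributed on the unit sphere, giving H the N x (M + 2) shape used in Eqs (8) and (11).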
The starting condition f^0 in all cases was set as a non-negative iso-probable spherical function [37]. The accuracy and convergence of both methods as a function of the number of iterations was investigated by repeating the calculations using 200 and 400 iterations, which is within the optimal range suggested in [37] and [50]. The extended algorithms with TV regularization were also tested using 600 and 1000 iterations. The geometric damping and threshold parameters for dRL-SD were set to ν = 8 and η = 0.06 respectively [37]; see Appendix D in S1 File. For SoS-based data, n was fixed to the real number of coils in RUMBA-SD.
To differentiate the standard RUMBA-SD and dRL-SD algorithms from their regularized versions, we have appended '+TV' to their names, i.e., RUMBA-SD+TV and dRL-SD+TV.
Real brain data
Diffusion MRI data were acquired from a healthy subject on a 3T Siemens scanner (Erlangen) located at the University of Oxford (UK). The subject provided informed written consent before participating in the study, which was approved by the Institutional Review Board of the University of Oxford. Whole brain diffusion images were acquired with a 32-channel head coil along 256 different gradient directions on the sphere in q-space with constant b = 2500 s/mm 2 . Additionally, 36 b = 0 volumes were acquired with in-plane resolution = 2.0 x 2.0 mm 2 and slice thickness = 2 mm. The acquisition was carried out without undersampling in the k-space (i.e., R = 1). Raw multichannel signals were combined using either the standard GRAPPA approach or the GRAPPA approach with the adaptive combination of the SMF available in the scanner, giving SoS and SMF-based datasets respectively. Then, the two resulting datasets were separately corrected for eddy current distortions and head motion as implemented in FSL [75].
A subset of 64 directions with nearly uniform coverage on the sphere was selected from the full set of 256 gradients directions, and measurements for this subset were used to 'create' an under-sampled version of the data, which also included 3 b = 0 volumes. The resulting HARDI sequence based on 64 directions is similar to those widely employed in clinical studies, thus results from this dataset are useful to evaluate the impact of the new technique on standard clinical data.
Results
Gaussian versus non-gaussian noise models
The angular and volume fraction errors from the dRL-SD and RUMBA-SD reconstructions in the 90 synthetic phantoms with different inter-fiber angles are depicted in Fig 1, as well as in Fig C in S2 File. Fig 1 shows results using a dictionary created with the same diffusivities applied to generate the data (i.e., λ 1 = 1.7 10 −3 mm 2 /s and λ 2 = λ 3 = 0.3 10 −3 mm 2 /s). A set of patterns can be drawn from these results. First, RUMBA-SD was able to resolve fiber crossings with smaller inter-fiber angles (around 5 degrees and 10 degrees for datasets corrupted with Rician and nc-χ noise, respectively). Second, RUMBA-SD produced volume fraction estimates with a higher precision (lower variance), even in phantoms where the fiber configuration was well-resolved by both methods. Third, although dRL-SD produced a relatively lower proportion of spurious fibers (n+), RUMBA-SD produced a lower proportion of undetected fibers (n−), leading to a higher success rate (SR). Finally, the performance of both methods was inferior when the dictionary was created using diffusivities estimated from a 'standard' DTI fitting in WM regions of parallel fibers. In line with previous findings on dRL-SD [50], optimal results were obtained with the sharper fiber response model.
All these points also hold for results obtained with other SNRs and with a differing number of algorithm iterations. Specifically, when a higher number of iterations was employed (i.e., 400), a lower proportion of n− and a higher proportion of n+ was obtained with both methods. The performance of RUMBA-SD was better in terms of the estimated volume fractions and the success rate, especially in the Rician noise case. In order to verify whether the lower bias in the volume fractions estimated by RUMBA-SD could be explained only by its higher success rate, the calculation was repeated considering only those voxels in each phantom where the two fiber populations were identified. After correcting for this factor, a noticeable advantage was still observed for RUMBA-SD (see left lower panel of Fig 2).

Original versus TV-regularized algorithms

When comparing these results with those from Fig 1 (and Fig D in S2 File), it is clear that TV regularization provides multiple benefits in both algorithms, including a superior ability to detect fiber crossings with smaller inter-fiber angles and a higher success rate. The latter is due to a lower proportion of undetected fibers (n−) and of spurious fibers (n+). This pattern is evident in Fig 4, which depicts the peaks extracted from the fiber ODFs estimated from the SMF-based phantom with inter-fiber angle equal to 45 degrees and SNR = 15. Peaks are plotted as thin cylinders.
As before, the above patterns were also observed in the analyses based on other SNRs and different numbers of iterations. In all cases RUMBA-SD+TV detected fiber crossings at lower inter-fiber angles. Fig 5 shows an example of this in the phantom with inter-fiber angle equal to 33 degrees, corrupted with Rician noise and SNR = 15.

"HARDI reconstruction challenge 2013" phantom data

On the one hand, RUMBA-SD was able to resolve some fiber configurations that were not detected by dRL-SD, especially in voxels involving small inter-fiber angles or fiber crossings with a non-dominant tract. On the other hand, the spatially-regularized algorithms substantially improved the performance of the original methods. Moreover, RUMBA-SD+TV was the method providing the best reconstructions. These findings are in line with results from previous sections and remained valid when different dictionaries, numbers of iterations and noise levels were employed.
Additional complementary results about the performance of RUMBA-SD in relation to several other reconstruction methods can be found on the website of the 'HARDI Reconstruction Challenge 2013': http://hardi.epfl.ch/static/events/2013_ISBI/workshop.html#results. An earlier version of RUMBA-SD took part in that Challenge, ranking number one in the 'HARDI-like' category (team name: 'Capablanca'). Additional information is provided in the Discussion section.
Real brain data
Fiber ODFs were estimated separately for each SMF- and SoS-based dataset, including the original measured data with the full set of 256 gradient directions and its reduced form with a subset of 64 directions. In all cases, both RUMBA-SD and dRL-SD were implemented using the same dictionary, created assuming a sharp fiber response model with diffusivities equal to λ 1 = 1.7 10 −3 mm 2 /s and λ 2 = λ 3 = 0.3 10 −3 mm 2 /s, and two isotropic terms with diffusivities equal to 0.7 10 −3 mm 2 /s and 2.5 10 −3 mm 2 /s.

Fig 7 shows the fiber ODFs estimated from the reduced SMF- and SoS-based datasets (i.e., data containing the reduced set of 64 gradient directions) in a coronal ROI on the right brain hemisphere. These results correspond to the reconstructions employing 200 iterations. Visual inspection of Fig 7 reveals that RUMBA-SD has produced sharper fiber ODFs than dRL-SD in both datasets and has detected the fiber crossings more clearly. Interestingly, the fiber ODF profiles estimated by dRL-SD from the SoS-based data are smoother than those estimated from the SMF-based data. This behavior is less perceptible in the case of RUMBA-SD, suggesting that it could be more robust to different multichannel combination methods.

Fig 8 depicts the fiber ODFs estimated with RUMBA-SD and RUMBA-SD+TV in a ROI of both the full and reduced SMF-based datasets. This region contains complex fiber geometries, including the mixture of the anterior limb of the internal capsule (alic), the external capsule (ec) and part of the superior longitudinal fasciculus (slf) on the left brain hemisphere. Although in both cases multiple fibers were detected in the area of intersection, RUMBA-SD+TV provided multi-directional fiber ODFs with a higher number of lobes, which may represent fiber crossings as well as intra-voxel fiber dispersion.
The similarity of the reconstructions from the full and reduced datasets suggests that the method is robust with respect to the number of measurements, with the regularized version being the most robust.
In a subsequent analysis we examined the statistical properties of inter-fiber angles as estimated by all methods. Only white matter voxels where both methods detected one or two fibers are included in each plot, with inter-fiber angles in single fiber voxels assumed to be zero. Points on the main diagonal line characterize voxels where both methods gave identical inter-fiber angle estimates, whereas points above and below the main diagonal correspond to voxels where the two methods detected two fibers with different inter-fiber angle. The high density of points forming two secondary lines near the main diagonal indicates nearly similar reconstructions by both methods, with the angular differences being similar to the angular resolution of the reconstruction grid (i.e., about 8 degrees). The higher number of points above the main diagonal in both panels, especially for inter-fiber angles lower than 50 degrees, suggests that RUMBA-SD and RUMBA-SD+TV provide higher inter-fiber angles than dRL-SD and dRL-SD+TV, respectively. Finally, points located on the X and Y axes are voxels where one method detected two fibers while the other detected one. The very low density of points on the X axis of panel A for inter-fiber angles lower than 50 degrees (see the blue bracket) indicates that in nearly all voxels where dRL-SD detected two fibers, RUMBA-SD was also able to detect two fibers. In contrast, the high density of points on the Y axis in the same range of inter-fiber angles indicates that in many voxels RUMBA-SD detected two fibers whereas dRL-SD detected a single one. A similar but attenuated effect can be noticed in panel B, suggesting that TV regularization contributes to reduce the differences between both methods.
Discussion and Conclusions
In this study we propose a new model-based spherical deconvolution method, RUMBA-SD. In contrast to previous methods, usually based on zero-mean Gaussian noise, RUMBA-SD considers Rician and noncentral Chi noise models, which are more adequate for characterizing the non-linear bias introduced in the diffusion images measured by current 1.5T and 3T multichannel MRI scanners. Although recent progress has been made in new SD methods adapted to Rician-corrupted data (e.g., see [41,76]), to the best of our knowledge our study provides the first SD extension to noncentral Chi noise. Furthermore, RUMBA-SD offers a very general estimation framework applicable to different datasets, with its flexibility emanating from two features: (i) the explicit dependence between the likelihood model and the number of coils in the scanner, and (ii) the specific methodology employed to combine multichannel signals. In addition, the voxel-wise estimation of the noise variance adequately deals with potential deviations in the noise distribution due, for instance, to accelerated MRI techniques or to preprocessing effects. We hope that the proposed technique will help extend SD methods to a wide range of datasets taken from different scanners using different protocols.
This study adds to previous diffusion MRI studies trying to overcome the signal-dependent bias introduced by Rician and noncentral Chi noise. Apart from the robust DTI estimation methods in [77] and the earlier DTI study conducted by [78], the noise filtering techniques recently described in [79,80] are especially relevant, as they can be applied in the preprocessing steps prior to HARDI data estimation.
The performance of RUMBA-SD has been evaluated exhaustively against the state-of-the-art dRL-SD technique. For that, we have used 132 different 3D synthetic phantoms, including 90 phantoms simulating fiber crossings with different inter-fiber angles, 41 phantoms simulating fiber crossings with different volume fractions, and the complex phantom designed for the "HARDI Reconstruction Challenge 2013" Workshop organized within the IEEE International Symposium on Biomedical Imaging. The comparison of these two methods has allowed us to weigh the impact on the results of the Rician and noncentral Chi likelihood models included in RUMBA-SD, relative to the Gaussian model assumed in dRL-SD. Since both approaches were implemented using the same dictionary of basis signals and similar reconstruction methods based on Richardson-Lucy algorithms adapted to Gaussian, Rician and noncentral Chi noise models, the results should be considered comparable. Taken together, findings from all synthetic datasets demonstrate the benefits of an adequate modelling of the noise distribution in the context of spherical deconvolution, and of the inclusion of TV regularization. Interestingly, RUMBA-SD resolved fiber crossings with smaller inter-fiber angles and smaller non-dominant fibers.
Likewise, RUMBA-SD produced volume fraction estimates with higher accuracy and precision, as well as a lower proportion of undetected fibers, resulting in a higher success rate (see Figs 1 and 2 in S2 File). On the other hand, the TV spatially-regularized versions of both dRL-SD and RUMBA-SD substantially improved the performance of the original methods in all studied metrics (see Figs 3-6 in S2 File).
As previously mentioned, an earlier version of RUMBA-SD took part in the HARDI Reconstruction Challenge 2013, ranking number one in the 'HARDI-like' category. Notably, this position was shared with a reconstruction based on the CSD method included in the Dipy software [81] (http://nipy.org/dipy/), which had applied a Rician denoising algorithm [82] to the raw diffusion MRI data prior to the actual CSD reconstruction. The superior performance of these two approaches strengthens the importance of taking into account the non-Gaussian nature of the noise. Moreover, it opens new questions on the optimal strategy to be followed. Should we divide the deconvolution process into two disjoint steps (first denoising and then estimation), or is it more adequate to follow the unified approach proposed here? The main advantage of the former is that it may benefit from state-of-the-art denoising algorithms like the adaptive non-local means method proposed in [82]. Conversely, the main advantage of the unified approach is that it provides a precise model to distinguish real signals from noise throughout the entire 4D diffusion MRI dataset. Many of the advanced denoising algorithms that are currently applied in isolation were developed to filter volumetric (3D) data. Since each 3D image is processed individually, their mutual dependence in terms of orientation is ignored. In contrast, the unified approach described here provides a more general estimation framework that may be extended to include advanced similarity measures like those employed in [82], merging the benefits of both strategies. A new manuscript on the 'Challenge' phantom (currently under preparation) will provide additional information on the performance of RUMBA-SD in relation to several reconstruction methods and in terms of connectivity metrics derived from fiber tracking analyses.
When applied to human brain data, RUMBA-SD also achieved the best results, with its reconstructions showing the highest ability to detect fiber crossings (see Figs 7 and 8). And although any conclusion derived from real data is hampered by the unknown anatomy at the voxel level, all previous results on synthetic data support the validity of RUMBA-SD for real data.
Our findings can also be contrasted with those reported in [55]. In that study, the authors show that the SoS approach produces a signal-dependent bias that reduces the signal dynamic range and may subsequently lead to decreased precision and accuracy in fiber orientation estimates. Our study, however, suggests that the noncentral Chi noise in SoS-based data is not a major concern for the SD methods considered. Thus, the heavier squashing of fiber ODFs observed when SoS reconstruction is used [55] is not as prominent for SD as for diffusion ODF estimation methods [11]. This result may have a different explanation for each technique. The robustness of dRL-SD may be explained, in part, by its lower overall sensitivity to the selection of the response function [50], which makes it robust to dictionaries estimated from either biased or unbiased signals. This behavior may be additionally boosted by the inclusion of the damping factor in the RL algorithm. In contrast, the robustness of RUMBA-SD can be explained by the use of proper likelihood models that explicitly account for the bias as a function of the noise corrupting the data.
To finish, some limitations and future extensions of the study should be acknowledged. First, we have not evaluated the proposed method on synthetic data simulating partial Fourier k-space acquisitions and parallel imaging with various acceleration factors (i.e., R > 1), although it would be interesting to do so. Second, the RUMBA-SD estimation framework is based on a discrete approximation of the fiber ODF, which could potentially be extended to continuous functions on the sphere, such as spherical harmonics and wavelets. Third, the TV regularization implemented in this study is based on a channel-by-channel first-order scheme. New studies could compare different regularization techniques such as higher-order TV, vectorial TV and the fiber continuity approach introduced in [56], to mention only a few examples. Fourth, different strategies for creating the signal dictionary could be explored, like using mixtures of intra-compartment models to capture different diffusion profiles, or applying more appropriate models to fit multi-shell data [31,83]. Fifth, the recursive calibration of the single-fiber response function proposed by [84] may be another possible add-on. Finally, it is worth mentioning that the inversion algorithm behind RUMBA-SD is not limited to fiber ODF reconstructions; it can also be applied to solve other linear mixture models from diffusion MRI data. It was recently shown that some microstructure imaging methods such as ActiveAx and NODDI can be reformulated as convenient linear systems; however, the deconvolution methods proposed for them assume Gaussian noise and operate on a voxel-by-voxel basis [85]. Here the iterative scheme proposed in RUMBA-SD could be used to address both limitations, potentially leading to improved reconstructions in microstructure imaging.
Supporting Information
where each of the two columns of length N contains the values of the signal for each isotropic compartment, i.e., $H^{ISO}_{i1} = S_0 \exp(-b_i D_{GM})$ and $H^{ISO}_{i2} = S_0 \exp(-b_i D_{CSF})$. Finally, the column-vector f of length M+2 includes the volume fractions of each compartment within the voxel.
Table 1. Alternating iterative estimation scheme.

if the data are SoS-combined, then n = number of coils (or n_eff)
for k = 1, 2, ..., repeat the following steps until a termination criterion is satisfied:
    compute f^{k+1} via Eqs (11) and (12), assuming σ2 = α^k
    f^{k+1} = f^{k+1}/sum(f^{k+1}) (*)
    compute α^{k+1} via Eq (9), assuming f = f^{k+1}
    update α_TV
end

(*) Optionally, the ODF vector may be scaled to unity, thus preserving the physical definition of the jth element in f as the volume fraction of the jth compartment of the voxel (see Eqs (1)-(3)). This step makes sense when the fiber response signal used to create the dictionary matches the real signal from the compartments, whereas it may be omitted when the latter cannot be guaranteed.
…10⁻³ mm²/s), whereas Fig C in S2 File displays results using tensor diffusivities estimated from the noisy data (λ₁ = 1.1 × 10⁻³ mm²/s and λ₂ = λ₃ = 0.35 × 10⁻³ mm²/s). In both dictionaries, two isotropic compartments with diffusivities equal to 0.1 × 10⁻³ mm²/s and 2.5 × 10⁻³ mm²/s were included. Average values of SR, n+ and n− are also reported in Figs D and E in S2 File. The results shown come from reconstructions employing 200 iterations and the datasets with SNR = 15.
Fig 1. Reconstruction accuracy for RUMBA-SD and dRL-SD using a dictionary based on original diffusivities. Reconstruction accuracy of RUMBA-SD (blue) and dRL-SD (red) is shown in terms of the angular error (θ) (see Eq (17)) and the volume fraction error (Δf) (see Eq (18)), as a function of the inter-fiber angle in the 90 synthetic phantoms. Continuous lines in each plot represent the mean values for each method. The semi-transparent coloured bands symbolize values within one standard deviation on both sides of the mean. Analyses are based on a dictionary created with the same diffusivities used to generate the data and with SNR = 15.
doi:10.1371/journal.pone.0138910.g001
Fig 2 shows the performance of dRL-SD and RUMBA-SD in the 41 phantoms with inter-fiber angle equal to 70 degrees and using different volume fractions. Specifically, average values and standard deviations for the estimated volume fractions of the smaller fiber group are reported for the SNR = 15 datasets. Results are based on 200 iterations, using the dictionary created with the sharper fiber response model. Additional results on n+ and n− are shown in Fig F in S2 File.
Fig 2. Reconstruction accuracy of RUMBA-SD and dRL-SD measured in phantoms with different volume fractions. Reconstruction accuracy of RUMBA-SD (blue) and dRL-SD (red) is shown in terms of the volume fraction of the smaller fiber bundle (upper panel) and the success rate (middle panel) in the 41 synthetic phantoms with inter-fiber angle equal to 70 degrees, using different volume fractions. The lower panel shows results similar to those depicted in the upper panel but considering only voxels where the two fiber bundles were detected. The discontinuous diagonal black line in the upper and lower panels represents the ideal result as a reference. The continuous coloured lines in each plot denote the mean values for each method. The semi-transparent coloured bands represent values within one standard deviation on both sides of the mean. Results refer to the datasets with SNR = 15 and the dictionary created with the true diffusivities.
doi:10.1371/journal.pone.0138910.g002
Fig 3 reports angular and volume fraction errors corresponding to the TV spatially regularized versions of both methods applied to the 90 phantoms characterizing the different inter-fiber angles. Results are based on the same parameters and options used in Fig 1: dictionary created using the sharper fiber response model, noisy datasets with SNR = 15, and reconstructions using 200 iterations. Average values of SR, n+ and n− are reported in Fig G in S2 File.
Fig 3. Reconstruction accuracy of RUMBA-SD+TV and dRL-SD+TV. Reconstruction accuracy of RUMBA-SD+TV (blue) and dRL-SD+TV (red) is shown in terms of the angular error (θ) (see Eq (17)) and the volume fraction error (Δf) (see Eq (18)) as a function of the inter-fiber angle in the 90 synthetic phantoms. Continuous lines are the mean values for each method, and semi-transparent coloured bands contain values within one standard deviation on both sides of the mean. This analysis is based on a dictionary created with the same diffusivities used to generate the data with SNR = 15.
doi:10.1371/journal.pone.0138910.g003
Fig 5. Main peaks in the 33-degrees phantom data. Main peaks extracted from the fiber ODFs estimated in the phantom data with inter-fiber angle equal to 33 degrees and Rician noise with SNR = 15 are shown. Results are based on reconstructions using 200 iterations. Peaks are visualized as thin cylinders.
doi:10.1371/journal.pone.0138910.g005
Fig 6 depicts the peaks extracted from the fiber ODFs estimated in a complex region containing various tracts from the SMF-based data generated with SNR = 20. Results come from reconstructions using 400 iterations and a dictionary with diffusivities equal to λ₁ = 1.6 × 10⁻³ mm²/s and λ₂ = λ₃ = 0.3 × 10⁻³ mm²/s. Figs H, I, J and K in S2 File show the results from both methods and their regularized versions in the whole slice. Additionally, Fig L in S2 File shows the results corresponding to the reconstructions using 1000 iterations on the same region of interest depicted in Fig 6.

Fig 6. Main peaks from the fiber ODFs estimated in the "HARDI Reconstruction Challenge 2013" phantom. Visualization of the main peaks extracted from the fiber ODFs reconstructed from the SMF-based data generated with SNR = 20 in a complex region of the "HARDI Reconstruction Challenge 2013" phantom. Results are based on reconstructions using 400 iterations. Peaks are visualized as thin cylinders.
doi:10.1371/journal.pone.0138910.g006
Fig 9 depicts scatter plots of inter-fiber angles estimated by dRL-SD and RUMBA-SD (panel A) and by dRL-SD+TV and RUMBA-SD+TV (panel B). These results are based on reconstructions employing 300 iterations in the 64-direction SMF-based dataset.
Fig 7. Fiber ODF profiles estimated from real data. Visualization of the fiber ODFs estimated in a region of interest on the right brain hemisphere. Results from both SMF- and SoS-based multichannel diffusion datasets (i.e., with Rician and Noncentral Chi noise, respectively) are depicted. The background images are the generalized fractional anisotropy images computed from each reconstruction.
doi:10.1371/journal.pone.0138910.g007
Fig 8. Fiber ODF profiles estimated from real data. Visualization of the fiber ODFs estimated from RUMBA-SD and RUMBA-SD+TV in a region of interest on the left brain hemisphere. Results are based on estimates employing 300 iterations. The upper and lower panels correspond to results from the full and reduced SMF-based datasets, respectively. The following tracts are highlighted: alic (anterior limb of internal capsule), ec (external capsule), and part of the slf (superior longitudinal fasciculus).
doi:10.1371/journal.pone.0138910.g008
Fig 9. Scatter plots of inter-fiber angles estimated in real data. Scatter plots of the inter-fiber angles estimated by dRL-SD and RUMBA-SD (panel A) and by dRL-SD+TV and RUMBA-SD+TV (panel B) in the same voxels. Results are based on reconstructions in the 64-direction SMF dataset.
doi:10.1371/journal.pone.0138910.g009
S1 File. Supplementary appendices. (DOCX)
S2 File. Supplementary figures. (DOCX)
where S = [S₁ … S_i … S_N]ᵀ and H = [H^WM | H^ISO] comprises two sub-matrices. H^WM is an N × M matrix where every column of length N contains the values of the signal generated by the model given in Eq (1) for a single fiber-bundle compartment oriented along one of the M directions, i.e., the (i, j)-th element of H^WM is equal to H^WM_ij.
Table 1. General pseudocode of the MAP algorithm.
PLOS ONE | DOI:10.1371/journal.pone.0138910 October 15, 2015
Acknowledgments

We would like to thank Dr Karla Miller for assisting us with data collection. The presented study is a tribute to composers who popularized Rumba music in the 1940s and 50s, including Dámaso Pérez Prado, Mongo Santamaría, Xavier Cugat, and Chano Pozo.

Author Contributions
1. Salat DH, Tuch DS, Greve DN, van der Kouwe AJ, Hevelone ND, Zaleta AK, et al. Age-related alterations in white matter microstructure measured by diffusion tensor imaging. Neurobiol Aging. 2005;26(8):1215-27. PMID: 15917106
2. Iturria-Medina Y, Perez Fernandez A, Morris DM, Canales-Rodriguez EJ, Haroon HA, Garcia Penton L, et al. Brain hemispheric structural efficiency and interconnectivity rightward asymmetry in human and nonhuman primates. Cereb Cortex. 2011;21(1):56-67. doi: 10.1093/cercor/bhq058 PMID: 20382642
3. Iturria-Medina Y, Sotero RC, Canales-Rodriguez EJ, Aleman-Gomez Y, Melie-Garcia L. Studying the human brain anatomical network via diffusion-weighted MRI and Graph Theory. Neuroimage. 2008;40(3):1064-76. doi: 10.1016/j.neuroimage.2007.10.060 PMID: 18272400
4. Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, Wedeen VJ, et al. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008;6(7):e159. doi: 10.1371/journal.pbio.0060159
5. Basser PJ, Mattiello J, LeBihan D. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B. 1994;103(3):247-54. PMID: 8019776
6. Tuch DS, Reese TG, Wiegell MR, Makris N, Belliveau JW, Wedeen VJ. High angular resolution diffusion imaging reveals intravoxel white matter fiber heterogeneity. Magn Reson Med. 2002;48(4):577-82. PMID: 12353272
7. Fillard P, Descoteaux M, Goh A, Gouttard S, Jeurissen B, Malcolm J, et al. Quantitative evaluation of 10 tractography algorithms on a realistic diffusion MR phantom. Neuroimage. 2011;56(1):220-34. doi: 10.1016/j.neuroimage.2011.01.032 PMID: 21256221
8. Assemlal HE, Tschumperle D, Brun L, Siddiqi K. Recent advances in diffusion MRI modeling: Angular and radial reconstruction. Med Image Anal. 2011;15(4):369-96. doi: 10.1016/j.media.2011.02.002 PMID: 21397549
9. Daducci A, Canales-Rodriguez EJ, Descoteaux M, Garyfallidis E, Gur Y, Lin YC, et al. Quantitative Comparison of Reconstruction Methods for Intra-Voxel Fiber Recovery From Diffusion MRI. IEEE Trans Med Imaging. 2014;33(2):384-99.
10. Tuch DS. Q-ball imaging. Magn Reson Med. 2004;52(6):1358-72. PMID: 15562495
11. Aganj I, Lenglet C, Sapiro G, Yacoub E, Ugurbil K, Harel N. Reconstruction of the orientation distribution function in single- and multiple-shell q-ball imaging within constant solid angle. Magn Reson Med. 2010;64(2):554-66. doi: 10.1002/mrm.22365 PMID: 20535807
12. Canales-Rodriguez EJ, Melie-Garcia L, Iturria-Medina Y. Mathematical description of q-space in spherical coordinates: exact q-ball imaging. Magn Reson Med. 2009;61(6):1350-67. doi: 10.1002/mrm.21917 PMID: 19319889
13. Descoteaux M, Angelino E, Fitzgibbons S, Deriche R. Regularized, fast, and robust analytical Q-ball imaging. Magn Reson Med. 2007;58(3):497-510. PMID: 17763358
14. Anderson AW. Measurement of fiber orientation distributions using high angular resolution diffusion imaging. Magn Reson Med. 2005;54(5):1194-206. PMID: 16161109
15. Hess CP, Mukherjee P, Han ET, Xu D, Vigneron DB. Q-ball reconstruction of multimodal fiber orientations using the spherical harmonic basis. Magn Reson Med. 2006;56(1):104-17. PMID: 16755539
16. Tristan-Vega A, Westin CF, Aja-Fernandez S. A new methodology for the estimation of fiber populations in the white matter of the brain with the Funk-Radon transform. Neuroimage. 2010;49(2):1301-15. doi: 10.1016/j.neuroimage.2009.09.070 PMID: 19815078
17. Tristan-Vega A, Westin CF, Aja-Fernandez S. Estimation of fiber orientation probability density functions in high angular resolution diffusion imaging. Neuroimage. 2009;47(2):638-50. doi: 10.1016/j.neuroimage.2009.04.049 PMID: 19393321
18. Canales-Rodriguez EJ, Lin CP, Iturria-Medina Y, Yeh CH, Cho KH, Melie-Garcia L. Diffusion orientation transform revisited. Neuroimage. 2010;49(2):1326-39. doi: 10.1016/j.neuroimage.2009.09.067 PMID: 19815083
19. Ozarslan E, Shepherd TM, Vemuri BC, Blackband SJ, Mareci TH. Resolution of complex tissue microarchitecture using the diffusion orientation transform (DOT). Neuroimage. 2006;31(3):1086-103. PMID: 16546404
20. Wedeen VJ, Hagmann P, Tseng WY, Reese TG, Weisskoff RM. Mapping complex tissue architecture with diffusion spectrum magnetic resonance imaging. Magn Reson Med. 2005;54(6):1377-86. PMID: 16247738
21. Canales-Rodriguez EJ, Iturria-Medina Y, Aleman-Gomez Y, Melie-Garcia L. Deconvolution in diffusion spectrum imaging. Neuroimage. 2010;50(1):136-49. doi: 10.1016/j.neuroimage.2009.11.066 PMID: 19962440
22. Hosseinbor AP, Chung MK, Wu YC, Alexander AL. Bessel Fourier Orientation Reconstruction (BFOR): an analytical diffusion propagator reconstruction for hybrid diffusion imaging and computation of q-space indices. Neuroimage. 2013;64:650-70. doi: 10.1016/j.neuroimage.2012.08.072 PMID: 22963853
23. Descoteaux M, Deriche R, Le Bihan D, Mangin JF, Poupon C. Multiple q-shell diffusion propagator imaging. Med Image Anal. 2011;15(4):603-21. doi: 10.1016/j.media.2010.07.001 PMID: 20685153
24. Wu YC, Alexander AL. Hybrid diffusion imaging. Neuroimage. 2007;36(3):617-29. PMID: 17481920
25. Yeh FC, Wedeen VJ, Tseng WY. Generalized q-sampling imaging. IEEE Trans Med Imaging. 2010;29(9):1626-35. doi: 10.1109/TMI.2010.2045126 PMID: 20304721
26. Ozarslan E, Koay CG, Shepherd TM, Komlosh ME, Irfanoglu MO, Pierpaoli C, et al. Mean apparent propagator (MAP) MRI: a novel diffusion imaging method for mapping tissue microstructure. Neuroimage. 2013;78:16-32. doi: 10.1016/j.neuroimage.2013.04.016 PMID: 23587694
27. Ozarslan E, Mareci TH. Generalized diffusion tensor imaging and analytical relationships between diffusion tensor imaging and high angular resolution diffusion imaging. Magn Reson Med. 2003;50(5):955-65. PMID: 14587006
28. Liu C, Bammer R, Acar B, Moseley ME. Characterizing non-Gaussian diffusion by using generalized diffusion tensors. Magn Reson Med. 2004;51(5):924-37. PMID: 15122674
29. Jensen JH, Helpern JA, Ramani A, Lu H, Kaczynski K. Diffusional kurtosis imaging: the quantification of non-gaussian water diffusion by means of magnetic resonance imaging. Magn Reson Med. 2005;53(6):1432-40. PMID: 15906300
30. Behrens TE, Berg HJ, Jbabdi S, Rushworth MF, Woolrich MW. Probabilistic diffusion tractography with multiple fibre orientations: What can we gain? Neuroimage. 2007;34(1):144-55. PMID: 17070705
31. Jbabdi S, Sotiropoulos SN, Savio AM, Grana M, Behrens TE. Model-based analysis of multishell diffusion MR data for tractography: how to get over fitting problems. Magn Reson Med. 2012;68(6):1846-55. doi: 10.1002/mrm.24204 PMID: 22334356
32. Melie-Garcia L, Canales-Rodriguez EJ, Aleman-Gomez Y, Lin CP, Iturria-Medina Y, Valdes-Hernandez PA. A Bayesian framework to identify principal intravoxel diffusion profiles based on diffusion-weighted MR imaging. Neuroimage. 2008;42(2):750-70. doi: 10.1016/j.neuroimage.2008.04.242 PMID: 18571437
33. Michailovich O, Rathi Y, Dolui S. Spatially regularized compressed sensing for high angular resolution diffusion imaging. IEEE Trans Med Imaging. 2011;30(5):1100-15. doi: 10.1109/TMI.2011.2142189 PMID: 21536524
34. Landman BA, Bogovic JA, Wan H, El Zahraa ElShahaby F, Bazin PL, Prince JL. Resolution of crossing fibers with constrained compressed sensing using diffusion tensor MRI. Neuroimage. 2012;59(3):2175-86. doi: 10.1016/j.neuroimage.2011.10.011 PMID: 22019877
35. Daducci A, Van De Ville D, Thiran JP, Wiaux Y. Sparse regularization for fiber ODF reconstruction: from the suboptimality of l2 and l1 priors to l0. Med Image Anal. 2014;18(6):820-33. doi: 10.1016/j.media.2014.01.011 PMID: 24593935
36. Patel V, Shi Y, Thompson PM, Toga AW. Mesh-based spherical deconvolution: a flexible approach to reconstruction of non-negative fiber orientation distributions. Neuroimage. 2010;51(3):1071-81. doi: 10.1016/j.neuroimage.2010.02.060 PMID: 20206705
37. Dell'Acqua F, Scifo P, Rizzo G, Catani M, Simmons A, Scotti G, et al. A modified damped Richardson-Lucy algorithm to reduce isotropic background effects in spherical deconvolution. Neuroimage. 2010;49(2):1446-58. doi: 10.1016/j.neuroimage.2009.09.033 PMID: 19781650
38. Kaden E, Knosche TR, Anwander A. Parametric spherical deconvolution: inferring anatomical connectivity using diffusion MR imaging. Neuroimage. 2007;37(2):474-88. PMID: 17596967
39. Tournier JD, Calamante F, Connelly A. Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. Neuroimage. 2007;35(4):1459-72. PMID: 17379540
40. Tournier JD, Calamante F, Gadian DG, Connelly A. Direct estimation of the fiber orientation density function from diffusion-weighted MRI data using spherical deconvolution. Neuroimage. 2004;23(3):1176-85. PMID: 15528117
41. Kaden E, Kruggel F. Nonparametric Bayesian inference of the fiber orientation distribution from diffusion-weighted MR images. Med Image Anal. 2012;16(4):876-88. doi: 10.1016/j.media.2012.01.004 PMID: 22381587
42. Dell'Acqua F, Rizzo G, Scifo P, Clarke RA, Scotti G, Fazio F. A model-based deconvolution approach to solve fiber crossing in diffusion-weighted MR imaging. IEEE Trans Biomed Eng. 2007;54(3):462-72. PMID: 17355058
43. Ramirez-Manzanares A, Rivera M, Vemuri BC, Carney P, Mareci T. Diffusion basis functions decomposition for estimating white matter intravoxel fiber geometry. IEEE Trans Med Imaging. 2007;26(8):1091-102. PMID: 17695129
44. Jian B, Vemuri BC. A unified computational framework for deconvolution to reconstruct multiple fibers from diffusion weighted MRI. IEEE Trans Med Imaging. 2007;26(11):1464-71. PMID: 18041262
45. Alexander DC. Maximum entropy spherical deconvolution for diffusion MRI. Inf Process Med Imaging. 2005;19:76-87. PMID: 17354686
46. Sotiropoulos SN, Behrens TE, Jbabdi S. Ball and rackets: Inferring fiber fanning from diffusion-weighted MRI. Neuroimage. 2012;60(2):1412-25. doi: 10.1016/j.neuroimage.2012.01.056 PMID: 22270351
47. Behrens TE, Woolrich MW, Jenkinson M, Johansen-Berg H, Nunes RG, Clare S, et al. Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magn Reson Med. 2003;50(5):1077-88. PMID: 14587019
48. Descoteaux M, Deriche R, Knosche TR, Anwander A. Deterministic and probabilistic tractography based on complex fibre orientation distributions. IEEE Trans Med Imaging. 2009;28(2):269-86. doi: 10.1109/TMI.2008.2004424 PMID: 19188114
49. Yeh FC, Tseng WY. Sparse solution of fiber orientation distribution function by diffusion decomposition. PLoS One. 2013;8(10):e75747. doi: 10.1371/journal.pone.0075747 PMID: 24146772
50. Parker GD, Marshall D, Rosin PL, Drage N, Richmond S, Jones DK. A pitfall in the reconstruction of fibre ODFs using spherical deconvolution of diffusion MRI data. Neuroimage. 2013;65:433-48. doi: 10.1016/j.neuroimage.2012.10.022 PMID: 23085109
51. Dey N, Blanc-Feraud L, Zimmer C, Roux P, Kam Z, Olivo-Marin JC, et al. Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution. Microsc Res Tech. 2006;69(4):260-6. PMID: 16586486
52. Starck JL, Pantin E, Murtagh F. Deconvolution in Astronomy: A Review. Publications of the Astronomical Society of the Pacific. 2002;114:1051-69.
53. Gudbjartsson H, Patz S. The Rician distribution of noisy MRI data. Magn Reson Med. 1995;34(6):910-4. PMID: 8598820
54. Dietrich O, Raya JG, Reeder SB, Ingrisch M, Reiser MF, Schoenberg SO. Influence of multichannel combination, parallel imaging and other reconstruction techniques on MRI noise characteristics. Magn Reson Imaging. 2008;26(6):754-62. doi: 10.1016/j.mri.2008.02.001 PMID: 18440746
55. Sotiropoulos SN, Moeller S, Jbabdi S, Xu J, Andersson JL, Auerbach EJ, et al. Effects of image reconstruction on fibre orientation mapping from multichannel diffusion MRI: Reducing the noise floor using SENSE. Magn Reson Med. 2013.
56. Reisert M, Kiselev VG. Fiber continuity: an anisotropic prior for ODF estimation. IEEE Trans Med Imaging. 2011;30(6):1274-83. doi: 10.1109/TMI.2011.2112769 PMID: 21317082
57. Tournier J-D, Calamante F, Connelly A. A Robust Spherical Deconvolution Method for the Analysis of Low SNR or Low Angular Resolution Diffusion Data. ISMRM 21st Annual Meeting; 20-26 April 2013; Salt Lake City, Utah, USA.
58. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60:259-68.
59. Blaimer M, Breuer F, Mueller M, Heidemann RM, Griswold MA, Jakob PM. SMASH, SENSE, PILS, GRAPPA: how to choose the optimal method. Top Magn Reson Imaging. 2004;15(4):223-36. PMID: 15548953
60. Constantinides CD, Atalar E, McVeigh ER. Signal-to-noise measurements in magnitude images from NMR phased arrays. Magn Reson Med. 1997;38(5):852-7. PMID: 9358462
61. Aja-Fernandez S, Tristan-Vega A, Hoge WS. Statistical noise analysis in GRAPPA using a parametrized noncentral Chi approximation model. Magn Reson Med. 2011;65(4):1195-206. doi: 10.1002/mrm.22701 PMID: 21413083
62. Keil B, Wald LL. Massively parallel MRI detector arrays. J Magn Reson. 2013;229:75-89. doi: 10.1016/j.jmr.2013.02.001 PMID: 23453758
63. Aja-Fernandez S, Tristan-Vega A. Influence of noise correlation in multiple-coil statistical models with sum of squares reconstruction. Magn Reson Med. 2012;67(2):580-5. doi: 10.1002/mrm.23020 PMID: 21656560
64. Aja-Fernandez S, Vegas-Sanchez-Ferrero G, Tristan-Vega A. Noise estimation in parallel MRI: GRAPPA and SENSE. Magn Reson Imaging. 2014;32(3):281-90. doi: 10.1016/j.mri.2013.12.001 PMID: 24418329
65. Lucy LB. An iterative technique for the rectification of observed distributions. Astron J. 1974;79:745-54.
66. Richardson WH. Bayesian-based iterative method of image restoration. J Opt Soc Am. 1972;62:55-9.
67. Henkelman RM. Measurement of signal intensities in the presence of noise in MR images. Med Phys. 1985;12(2):232-3. PMID: 4000083
68. Aja-Fernandez S, Tristan-Vega A, Alberola-Lopez C. Noise estimation in single- and multiple-coil magnetic resonance data based on statistical models. Magn Reson Imaging. 2009;27(10):1397-409. doi: 10.1016/j.mri.2009.05.025 PMID: 19570640
69. Chambolle A. An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision. 2004;20:89-97.
70. Gautschi W, Slavik J. On the Computation of Modified Bessel Function Ratios. Mathematics of Computation. 1978;32(143):865-75.
71. Söderman O, Jönsson B. Restricted diffusion in cylindrical geometry. Journal of Magnetic Resonance, Series A. 1995;117:94-7.
SENSE: sensitivity encoding for fast MRI. K P Pruessmann, M Weiger, M B Scheidegger, P Boesiger, 10542355Magn Reson Med. 425Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med. 1999; 42(5):952-62. PMID: 10542355
Effective noise estimation and filtering from correlated multiple-coil MR data. Aja-Fernandez S Brion, V , Tristan - Vega, A , 10.1016/j.mri.2012.07.00623122024Magn Reson Imaging. 312Aja-Fernandez S, Brion V, Tristan-Vega A. Effective noise estimation and filtering from correlated multi- ple-coil MR data. Magn Reson Imaging. 2013; 31(2):272-85. doi: 10.1016/j.mri.2012.07.006 PMID: 23122024
Inferring multiple maxima in intravoxel white matter fiber distribution. E J Canales-Rodriguez, L Melie-Garcia, Y Iturria-Medina, E Martinez-Montes, Y Aleman-Gomez, C P Lin, 10.1002/mrm.2167318727080Magn Reson Med. 603Canales-Rodriguez EJ, Melie-Garcia L, Iturria-Medina Y, Martinez-Montes E, Aleman-Gomez Y, Lin CP. Inferring multiple maxima in intravoxel white matter fiber distribution. Magn Reson Med. 2008; 60 (3):616-30. doi: 10.1002/mrm.21673 PMID: 18727080
Advances in functional and structural MR image analysis and implementation as FSL. S M Smith, M Jenkinson, M W Woolrich, C F Beckmann, T E Behrens, H Johansen-Berg, 15501092Neuroimage. 231SupplSmith SM, Jenkinson M, Woolrich MW, Beckmann CF, Behrens TE, Johansen-Berg H, et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage. 2004; 23 Suppl 1:S208-19. PMID: 15501092
Noise correction on Rician distributed data for fibre orientation estimators. R A Clarke, P Scifo, G Rizzo, F Dell'acqua, G Scotti, F Fazio, 10.1109/TMI.2008.92061518753042IEEE Trans Med Imaging. 279Clarke RA, Scifo P, Rizzo G, Dell'Acqua F, Scotti G, Fazio F. Noise correction on Rician distributed data for fibre orientation estimators. IEEE Trans Med Imaging. 2008; 27(9):1242-51. doi: 10.1109/TMI. 2008.920615 PMID: 18753042
Least squares for diffusion tensor estimation revisited: propagation of uncertainty with Rician and non-Rician signals. A Tristan-Vega, Aja-Fernandez , S Westin, C F , 10.1016/j.neuroimage.2011.09.07422015852Neuroimage. 594Tristan-Vega A, Aja-Fernandez S, Westin CF. Least squares for diffusion tensor estimation revisited: propagation of uncertainty with Rician and non-Rician signals. Neuroimage. 2012; 59(4):4032-43. doi: 10.1016/j.neuroimage.2011.09.074 PMID: 22015852
Formal characterization and extension of the linearized diffusion tensor model. R Salvador, A Pena, D K Menon, T A Carpenter, J D Pickard, E T Bullmore, 15468122Hum Brain Mapp. 242Salvador R, Pena A, Menon DK, Carpenter TA, Pickard JD, Bullmore ET. Formal characterization and extension of the linearized diffusion tensor model. Hum Brain Mapp. 2005; 24(2):144-55. PMID: 15468122
Noise correction for HARDI and HYDI data obtained with multi-channel coils and Sum of Squares reconstruction: An anisotropic extension of the LMMSE. V Brion, C Poupon, O Riff, Aja-Fernandez , S , Tristan - Vega, A Mangin, J F , 10.1016/j.mri.2013.04.00223659768Magn Reson Imaging. 318Brion V, Poupon C, Riff O, Aja-Fernandez S, Tristan-Vega A, Mangin JF, et al. Noise correction for HARDI and HYDI data obtained with multi-channel coils and Sum of Squares reconstruction: An aniso- tropic extension of the LMMSE. Magn Reson Imaging. 2013; 31(8):1360-71. doi: 10.1016/j.mri.2013. 04.002 PMID: 23659768
A signal transformational framework for breaking the noise floor and its applications in MRI. C G Koay, E Ozarslan, P J Basser, 10.1016/j.jmr.2008.11.01519138540J Magn Reson. 1972Koay CG, Ozarslan E, Basser PJ. A signal transformational framework for breaking the noise floor and its applications in MRI. J Magn Reson. 2009; 197(2):108-19. doi: 10.1016/j.jmr.2008.11.015 PMID: 19138540
Dipy, a library for the analysis of diffusion MRI data. E Garyfallidis, M Brett, B Amirbekian, A Rokem, S Van Der Walt, M Descoteaux, 10.3389/fninf.2014.0000824600385Front Neuroinform. 88Garyfallidis E, Brett M, Amirbekian B, Rokem A, Van Der Walt S, Descoteaux M, et al. Dipy, a library for the analysis of diffusion MRI data. Front Neuroinform. 2014; 8:8. doi: 10.3389/fninf.2014.00008 PMID: 24600385
Adaptive non-local means denoising of MR images with spatially varying noise levels. J V Manjon, P Coupe, L Marti-Bonmati, D L Collins, M Robles, 10.1002/jmri.2200320027588J Magn Reson Imaging. 311Manjon JV, Coupe P, Marti-Bonmati L, Collins DL, Robles M. Adaptive non-local means denoising of MR images with spatially varying noise levels. J Magn Reson Imaging. 2010; 31(1):192-203. doi: 10. 1002/jmri.22003 PMID: 20027588
Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data. B Jeurissen, J D Tournier, T Dhollander, A Connelly, J Sijbers, Neuroimage. Jeurissen B, Tournier JD, Dhollander T, Connelly A, Sijbers J. Multi-tissue constrained spherical decon- volution for improved analysis of multi-shell diffusion MRI data. Neuroimage. 2014.
Recursive calibration of the fiber response function for spherical deconvolution of diffusion MRI data. C M Tax, B Jeurissen, S B Vos, M A Viergever, A Leemans, 10.1016/j.neuroimage.2013.07.06723927905Neuroimage. 86Tax CM, Jeurissen B, Vos SB, Viergever MA, Leemans A. Recursive calibration of the fiber response function for spherical deconvolution of diffusion MRI data. Neuroimage. 2014; 86:67-80. doi: 10.1016/j. neuroimage.2013.07.067 PMID: 23927905
Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data. A Daducci, E J Canales-Rodriguez, H Zhang, T B Dyrby, D C Alexander, J P Thiran, 10.1016/j.neuroimage.2014.10.02625462697Neuroimage. 105Daducci A, Canales-Rodriguez EJ, Zhang H, Dyrby TB, Alexander DC, Thiran JP. Accelerated Micro- structure Imaging via Convex Optimization (AMICO) from diffusion MRI data. Neuroimage. 2015; 105:32-44. doi: 10.1016/j.neuroimage.2014.10.026 PMID: 25462697
HELICITY FORMALISM AND SPIN ASYMMETRIES IN HADRONIC PROCESSES *

M. Anselmino, M. Boglione
Dipartimento di Fisica Teorica, Università di Torino and INFN, Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy

U. D'Alesio, S. Melis, F. Murgia
Dipartimento di Fisica, Università di Cagliari and INFN, Sezione di Cagliari, C.P. 170, 09042 Monserrato (CA), Italy

E. Leader
Imperial College London, Prince Consort Road, London SW7 2BW, U.K.

arXiv:hep-ph/0512124v1, 9 Dec 2005; doi: 10.1142/9789812773272_0026

Abstract: We present a generalized QCD factorization scheme for the high energy inclusive polarized process, (A, S_A) + (B, S_B) → C + X, including all intrinsic partonic motions. This introduces many non-trivial azimuthal phases and several new spin and k_⊥ dependent soft functions. The formal expressions for single and double spin asymmetries are discussed. Numerical results for A_N
Introduction and formalism
Recently (Refs. 1, 2, 3) we have developed an approach to study (un)polarized cross sections for inclusive particle production in hadronic collisions at high energy and moderately large p_T and semi-inclusive deeply inelastic scattering (SIDIS, Ref. 4). Assuming that factorization is preserved, this approach generalizes the usual Leading Order (LO), collinear perturbative QCD formalism by including spin and intrinsic transverse momentum, k_⊥, effects both in the soft contributions (parton distribution (PDF) and fragmentation (FF) functions) and in the elementary processes. Helicity formalism is adopted and exact non-collinear kinematics is fully taken into account. Unpolarized cross sections and transverse single spin asymmetries (SSA), with emphasis on the Sivers (Ref. 5) and Collins (Ref. 6) effects, were already discussed in Refs. 1, 2. Here we report on the most complete case of unpolarized cross sections and single and double spin asymmetries for the process (A, S_A) + (B, S_B) → C + X (Ref. 3). The cross section for this process can be given as a LO (factorized) convolution of all possible hard elementary QCD processes, ab → cd, with soft, leading twist, spin and k_⊥ dependent PDF and FF (see Eq. (8) of Ref. 2):
$$
E_C\,\frac{d\sigma^{(A,S_A)+(B,S_B)\to C+X}}{d^3\boldsymbol{p}_C}
= \sum_{a,b,c,d,\{\lambda\}} \int \frac{dx_a\,dx_b\,dz}{16\pi^2\,x_a x_b z^2 s}\;
d^2\boldsymbol{k}_{\perp a}\,d^2\boldsymbol{k}_{\perp b}\,d^3\boldsymbol{k}_{\perp C}\,
\delta(\boldsymbol{k}_{\perp C}\cdot\hat{\boldsymbol{p}}_c)\,J(\boldsymbol{k}_{\perp C})
$$
$$
\times\;\rho^{a/A,S_A}_{\lambda_a,\lambda'_a}\,\hat f_{a/A,S_A}(x_a,\boldsymbol{k}_{\perp a})\;
\rho^{b/B,S_B}_{\lambda_b,\lambda'_b}\,\hat f_{b/B,S_B}(x_b,\boldsymbol{k}_{\perp b})
$$
$$
\times\;\hat M_{\lambda_c,\lambda_d;\lambda_a,\lambda_b}\,
\hat M^*_{\lambda'_c,\lambda_d;\lambda'_a,\lambda'_b}\,
\delta(\hat s+\hat t+\hat u)\,
\hat D^{\lambda_C,\lambda_C}_{\lambda_c,\lambda'_c}(z,\boldsymbol{k}_{\perp C})\,,
\qquad (1)
$$
where $A$, $B$ are initial spin-1/2 hadrons in pure spin states $S_A$ and $S_B$; $C$ is the unpolarized observed hadron; $J(\boldsymbol{k}_{\perp C})$ is a phase-space kinematical factor (Ref. 1); $\rho^{a/A,S_A}_{\lambda_a,\lambda'_a}\,\hat f_{a/A,S_A}(x_a,\boldsymbol{k}_{\perp a})$ contains all information on parton $a$ and its polarization state, through its helicity density matrix and the spin and $k_\perp$ dependent PDF (analogously for parton $b$); $\hat M_{\lambda_c,\lambda_d;\lambda_a,\lambda_b}$ are the LO helicity amplitudes for the elementary process $ab \to cd$; and $\hat D^{\lambda_C,\lambda_C}_{\lambda_c,\lambda'_c}(z,\boldsymbol{k}_{\perp C})$ is a product of soft helicity fragmentation amplitudes for the $c \to C + X$ process. The remaining notation, in particular for kinematical variables, should be obvious (Refs. 1, 2). Let us stress here that a formal proof of factorization for the $AB \to C + X$ process in the non-collinear case is still missing; universality and evolution properties of the new spin and $k_\perp$ dependent PDF and FF are not established or well known yet; and a consistent account of all higher-twist effects is still missing. In the sequel, we will discuss in more detail the basic ingredients of Eq. (1).
Spin and k ⊥ dependent PDF and FF (leading twist)
The most general expression for the helicity density matrix of quark $a$ inside hadron $A$ with polarization state $S_A$ is

$$
\rho^{a/A,S_A}_{\lambda_a,\lambda'_a}
= \frac{1}{2}\begin{pmatrix} 1+P^a_z & P^a_x - iP^a_y \\ P^a_x + iP^a_y & 1-P^a_z \end{pmatrix}_{A,S_A}
= \frac{1}{2}\begin{pmatrix} 1+P^a_L & P^a_T\, e^{-i\phi_{s_a}} \\ P^a_T\, e^{i\phi_{s_a}} & 1-P^a_L \end{pmatrix}_{A,S_A},
\qquad (2)
$$
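As a quick numerical cross-check of the structure of Eq. (2) (our own illustration, not part of the paper; the function name is ours): any helicity density matrix of the form $\rho = \tfrac{1}{2}(\mathbb{1} + \boldsymbol{P}\cdot\boldsymbol{\sigma})$ with $|\boldsymbol{P}| \le 1$ is Hermitian, has unit trace, and is positive semi-definite, as a density matrix must be.

```python
import numpy as np

def helicity_density_matrix(P):
    """Build rho = (1/2)(1 + P.sigma) from a polarization vector P = (Px, Py, Pz)."""
    Px, Py, Pz = P
    return 0.5 * np.array([[1 + Pz, Px - 1j * Py],
                           [Px + 1j * Py, 1 - Pz]])

rho = helicity_density_matrix((0.3, -0.4, 0.5))
assert np.allclose(rho, rho.conj().T)               # Hermitian
assert np.isclose(np.trace(rho).real, 1.0)          # unit trace
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)    # positive semi-definite for |P| <= 1
```

For a fully polarized quark, $|\boldsymbol{P}| = 1$, the eigenvalues are 0 and 1 and $\rho$ is a pure-state projector.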
where $P^a_j$ is the $j$-th component of the quark polarization vector in its helicity frame. Introducing soft, nonperturbative helicity amplitudes $\hat F_{\lambda_a,\lambda_{X_A};\lambda_A}$ for the inclusive process $A \to a + X$, we can write

$$
\rho^{a/A,S_A}_{\lambda_a,\lambda'_a}\,\hat f_{a/A,S_A}(x_a,\boldsymbol{k}_{\perp a})
= \sum_{\lambda_A,\lambda'_A} \rho^{A,S_A}_{\lambda_A,\lambda'_A}
\sum_{X_A,\lambda_{X_A}} \hat F_{\lambda_a,\lambda_{X_A};\lambda_A}\,\hat F^*_{\lambda'_a,\lambda_{X_A};\lambda'_A}
\equiv \sum_{\lambda_A,\lambda'_A} \rho^{A,S_A}_{\lambda_A,\lambda'_A}\,
\hat F^{\lambda_a,\lambda'_a}_{\lambda_A,\lambda'_A},
\qquad (3)
$$
where $\rho^{A,S_A}_{\lambda_A,\lambda'_A}$ is in turn the helicity density matrix of hadron $A$,

$$
\rho^{A,S_A}_{\lambda_A,\lambda'_A}
= \frac{1}{2}\begin{pmatrix} 1+P^A_Z & P^A_X - iP^A_Y \\ P^A_X + iP^A_Y & 1-P^A_Z \end{pmatrix}
= \frac{1}{2}\begin{pmatrix} 1+P^A_L & P^A_T\, e^{-i\phi_{S_A}} \\ P^A_T\, e^{i\phi_{S_A}} & 1-P^A_L \end{pmatrix},
\qquad (4)
$$

and $P^A_J$ is the $J$-th component of the hadron polarization vector in its helicity rest frame. The definition of $\hat F^{\lambda_a,\lambda'_a}_{\lambda_A,\lambda'_A}$ in terms of the helicity amplitudes $\hat F_{\lambda_a,\lambda_{X_A};\lambda_A}$ can be deduced from Eq. (3). One can see that, due to rotational invariance and parity properties, the following relations hold (Ref. 7):

$$
\hat F^{\lambda'_a,\lambda_a}_{\lambda'_A,\lambda_A} = \Big(\hat F^{\lambda_a,\lambda'_a}_{\lambda_A,\lambda'_A}\Big)^*,
\qquad
\hat F^{\lambda_a,\lambda'_a}_{\lambda_A,\lambda'_A}(x_a,\boldsymbol{k}_{\perp a})
= F^{\lambda_a,\lambda'_a}_{\lambda_A,\lambda'_A}(x_a,k_{\perp a})\,
\exp\!\big[\,i(\lambda_A-\lambda'_A)\phi_a\big],
\qquad (5)
$$
$$
F^{-\lambda_a,-\lambda'_a}_{-\lambda_A,-\lambda'_A}
= (-1)^{2(S_A-s_a)}\,(-1)^{(\lambda_A-\lambda_a)+(\lambda'_A-\lambda'_a)}\,
F^{\lambda_a,\lambda'_a}_{\lambda_A,\lambda'_A}.
$$
Using Eq. (5) one can easily associate the eight functions

$$
F^{++}_{++},\; F^{++}_{--},\; F^{+-}_{+-},\; F^{+-}_{-+},\; F^{++}_{+-},\; F^{--}_{+-},\; F^{+-}_{++},\; F^{+-}_{--},
\qquad (6)
$$
to the eight leading twist, spin and $k_\perp$ dependent PDF: $F^{++}_{++} \pm F^{++}_{--}$ are respectively related to the unpolarized and longitudinally polarized PDF, $f_{a/A}$ and $\Delta_L f_{a/A}$; $F^{+-}_{+-} \pm F^{+-}_{-+}$ are related to the two possible contributions to the transversity PDF; $F^{++}_{+-} \pm F^{--}_{+-}$ are respectively related to the probability of finding an unpolarized (longitudinally polarized) parton inside a transversely polarized hadron, i.e. the Sivers function (Ref. 5), $\Delta^N f_{a/A^\uparrow}$ (the $g^\perp_{1T}$ PDF); $F^{+-}_{++} \pm F^{+-}_{--}$ are related to the probability of finding a transversely polarized parton inside an unpolarized hadron (the so-called Boer-Mulders function (Ref. 8), $\Delta^N f_{a^\uparrow/A}$) and inside a longitudinally polarized hadron (the $h^\perp_{1L}$ PDF), respectively. More precisely, the relations are the following (Ref. 3):
$$
\begin{aligned}
f_{a/A} &= \hat f_{a/A,S_L} = F^{++}_{++} + F^{++}_{--} \\
\hat f_{a/A,S_T} &= f_{a/A} + \tfrac{1}{2}\,\Delta \hat f_{a/S_T}
= F^{++}_{++} + F^{++}_{--} + 2\,\mathrm{Im}\,F^{++}_{+-}\,\sin(\phi_{S_A}-\phi_a) \\
P^a_x\,\hat f_{a/A,S_L} &= \Delta \hat f_{s_x/S_L} = 2\,\mathrm{Re}\,F^{+-}_{++} \\
P^a_x\,\hat f_{a/A,S_T} &= \Delta \hat f_{s_x/S_T}
= \big[F^{+-}_{+-} + F^{-+}_{+-}\big]\cos(\phi_{S_A}-\phi_a) \\
P^a_y\,\hat f_{a/A,S_L} &= P^a_y\,\hat f_{a/A} = \Delta \hat f_{s_y/S_L} = -2\,\mathrm{Im}\,F^{+-}_{++} \\
P^a_y\,\hat f_{a/A,S_T} &= \Delta \hat f_{s_y/S_T}
= -2\,\mathrm{Im}\,F^{+-}_{++} + \big[F^{+-}_{+-} - F^{-+}_{+-}\big]\sin(\phi_{S_A}-\phi_a) \\
P^a_z\,\hat f_{a/A,S_L} &= \Delta \hat f_{s_z/S_L} = F^{++}_{++} - F^{++}_{--} \\
P^a_z\,\hat f_{a/A,S_T} &= \Delta \hat f_{s_z/S_T} = 2\,\mathrm{Re}\,F^{++}_{+-}\,\cos(\phi_{S_A}-\phi_a)\,,
\end{aligned}
\qquad (7)
$$
where $\phi_{S_A}$ and $\phi_a$ are respectively the azimuthal angles of the hadron spin polarization vector and of the parton $a$ transverse momentum, $\boldsymbol{k}_{\perp a}$, in the hadronic c.m. frame. We have also used the notation $\Delta \hat f_{s_i/S_J} = \hat f_{s_i/S_J} - \hat f_{-s_i/S_J}$. More details and relations with the notation of the Amsterdam group (Ref. 8) can be found in Ref. 3. Since the helicity density matrix for a massless gluon can be formally written similarly to that of a quark,

$$
\rho^{g/A,S_A}_{\lambda_g,\lambda'_g}
= \frac{1}{2}\begin{pmatrix} 1+P^g_z & T^g_1 - iT^g_2 \\ T^g_1 + iT^g_2 & 1-P^g_z \end{pmatrix}_{A,S_A}
= \frac{1}{2}\begin{pmatrix} 1+P^g_{\mathrm{circ}} & -P^g_{\mathrm{lin}}\, e^{-2i\phi} \\ -P^g_{\mathrm{lin}}\, e^{2i\phi} & 1-P^g_{\mathrm{circ}} \end{pmatrix}_{A,S_A},
\qquad (8)
$$
where $T^g_1$ and $T^g_2$ are related to the degree of linear polarization of the gluon, relations analogous to those shown for quarks hold also for gluons (Ref. 3). Analogously, introducing soft nonperturbative helicity fragmentation amplitudes for the process $c \to C + X$, and limiting ourselves to the case of an unpolarized hadron $C$, properties similar to those shown for the PDF in Eq. (5) hold (Refs. 2, 3). From these relations one can easily see that, for each parton, only two independent FF survive: the usual unpolarized FF; the well-known Collins function (Ref. 6) for quarks, $\Delta^N D_{C/q^\uparrow}(z,\boldsymbol{k}_{\perp C})$; and a Collins-like function for (linearly) polarized gluons, $\Delta^N D_{C/T^g_1}(z,\boldsymbol{k}_{\perp C})$ (Ref. 3).
Helicity amplitudes for the elementary process ab → cd
Since intrinsic parton motions are fully taken into account in our approach, all soft processes, $A(B) \to a(b) + X$ and $c \to C + X$, and the elementary process $ab \to cd$ take place out of the hadronic production plane (the $XZ$ c.m. plane). The relation between the elementary helicity amplitudes given in the hadronic c.m. frame, $\hat M_{\lambda_c,\lambda_d;\lambda_a,\lambda_b}$, and those given in the canonical partonic c.m. frame (no azimuthal phases), $\hat M^0_{\lambda_c,\lambda_d;\lambda_a,\lambda_b}$, has been given in Ref. 2. In summary,

$$
\hat M_{\lambda_c,\lambda_d;\lambda_a,\lambda_b} = \hat M^0_{\lambda_c,\lambda_d;\lambda_a,\lambda_b}\,
e^{-i(\lambda_a\xi_a + \lambda_b\xi_b - \lambda_c\xi_c - \lambda_d\xi_d)}\,
e^{-i[(\lambda_a-\lambda_b)\tilde\xi_a - (\lambda_c-\lambda_d)\tilde\xi_c]}\,
e^{i(\lambda_a-\lambda_b)\phi''_c},
\qquad (9)
$$

where $\xi_j$, $\tilde\xi_j$ ($j = a, b, c, d$) and $\phi''_c$ are phases which depend on the initial kinematical configuration in the hadronic c.m. frame (Ref. 2). Parity properties for the canonical helicity amplitudes $\hat M^0$ are well known (Ref. 7), and so are the relations between a given canonical helicity amplitude and those obtained by exchanging the two initial (final) partons. For massless partons there are only three independent helicity amplitudes, $\hat M_{++;++} = \hat M^0_1 \exp(i\varphi_1)$, $\hat M_{-+;-+} = \hat M^0_2 \exp(i\varphi_2)$, $\hat M_{-+;+-} = \hat M^0_3 \exp(i\varphi_3)$, where the $\varphi_j$ ($j = 1, 2, 3$) are the corresponding phases given in Eq. (9). At LO there are eight elementary contributions $ab \to cd$ which must be considered separately, since they involve different combinations of PDF and FF in Eq. (1): $q_a q_b \to q_c q_d$, $g_a g_b \to g_c g_d$, $qg \to qg$, $gq \to gq$, $qg \to gq$, $gq \to qg$, $g_a g_b \to q\bar q$, $q\bar q \to g_c g_d$ (the first contribution includes all quark and antiquark cases).
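The azimuthal factors in Eq. (9) are pure phases, so they cancel in any quantity built from a single $|\hat M|^2$; spin asymmetries, instead, are sensitive to their relative values. A minimal numerical sketch of this phase structure (our own illustration; the function name and sample values are ours, not from the paper):

```python
import cmath

def phased_amplitude(M0, lam, xi, xi_tilde, phi_c):
    """Attach the azimuthal phase of Eq. (9) to a canonical amplitude M0.
    lam = (la, lb, lc, ld) are the helicities; xi and xi_tilde are the
    (xi_a, xi_b, xi_c, xi_d) phase angles; phi_c stands for phi''_c.
    Only xi_tilde[0] (a) and xi_tilde[2] (c) enter the second factor."""
    la, lb, lc, ld = lam
    xia, xib, xic, xid = xi
    xta, _, xtc, _ = xi_tilde
    phase = (-(la * xia + lb * xib - lc * xic - ld * xid)
             - ((la - lb) * xta - (lc - ld) * xtc)
             + (la - lb) * phi_c)
    return M0 * cmath.exp(1j * phase)

M = phased_amplitude(2.0, (1, -1, 1, -1), (0.1, 0.2, 0.3, 0.4), (0.5, 0.6, 0.7, 0.8), 1.2)
assert abs(abs(M) - 2.0) < 1e-12  # the phases never change the modulus
```

This is why the phases drop out of the purely unpolarized cross section but survive in the interference terms of the kernels below.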
Kernels for the process A(S_A) + B(S_B) → C + X
In the previous sections we have presented all the ingredients required for the evaluation of the convolution integral for the double polarized cross section, Eq. (1). We can then derive the expression of all the physically relevant single and double spin asymmetries for the process $A(S_A)+B(S_B) \to C+X$ and, with appropriate modifications, for other inclusive production processes. Defining kernels as

$$
\Sigma(S_A,S_B)^{ab\to cd}
= \sum_{\{\lambda\}}
\rho^{a/A,S_A}_{\lambda_a,\lambda'_a}\,\hat f_{a/A,S_A}(x_a,\boldsymbol{k}_{\perp a})\;
\rho^{b/B,S_B}_{\lambda_b,\lambda'_b}\,\hat f_{b/B,S_B}(x_b,\boldsymbol{k}_{\perp b})\;
\hat M_{\lambda_c,\lambda_d;\lambda_a,\lambda_b}\,
\hat M^*_{\lambda'_c,\lambda_d;\lambda'_a,\lambda'_b}\,
\hat D^{\lambda_C,\lambda_C}_{\lambda_c,\lambda'_c}(z,\boldsymbol{k}_{\perp C}),
\qquad (10)
$$
we present here, as an example, the kernel for the $q_a q_b \to q_c q_d$ process:

$$
\begin{aligned}
\Sigma(S_A,S_B)^{q_a q_b \to q_c q_d}
&= \frac{1}{2}\,\hat D_{C/c}(z,\boldsymbol{k}_{\perp C})\,
\hat f_{a/S_A}(x_a,\boldsymbol{k}_{\perp a})\,\hat f_{b/S_B}(x_b,\boldsymbol{k}_{\perp b}) \\
&\quad\times\Big\{ |\hat M^0_1|^2 + |\hat M^0_2|^2 + |\hat M^0_3|^2
+ P^a_z P^b_z \big[\,|\hat M^0_1|^2 - |\hat M^0_2|^2 - |\hat M^0_3|^2\,\big] \\
&\qquad + \hat M^0_2 \hat M^0_3 \big[
\big(P^a_x P^b_x + P^a_y P^b_y\big)\cos(\varphi_3-\varphi_2)
- \big(P^a_x P^b_y - P^a_y P^b_x\big)\sin(\varphi_3-\varphi_2)\big]\Big\} \\
&\quad - \frac{1}{2}\,\Delta^N \hat D_{C/c^\uparrow}(z,\boldsymbol{k}_{\perp C})\,
\hat f_{a/S_A}(x_a,\boldsymbol{k}_{\perp a})\,\hat f_{b/S_B}(x_b,\boldsymbol{k}_{\perp b}) \\
&\quad\times\Big\{ \hat M^0_1 \hat M^0_2 \big[
P^a_x \sin(\varphi_1-\varphi_2+\phi^H_C) - P^a_y \cos(\varphi_1-\varphi_2+\phi^H_C)\big] \\
&\qquad + \hat M^0_1 \hat M^0_3 \big[
P^b_x \sin(\varphi_1-\varphi_3+\phi^H_C) - P^b_y \cos(\varphi_1-\varphi_3+\phi^H_C)\big]\Big\},
\qquad (11)
\end{aligned}
$$
where φ H C is the azimuthal angle of hadron C three-momentum in the helicity frame of parton c. A more complete list of kernels for all partonic contributions and more details can be found in Ref. 3.
5. Cross section and SSA for the process pp → π + X

The formalism described in the previous sections is very general and can be applied to the calculation of unpolarized cross sections, single and double spin asymmetries for inclusive particle production in hadronic collisions. As an explicit example, we now discuss the transverse single spin asymmetry for inclusive pion production in proton-proton collisions, $A_N(pp \to \pi+X) = (d\sigma^\uparrow - d\sigma^\downarrow)/(d\sigma^\uparrow + d\sigma^\downarrow)$, where $d\sigma^{\uparrow,\downarrow}$ stands for the cross section of Eq. (1) with $S_A = \uparrow,\downarrow$ and $S_B = 0$. Limiting ourselves to the case $q_a q_b \to q_c q_d$, we present the combination of kernels appearing in the numerator and denominator of the SSA (the dependence on $x_{a,b}$, $\boldsymbol{k}_{\perp a,b}$ in the PDF and on $z$, $\boldsymbol{k}_{\perp C}$ in the FF is understood):

$$
\begin{aligned}
\big[\Sigma(\uparrow,0) - \Sigma(\downarrow,0)\big]^{q_a q_b \to q_c q_d}
&= \frac{1}{2}\,\Delta^N f_{a/A^\uparrow}\,f_{b/B}\,
\big[\,|\hat M^0_1|^2 + |\hat M^0_2|^2 + |\hat M^0_3|^2\,\big]\, D_{C/c} \\
&\quad + 2\,\big[\Delta^- \hat f^a_{s_y/\uparrow}\cos(\varphi_3-\varphi_2)
- \Delta \hat f^a_{s_x/\uparrow}\sin(\varphi_3-\varphi_2)\big]\,
\Delta \hat f^b_{s_y/B}\,\hat M^0_2 \hat M^0_3\,\hat D_{C/c} \\
&\quad + \big[\Delta^- \hat f^a_{s_y/\uparrow}\cos(\varphi_1-\varphi_2+\phi^H_C)
- \Delta \hat f^a_{s_x/\uparrow}\sin(\varphi_1-\varphi_2+\phi^H_C)\big]\,
\hat f_{b/B}\,\hat M^0_1 \hat M^0_2\,\Delta^N \hat D_{C/c^\uparrow} \\
&\quad + \frac{1}{2}\,\Delta^N f_{a/A^\uparrow}\,\Delta \hat f^b_{s_y/B}\,
\cos(\varphi_1-\varphi_3+\phi^H_C)\,\hat M^0_1 \hat M^0_3\,\Delta^N \hat D_{C/c^\uparrow}\,,
\qquad (12)
\end{aligned}
$$

$$
\begin{aligned}
\big[\Sigma(\uparrow,0) + \Sigma(\downarrow,0)\big]^{q_a q_b \to q_c q_d}
&= f_{a/A}\,\hat f_{b/B}\,
\big[\,|\hat M^0_1|^2 + |\hat M^0_2|^2 + |\hat M^0_3|^2\,\big]\, D_{C/c} \\
&\quad + 2\,\Delta \hat f^a_{s_y/A}\,\Delta \hat f^b_{s_y/B}\,\cos(\varphi_3-\varphi_2)\,
\hat M^0_2 \hat M^0_3\,\hat D_{C/c} \\
&\quad + \big[ f_{a/A}\,\Delta \hat f^b_{s_y/B}\,\cos(\varphi_1-\varphi_3+\phi^H_C)\,\hat M^0_1 \hat M^0_3
+ \Delta \hat f^a_{s_y/A}\,\hat f_{b/B}\,\cos(\varphi_1-\varphi_2+\phi^H_C)\,\hat M^0_1 \hat M^0_2 \big]\,
\Delta^N \hat D_{C/c^\uparrow}\,.
\qquad (13)
\end{aligned}
$$
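The asymmetry itself is just the normalized difference of the two kernel combinations. As a trivial arithmetic reminder (our own sketch, not code from the paper):

```python
def transverse_ssa(sigma_up, sigma_down):
    """A_N = (dsigma_up - dsigma_down) / (dsigma_up + dsigma_down)."""
    return (sigma_up - sigma_down) / (sigma_up + sigma_down)

# A cross section ratio of 3:1 between the two spin orientations gives A_N = 0.5.
assert transverse_ssa(3.0, 1.0) == 0.5
assert transverse_ssa(2.0, 2.0) == 0.0
```

Note that $|A_N| \le 1$ automatically, since both cross sections are positive.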
There are four terms contributing to the numerator of the SSA, Eq. (12): the Sivers contribution (2nd line); the transversity⊗Boer-Mulders contribution (3rd line); the transversity⊗Collins contribution (4th and 5th lines); and the Sivers⊗Boer-Mulders⊗Collins contribution (last line). Similarly, there are three terms contributing to the denominator of the SSA, Eq. (13): the usual term involving only unpolarized quantities (2nd line); the Boer-Mulders⊗Boer-Mulders contribution (3rd line); and the Boer-Mulders⊗Collins contribution (last two lines). Similar considerations apply to all other partonic contributions. Whenever gluons are involved, functions analogous to the transversity, Boer-Mulders and Collins functions for quarks, but describing linearly polarized gluons, appear; see Ref. 3.

Let us now present some numerical results on unpolarized cross sections and SSA for the pp → π + X process. Our aim here is basically that of showing the (maximized) possible contributions of all terms appearing in Eqs. (12), (13), which involve several unknown or poorly known functions. To this end we saturate known positivity bounds for the Collins and Sivers functions, replacing all other k_⊥ dependent polarized PDF with the corresponding unpolarized ones (keeping track of the azimuthal phases); we sum all possible contributions with the same sign; for all PDF we assume an x- and flavour-independent Gaussian shape vs. k_⊥, taking k_⊥ = 0.8 GeV/c, while for the FF k_⊥C(z) is taken as in Ref. 1; for the unpolarized, k_⊥-integrated PDF and FF, we take respectively the MRST01 (Ref. 9) and KKP (Ref. 10) sets. See Ref. 1 for all other details on the numerical calculations.

In Fig. 1 (left) we show the maximized contributions to the unpolarized differential cross section for the pp → π0 + X process in the kinematical regime of the E704 data on SSA. The usual contribution clearly dominates, the other two being suppressed by the azimuthal phases. In Fig. 1 (right) we show the contributions to the SSA, A_N(p↑p → π+ + X), in the same kinematical regime (including the negative x_F region). The full k_⊥ treatment and the azimuthal phases considerably suppress the Collins effect (Ref. 2); the same is not true for the Sivers contribution. In Fig. 2 (left) we plot A_N(p↑p → π0 + X) for the kinematical regime of the STAR experiment at RHIC. In Fig. 2 (right) we show A_N(p↑p̄ → π+ + X) in the kinematical regime of the proposed PAX experiment at GSI. These last results are particularly interesting for the gluon Sivers function. Many other applications of our formalism are possible and some of them are under investigation, like the study of the double longitudinal asymmetry, A_LL, for pion production; the transverse Λ polarization in unpolarized hadronic reactions; the study of the Collins effect in polarized SIDIS; the Drell-Yan process; inclusive particle production in pion-proton collisions; and so on.

Hopefully, this thorough phenomenological analysis of present and forthcoming experimental results on spin asymmetries will help in clarifying the role of spin effects in high energy hadronic reactions.

Figure 1. LEFT: (maximized) contributions to the unpolarized cross section for the pp → π0 + X process and E704 kinematics; solid line: usual contribution; dashed line: Boer-Mulders⊗Collins; dotted line: Boer-Mulders⊗Boer-Mulders. RIGHT: (maximized) contributions to A_N(p↑p → π+ + X) for E704 kinematics; solid line: quark Sivers contribution; dashed line: gluon Sivers; dotted line: transversity⊗Collins; all other contributions are much smaller.

Figure 2. (Maximized) contributions to: (left) A_N(p↑p → π0 + X) for STAR kinematics (at x_F < 0 all contributions are vanishingly small); (right) A_N(p↑p̄ → π+ + X) for PAX kinematics; lines are as in Fig. 1.
References

1. U. D'Alesio and F. Murgia, Phys. Rev. D70 (2004) 074009.
2. M. Anselmino, M. Boglione, U. D'Alesio, E. Leader and F. Murgia, Phys. Rev. D71 (2005) 014002.
3. M. Anselmino, M. Boglione, U. D'Alesio, E. Leader, S. Melis and F. Murgia, hep-ph/0509035.
4. M. Anselmino, M. Boglione, U. D'Alesio, A. Kotzinian, F. Murgia and A. Prokudin, Phys. Rev. D71 (2005) 074006; Phys. Rev. D72 (2005) 094007.
5. D. Sivers, Phys. Rev. D41 (1990) 83; D43 (1991) 261.
6. J.C. Collins, Nucl. Phys. B396 (1993) 161.
7. E. Leader, Spin in Particle Physics, Cambridge University Press, 2001.
8. D. Boer and P.J. Mulders, Phys. Rev. D57 (1998) 5780.
9. A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, Phys. Lett. B531 (2002) 216.
10. B.A. Kniehl, G. Kramer and B. Pötter, Nucl. Phys. B582 (2000) 514.
Nodal surfaces and interdimensional degeneracies

Pierre-François Loos and Dario Bressanini
Research School of Chemistry, Australian National University, Canberra, ACT, Australia
Dipartimento di Scienza e Alta Tecnologia, Università dell'Insubria, Via Lucini 3, 22100 Como, Italy

arXiv:1505.05906; doi: 10.1063/1.4922159

Keywords: fermion nodes; full configuration interaction; quantum Monte Carlo; fixed-node error

Abstract: The aim of this paper is to shed light on the topology and properties of the nodes (i.e. the zeros of the wave function) in electronic systems. Using the "electrons on a sphere" model, we study the nodes of two-, three- and four-electron systems in various ferromagnetic configurations (sp, p^2, sd, pd, p^3, sp^2 and sp^3). In some particular cases (sp, p^2, sd, pd and p^3), we rigorously prove that the non-interacting wave function has the same nodes as the exact (yet unknown) wave function. The number of atomic and molecular systems for which the exact nodes are known analytically is very limited and we show here that this peculiar feature can be attributed to interdimensional degeneracies. Although we have not been able to prove it rigorously, we conjecture that the nodes of the non-interacting wave function for the sp^3 configuration are exact.
I. INTRODUCTION
Considering an antisymmetric (real) electronic wave function Ψ(S, R), where S = (s_1, s_2, ..., s_n) are the spin coordinates and R = (r_1, r_2, ..., r_n) are the D-dimensional spatial coordinates of the n electrons, the nodal hypersurface (or simply "nodes") is a (nD − 1)-dimensional manifold defined by the set of configuration points N for which Ψ(N) = 0. The nodes divide the configuration space into nodal cells or domains which are either positive or negative depending on the sign of the electronic wave function in each of these domains. In recent years, strong evidence has been gathered showing that, for the lowest state of any given symmetry, there is a single nodal hypersurface that divides configuration space into only two nodal domains (one positive and one negative). [1][2][3][4][5][6][7][8][9][10][11] In other words, to have any chance of having exact nodes, a wave function must have only two nodal cells. For simplicity, in the remainder of this paper, we will say that a wave function has exact nodes if it has the same nodes as the exact wave function. Except in some particular cases, electronic or, more generally, fermionic nodes are poorly understood due to their high dimensionality and complex topology. 1,5 The number of systems for which the exact nodes are known analytically is very limited. For atoms, it includes two triplet states, 3S^e(1s2s) and 3P^e(2p^2), and two singlet states, 1S^e(2s^2) and 1P^e(2p^2), of the helium atom, and the three-electron atomic state 4S(2p^3). 4,5,12 The quality of fermion nodes is of prime importance in quantum Monte Carlo (QMC) calculations due to the fermion sign problem, which continues to preclude the application of in-principle exact QMC methods to large systems.
The dependence of the diffusion Monte Carlo (DMC) energy on the quality of the trial wave function is often significant in practice, and is due to the fixed-node approximation, which segregates the walkers in regions defined by the trial or guiding wave function. The fixed-node error is only proportional to the square of the nodal displacement error, but it is uncontrolled and its accuracy is difficult to assess. [13][14][15] Recently, Mitas and coworkers have discovered an interesting relationship between electronic density, degree of node nonlinearity and fixed-node error. [16][17][18] Here, we study the topology and properties of the nodes in a class of systems composed of electrons located on the surface of a sphere. Due to their high symmetry and their mathematical simplicity, these systems are the ideal "laboratory" to study nodal hypersurface topologies in electronic states. Moreover, Mitas showed that the non-interacting wave function of spin-polarized electrons on a sphere has only two nodal cells, which is probably a necessary condition for exactness 8,9 (see above). Although the present paradigm can be seen as oversimplified, it has been successfully used to shed light on the adiabatic connection within density-functional theory, 19 to prove the universality of the correlation energy of an electron pair, 20-23 as a model for ring-shaped semiconductors (known as quantum rings), 24 to understand the properties of excitons, 25 and to create finite 26,27 and infinite 28-30 uniform electron gases 31 and new correlation density functionals. 32,33 In this paper, we report the analytic expression of the exact nodes for two-electron triplet states (sp, p^2, sd and pd). We also show that, as in the atomic case, the nodes of the non-interacting wave function for the three-electron state ^4S(p^3) are identical to the nodes of the exact wave function.
In addition to these systems where the non-interacting wave function has exact nodes, we study the quality of the non-interacting nodes for the sp^2 and sp^3 configurations. For the sp^2 configuration, we show that, although not exact, the non-interacting nodes are very accurate and, based on numerical evidence, we conjecture that the non-interacting and exact nodes of the sp^3 configuration are identical. We use atomic units throughout.
II. ELECTRONS ON A SPHERE
Our model consists of n spin-up electrons restricted to the surface of a sphere. 26,34 The non-interacting orbitals for an electron on a sphere of radius R are the normalized spherical harmonics $Y_{\ell m}(\Omega)$, where Ω = (θ, φ) are the polar and azimuthal angles, respectively. We label the spherical harmonics with ℓ = 0, 1, 2, 3, 4, ... as s, p, d, f, g, ... functions. The coordinates of an electron on the unit sphere are given by its cartesian coordinates x = cos φ sin θ, y = sin φ sin θ and z = cos θ. The average electronic density is measured by the so-called Wigner-Seitz radius $r_s = (\sqrt{n}/2)\,R$.
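The coordinate conventions above are easy to sanity-check in a few lines of stdlib Python (a sketch; the function name is ours):

```python
import math

def sph_to_cart(theta, phi, R=1.0):
    """Cartesian coordinates of a point on a sphere of radius R,
    using the parametrization quoted in the text."""
    return (R * math.cos(phi) * math.sin(theta),
            R * math.sin(phi) * math.sin(theta),
            R * math.cos(theta))

# Every parametrized point lies on the sphere: |r| = R.
r = sph_to_cart(math.pi / 6, 1.2345, R=2.0)
assert abs(math.sqrt(sum(c * c for c in r)) - 2.0) < 1e-12
```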
The Hamiltonian of the system is simply

$$\hat{H} = -\frac{1}{2}\sum_{i}^{n}\nabla_i^2 + \sum_{i<j}^{n}\frac{1}{r_{ij}}, \qquad (1)$$
where $\nabla_i^2$ is the angular part of the Laplace operator for electron i and $r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|$ is the interelectronic distance between electrons i and j, i.e. the electrons interact through the sphere. 35 We write the electronic wave function as
$$\Phi(\{s_i\},\{\Omega_i\}) = \Xi(\{s_i\})\,\Psi_0(\{\Omega_i\})\,\Lambda(\{\Omega_i\}). \qquad (2)$$
Ξ is the spin wave function and only depends on the spin coordinates {s_i}. Because we only consider ferromagnetic systems, the spin wave function is $\Xi(\{s_i\}) = \prod_{i=1}^{n}\alpha(s_i)$, which is symmetric with respect to the interchange of two electrons. The non-interacting wave function Ψ_0 is a Slater determinant of spin orbitals and defines the nodal hypersurface. Λ is a nodeless correlation factor and, because Ψ_0 is antisymmetric, Λ has to be symmetric with respect to the exchange of two electrons.
We will label each state using the notation $^{2S+1}L^{e,o}$, where L = S, P, D, F, ... and $S = \sum_{i=1}^{n} s_i$ is the total spin angular momentum. The suffixes e (even) and o (odd) refer to the parity of the state, given by $(-1)^{\ell_1+\cdots+\ell_n}$.
A. Two-electron systems
Non-interacting wave functions
First, we study ferromagnetic two-electron (i.e. triplet) states. For each two-electron state gathered in Table I, the non-interacting wave function takes a simple form:
$$\Psi_0(sp) = \begin{vmatrix} 1 & z_1 \\ 1 & z_2 \end{vmatrix} = \mathbf{z}\cdot\mathbf{r}_{12}, \qquad (3)$$

$$\Psi_0(p^2) = \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix} = \mathbf{z}\cdot\mathbf{r}^{\times}_{12}, \qquad (4)$$

$$\Psi_0(sd) = \begin{vmatrix} 1 & 2z_1^2 - x_1^2 - y_1^2 \\ 1 & 2z_2^2 - x_2^2 - y_2^2 \end{vmatrix} = (\mathbf{z}\cdot\mathbf{r}^{+}_{12})(\mathbf{z}\cdot\mathbf{r}_{12}), \qquad (5)$$

$$\Psi_0(pd) = \begin{vmatrix} y_1 & x_1 z_1 \\ y_2 & x_2 z_2 \end{vmatrix} - \begin{vmatrix} x_1 & y_1 z_1 \\ x_2 & y_2 z_2 \end{vmatrix} = (\mathbf{z}\cdot\mathbf{r}^{+}_{12})(\mathbf{z}\cdot\mathbf{r}^{\times}_{12}), \qquad (6)$$

where $\mathbf{z} = (0, 0, 1)$ is the unit vector of the z axis, $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$, $\mathbf{r}^{+}_{ij} = \mathbf{r}_i + \mathbf{r}_j$ and $\mathbf{r}^{\times}_{ij} = \mathbf{r}_i \times \mathbf{r}_j$.
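Each determinant above differs from the quoted vector form only by a constant factor (±1 or −3 on the unit sphere), so both expressions vanish on exactly the same configurations. For brevity we check the sp, p^2 and sd cases numerically (stdlib Python sketch; the constants follow from expanding the determinants):

```python
import math, random

random.seed(1)

def rand_sphere():
    """Uniform random point on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

(x1, y1, z1), (x2, y2, z2) = rand_sphere(), rand_sphere()

det_sp = 1.0 * z2 - 1.0 * z1                 # | 1 z1 ; 1 z2 |
vec_sp = z1 - z2                             # z . r12
assert abs(det_sp + vec_sp) < 1e-12          # det = -(z . r12): same nodes

det_p2 = x1 * y2 - y1 * x2                   # | x1 y1 ; x2 y2 |
vec_p2 = x1 * y2 - y1 * x2                   # z . (r1 x r2)
assert abs(det_p2 - vec_p2) < 1e-12

q1 = 2 * z1 * z1 - x1 * x1 - y1 * y1         # d-type orbital values
q2 = 2 * z2 * z2 - x2 * x2 - y2 * y2
det_sd = q2 - q1                             # | 1 q1 ; 1 q2 |
vec_sd = (z1 + z2) * (z1 - z2)               # (z . r12+)(z . r12)
assert abs(det_sd + 3.0 * vec_sd) < 1e-9     # det = -3 (z . r12+)(z . r12)
```

Since global prefactors and signs do not affect where a wave function vanishes, either form can be used to locate the nodes.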
Due to their ferromagnetic nature, each state has "Pauli" nodes, which correspond to configurations where two electrons touch. The Pauli hyperplanes are only a subset of the full nodes, and it is interesting to note that these nodes are related to the 1D nodes. 24,27,32 The non-interacting nodes are represented in Fig. 1 for a given position of the first electron, Ω_1 = (π/6, 0), which is represented by a small yellow sphere. The possible positions of the second electron for which Ψ_0 vanishes are represented by smaller yellow dots.
Proof of the exactness of the nodes
Equation (3) shows that the non-interacting nodes of the sp configuration correspond to small circles perpendicular to the z axis. Now, let us prove that these non-interacting and exact nodes are identical. We begin by placing the two electrons on a small circle perpendicular to the z axis, as sketched in Fig. 2. For this particular configuration, the two electrons have the same value of the polar angle, θ = θ_1 = θ_2, and, without loss of generality, the azimuthal angles can be chosen such that φ_1 = −φ_2 = φ. Suppose that for this configuration the exact wave function has the value Ψ ≡ Ψ({(θ, +φ), (θ, −φ)}) = K. Now, we reflect the wave function with respect to the symmetry plane σ(xz) that passes through the x and z axes and bisects the azimuthal angle φ. Due to the P nature of the state (B_1u representation of the D_2h point group, as shown in Table I), the wave function is invariant under such a reflection, i.e. Ψ' ≡ Ψ({(θ, −φ), (θ, +φ)}) = K. However, the two electrons have been exchanged and, because this is a triplet state, the wave function must have changed sign (Pauli principle), i.e. Ψ' = −Ψ. This implies that K = −K, which means that K = 0 and ∀(θ, φ), Ψ(θ, θ, φ, −φ) = 0. We have discovered the nodes of the sp configuration by using symmetry operators belonging to the symmetry group of this particular state. This methodology can be applied to the other two-electron states to show that, in each case, the non-interacting nodes are the same as the nodes of the exact wave function. We have confirmed these results by performing full configuration interaction (FCI) calculations, 36 as well as near-exact Hylleraas-type calculations.

TABLE I. Non-interacting wave function Ψ_0 for various ferromagnetic states of n electrons on a sphere and their corresponding irreducible representation (IR) in D_2h. $\mathbf{z} = (0,0,1)$ is the unit vector of the z axis, $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$, $\mathbf{r}^{+}_{ij} = \mathbf{r}_i + \mathbf{r}_j$ and $\mathbf{r}^{\times}_{ij} = \mathbf{r}_i \times \mathbf{r}_j$.
For each of these calculations, we have shown that the non-interacting nodes never move when electronic correlation is taken into account. This provides a complementary "computational" proof of the exactness of the non-interacting nodes. This observation also means that, for any size of the basis set, the FCI nodes are exact. However, we must show that these are the only nodes, since there might possibly be other nodal surfaces. For the ground state, the number of nodal cells is minimal, and Mitas 9 has shown that these systems have the minimal number of two nodal pockets. As explained by Bajdich et al., 5 any distortion of the node from the great circle leads to additional cells, which can only increase the energy by imposing higher curvature (kinetic energy) on the wave function. This argument has been used by Feynman to demonstrate that the energy of the fermionic ground state is always higher than the energy of the bosonic state, and by Ceperley 1 to demonstrate the tiling property of the nodal surface. 37 It is interesting to note that the exact nodes of the pd configuration can be represented using two Slater determinants (see Eq. (6)), and the nodal surface is composed of two intersecting nodal surfaces, as shown in Fig. 1. This is in agreement with the result of Pechukas, who showed that, when two nodal surfaces cross, they are perpendicular at the crossing point. 38 We have recently shown that, for certain states such as the ^3P^o(sp), ^3P^e(p^2), ^3D^e(sd) and ^3D^o(pd) states, exact solutions of the Schrödinger equation can be found in closed form for specific values of the radius R. 39,40 Even though closed-form solutions of the Schrödinger equation are only known for particular values of the radius, the exact nodes are known analytically for all radii (see Table I).
n | State  | Configuration | Ψ_0({Ω_i})                   | D_2h IR                         | Exact?
2 | ^3P^o  | sp            | z·r12                        | Ag ⊗ B1u = B1u                  | Yes
2 | ^3P^e  | p^2           | z·r12^x                      | B3u ⊗ B2u = B1g                 | Yes
2 | ^3D^e  | sd            | (z·r12^+)(z·r12)             | Ag ⊗ Ag = Ag                    | Yes
2 | ^3D^o  | pd            | (z·r12^+)(z·r12^x)           | B2g ⊗ B2u = B3g ⊗ B3u = Au      | Yes
3 | ^4S^o  | p^3           | r1·r23^x                     | B1u ⊗ B2u ⊗ B3u = Au            | Yes
3 | ^4D^e  | sp^2          | z·(r12 × r13)                | Ag ⊗ B3u ⊗ B2u = B1g            | No
4 | ^5S^o  | sp^3          | (r12 + r34)·(r12^x + r34^x)  | Ag ⊗ B1u ⊗ B2u ⊗ B3u = Au       | Unknown

Fig. 1 panels: (a) ^3P^o(sp); (b) ^3P^e(p^2); (c) ^3D^e(sd); (d) ^3D^o(pd).
One could ask whether there are any other two-electron states for which the exact nodes are known. The answer is no. Each of the states having exact nodes is the lowest-energy state of a given irreducible representation of the D_2h point group (the largest abelian point group in 3D). For example, the states ^3D^e(sd) and ^3D^o(pd) correspond to the lowest-energy states of the A_g and A_u representations, respectively, while the states ^3P^o(sp) and ^3P^e(p^2) (both triply degenerate) are the lowest-energy states of the B_ku and B_kg representations (k = 1, 2, 3), respectively. In contrast, the d^2 and sf configurations are excited states of the A_g and A_u representations, and one can easily show that their non-interacting nodes are not exact. This result is not surprising because we know that excited states must have additional nodes in order to be orthogonal to the lowest-energy state, and these additional nodes are usually not imposed by symmetry. 24
Interdimensional degeneracies
We would like to mention here that singlet counterparts of the four triplet states for which we have found the exact expression of the nodal surface do exist. These are the ^1S^e(s^2), ^1P^o(sp), ^1D^o(pd) and ^1F^e(pf) states. 40 These singlet states are connected to their triplet partners by exact interdimensional degeneracies. [40][41][42] Two states in different dimensions are said to be interdimensionally degenerate when their energies are the same. Exploiting the relations between problems with different numbers of spatial dimensions is a widespread and useful technique in physics and chemistry (see, for example, Ref. 43).
These types of interdimensional degeneracies also explain why the exact nodes of the ^3P^e(2p^2) and ^1P^e(2p^2) states of the helium atom are known. Indeed, these 3D helium states are degenerate with the ^1S^e(2s^2) and ^3S^e(1s2s) states in 5D, and the exact nodes of these states are known. 4,5 To illustrate this, let us take a concrete example. In D dimensions, the spatial part of the exact wave function for the ^1S^e(1s^2) ground state of the helium atom satisfies the following equation

$$-\frac{1}{2}\Delta^{(D)}\Lambda + \left(-\frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}}\right)\Lambda = E\,\Lambda, \qquad (7)$$
where $\Delta^{(D)}$ is the Laplace operator in D dimensions which, in terms of $r_1$, $r_2$ and $r_{12}$, reads

$$\Delta^{(D)} = \frac{\partial^2}{\partial r_1^2} + \frac{\partial^2}{\partial r_2^2} + 2\,\frac{\partial^2}{\partial r_{12}^2} + (D-1)\left(\frac{1}{r_1}\frac{\partial}{\partial r_1} + \frac{1}{r_2}\frac{\partial}{\partial r_2} + \frac{2}{r_{12}}\frac{\partial}{\partial r_{12}}\right) + \frac{r_1^2 - r_2^2 + r_{12}^2}{r_1 r_{12}}\,\frac{\partial^2}{\partial r_1 \partial r_{12}} + \frac{r_2^2 - r_1^2 + r_{12}^2}{r_2 r_{12}}\,\frac{\partial^2}{\partial r_2 \partial r_{12}}. \qquad (8)$$
Λ is a nodeless, totally symmetric function of $r_1$, $r_2$ and $r_{12}$ for any value of D ≥ 2. Now, let us consider the ^3P^e(2p^2) state of the helium atom in D − 2 dimensions and let us write the spatial wave function as Φ = Ψ_0 Λ, where Ψ_0 = (x_1 y_2 − y_1 x_2) and Λ is a function of $r_1$, $r_2$ and $r_{12}$. One can easily show that
$$\Delta^{(D)}\Phi = \Psi_0\left[\Delta^{(D)}\Lambda + \frac{2}{r_1}\frac{\partial\Lambda}{\partial r_1} + \frac{2}{r_2}\frac{\partial\Lambda}{\partial r_2} + \frac{4}{r_{12}}\frac{\partial\Lambda}{\partial r_{12}}\right] = \Psi_0\,\Delta^{(D+2)}\Lambda \qquad (9)$$
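Identity (9) can be checked numerically with finite differences, comparing the 3D Laplacian of Φ = Ψ_0 Λ against the 5D Laplacian of Λ evaluated at a configuration with the same r_1, r_2 and r_12. The sample correlation factor Λ below is an arbitrary smooth, symmetric choice of ours, used only to exercise the identity (stdlib Python sketch):

```python
import math

H_STEP = 1e-3

def laplacian(f, x, h=H_STEP):
    """Central finite-difference Laplacian of f at the point x (a list)."""
    f0 = f(x)
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (f(xp) - 2.0 * f0 + f(xm)) / (h * h)
    return total

def lam(r1, r2, r12):
    # Arbitrary smooth, totally symmetric correlation factor (our choice).
    return math.exp(-r1 - r2 + 0.5 * r12)

def phi3(x):
    # Phi = Psi0 * Lambda for the 3P^e(2p^2)-type state in D = 3 (6 coords).
    x1, y1, z1, x2, y2, z2 = x
    r1 = math.sqrt(x1 * x1 + y1 * y1 + z1 * z1)
    r2 = math.sqrt(x2 * x2 + y2 * y2 + z2 * z2)
    r12 = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
    return (x1 * y2 - y1 * x2) * lam(r1, r2, r12)

def lam5(x):
    # Lambda as a function of 5D coordinates (10 coords), via r1, r2, r12.
    a, b = x[:5], x[5:]
    r1 = math.sqrt(sum(c * c for c in a))
    r2 = math.sqrt(sum(c * c for c in b))
    r12 = math.sqrt(sum((c - d) ** 2 for c, d in zip(a, b)))
    return lam(r1, r2, r12)

pt3 = [0.9, 0.2, -0.4, -0.3, 1.1, 0.5]
x1, y1, z1, x2, y2, z2 = pt3
pt5 = [x1, y1, z1, 0.0, 0.0, x2, y2, z2, 0.0, 0.0]   # same r1, r2, r12

psi0 = x1 * y2 - y1 * x2
lhs = laplacian(phi3, pt3) / psi0      # Delta^(3)(Psi0 Lambda) / Psi0
rhs = laplacian(lam5, pt5)             # Delta^(5) Lambda
assert abs(lhs - rhs) < 1e-4
```

With the identity confirmed for a smooth function of r_1, r_2 and r_12, the dimensional-shift argument in the text goes through unchanged.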
Therefore, Λ satisfies Eq. (7) and is thus a nodeless, totally symmetric function of $r_1$, $r_2$ and $r_{12}$, and the nodes are given entirely by the function Ψ_0 = (x_1 y_2 − y_1 x_2). A similar relationship can be obtained between the ^3S^e(1s2s) state in 5D and the ^1P^e(2p^2) state in 3D. Interdimensional degeneracies also explain why the nodes of the ^3Σ−_g state of the H_2 molecule (which is degenerate with the ^1Σ+_g state in 5D) are also known. 5,42 Interdimensional degeneracies could potentially be used to discover the exact nodes of new atomic and molecular systems. They have been exploited very successfully in van der Waals clusters. 44

B. Three-electron systems
1. ^4S^o(p^3) state
The first three-electron system we wish to consider here is the p^3 configuration. It corresponds to the state where three spin-up electrons occupy the p orbitals and the lowest s orbital is vacant. This state has a uniform electronic density and its non-interacting wave function is given by

$$\Psi_0(p^3) = \begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix} = \mathbf{r}_1\cdot(\mathbf{r}_2\times\mathbf{r}_3). \qquad (10)$$
In Eq. (10), the scalar triple product can be interpreted as the signed volume of the parallelepiped formed by the three radius vectors r_1, r_2 and r_3. Thus, it is easy to understand that the non-interacting nodes of the p^3 configuration are encountered when the three electrons are located on a great circle, which makes the volume of the parallelepiped vanish. For this state, we can show that the non-interacting and exact nodes are identical by using symmetry operations, as sketched in Fig. 2. First, we place the three electrons on a great circle which, with no loss of generality, can be taken as the equator (i.e. θ_1 = θ_2 = θ_3 = π/2). We assume that, for this configuration, the exact wave function has the value Ψ ≡ Ψ({(π/2, φ_1), (π/2, φ_2), (π/2, φ_3)}) = K. Because this state has odd parity, inversion must change the sign of the wave function: Ψ' ≡ Ψ({(π/2, φ_1 + π), (π/2, φ_2 + π), (π/2, φ_3 + π)}) = −K. By applying the C_2(z) rotation around the z axis (which consists of adding π to the azimuthal angle of each electron), one can bring the electrons back to their original positions. Due to the S nature of the state, a rotation does not affect the wave function, and we have Ψ'' ≡ Ψ({(π/2, φ_1), (π/2, φ_2), (π/2, φ_3)}) = −K. Because Ψ'' = Ψ, this means that K = 0. Once again, using simple symmetry operations, we have shown that the non-interacting and exact nodes are the same. We have confirmed this proof by performing FCI and near-exact Hylleraas calculations, and showed that the non-interacting nodes never move. The exactness of the non-interacting nodes for this state is probably due to its high symmetry. Moreover, the p^3 configuration is the lowest-energy state of A_u symmetry in the D_2h point group.
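The coplanarity criterion can be checked directly: three electrons placed anywhere on a common great circle give a vanishing triple product, while a generic configuration does not (stdlib Python sketch; helper names are ours):

```python
import math, random

random.seed(7)

def triple(r1, r2, r3):
    """Psi0(p^3) = r1 . (r2 x r3), the signed parallelepiped volume."""
    return (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
          - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
          + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))

def on_circle(t, u, v):
    """Point at angle t on the great circle spanned by orthonormal u, v."""
    return tuple(math.cos(t) * ui + math.sin(t) * vi for ui, vi in zip(u, v))

# Two orthonormal vectors define a great circle; three electrons on it are
# coplanar with the sphere's center, so the triple product vanishes.
u = (1.0, 0.0, 0.0)
s = 1.0 / math.sqrt(2.0)
v = (0.0, s, s)
r1, r2, r3 = (on_circle(random.uniform(0, 2 * math.pi), u, v) for _ in range(3))
assert abs(triple(r1, r2, r3)) < 1e-12

# A generic (non-coplanar) configuration does not lie on the node.
assert abs(triple((1, 0, 0), (0, 1, 0), (0, 0, 1))) > 0.5
```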
Let us give an alternative proof of the exactness of the non-interacting nodes for the p^3 configuration. Here we take advantage of a particular interdimensional degeneracy between a fermionic excited state and a bosonic ground state. 42 It can easily be shown that (see Eqs. (2) and (10))

$$\Delta^{(3)}\Phi = \Delta^{(3)}(\Psi_0\Lambda) = \Psi_0\left[\Delta^{(5)} - \frac{6}{R^2}\right]\Lambda. \qquad (11)$$

Because Ψ_0 is antisymmetric, the condition of antisymmetry of the total wave function Φ implies that Λ is a totally symmetric function. This means that Λ is the ground-state wave function of the spinless bosonic s^3 state at D = 5. Consequently, Λ is nodeless and the nodes are given by the zeros of Ψ_0.
In the case of atomic systems, Bajdich et al. have demonstrated that the non-interacting wave function of the ^4S(2p^3) state also has the same nodes as the exact wave function, 5 and this can also be attributed to a well-known interdimensional degeneracy. Indeed, Herrick has shown that the exact ^4S(2p^3) fermionic state at D = 3 is degenerate with the spinless bosonic 1s^3 ground state at D = 5. 42
2. ^4D^e(sp^2) state
We now consider the quartet D state created by placing one electron in the lowest s orbital and two electrons in the p orbitals. This state is the ground state for three spin-up electrons on a sphere. Unlike the p 3 configuration considered above, this state has a non-uniform density and its non-interacting wave function is
$$\Psi_0(sp^2) = \begin{vmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{vmatrix} = \mathbf{z}\cdot(\mathbf{r}_{12}\times\mathbf{r}_{13}). \qquad (12)$$
The non-interacting nodes of the sp^2 configuration are encountered when the three electrons lie on a circle whose plane is parallel to the z axis, i.e. when their projections onto the xy plane are collinear (see Table I). For particular positions of the first two electrons, we have computed the FCI nodes of the sp^2 configuration with basis sets of increasing size, including up to d, f, g, h, i and j functions. The results are reported in Fig. 3, where we represent the nodal surface of the sp^2 configuration at various levels of theory for a particular position of two electrons, Ω_1 = (π/6, 0) and Ω_2 = (π/6, π), at r_s = 1. Based on these results, we can consider the FCI(j) nodes as near-exact. We observe that the difference between the non-interacting and FCI nodes is always quite small (less than a degree), and that the non-interacting nodes have the same quality as a FCI(g) calculation. This shows that the non-interacting nodes of the sp^2 configuration are not identical to the nodes of the exact wave function, but are nonetheless very accurate. This state probably lacks symmetry due to the vacant p orbital, and it would be interesting to know what happens in the case of the sp^3 configuration.
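Equation (12) is cheap to evaluate directly. For the particular geometry of Fig. 3 (electrons 1 and 2 at Ω_1 = (π/6, 0) and Ω_2 = (π/6, π), so that y_1 = y_2 = 0), the non-interacting node of the third electron reduces to y_3 = 0, i.e. the φ_3 = 0, π meridian. A short stdlib-Python check (function names are ours):

```python
import math

def sph(theta, phi):
    """Cartesian coordinates of a point on the unit sphere."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def psi0_sp2(r1, r2, r3):
    # det | 1 x_i y_i |  =  z . (r12 x r13)
    (x1, y1, _), (x2, y2, _), (x3, y3, _) = r1, r2, r3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

r1 = sph(math.pi / 6, 0.0)        # electron 1, as in Fig. 3
r2 = sph(math.pi / 6, math.pi)    # electron 2, as in Fig. 3
for theta3 in (0.3, 1.0, 2.2):    # third electron anywhere on the meridian
    assert abs(psi0_sp2(r1, r2, sph(theta3, 0.0))) < 1e-12
    assert abs(psi0_sp2(r1, r2, sph(theta3, math.pi))) < 1e-12
assert abs(psi0_sp2(r1, r2, sph(1.0, 1.0))) > 1e-3   # generic point: off-node
```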
C. Four-electron systems
1. 5 S o (sp 3 ) state
The ^5S^o(sp^3) state is the ground state of four spin-up electrons on a sphere and has a uniform density. The non-interacting wave function for the sp^3 configuration reads

$$\Psi_0(sp^3) = \begin{vmatrix} 1 & x_1 & y_1 & z_1 \\ 1 & x_2 & y_2 & z_2 \\ 1 & x_3 & y_3 & z_3 \\ 1 & x_4 & y_4 & z_4 \end{vmatrix} = (\mathbf{r}_{12}+\mathbf{r}_{34})\cdot(\mathbf{r}^{\times}_{12}+\mathbf{r}^{\times}_{34}). \qquad (13)$$

To further investigate this conjecture, we computed the FCI nodes for this state with basis sets of increasing size, including up to f, g, h, i, j and k functions. Because of the slow convergence of the FCI wave function, the results were inconclusive. However, the FCI nodes appear to converge slowly toward the non-interacting nodes, thus suggesting that the non-interacting nodes are either exact or almost exact.
To further investigate this claim, we have performed variational Monte Carlo (VMC) calculations 45 for all the states considered in this study (see Table II). The trial wave function that we have used for the VMC calculations is of the form Φ_T = Ψ_0 e^J, where Ψ_0 is given in Table I and the Jastrow factor J is a symmetric function of the interelectronic distances containing two-, three- and four-body terms. 46 The parameters of the Jastrow factor are optimized by energy minimization. [47][48][49][50] More details will be reported elsewhere. 51 These VMC results are compared with benchmark calculations. As shown in Table II, for all the two-electron states as well as the p^3 configuration, for which we use the exact nodal wave function Ψ_0, VMC is able to reach sub-microhartree accuracy. The same holds for the sp^3 configuration while, for the sp^2 configuration, where we know that Ψ_0 does not give an exact picture of the nodal surface, the error is more than one order of magnitude larger than for the other systems. This leads us to conjecture that the sp^3 nodes given by Eq. (13) are identical to the nodes of the exact (yet unknown) wave function.
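As a concrete illustration of the VMC procedure described above, the following stdlib-Python sketch samples |Ψ_0|² for the sp-type state with a Metropolis algorithm (uniform single-electron proposals) and averages the local energy E_L = 1/R² + 1/r_12, which follows from Ψ_0 = z·r_12 being an eigenfunction of the kinetic operator on the sphere. The radius R = 1 and all names are our own illustrative choices, not the authors' actual implementation; no Jastrow factor is included, so the estimate sits above the fully correlated energy:

```python
import math, random

random.seed(42)
R = 1.0                      # sphere radius (an illustrative choice of ours)

def rand_point():
    """Uniform random point on the sphere of radius R."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (R * s * math.cos(phi), R * s * math.sin(phi), R * z)

def psi(r1, r2):
    """Trial wave function Psi0 = z . r12 (sp state, no Jastrow factor)."""
    return r1[2] - r2[2]

def local_energy(r1, r2):
    # Psi0 is an eigenfunction of the kinetic operator on the sphere:
    # -1/2 (Lap1 + Lap2) Psi0 = (1/R^2) Psi0, hence E_L = 1/R^2 + 1/r12
    # (the electrons interact through the sphere).
    r12 = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    return 1.0 / R ** 2 + 1.0 / r12

r1, r2 = rand_point(), rand_point()
w = psi(r1, r2) ** 2
acc, total, n = 0, 0.0, 40000
for step in range(n):                 # no burn-in, kept short on purpose
    trial = (rand_point(), r2) if step % 2 == 0 else (r1, rand_point())
    wt = psi(*trial) ** 2
    if wt > w * random.random():      # Metropolis, uniform proposal
        (r1, r2), w, acc = trial, wt, acc + 1
    total += local_energy(r1, r2)
energy = total / n
print(energy)
```

Because the proposal is uniform on the sphere, the acceptance ratio reduces to |Ψ_trial|²/|Ψ_current|², which keeps the sampler exact without any drift or step-size tuning.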
III. CONCLUSION
In this paper, we have studied the fermionic nodes for various electronic states of the "electrons on a sphere" paradigm. We have rigorously demonstrated that, for the sp, p^2, sd, pd and p^3 configurations, the non-interacting wave function has the same nodes as the exact wave function. We have shown that this peculiar feature can be attributed to exact interdimensional degeneracies. Interdimensional degeneracies also explain why the exact nodes of various atomic and molecular systems are known analytically. Therefore, new interdimensional degeneracies could potentially be used to discover the exact nodes of new atomic and molecular systems.
Even when the non-interacting nodes are not exact, we have shown that most of the features of the exact nodal surface are captured by the non-interacting nodes. Thus, we expect the fixed-node error to be quite small for these systems. This could be a new, alternative way to obtain accurate, near-exact energies for finite and infinite uniform electron gases. Indeed, as illustrated in Ref. 26, the electrons-on-a-sphere model can be used to create finite and infinite uniform electron gases, and we have shown that the conventional "jellium" model 52 (i.e. electrons in a periodic box) and the present model are equivalent in the thermodynamic limit due to the "short-sightedness of electronic matter". 53,54 Although we have not been able to prove it rigorously, we have conjectured that the nodes of the non-interacting wave function for the sp^3 configuration are exact. This claim is supported by numerical evidence.
FIG. 1. Non-interacting nodes for various two-electron ferromagnetic states. The first electron is at Ω_1 = (π/6, 0) (large yellow dot). The possible positions of the second electron corresponding to Ψ_0 = 0 are represented by a yellow line.
FIG. 2. Proof of the exactness of the non-interacting nodes for the ^3P^o(sp) (left) and ^4S^o(p^3) (right) states.
FIG. 3. Non-interacting and FCI nodes of the ^4D^e(sp^2) state at r_s = 1. The first and second electrons are at Ω_1 = (π/6, 0) and Ω_2 = (π/6, π).
and one can show that this determinant is zero if and only if the four electrons are coplanar. This means that the non-interacting nodes of the sp^3 configuration correspond to small circles. Bajdich et al. have studied the sp^3 nodes in atomic systems and conjectured that the non-interacting nodes are "reasonably close to the exact one although the fine details of the nodal surface are not captured perfectly." To the best of our knowledge, there is no known interdimensional degeneracy involving the ^5S^o(sp^3) state.

TABLE II. VMC and benchmark energies for various states at r_s = 1. The statistical errors are reported in parentheses.
States        | VMC               | Benchmark
^3P^o(sp)     | 1.465 189 86(4)   | 1.465 189 850 (a)
^3P^e(p^2)    | 2.556 684 32(9)   | 2.556 684 316 (a)
^3D^e(sd)     | 3.556 684 32(9)   | 3.556 684 316 (a)
^3D^o(pd)     | 4.635 924 8(2)    | 4.635 924 645 (a)
^4S^o(p^3)    | 2.239 988 8(3)    | 2.239 988 9 (a)
^4D^e(sp^2)   | 1.699 883(3)      | 1.699 872 (b)
^5S^o(sp^3)   | 1.836 555 6(6)    | 1.836 556 (b)

(a) Hylleraas-type calculation
(b) Extrapolated FCI calculation
1. D. M. Ceperley, J. Stat. Phys. 63, 1237 (1991).
2. W. A. Glauser, W. R. Brown, W. A. Lester Jr., D. Bressanini, B. L. Hammond, and M. L. Koszykowski, J. Chem. Phys. 97, 9200 (1992).
3. D. Bressanini, D. M. Ceperley, and P. Reynolds, in Recent Advances in Quantum Monte Carlo Methods, Vol. 2, edited by W. A. Lester Jr., S. M. Rothstein, and S. Tanaka (World Scientific, 2001).
4. D. Bressanini and P. J. Reynolds, Phys. Rev. Lett. 95, 110201 (2005).
5. M. Bajdich, L. Mitas, G. Drobny, and L. K. Wagner, Phys. Rev. B 72, 075131 (2005).
6. D. Bressanini, G. Morosi, and S. Tarasco, J. Chem. Phys. 123, 204109 (2005).
7. T. C. Scott, A. Luchow, D. Bressanini, and J. D. Morgan III, Phys. Rev. A 75, 060101 (2007).
8. L. Mitas, Phys. Rev. Lett. 96, 240402 (2006).
9. L. Mitas, arXiv:cond-mat/0605550 (2008).
10. D. Bressanini and G. Morosi, J. Chem. Phys. 129, 054103 (2008).
11. D. Bressanini, Phys. Rev. B 86, 115120 (2012).
12. D. J. Klein and H. M. Pickett, J. Chem. Phys. 64, 4811 (1976).
13. Y. Kwon, D. M. Ceperley, and R. M. Martin, Phys. Rev. B 58, 6800 (1998).
14. A. Luchow and T. C. Scott, J. Phys. B: At. Mol. Opt. Phys. 40, 851 (2007).
15. A. Luchow, R. Petz, and T. C. Scott, J. Chem. Phys. 126, 144110 (2007).
16. A. H. Kulahlioglu, K. M. Rasch, S. Hu, and L. Mitas, Chem. Phys. Lett. 591, 170 (2014).
17. K. M. Rasch and L. Mitas, Chem. Phys. Lett. 528, 59 (2012).
18. K. M. Rasch, S. Hu, and L. Mitas, J. Chem. Phys. 140, 041102 (2014).
19. M. Seidl, Phys. Rev. A 75, 062506 (2007).
20. P. F. Loos and P. M. W. Gill, J. Chem. Phys. 131, 241101 (2009).
21. P. F. Loos and P. M. W. Gill, J. Chem. Phys. 132, 234111 (2010).
22. P. F. Loos and P. M. W. Gill, Phys. Rev. Lett. 105, 113001 (2010).
23. P. F. Loos and P. M. W. Gill, Chem. Phys. Lett. 500, 1 (2010).
24. P. F. Loos and P. M. W. Gill, Phys. Rev. Lett. 108, 083002 (2012).
25. P. F. Loos, Phys. Lett. A 376, 1997 (2012).
26. P. F. Loos and P. M. W. Gill, J. Chem. Phys. 135, 214111 (2011).
27. P. F. Loos and P. M. W. Gill, J. Chem. Phys. 138, 164124 (2013).
28. P. F. Loos, J. Chem. Phys. 138, 064108 (2013).
29. P. F. Loos and P. M. W. Gill, Phys. Rev. B 83, 233102 (2011).
30. P. F. Loos and P. M. W. Gill, Phys. Rev. B 84, 033103 (2011).
31. P. M. W. Gill and P. F. Loos, Theor. Chem. Acc. 131, 1069 (2012).
32. P. F. Loos, C. J. Ball, and P. M. W. Gill, J. Chem. Phys. 140, 18A524 (2014).
33. P. F. Loos, Phys. Rev. A 89, 052523 (2014).
34. P. F. Loos and P. M. W. Gill, Phys. Rev. A 79, 062517 (2009).
35. Roughly speaking, our model can be viewed as an atom in which one only considers the subspace r_1 = r_2 = ... = r_n = R.
36. P. J. Knowles and N. C. Handy, Chem. Phys. Lett. 111, 315 (1984).
37. The tiling theorem states that, for a non-degenerate ground state, by applying all possible particle permutations to an arbitrary nodal cell one covers the complete configuration space.
38. P. Pechukas, J. Chem. Phys. 57, 5577 (1972).
39. P. F. Loos and P. M. W. Gill, Phys. Rev. Lett. 103, 123008 (2009).
40. P. F. Loos and P. M. W. Gill, Mol. Phys. 108, 2527 (2010).
41. D. R. Herrick and F. H. Stillinger, Phys. Rev. A 11, 42 (1975).
42. D. R. Herrick, J. Math. Phys. 16, 281 (1975).
43. D. J. Doren and D. R. Herschbach, J. Chem. Phys. 85, 4557 (1986).
44. M. P. Nightingale and M. Moodley, J. Chem. Phys. 123, 014304 (2005).
45. C. J. Umrigar, in Quantum Monte Carlo Methods in Physics and Chemistry (Kluwer Academic Press, Dordrecht, 1999), pp. 129-160.
46. C.-J. Huang, C. J. Umrigar, and M. P. Nightingale, J. Chem. Phys. 107, 3007 (1997).
47. C. J. Umrigar and C. Filippi, Phys. Rev. Lett. 94, 150201 (2005).
48. J. Toulouse and C. J. Umrigar, J. Chem. Phys. 126, 084102 (2007).
49. C. J. Umrigar, J. Toulouse, C. Filippi, S. Sorella, and R. G. Hennig, Phys. Rev. Lett. 98, 110201 (2007).
50. J. Toulouse and C. J. Umrigar, J. Chem. Phys. 128, 174101 (2008).
51. P. F. Loos (in preparation).
52. R. G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules (Clarendon Press, Oxford, 1989).
53. W. Kohn, Phys. Rev. Lett. 76, 3168 (1996).
54. E. Prodan and W. Kohn, Proc. Natl. Acad. Sci. USA 102, 11635 (2005).
| [] |
Étude des opérateurs d'évolution en caractéristique 2

Richard Varro ([email protected]), Institut Montpelliérain Alexander Grothendieck, Université de Montpellier, CNRS, Place Eugène Bataillon, 35095 Montpellier, France

Abstract: We are interested in the evolution operators defined on commutative and nonassociative algebras when the scalar field is of characteristic 2. We distinguish four types: nilpotent, quasi-constant, ultimately periodic and plenary train operators. They are studied and classified for non-baric and for baric algebras.

Key words: Evolution operators, solvable algebras, quasi-constant algebras, ultimately periodic operator, plenary train algebras, evolution algebras, baric algebras, Bernstein algebras, periodic Bernstein algebras.
2010 MSC: Primary: 17D92; Secondary: 17A30.

It follows immediately from the definition that:
Proposition 4. Two evolution operators are similar if and only if the algebras on which they operate are semi-isomorphic.
Proposition 5. If A_1 and A_2 are two semi-isomorphic F-algebras, then for every integer k ≥ 1 the algebras generated by

PDF: https://arxiv.org/pdf/1911.07001v2.pdf (arXiv:1911.07001)
Étude des opérateurs d'évolution en caractéristique 2
Richard Varro ([email protected])
Institut Montpelliérain Alexander Grothendieck
Université de Montpellier
CNRS
Place Eugène Bataillon, 35095 Montpellier, France
arXiv:1911.07001v2 [math.RA] 1 Apr 2020
Key words: Evolution operators, solvable algebras, quasi-constant algebras, ultimately periodic operator, plenary train algebras, evolution algebras, baric algebras, Bernstein algebras, periodic Bernstein algebras.
2010 MSC: Primary: 17D92; Secondary: 17A30.
Introduction

Let K be a commutative field with no restriction on its characteristic. Given a K-algebra A, the evolution operator on A is the map V : A → A defined by x ↦ x^2.
This operator plays an important role in genetic algebra, since it expresses the genetic distribution V(x) of the offspring in terms of the distribution x of the parental population (cf. [15], p. 15; [22], p. 7). Indeed, let e_1, …, e_n be autosomal genetic types present in a panmictic and pangamic population. If (α_i(t))_{1≤i≤n}, with Σ_{i=1}^{n} α_i(t) = 1, is the frequency distribution of these types at generation t, and if γ_ijk is the probability that a union of the types e_i and e_j yields a zygote of type e_k that reaches the reproductive stage, then α_k(t+1) = Σ_{i,j=1}^{n} γ_ijk α_i(t) α_j(t). This situation can be represented algebraically by endowing a K-vector space A with basis (e_1, …, e_n) with the algebra structure e_i e_j = Σ_{k=1}^{n} γ_ijk e_k; for x(t) = Σ_{i=1}^{n} α_i(t) e_i we have V(x(t)) = Σ_{k=1}^{n} Σ_{i,j=1}^{n} γ_ijk α_i(t) α_j(t) e_k, and hence x(t+1) = V(x(t)).

The first works on quadratic maps connected with genetics date back to 1923. In [4, 5, 6], S. N. Bernstein was interested in classifying the evolution operators satisfying what he called the "stationarity principle". Genetically, this expresses an equilibrium, within a population, of the frequency distribution of a genetic type after one generation. Mathematically, it amounts to finding the quadratic maps V : S → S defined on the standard simplex S = {(x_1, …, x_n) ∈ R^n : x_i ≥ 0 and Σ x_i = 1} by V(x) = Σ_{i,j=1}^{n} γ_ijk x_i x_j, where γ_ijk = γ_jik ≥ 0 (i, j, k = 1, …, n) and Σ_{k=1}^{n} γ_ijk = 1 (i, j = 1, …, n), satisfying the condition V^2 = V. From 1971 on, Ju. I. Ljubič took up the study of Bernstein's problem; in ([14], p. 594) he gave the first algebraic translation of this problem, in the form (x^2)^2 = s(x)^2 x^2 where s is a weight function, the notion of weight function having been introduced by I. M. H. Etherington [7]. In 1975, P. Holgate [10], starting from this algebraic interpretation of Bernstein's problem, obtained the Peirce decomposition and a classification in dimensions 3 and 4 of these algebras, since then called Bernstein algebras. In 1976, V. M. Abraham [1, 2] gave a generalization of Bernstein algebras, the Bernstein algebras of order n, which were studied by C. Mallol [16, 17]; these algebras were in turn generalized in 1992 to the periodic Bernstein algebras [19, 20]. In parallel, A. A. Krapivin [13] introduced a subclass of Bernstein algebras, the quasi-constant algebras, studied by I. Katambe [11, 12]. Finally, Bernstein algebras and their generalizations belong to a wider class, the plenary train algebras, studied by C. J. Gutiérrez Fernández [9]. In all these works the base field of the algebras is assumed to be of characteristic different from 2. Results in characteristic 2 were given in [16] for Bernstein algebras of degree n, in [3] for quasi-constant algebras of degree 1 and train algebras of degree 3, and in [20] for periodic Bernstein algebras.
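The genetic recurrence above can be illustrated with a small Python sketch; the coefficients γ_ijk below are our own toy values, not taken from the paper, chosen symmetric in i, j and with Σ_k γ_ijk = 1.

```python
# One generation of the quadratic evolution map x(t+1) = V(x(t)) for a
# toy 2-type population.  gamma[i][j][k] is the probability that a union
# of types i and j produces a reproducing zygote of type k; the condition
# sum_k gamma[i][j][k] = 1 keeps x on the simplex.

def evolve(x, gamma):
    n = len(x)
    return [sum(gamma[i][j][k] * x[i] * x[j] for i in range(n) for j in range(n))
            for k in range(n)]

gamma = [[[1.0, 0.0], [0.5, 0.5]],
         [[0.5, 0.5], [0.0, 1.0]]]  # symmetric: gamma[i][j] == gamma[j][i]
x = [0.3, 0.7]
for _ in range(5):
    x = evolve(x, gamma)
print(x)
```

With this particular γ each coordinate satisfies x_k′ = x_k(x_0 + x_1) = x_k, so the distribution is stationary, and in all cases the coordinates keep summing to 1.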
Concerning the plan of this paper: after giving, as preliminaries, some properties of evolution operators in characteristic 2 and defining the notion of semi-isomorphism of algebras, we take up the study and classification of four types of evolution operators: nilpotent, quasi-constant, ultimately periodic and plenary train operators. We show that, except for the nilpotent type, the existence of these evolution operators on non-baric algebras imposes conditions on the cardinality of the field. We then study the evolution operators of finite-dimensional evolution F_2-algebras. We end with the study of evolution operators defined on baric algebras.

Throughout this paper, K denotes a commutative field with no restriction on its characteristic; we write F when the field K has characteristic 2; finally, we use the classical notation F_{2^p} for the field of characteristic 2 with 2^p elements, isomorphic to a splitting field over Z/2Z of the polynomial X^{2^p} − X. The K-algebras and F-algebras considered are commutative and not necessarily associative. The set N* × N is equipped with the lexicographic order ⪯.
Preliminaries

We give here some properties of evolution operators in characteristic 2 that will be used in the sequel.

Proposition 1. Let A be a K-algebra with A^2 ≠ {0}. The evolution operator V is additive if and only if K has characteristic 2.

Proof. The "if" part is immediate. For the "only if" part, since A^2 ≠ {0} there exist x, y ∈ A such that xy ≠ 0; then from V(x + y) = V(x) + V(y) it follows that 2xy = 0, whence the result.

Given a K-algebra A, the derived algebra A^2 is the subalgebra of A generated by the set {xy ; x, y ∈ A}. When the field K has characteristic ≠ 2 and the K-algebra A is commutative, the algebra A^2 is generated by the set V(A) = {x^2 ; x ∈ A}, which is not a subspace of A^2.
Proposition 2. Let A be an F-algebra. If the field F is perfect then, for every integer k ≥ 1, the set V^k(A) is a vector subspace of A^2.

Proof. We have V(A) ⊂ A^2 ⊂ A, from which we deduce that V^k(A) ⊂ V(A) for all k ≥ 1. The map V being additive, for every integer k ≥ 1 the set V^k(A) is a subgroup of A^2. Let k ≥ 1, x ∈ A and λ ∈ F; the field F being perfect, there exists μ ∈ F such that μ^{2^k} = λ, and then λV^k(x) = V^k(μx) ∈ V^k(A), whence the result.

It follows immediately from the definition of semi-isomorphism that:

Proposition 4. Two evolution operators are similar if and only if the algebras on which they act are semi-isomorphic.

Proposition 5. If A_1 and A_2 are two semi-isomorphic F-algebras then, for every integer k ≥ 1, the algebras generated by …

The notion of semi-isomorphism is useful for the classification of the evolution operators in characteristic 2 that admit an annihilating polynomial.
Proposition 6. Let A be an F-algebra and P ∈ F[X]. If P(V)(x) = 0 for all x ∈ A then, for every algebra A′ semi-isomorphic to A, we have P(V)(x′) = 0 for all x′ ∈ A′.

Proof. Let P(X) = Σ_{k=0}^{n} α_k X^k and let f : A → A′ be a semi-isomorphism. For every x′ ∈ A′ there exists x ∈ A such that f(x) = x′, and we have P(V)(x′) = Σ_{k=0}^{n} α_k V^k(f(x)) = f(Σ_{k=0}^{n} α_k V^k(x)) = 0.
3. Nilpotent, quasi-constant, ultimately periodic and plenary train evolution operators

Nilpotent evolution operators.

In a K-algebra A, the plenary powers of an element x ∈ A are defined by x^[1] = x and x^[n+1] = x^[n] x^[n] for n ≥ 1.

Definition 7. Let A be a K-algebra.
— An element x ∈ A is nil-plenary of degree n if x^[n+1] = 0 and x^[n] ≠ 0.
— The algebra A is nil-plenary of degree n if x^[n+1] = 0 for all x ∈ A and there exists in A a nil-plenary element of degree n.
— The evolution operator V on A is nilpotent of degree n if n ≥ 1, V^n = 0 and V^{n−1} ≠ 0.
Noting that V^n(x) = x^[n+1] for every x ∈ A and every integer n ≥ 1, we immediately get:

Proposition 8. A K-algebra A is nil-plenary of degree n if and only if the evolution operator V on A is nilpotent of degree n.

From the definition of a nil-plenary algebra we deduce:

Proposition 9. An F-algebra A is nil-plenary of degree n if and only if there exists a basis (e_i)_{i∈I} of A such that V^n(e_i) = 0 for all i ∈ I and V^{n−1}(e_j) ≠ 0 for at least one j ∈ I.

Proof. For the "only if" part, we have A = ker V^n, and by definition there exists x ∈ A such that V^n(x) = 0 and V^{n−1}(x) ≠ 0; it suffices to complete {x} into a basis of A to establish the result. The "if" part follows from the additivity of V.
Proposition 10. A d-dimensional F-algebra that is nil-plenary of degree n is semi-isomorphic to an F-algebra, denoted A(s), defined by the data of:
— an M-tuple of integers s = (s_1, …, s_M) such that M ≥ 1, n = s_1 ≥ … ≥ s_M ≥ 1 and s_1 + ⋯ + s_M = d;
— a basis ⋃_{i=1}^{M} {e_{i,j} ; 1 ≤ j ≤ s_i} such that e_{i,j}^2 = e_{i,j+1} if 1 ≤ j ≤ s_i − 1, and e_{i,j}^2 = 0 if j = s_i.

Proof. Let A be a d-dimensional F-algebra, nil-plenary of degree n. By restriction of the scalar field to F_2, the evolution operator is linear on A and satisfies V^n = 0, V^{n−1} ≠ 0. From the Frobenius decomposition of V we obtain the decomposition of A into cyclic subspaces whose bases satisfy the conditions of the statement; moreover, the sequence of similarity invariants of V being (X^{s_1}, …, X^{s_M}), we deduce immediately that the spaces A(s) and A(t) are isomorphic if and only if s = t. These results are preserved under extension of the scalar field from F_2 to F.

Corollary 11. If A is an F-algebra nil-plenary of degree n then dim A ≥ n.
Proposition 12. If A is a finite-dimensional F-algebra, nil-plenary of degree n, then there exists a finite subset F of A such that

A = Σ_{k=1}^{n} span(V^k(F)).

Proof. Let A be a finite-dimensional F-algebra, nil-plenary of degree n. By definition of the algebra A there exists an element x_1 ∈ A satisfying V^n(x_1) = 0 and V^{n−1}(x_1) ≠ 0; the system {V^k(x_1) ; 0 ≤ k ≤ n − 1} is then free. If the system S = {V^j(x_i) ; 1 ≤ i ≤ p, 0 ≤ j < s_i ≤ n} is free and S does not generate A, then there exist x_{p+1} ∈ A and an integer 1 ≤ s_{p+1} < n such that V^{s_{p+1}}(x_{p+1}) = 0, V^{s_{p+1}−1}(x_{p+1}) ≠ 0 and V^{s_{p+1}−1}(x_{p+1}) ∉ span(S); moreover the system S ∪ {V^k(x_{p+1}) ; 0 ≤ k < s_{p+1}} is free. Indeed, if Σ_{i=1}^{p+1} Σ_{j=0}^{s_i−1} λ_ij V^j(x_i) = 0, it follows that

0 = V^{s_{p+1}−1}(Σ_{i=1}^{p+1} Σ_{j=0}^{s_i−1} λ_ij V^j(x_i)) = λ_{p+1, s_{p+1}−1} V^{s_{p+1}−1}(x_{p+1}) + Σ_{i=1}^{p} Σ_{j=0}^{s_i−1} λ_ij V^{s_{p+1}+j−1}(x_i),

whence λ_{p+1, s_{p+1}−1} = 0, and by induction we obtain λ_{p+1, j} = 0 (0 ≤ j < s_{p+1}). Continuing this reasoning we obtain a basis {V^j(x_i) ; 1 ≤ i ≤ m, 0 ≤ j < s_i} of A, and taking F = {x_i ; 1 ≤ i ≤ m} we obtain that A is generated by ⋃_{k=1}^{n} V^k(F), whence the result.
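Under the stated assumptions, the algebra A(s) of Proposition 10 and the nilpotency of V can be checked mechanically over F_2, where V is additive; a minimal sketch (our own encoding of the basis elements as index pairs, for s = (3, 2, 1)):

```python
# A(s) over F2 for s = (3, 2, 1): d = 6 and the nil-plenary degree is n = 3.
# An element is encoded as the set of basis labels appearing in it; over F2
# the evolution operator V(x) = x^2 is additive, so V acts on such a set by
# symmetric difference of the images of its basis labels.
from itertools import combinations

s = (3, 2, 1)
basis = [(i, j) for i, si in enumerate(s) for j in range(si)]
# e_{i,j}^2 = e_{i,j+1} if j < s_i - 1, and 0 otherwise
square = {(i, j): {(i, j + 1)} if j < s[i] - 1 else set() for (i, j) in basis}

def V(elem):
    out = set()
    for b in elem:
        out ^= square[b]  # additivity in characteristic 2
    return out

def Vk(elem, k):
    for _ in range(k):
        elem = V(elem)
    return elem

# V^3 kills every element of A, while V^2 does not kill e_{0,0}:
all_elems = [set(c) for r in range(len(basis) + 1) for c in combinations(basis, r)]
assert all(Vk(e, 3) == set() for e in all_elems)
assert Vk({(0, 0)}, 2) != set()
print("A(s) with s = (3, 2, 1) is nil-plenary of degree 3")
```

By Proposition 9 it would suffice to test the basis elements; the loop over all 2^6 elements simply confirms the additivity argument.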
Quasi-constant evolution operators.

Definition 13. A K-algebra A is quasi-constant of degree n if there exists e ∈ A, e ≠ 0, such that V^n(x) = e for all x ∈ A, x ≠ 0, with n ∈ N* minimal; such an algebra is denoted (A, e). In this case the evolution operator V : A → A is said to be quasi-constant of degree n.

The definition of a quasi-constant operator V : A → A depends on the datum of an element e ∈ A.

Proposition 14. If (A, e) is a quasi-constant F-algebra of degree n, then the element e is unique and idempotent.

Proof. Let (A, e) be a quasi-constant F-algebra of degree n. Suppose there exists an element e′ ∈ A such that (A, e′) is quasi-constant of degree n; from V^n(x) = e′ and V^n(x) = e it follows that e′ = e. Moreover e^2 = V(e) = V(V^n(e)) = V^n(V(e)), and putting x = V(e) in the definition we finally get e^2 = e.
Theorem 15. For an F-algebra A to be quasi-constant it is necessary that F = F_2.

Proof. Let (A, e) be a quasi-constant F-algebra of degree n. For λ ∈ F, λ ≠ 0, we have e = V^n(λe) = λ^{2^n} e, hence λ^{2^n} = 1, which can also be written (λ − 1)^{2^n} = 0; consequently F \ {0} = {1}.

Up to isomorphism there exists only one quasi-constant F_2-algebra.

Proposition 16. A quasi-constant F_2-algebra is of degree 1, of dimension 1, and isomorphic to the algebra F_2 e where e^2 = e.
Proof. Let (A, e) be a quasi-constant F_2-algebra of degree n. Suppose A is of dimension ≥ 2; then there exist linearly independent x, y ∈ A, so x + y ≠ 0 and V^n(x + y) = V^n(x) + V^n(y) = 0 ≠ e. Consequently A is of dimension 1, and since V(e) = e we have n = 1.

Ultimately periodic evolution operators.

Definition 17. Let A be a K-algebra.
— An element x ∈ A is ultimately periodic with preperiod n and period p, in short (n, p)-periodic, if it satisfies V^{n+p}(x) = V^n(x) with (p, n) ∈ N* × N minimal for the lexicographic order ⪯.
— The algebra A and the evolution operator V : A → A are said to be ultimately periodic with preperiod n and period p, in short (n, p)-periodic, if V^{n+p}(x) = V^n(x) for all x ∈ A and there exists in A at least one (n, p)-periodic element.
— An element x ∈ A, the algebra A or the evolution operator V are ultimately periodic if there exist two integers n and p such that x, A or V are (n, p)-periodic.
Example 18. Let F = F_2 and let n, p > 1 be two integers. Define the F_2-algebra A with basis (a_i)_{1≤i≤p} ∪ (b_j)_{1≤j≤n} by:

a_i^2 = a_{i+1} if 1 ≤ i < p, a_p^2 = a_1;  b_j^2 = b_{j+1} if 1 ≤ j < n, b_n^2 = 0,

all the other products being zero.

We have V^p(a_i) = a_i, V^n(b_j) = 0 and λ^{2^k} = λ for every k ≥ 0; consequently, for x = Σ_{i=1}^{p} α_i a_i + Σ_{j=1}^{n} β_j b_j we have:

V^{n+p}(x) = Σ_{i=1}^{p} α_i^{2^{n+p}} V^{n+p}(a_i) + Σ_{j=1}^{n} β_j^{2^{n+p}} V^{n+p}(b_j) = Σ_{i=1}^{p} α_i V^n(a_i) = V^n(x).

It remains to show that the pair (p, n) is minimal. For this it suffices to consider y = a_1 + b_1 and to note that for all 0 ≤ m < n we have V^m(y) = V^m(a_1) + b_{m+1} and V^n(y) = V^n(a_1), then that for all 1 ≤ q < p we have V^{n+q}(y) = V^n(a_{q+1}); all this, together with the fact that the restriction of V to the subspace generated by (a_i)_{1≤i≤p} is bijective, establishes that V^i(y) ≠ V^j(y) for all 0 ≤ i < j < n + p.
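For a small instance (n = 2, p = 3) the computation above can be verified by brute force over F_2; a sketch (our own set-based encoding of elements, where addition is symmetric difference):

```python
# Brute-force check of Example 18 over F2 for n = 2, p = 3: the algebra with
# the cycle a_1 -> a_2 -> a_3 -> a_1 and the chain b_1 -> b_2 -> 0 is
# (2, 3)-periodic, and the pair (p, n) = (3, 2) is lexicographically minimal.
from itertools import combinations

p, n = 3, 2
basis = [("a", i) for i in range(p)] + [("b", j) for j in range(n)]
square = {("a", i): {("a", (i + 1) % p)} for i in range(p)}
square.update({("b", j): {("b", j + 1)} if j < n - 1 else set() for j in range(n)})

def V(elem):
    out = set()
    for b in elem:
        out ^= square[b]  # V is additive over F2
    return out

def Vk(elem, k):
    for _ in range(k):
        elem = V(elem)
    return elem

elems = [set(c) for r in range(len(basis) + 1) for c in combinations(basis, r)]

def minimal_pair(max_p=8, max_n=8):
    for q in range(1, max_p + 1):          # lexicographic order: period first
        for m in range(max_n + 1):
            if all(Vk(e, m + q) == Vk(e, m) for e in elems):
                return (m, q)              # (preperiod, period)

assert minimal_pair() == (2, 3)
print("the algebra of Example 18 is (2, 3)-periodic")
```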
Example 19. Algebra associated with a finite dynamical system. A finite dynamical system (E, f) is the iteration of a map f : E → E defined on a nonempty finite set E.

For a finite dynamical system (E, f) there exist integers n ≥ 0 and p ≥ 1 such that f^{n+p}(x) = f^n(x) for all x ∈ E, with (p, n) minimal for the order ⪯. Indeed, for every x ∈ E the set {f^m(x) ; m ≥ 1} is finite, so there exist integers n_x < m_x such that f^{m_x}(x) = f^{n_x}(x); thus the set {(m_x − n_x, n_x) ; f^{m_x}(x) = f^{n_x}(x)} is nonempty and possesses a smallest element (m_x − n_x, n_x) for the order ⪯, and putting p_x = m_x − n_x we get f^{n_x+p_x}(x) = f^{n_x}(x). Let m = max{n_x ; x ∈ E}; for every x ∈ E we have f^{m+p_x}(x) = f^{m−n_x}(f^{n_x+p_x}(x)) = f^{m−n_x}(f^{n_x}(x)) = f^m(x). Next, for every integer k ≥ 1 and every x ∈ E we have f^{m+kp_x}(x) = f^{(k−1)p_x}(f^{m+p_x}(x)) = f^{m+(k−1)p_x}(x), from which we deduce f^{m+kp_x}(x) = f^m(x); and for q = lcm{p_x ; x ∈ E} we have f^{m+q}(x) = f^m(x) for all x ∈ E. Thus the set {(q, m) ; f^{m+q} = f^m} is nonempty, and it admits an element (p, n) minimal for the lexicographic order ⪯.

One can associate an ultimately periodic F_2-algebra with a finite dynamical system (E, f): let E be a set of cardinality N and f : E → E a map such that f^{n+p} = f^n with (p, n) minimal for ⪯. Consider the F_2-vector space A with basis (e_i)_{1≤i≤N} equipped with the algebraic structure:

e_i^2 = e_{f(i)}, (1 ≤ i ≤ N),

the other products being arbitrary. We then have V(e_i) = e_{f(i)} and V^{n+p}(e_i) = e_{f^{n+p}(i)} = e_{f^n(i)} = V^n(e_i), hence V^{n+p}(x) = V^n(x) for all x ∈ A.

For example, let E = {0, …, 12} and let f : E → E be the map sending each x to x^2 + 2 modulo 13. We find f(4) = f(9) = 5, f(5) = f(8) = 1, f(1) = 3, f(3) = f(10) = 11, f(11) = f(2) = 6, f(6) = f(7) = 12 and f(12) = 3. The map f satisfies f^7 = f^3, with (p, n) = (4, 3) minimal, so the F_2-algebra associated with (E, f) is (3, 4)-periodic. Another example: let E = F_4 ∪ {∞} and let f : E → E be the map sending x to x + x^{−1} if x ∉ {0, ∞} and to ∞ otherwise. Denoting by α a primitive root of X^3 − 1, we have F_4 = {0, 1, α, α^2} and f(α) = f(α^2) = 1, f(1) = 0, f(0) = ∞ and f(∞) = ∞; the map f and the algebra associated with (E, f) are (3, 1)-periodic.
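Both systems are small enough to check exhaustively; a sketch (our own encoding, with F_4 ∪ {∞} represented by labels):

```python
# Brute-force minimal (preperiod, period) pairs for the two finite dynamical
# systems of Example 19: f(x) = x^2 + 2 mod 13 on {0,...,12}, and
# x -> x + x^(-1) on F_4 u {inf} with f(0) = f(inf) = inf.

def minimal_pair(f, E, max_p=10, max_n=10):
    def it(x, k):
        for _ in range(k):
            x = f[x]
        return x
    for q in range(1, max_p + 1):          # lexicographic order: period first
        for m in range(max_n + 1):
            if all(it(x, m + q) == it(x, m) for x in E):
                return (m, q)              # (preperiod, period)

f1 = {x: (x * x + 2) % 13 for x in range(13)}
assert minimal_pair(f1, range(13)) == (3, 4)   # f^7 = f^3, minimal

# F_4 = {0, 1, a, a^2} with a + a^(-1) = a + a^2 = 1 (since a^2 + a + 1 = 0)
f2 = {"a": 1, "a2": 1, 1: 0, 0: "inf", "inf": "inf"}
assert minimal_pair(f2, f2.keys()) == (3, 1)   # f^4 = f^3, minimal
print("f1 is (3, 4)-periodic and f2 is (3, 1)-periodic")
```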
Example 20. Elementary cellular automata on the lattice Z/nZ (n ≥ 3).

1) The elementary XOR cellular automaton (rule 90, cf. [18], p. 224). Consider the F_2-algebra A with basis (e_1, e_2, …, e_n) defined by:

e_1^2 = e_2 + e_n, e_i^2 = e_{i−1} + e_{i+1} for 2 ≤ i ≤ n − 1, e_n^2 = e_1 + e_{n−1}, and e_i e_j = 0 if i ≠ j.

To ease the computations, note that if σ denotes the cyclic permutation (1, 2, …, n) then V(e_i) = e_{σ(i)} + e_{σ^{−1}(i)}. Thus V^2(e_i) = e_{σ^2(i)} + e_{σ^{−2}(i)} and V^3(e_i) = e_{σ^3(i)} + e_{σ^{−3}(i)} + e_{σ(i)} + e_{σ^{−1}(i)}. Consequently, if n = 3 then σ^2 = σ^{−1}, so already V^2(e_i) = V(e_i) and the operator V is (1, 1)-periodic. Similarly, in the case n = 6 we have σ^3 = σ^{−3}, hence V^3(e_i) = V(e_i) and V is (1, 2)-periodic; and if n = 12 we find V^6(e_i) = V^2(e_i), so the operator V is (2, 4)-periodic.

One can also note that in the case n = 4 we have σ^2 = σ^{−2}, so V^2 = 0 and the operator V is nilpotent of degree 2; for n = 8 it is nilpotent of degree 4.
2) The elementary cellular automaton of Fredkin (rule 150, cf. [18], p. 224). Consider the F_2-algebra A with basis (e_1, e_2, …, e_n) equipped with the structure:

e_1^2 = e_1 + e_2 + e_n, e_i^2 = e_{i−1} + e_i + e_{i+1} for 2 ≤ i ≤ n − 1, e_n^2 = e_1 + e_{n−1} + e_n, and e_i e_j = 0 if i ≠ j.

As above, using the cyclic permutation σ = (1, 2, …, n), we have V(e_i) = e_{σ(i)} + e_i + e_{σ^{−1}(i)}. Then V^2(e_i) = e_{σ^2(i)} + e_i + e_{σ^{−2}(i)} and V^3(e_i) = e_{σ^3(i)} + e_{σ^2(i)} + e_i + e_{σ^{−2}(i)} + e_{σ^{−3}(i)}. Consequently, in the case n = 3 we have σ^2 = σ^{−1}, so V^2(e_i) = V(e_i) and the operator V is (1, 1)-periodic. In the same way, for n = 4 we have σ^2 = σ^{−2}, hence V^2(e_i) = e_i and V is (0, 2)-periodic; when n = 6 we have σ^3 = σ^{−3}, hence V^3(e_i) = V^2(e_i) and V is (2, 1)-periodic; and if n = 8 we find V^4(e_i) = e_i, so V is (0, 4)-periodic.
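Over F_2 the operator V of either automaton is linear, so the minimal pairs (preperiod, period) can be found by iterating the 0/1 matrix M with M_ij = 1 exactly when e_j appears in e_i^2; a brute-force sketch (our own encoding):

```python
# Minimal (preperiod, period) pairs over F2 for the XOR (rule 90) and
# Fredkin (rule 150) evolution operators on Z/nZ.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def ca_matrix(n, rule):
    offs = (-1, 1) if rule == 90 else (-1, 0, 1)
    return [[1 if (j - i) % n in [o % n for o in offs] else 0 for j in range(n)]
            for i in range(n)]

def minimal_pair(M, max_p=16, max_n=16):
    pows = [[[int(i == j) for j in range(len(M))] for i in range(len(M))]]
    for _ in range(max_p + max_n):
        pows.append(matmul(pows[-1], M))
    for q in range(1, max_p + 1):          # lexicographic order: period first
        for m in range(max_n + 1):
            if pows[m + q] == pows[m]:
                return (m, q)              # (preperiod, period)

assert minimal_pair(ca_matrix(3, 90)) == (1, 1)
assert minimal_pair(ca_matrix(6, 90)) == (1, 2)
assert minimal_pair(ca_matrix(12, 90)) == (2, 4)
M4 = ca_matrix(4, 90)
assert matmul(M4, M4) == [[0] * 4 for _ in range(4)]   # rule 90, n = 4: V^2 = 0
assert minimal_pair(ca_matrix(3, 150)) == (1, 1)
assert minimal_pair(ca_matrix(4, 150)) == (0, 2)
assert minimal_pair(ca_matrix(6, 150)) == (2, 1)
assert minimal_pair(ca_matrix(8, 150)) == (0, 4)
print("minimal (preperiod, period) pairs confirmed")
```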
Properties of ultimately periodic algebras.

Remark 21. Whatever the field F, every nil-plenary F-algebra A is ultimately periodic: indeed there exists an integer n ≥ 1 such that V^n(x) = 0 for all x ∈ A and V^{n−1}(x) ≠ 0 for some x ∈ A, hence V^{n+1}(x) = V^n(x) for all x ∈ A, and for an x ∈ A such that V^{n−1}(x) ≠ 0 we have V^n(x) ≠ V^{n−1}(x); therefore A is (n, 1)-periodic.

Every quasi-constant F_2-algebra is ultimately periodic. Indeed, if (A, e) is a quasi-constant algebra of degree n, then from V^n(x) = e and e^2 = e we deduce V^{n+1}(x) = V^n(x), and from the minimality condition on the degree we deduce that A is (n, 1)-periodic.

From now on, all the algebras considered are assumed to be neither nil-plenary nor quasi-constant.
As for quasi-constant algebras, the ultimate periodicity of an F-algebra imposes conditions on the field F.

Theorem 22. For an F-algebra A to be (n, p)-periodic it is necessary that F ⊂ F_{2^p}.

Proof. Let A be an (n, p)-periodic F-algebra. Since A is not nil-plenary, there exists x ∈ A such that V^n(x) ≠ 0; then for every α ∈ F we have V^{n+p}(x) = V^n(x) and V^{n+p}(αx) = V^n(αx), whence (α^{2^{n+p}} − α^{2^n}) V^n(x) = 0. Therefore α^{2^{n+p}} − α^{2^n} = 0, or again (α^{2^p} − α)^{2^n} = 0, for every α ∈ F; in other words every element of F is a root of X^{2^p} − X, which implies F ⊂ F_{2^p}.

From now on, when speaking of an (n, p)-periodic F-algebra, we assume that the field F satisfies the necessary condition of Theorem 22.
In some cases, the study of algebras over a field of characteristic 2 reduces to that of ultimately periodic algebras.

Proposition 23. If the field F is finite then every finite-dimensional F-algebra is ultimately periodic.

Proof. Let F = {λ_1, …, λ_m} and let A be an F-algebra with basis (e_1, …, e_d). For all 1 ≤ i ≤ d and 1 ≤ j ≤ m, the system {V^k(λ_i e_j) ; 0 ≤ k ≤ d} is linearly dependent, so there exists T_ij ∈ F[X] of degree at most d such that T_ij(V)(λ_i e_j) = 0. Let P = lcm{T_ij ; 1 ≤ i ≤ d, 1 ≤ j ≤ m}; by construction P(V)(x) = 0 for all x ∈ A. The set {P ∈ F[X] ; P(V) = 0} being an ideal of F[X] not reduced to {0}, it admits a monic generator T of degree D. For each integer s ≥ D denote by R_s ∈ F[X] the remainder of the division of X^s by T; we have deg(R_s) < D, and the set {R_s ; s ≥ D} has cardinality at most m^D, so there exist two integers 1 ≤ r < s such that R_s = R_r, from which we deduce V^s = V^r, that is V^{r+(s−r)} = V^r. Consequently the set {(i, j) ∈ N* × N ; V^{j+i} = V^j} is nonempty, and it admits a smallest element (p, n) for the lexicographic order. If |F| ≤ 2^p then the algebra A is (n, p)-periodic; if |F| > 2^p there exists a smallest integer q ≥ 2 such that 2^{qp} ≥ |F|, and since V^{n+qp} = V^n we deduce that the algebra A is (n, qp)-periodic.
We shall establish properties of ultimately periodic algebras, beginning with ultimately periodic elements.

Proposition 24. Let A be an F-algebra and let x ∈ A be an (n, p)-periodic element.
1) V^{m+kp}(x) = V^m(x) for all m ≥ n and k ≥ 0.
2) For every m ≥ n there exists 0 ≤ q < p such that V^m(x) = V^{n+q}(x).
3) For u, v ≥ n we have V^u(x) = V^v(x) if and only if u − v ≡ 0 mod p.
4) For every 0 ≤ k ≤ n, V^k(x) is (n − k, p)-periodic.
5) For every k ≥ n, V^k(x) is (0, p)-periodic.

Proof. 1) We have V^{m+kp} = V^{m−n} V^{n+kp}; then by induction on k ≥ 0 one shows that V^{n+kp}(x) = V^n(x), whence the desired relation.
2) Indeed, if m − n < p we take q = m − n. If m − n ≥ p, there exist k ≥ 1 and 0 ≤ q < p such that m − n = kp + q, and the result follows from V^m = V^n V^{m−n} and from 1).
3) The "if" part follows from 1). For the "only if" part, given integers u, v ≥ n, let u − n ≡ u′ mod p and v − n ≡ v′ mod p with 0 ≤ u′, v′ < p; then by 1) we have V^u(x) = V^{n+u′}(x) and V^v(x) = V^{n+v′}(x), hence V^{n+u′}(x) = V^{n+v′}(x). If u′ > v′, from V^{n+v′+(u′−v′)}(x) = V^{n+v′}(x), from 0 ≤ u′ − v′ < p and from the minimality of (p, n), it follows that u′ − v′ = 0, and hence u − v ≡ 0 mod p. The same conclusion holds if u′ < v′.
4) For every 0 ≤ k ≤ n we have V^{n+p}(x) = V^{n−k+p}(V^k(x)) and V^n(x) = V^{n−k}(V^k(x)); consequently V^{n−k+p}(V^k(x)) = V^{n−k}(V^k(x)). Let us show that the pair (p, n − k) satisfying this relation is minimal for the order ⪯. Let (q, m) ∈ N* × N be such that (q, m) ⪯ (p, n) and V^k(x) is (m, q)-periodic; from V^{m+k+q}(x) = V^{m+k}(x) and from the minimality of (p, n) we get q = p and m + k = n, hence m = n − k, and V^k(x) is (n − k, p)-periodic.
5) Indeed, for every k ≥ n we have V^p(V^k(x)) = V^{k−n}(V^{n+p}(x)) = V^{k−n}(V^n(x)) = V^k(x). And if there exists q ≤ p such that V^q(V^k(x)) = V^k(x) with (q, 0) minimal, then by 2) there exists 0 ≤ s < p such that V^k(x) = V^{n+s}(x); we then have V^{n+s+q}(x) = V^{q+k}(x) = V^k(x) = V^{n+s}(x), and composing this with V^{p−s} we obtain V^{n+p+q}(x) = V^{n+p}(x), whence V^q(V^n(x)) = V^n(x). Now x being by hypothesis (n, p)-periodic, by 4) the element V^n(x) is (0, p)-periodic, and by minimality of (p, 0) we get q = p.

Given a K-algebra A, the orbit of x ∈ A is the set O(x) = {V^k(x) ; k ≥ 0}, and |O(x)| denotes its cardinality.
Proposition 25. Let x be an (n, p)-periodic element of an F-algebra. Then:

|O(V^k(x))| = n + p − k if 0 ≤ k ≤ n, and |O(V^k(x))| = p if n ≤ k.

Proof. For 0 ≤ k ≤ n, from V^{n+p−k}(V^k(x)) = V^n(x) = V^{n−k}(V^k(x)) we deduce that O(V^k(x)) = {V^m(V^k(x)) ; 0 ≤ m < n + p − k}, so |O(V^k(x))| ≤ n + p − k. Let us show by contradiction that V^i(V^k(x)) ≠ V^j(V^k(x)) for every (i, j) with 0 ≤ i < j < n + p − k. Suppose there exist 0 ≤ i < j < n + p − k such that V^i(V^k(x)) = V^j(V^k(x)) (⋆). There are two cases:
— If j < p, then 0 < p + i − j < p, and composing (⋆) with V^{n+p−k−j} we get V^{n+p+i−j}(x) = V^n(x), which contradicts the minimality of the pair (p, n).
— If j ≥ p, then 0 ≤ j − p < n − k, and composing (⋆) with V^{n−(j−p)−k} we obtain V^{n+p+i−j}(x) = V^n(x) with p + i − j < p, which again contradicts the minimality of the pair (p, n).
We have thus established that |O(V^k(x))| = n + p − k.
When k > n, by 2) of Proposition 24, for every integer m ≥ 0 there exists 0 ≤ s < p such that V^{m+k}(x) = V^{n+s}(x); it follows that O(V^k(x)) ⊆ {V^{n+s}(x) ; 0 ≤ s < p}, so |O(V^k(x))| ≤ p. Let us show by contradiction that V^{n+i}(x) ≠ V^{n+j}(x) for all 0 ≤ i < j < p. If there exist 0 ≤ i < j < p such that V^{n+i}(x) = V^{n+j}(x), composing this with V^{p−j} yields V^{n+p+i−j}(x) = V^n(x) with p + i − j < p, which contradicts the minimality of the pair (p, n).
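The orbit cardinalities can be observed on the (2, 3)-periodic element y = a_1 + b_1 of the algebra of Example 18; a sketch over F_2 (our own encoding):

```python
# Checking the orbit cardinalities of Proposition 25 on the (2, 3)-periodic
# algebra of Example 18 (cycle a_1, a_2, a_3 and chain b_1, b_2 over F2),
# for the (2, 3)-periodic element y = a_1 + b_1.

p, n = 3, 2
square = {("a", i): {("a", (i + 1) % p)} for i in range(p)}
square.update({("b", j): {("b", j + 1)} if j < n - 1 else set() for j in range(n)})

def V(elem):
    out = set()
    for b in elem:
        out ^= square[b]  # V is additive over F2
    return out

def orbit_size(elem, bound=64):
    seen, x = [], elem
    for _ in range(bound):
        if x in seen:
            break
        seen.append(x)
        x = V(x)
    return len(seen)

y = {("a", 0), ("b", 0)}
x = y
for k in range(6):
    expected = n + p - k if k <= n else p
    assert orbit_size(x) == expected      # |O(V^k(y))| = n+p-k, then p
    x = V(x)
print("orbit sizes match Proposition 25")
```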
Corollary 26. Let x be an (n, p)-periodic element; for every k ≥ n we have O(V^k(x)) = O(V^n(x)).

Proof. It is clear that O(V^k(x)) ⊂ O(V^n(x)) for k ≥ n, and since by the proposition above |O(V^k(x))| = |O(V^n(x))| = p, it follows that O(V^k(x)) = O(V^n(x)).

Ultimately periodic elements can be characterized in several ways.
Proposition 27. Let A be an F-algebra and x ∈ A. The following statements are equivalent:
(i) x is (n, p)-periodic.
(ii) V^i(x) ≠ V^j(x) for all 0 ≤ i < j < n + p, and V^{n+p}(x) = V^n(x).
(iii) V^{n+i}(x) ≠ V^n(x) for all 0 < i < p, V^{j+p}(x) ≠ V^j(x) for all 0 < j < n, and V^{n+p}(x) = V^n(x).
(iv) V^n(x) is (0, p)-periodic and V^{n−1}(x) is (1, p)-periodic.
(v) |O(x)| = n + p and |O(V^n(x))| = |O(V^{n+1}(x))| = p.

Proof. (i) ⇒ (ii) Suppose there exist 0 ≤ i < j < n + p such that V^i(x) = V^j(x); composing this with V^{n+p−j} we obtain V^{n+p+i−j}(x) = V^{n+p}(x) = V^n(x) with p + (i − j) < p, a contradiction.
(ii) ⇒ (iii) is immediate.
(iii) ⇒ (i) Ad absurdum. Suppose there exists (d, m) ≺ (p, n) such that V^{m+d}(x) = V^m(x) (*). If d < p, we cannot have m ≤ n, for composing (*) with V^{n−m} would give V^{n+d}(x) = V^n(x), contradicting the hypotheses; hence m > n, and in this case there exist k ≥ 0 and 0 ≤ r < p such that m − n = kp + r, whence V^{m+d}(x) = V^{n+r+d}(x) and V^m(x) = V^{n+r}(x); composing V^{n+r+d}(x) = V^{n+r}(x) with V^{p−r} we obtain V^{n+d}(x) = V^n(x) with d < p, a contradiction. If d = p, by hypothesis we cannot have (p, m) ≺ (p, n) with V^{m+p}(x) = V^m(x).
(i) ⇒ (iv) From V^p(V^n(x)) = V^n(x), from V^{p+1}(V^{n−1}(x)) = V(V^{n−1}(x)) and from the minimality of (p, n), we deduce that V^n(x) is (0, p)-periodic and that V^{n−1}(x) is (1, p)-periodic.
(iv) ⇒ (v) Since V^n(x) is (0, p)-periodic, we have V^p(V^n(x)) = V^n(x), and consequently V^{n+p}(x) = V^n(x). Let us show by contradiction that x is (n, p)-periodic. Suppose x is (m, d)-periodic with (d, m) ≺ (p, n); by 4) of Proposition 24, V^m(x) is (0, d)-periodic. If m > n then, by 5) of Proposition 24, V^m(x) is (0, p)-periodic, so d = p and hence (p, m) ≻ (p, n), a contradiction. Consequently m ≤ n; then by 5) of Proposition 24, V^n(x) is (0, d)-periodic, and by minimality of (0, p) we get d = p. We cannot have m < n, for otherwise, by 5) of Proposition 24, V^{n−1}(x) would be (0, p)-periodic; consequently m = n. The statements of (v) then follow by applying Proposition 25.
(v) ⇒ (i) Let us show by contradiction that V^{n+p}(x) = V^n(x) with (p, n) minimal. Indeed, if there exist 0 ≤ i < j ≤ p such that V^{n+i}(x) = V^{n+j}(x), then j = p, for if j < p we would have |O(V^n(x))| ≤ j < p. We also have i = 0, for if i > 0, from V^{n+i}(x) = V^{n+p}(x) it follows that V^{n+1+i}(x) = V^{n+1+p}(x), so that O(V^{n+1}(x)) ⊂ {V^{n+1+m}(x) ; i ≤ m ≤ p}, whence |O(V^{n+1}(x))| < p. Let us show that the pair (p, n) satisfying V^{n+p}(x) = V^n(x) is minimal. If there existed (d, m) ∈ N* × N with (d, m) ⪯ (p, n) such that x is (m, d)-periodic, then in the case m > n, by Proposition 25, we would have |O(V^n(x))| = d + m − n and |O(V^{n+1}(x))| = d + m − n − 1, so |O(V^n(x))| ≠ |O(V^{n+1}(x))|, a contradiction. Consequently m ≤ n, so |O(V^n(x))| = d, whence d = p; and |O(x)| = p + m, whence m = n.

Concerning the elements of an ultimately periodic algebra, we have the following properties.
Proposition 28. Let A be an (n, p)-periodic F-algebra. Then:
1) For every x ∈ A there exist 0 ≤ m ≤ n and a divisor d of p such that x is (m, d)-periodic.
2) If x ∈ A is (m, d)-periodic then 0 ≤ m ≤ n and d is a divisor of p.
3) For every integer 0 ≤ m ≤ n and every divisor d of p, there exists x ∈ A such that V^{m+d}(x) = V^m(x).
4) An element x ∈ A is (m, d)-periodic if and only if there exists a unique pair (a, b) ∈ A × A such that x = a + b with V^d(a) = a, V^m(b) = 0 and |O(a)| = d, |O(b)| = m.
5) If x′, x″ ∈ A are respectively (n′, p′)-periodic and (n″, p″)-periodic and if x′ + x″ ≠ 0, then x′ + x″ is (μ, δ)-periodic with μ, δ satisfying μ ≤ max(n′, n″), and δ = lcm(p′, p″) if p′ ≠ p″, or δ a divisor of p′ when p′ = p″.
6) There exists an integer q, multiple of p, such that V^{2q}(x) = V^q(x) for all x ∈ A.

Proof. 1) Let x ∈ A. By Proposition 25, for every j ≥ 0 the sets O(V^j(x)) are finite; put d = min{|O(V^j(x))| ; j ≥ 0} and m = |O(x)| − d, so that |O(x)| = m + d. Let k ≥ 0 be the smallest element of the set {j ; |O(V^j(x))| = d}; we have V^{d+k}(x) = V^k(x), then |O(x)| = d + k, whence k = m and the relation V^{m+d}(x) = V^m(x) (‡). By construction of the integers d and m, the pair (d, m) satisfying the relation (‡) is minimal for the lexicographic order. Let us show that m ≤ n and that d divides p. The algebra A being (n, p)-periodic, we have V^{n+p}(x) = V^n(x), so 1 ≤ d ≤ p and m < m + d ≤ n + p. Composing (‡) with V^{n+p−m} we obtain V^{n+p+d}(x) = V^{n+p}(x), whence V^{n+d}(x) = V^n(x); we deduce that V^{n+qd}(x) = V^n(x) for all q ≥ 1. There exist two integers q and d′ such that p = qd + d′ with 0 ≤ d′ < d; then V^{n+p}(x) = V^{n+qd+d′}(x) = V^{n+d′}(x), so V^{n+d′}(x) = V^n(x), and by definition of d we deduce d′ = 0, so d divides p. Next, composing (‡) repeatedly we find V^{m+p}(x) = V^m(x), and by minimality of the pair (p, n) we get m ≤ n.
2) By 1) there exists (m′, d′) such that x is (m′, d′)-periodic with 0 ≤ m′ ≤ n and d′ dividing p; then by minimality of (d, m) we have (m, d) = (m′, d′).
3) By definition of an (n, p)-periodic algebra there exists y ∈ A which is (n, p)-periodic. Let q ≥ 1 be the integer such that p = qd; put x = Σ_{k=0}^{q−1} V^{n−m+kd}(y). We have V^{m+d}(x) = Σ_{k=1}^{q} V^{n+kd}(y) = V^n(y) + Σ_{k=1}^{q−1} V^{n+kd}(y) = V^m(x).
4) The "if" part is immediate. For the "only if" part, by restriction of the scalars on the F_2-vector space A we have ker(V^m(V^d − id)) = ker V^m ⊕ ker(V^d − id); consequently, if x is (m, d)-periodic then x ∈ ker(V^m(V^d − id)), so there exist a ∈ ker(V^d − id) and b ∈ ker V^m such that x = a + b, with moreover V^{d−1}(a) ≠ a and V^{m−1}(b) ≠ 0.
5) Let m = max(n′, n″) and d = lcm(p′, p″); by 1) of Proposition 24 we have V^{m+d}(x′) = V^m(x′) and V^{m+d}(x″) = V^m(x″), so V^{m+d}(x′ + x″) = V^m(x′ + x″). By result 1) there exist two integers μ, δ with 0 ≤ μ ≤ n and δ dividing p such that x′ + x″ is (μ, δ)-periodic, and we have (δ, μ) ⪯ (d, m). Let us show that μ ≤ m and that δ divides d. If δ = d, then immediately μ ≤ m. Consider the case δ < d: by property 4) we have x′ = a′ + b′ and x″ = a″ + b″ with V^m(b′) = V^m(b″) = 0, so V^m(b′ + b″) = 0; since moreover V^μ(b′ + b″) = 0 and V^{μ−1}(b′ + b″) ≠ 0, we get μ ≤ m. Let d = δq + r with 0 ≤ r < δ; from V^m(x′ + x″) = V^{m+d}(x′ + x″) = V^{m−μ}(V^{μ+δq+r}(x′ + x″)) = V^{m+r}(x′ + x″) and from r < δ we deduce r = 0, so δ divides d. Let us show that if p′ ≠ p″ then δ = d. Consider the case p′ < p″ and suppose δ ≠ d. Put g = gcd(p′, p″) and p′ = gq′, p″ = gq″ with gcd(q′, q″) = 1; we have d = gq′q″ = p′q″ = q′p″ and gcd(p′, q″) = gcd(q′, p″) = 1, and consequently, since δ divides d, we deduce that δ divides p′ or p″. If δ divides p″, from V^{δ+μ}(x′ + x″) = V^μ(x′ + x″) (⋆) and from result 1) of Proposition 24 we get V^{p″+m}(x′ + x″) = V^m(x′ + x″); now V^{p″+m}(x″) = V^m(x″), and consequently V^{p″+m}(x′) = V^m(x′), which by result 4) of Proposition 24 implies that p′ divides p″; it follows that d = p′, and hence that δ divides p′. Then from relation (⋆) and result 1) of Proposition 24 we get V^{p′+m}(x′ + x″) = V^m(x′ + x″), which together with V^{p′+m}(x′) = V^m(x′) implies V^{p′+m}(x″) = V^m(x″) with p′ < p″, a contradiction.
6) Let k ≥ 1 be the smallest integer such that kp > n. Put q = kp; using 1) of Proposition 24 we have V^q = V^{q−n} V^n = V^{q−n} V^{n+kp} = V^{q−n} V^{n+q} = V^{2q}.
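Item 4) can be checked by brute force, in its coarser direct-sum form ker(V^m(V^d − id)) = ker V^m ⊕ ker(V^d − id) with d = p and m = n, on the algebra of Example 18 (our own encoding):

```python
# Brute-force check on the (2, 3)-periodic algebra of Example 18: every
# element x splits uniquely as x = a + b with V^p(a) = a and V^n(b) = 0.
# Elements are encoded as sets of basis labels; over F2 their sum is the
# symmetric difference of the sets.
from itertools import combinations

p, n = 3, 2
basis = [("a", i) for i in range(p)] + [("b", j) for j in range(n)]
square = {("a", i): {("a", (i + 1) % p)} for i in range(p)}
square.update({("b", j): {("b", j + 1)} if j < n - 1 else set() for j in range(n)})

def Vk(elem, k):
    for _ in range(k):
        out = set()
        for b in elem:
            out ^= square[b]
        elem = out
    return elem

elems = [set(c) for r in range(len(basis) + 1) for c in combinations(basis, r)]
fixed = [a for a in elems if Vk(a, p) == a]      # ker(V^p - id)
nilp = [b for b in elems if Vk(b, n) == set()]   # ker V^n

for x in elems:
    splits = [(a, b) for a in fixed for b in nilp if (a ^ b) == x]
    assert len(splits) == 1
print("unique decomposition x = a + b verified for all", len(elems), "elements")
```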
Let us illustrate result 5) of the preceding proposition with an example in the case p′ = p′′.

Proof (of Proposition 30). (i) ⇒ (ii) By Proposition 27 there exists x ∈ A which is (n, p)-periodic, hence |O(x)| = n + p. Restricting scalars to F_2, the operator V is linear and A = ker(V^{n+p} − V^n); it then suffices to complete {x} to a basis of ker(V^{n+p} − V^n).

(ii) ⇒ (i) Let x = Σ_{i∈I} α_i e_i; we have V^{n+p}(x) = Σ_{i∈I} α_i^{2^{n+p}} V^{n+p}(e_i). Now by Theorem 22 we have F ⊂ F_{2^p}, so α_i^{2^p} = α_i, whence Σ_{i∈I} α_i^{2^{n+p}} V^{n+p}(e_i) = Σ_{i∈I} α_i^{2^n} V^n(e_i) = V^n(x). Finally, since by hypothesis there exists i ∈ I with |O(e_i)| = n + p, Proposition 27 shows that e_i is (n, p)-periodic, and therefore the algebra A is (n, p)-periodic.

(i) ⇒ (iii) Restricting scalars to F_2, the operator V is linear on the F_2-space A, and we have A = ker(V^n(V^p − id)), hence A = ker(V^n) ⊕ ker(V^p − id).
If n ≠ 0, from the minimality of the pair (p, n) we deduce that there exist x, y ∈ A such that V^{n+p−1}(x) ≠ V^{n−1}(x) and V^{n+p−1}(y) ≠ V^n(y), hence x ∉ ker V^{n−1} and y ∉ ker(V^{p−1} − id). If n = 0, one follows an analogous argument for A = ker(V^p − id).

(iii) ⇒ (i) Let x = Σ_{i∈I} α_i a_i + Σ_{j∈J} β_j b_j; taking the hypotheses into account, we have

V^{n+p}(x) = Σ_{i∈I} α_i^{2^{n+p}} V^{n+p}(a_i) + Σ_{j∈J} β_j^{2^{n+p}} V^{n+p}(b_j) = Σ_{j∈J} β_j^{2^{n+p}} V^{n+p}(b_j)

and

V^n(x) = Σ_{i∈I} α_i^{2^n} V^n(a_i) + Σ_{j∈J} β_j^{2^n} V^n(b_j) = Σ_{j∈J} β_j^{2^n} V^n(b_j).

Let us now show that the pair (p, n) is minimal for the lexicographic order. Let (i, j) ∈ I × J be such that |O(a_i)| = n and |O(b_j)| = p; this implies that V^k(a_i) ≠ 0 for every 0 ≤ k < n. Set z = a_i + b_j; let us show that V^{m+q}(z) ≠ V^m(z) for every (m, q) ∈ N × N* such that (q, m) ≺ (p, n). Suppose there exists a pair (m, q) with (q, m) ≺ (p, n) and V^{m+q}(z) = V^m(z) (*). If q = p, then from (q, m) ≺ (p, n) it follows that m < n; now V^{m+p}(z) = V^{m+p}(a_i) + V^m(b_j) and V^m(z) = V^m(a_i) + V^m(b_j), hence V^{m+q}(a_i) = V^m(a_i) (**). We have m + q < n, since otherwise we would get V^m(a_i) = V^{m+q}(a_i) = 0 with m < n; then, composing relation (**) with V^{n−m−q}, we obtain V^{n−q}(a_i) = V^n(a_i) = 0 with n − q < n, which contradicts the hypothesis |O(a_i)| = n.

Hence q < p. If m < n, composing (*) with V^{kp−m} where kp > n, we get V^{kp+q}(z) = V^{kp}(z), which entails V^q(b_j) = b_j and therefore |O(b_j)| < p, a contradiction. If q < p and m ≥ n, then V^{m+q}(z) = V^{m+q}(b_j) and V^m(z) = V^m(b_j), from which we deduce that |O(V^m(b_j))| ≤ q,

For the algebra structure defined on A by e_i^2 = e_{σ(i)}, the other products being taken arbitrarily in A, the evolution operator is (0, m)-periodic.

Proof. For x = Σ_{i=1}^m α_i e_i we have V^m(x) = Σ_{i=1}^m α_i^{2^m} e_{σ^m(i)} = x, and for every k < m we have V^k(e_1) = e_{k+1} ≠ e_1.
Proposition 33. Let n and p be integers with n ≥ 0 and p = 2^r q, where r ≥ 0 and q is odd. Then an (n, p)-periodic F-algebra of dimension d is semi-isomorphic to an F-algebra, denoted A_{(s,t)}, defined by the data of:
– an M-tuple of integers s = (s_1, ..., s_M) such that M ≥ 1 and n = s_1 ≥ ... ≥ s_M ≥ 1,
– an N-tuple of integers t = (t_1, ..., t_N) such that N ≥ 1 and r = t_1 ≥ ... ≥ t_N ≥ 0,
whose components satisfy (s_1 + ··· + s_M) + (2^{t_1} + ··· + 2^{t_N})q = d;
– a basis ∪_{i=1}^M {a_{i,j}; 1 ≤ j ≤ s_i} ∪ ∪_{i=1}^N {b_{i,j}; 1 ≤ j ≤ 2^{t_i} q},
– and the multiplication table: a_{i,j}^2 = a_{i,j+1} if 1 ≤ j ≤ s_i − 1, a_{i,j}^2 = 0 if j = s_i (i = 1, ..., M); b_{i,j}^2 = b_{i,σ_i(j)} (i = 1, ..., N; j = 1, ..., 2^{t_i} q).

Proof. Restricting scalars to F_2, the operator V is linear on the F_2-space A and A = ker(V^n(V^p − id)) = ker V^n ⊕ ker((V^q − id)^{2^r}). The restriction of V to ker V^n is nilpotent of degree n; the existence and the properties of the basis ∪_{i=1}^M {a_{i,j}; 1 ≤ j ≤ s_i} of ker V^n follow from Proposition 10. For the subspace ker((V^q − id)^{2^r}), the Frobenius decomposition applied to the restriction of V to ker((V^q − id)^{2^r}) provides a decomposition into a direct sum of cyclic subspaces; taking the union of their bases we obtain a basis ∪_{i=1}^N {b_{i,j}; 1 ≤ j ≤ 2^{t_i} q} of ker((V^q − id)^{2^r}) satisfying the products given in the statement. By the lemma above, the restrictions of V to the subspaces spanned by {b_{i,j}; 1 ≤ j ≤ 2^{t_i} q} are (0, 2^{t_i} q)-periodic; in particular for i = 1 the restriction is (0, p)-periodic. By Proposition 30, the algebra A_{(s,t)} is therefore (n, p)-periodic. Finally, the sequences of similarity invariants of these two restrictions being respectively (X^{s_1}, ..., X^{s_M}) and ((X^q − 1)^{2^{t_1}}, ..., (X^q − 1)^{2^{t_N}}), we deduce at once that the spaces A_{(s,t)} and A_{(s′,t′)} are isomorphic if and only if s = s′ and t = t′. These results being preserved under extension of scalars from F_2 to F, the result is proved.
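Over F_2 squaring of scalars is the identity, so V coincides with the linear map whose matrix M has the images of the basis vectors as columns; the (n, p)-periodicity of an algebra of type A_{(s,t)} as in Proposition 33 can then be checked by comparing powers of M. A minimal sketch (the block sizes and helper names below are illustrative, not from the text):

```python
def matmul2(A, B):
    """Product of two 0/1 matrices over F_2."""
    n, m, k = len(A), len(B[0]), len(B)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % 2
             for j in range(m)] for i in range(n)]

def matpow2(M, e):
    """e-th power of M over F_2 (e >= 1)."""
    R = M
    for _ in range(e - 1):
        R = matmul2(R, M)
    return R

# Basis order: a11, a12 | b1, b2, b3.  Squaring sends a11 -> a12 -> 0
# and cycles b1 -> b2 -> b3 -> b1; the columns of M are these images.
M = [
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
]

n, p = 2, 3
assert matpow2(M, n + p) == matpow2(M, n)          # V^{n+p} = V^n
assert matpow2(M, n + p - 1) != matpow2(M, n - 1)  # periodicity starts no earlier
```

Here n = s_1 = 2 and p = q = 3 (one nilpotent chain of length 2 and one cycle of length 3), so V^5 = V^2 while V^4 ≠ V.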
Plenary train evolution operators.
Definition 34. Let A be an F-algebra. The algebra A is plenary train of degree n if there exists a monic polynomial T ∈ F[X] of degree n ≥ 1 such that T(V)(x) = 0 for every x ∈ A, and if S(V) ≠ 0 for every S ∈ F[X] of degree < n. In this case we say that the evolution operator V : A → A is plenary train of degree n.

It follows from the definition that the polynomial T satisfying T(V) = 0 is unique: indeed, if there existed a monic R ∈ F[X] of degree n with R ≠ T and R(V) = 0, we would have (T − R)(V) = 0 with T − R ≠ 0 of degree < n, a contradiction. This polynomial T is called the train polynomial of A.

Remark 35. Among the plenary train algebras one finds the nil-plenary algebras, whose train polynomials are of the type T(X) = X^n, n ≥ 2, and the ultimately periodic algebras, which satisfy identities of the form X^{n+p} − X^n that are not necessarily train polynomials of these algebras. Indeed, the minimality conditions on the degree not being the same, an algebra can be both ultimately periodic and plenary train without the two polynomials coinciding. Consider for example the F_2-algebra A with basis (e_1, e_2, e_3) defined by V(e_1) = e_2, V(e_2) = e_3 and V(e_3) = e_1 + e_3. For x = α_1 e_1 + α_2 e_2 + α_3 e_3 one finds V(x) = α_3 e_1 + α_1 e_2 + (α_2 + α_3) e_3, V^2(x) = (α_2 + α_3) e_1 + α_3 e_2 + (α_1 + α_2 + α_3) e_3 and V^3(x) = (α_1 + α_2 + α_3) e_1 + (α_2 + α_3) e_2 + (α_1 + α_2) e_3; hence V^3(x) + V^2(x) + x = 0 and A is plenary train of degree 3. On the other hand, since in F_2[X] the polynomial X^3 + X^2 + 1 divides X^8 − X, we can conclude that the algebra A is (1, 7)-periodic.
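The computation in Remark 35 can be replayed mechanically: over F_2 the operator V is linear, with matrix M whose columns are V(e_1) = e_2, V(e_2) = e_3, V(e_3) = e_1 + e_3. A short sketch (helper names are illustrative):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % 2
             for j in range(3)] for i in range(3)]

def add(A, B):
    return [[(A[i][j] + B[i][j]) % 2 for j in range(3)] for i in range(3)]

def power(M, e):
    R = M
    for _ in range(e - 1):
        R = mul(R, M)
    return R

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = [[0, 0, 1],   # e1-components of V(e1), V(e2), V(e3)
     [1, 0, 0],   # e2-components
     [0, 1, 1]]   # e3-components
Z = [[0] * 3 for _ in range(3)]

assert add(add(power(M, 3), power(M, 2)), I) == Z  # V^3 + V^2 + id = 0
assert power(M, 8) == M                            # V^8 = V: (1,7)-periodicity
```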
In view of this remark, the algebras considered in what follows are assumed not to be nil-plenary, and the polynomials considered have distinct degrees and valuations.
Example 36. 1) Using the algebra defined in 1) of Example 20, one shows that in the case n = 5 we have V^5 = V^3 + V; for n = 7 one shows that V^7 = V^5 + V, and when n = 13 we have V^{13} = V^{11} + V^9 + V^3 + V.
2) For the algebra defined in 2) of Example 20, one shows that for n = 5 the evolution operator V satisfies T(V) = 0 for T(X) = X^5 − X^4 − X^3 − X^2 − X − 1; for n = 7, taking T(X) = X^7 − X^6 − X^3 − X^2 − X − 1, we have T(V) = 0.
For plenary train algebras we have a result analogous to Proposition 23 for ultimately periodic algebras.

Proposition 37. If the field F is finite, then every finite-dimensional F-algebra is plenary train.

Proof. It suffices to repeat the beginning of the proof of Proposition 23; the polynomial T constructed from the algebra A is the train polynomial of A.
Lemma 38. Let A be a plenary train algebra of degree n with train polynomial T; then there exists x̃ ∈ A satisfying T(V)(x̃) = 0 and S(V)(x̃) ≠ 0 for every nonzero S ∈ F[X] of degree < n.

Proof. Let T = ∏_{k=1}^r T_k^{ν_k} be the decomposition of T into irreducible factors in F[X]. Using the fact that V is a morphism of the additive group, we have A = ker T(V) = ⊕_{k=1}^r ker T_k^{ν_k}(V). Let us show that for each 1 ≤ k ≤ r there exists an element x_k ∈ ker T_k^{ν_k}(V) \ ker T_k^{ν_k−1}(V). Indeed, if for some integer k we had T_k^{ν_k−1}(V)(x_k) = 0 for every x_k ∈ ker T_k^{ν_k}(V), then, considering the polynomial R_k = T_k^{ν_k−1} ∏_{i≠k} T_i^{ν_i}, since every x ∈ A can be written x = Σ_{i=1}^r x_i with x_i ∈ ker T_i^{ν_i}(V), we would obtain R_k(V)(x) = 0 with R_k of degree < n, contradicting the definition of T. Set x̃ = Σ_{k=1}^r x_k with x_k ∈ ker T_k^{ν_k}(V) \ ker T_k^{ν_k−1}(V) for 1 ≤ k ≤ r. Then for every integer 1 ≤ k ≤ r we have R_k(V)(x̃) ≠ 0: if R_k(V)(x̃) = 0 for some k, then x̃ ∈ ker T_k^{ν_k−1}(V) ⊕ ker(∏_{i≠k} T_i^{ν_i})(V), which implies x̃ = 0, whence

Proof of Theorem 40. Write T(X) = X^n − Σ_{k=0}^{n−1} α_k X^k with α_k ∈ F. By the lemma above, there exists x̃ ∈ A such that T(V)(x̃) = 0 and S(V)(x̃) ≠ 0 for every nonzero polynomial S of degree < n. From T(V)(x̃) = 0 we get V^n(x̃) − Σ_{k=0}^{n−1} α_k V^k(x̃) = 0, and for every λ ∈ F, from T(V)(λx̃) = 0 it follows that λ^{2^n} V^n(x̃) − Σ_{k=0}^{n−1} λ^{2^k} α_k V^k(x̃) = 0; we deduce that Σ_{k=0}^{n−1} α_k (λ^{2^n} − λ^{2^k}) V^k(x̃) = 0. Now S(X) = Σ_{k=0}^{n−1} α_k (λ^{2^n} − λ^{2^k}) X^k is an element of F[X] of degree < n such that S(V)(x̃) = 0; consequently S = 0. It follows that α_k (λ^{2^n} − λ^{2^k}) = 0 for every 0 ≤ k < n, and hence λ^{2^n} − λ^{2^k} = 0 for every k ∈ E(T); we deduce that λ^{2^i} − λ^{2^j} = 0 for all i, j ∈ E(T), which entails λ^{2^{j−i}} − λ = 0 for all i, j ∈ E(T) with i < j. Consequently the scalar λ is a root of the polynomials X^{2^{j−i}} − X; in other words, F ⊂ F_{2^{j−i}} for all i, j ∈ E(T) with i < j, which is equivalent to F ⊂ F_{2^{σ(T)}}.
From this we deduce a description of the train polynomials.

Corollary 41. Let A be an F_{2^m}-algebra. If a polynomial T ∈ F_{2^m}[X] of degree n and valuation v is a train polynomial of A, then there exists an integer σ ≥ 1 such that m divides σ, σ divides n − v, and

T(X) = Σ_{k=0}^{(n−v)/σ} α_{v+kσ} X^{v+kσ}, with E(T) ⊂ {v + kσ; 0 ≤ k ≤ (n−v)/σ}, α_v ≠ 0 and α_n = 1.

Proof. Let T ∈ F_{2^m}[X], of degree n, valuation v and striction σ, be a train polynomial of A. By Theorem 40, for T to be a train polynomial of A we must have F_{2^m} ⊂ F_{2^σ}, which implies that m divides σ. Let n_0, ..., n_p be the elements of E(T), indexed so that n_0 < ··· < n_p; then n_0 is the valuation of T and n_p its degree. We have T(X) = Σ_{i=0}^p α_{n_i} X^{n_i}; by definition of the striction of T, for every 0 ≤ i ≤ p there is an integer k_i such that n_i = n_0 + k_i σ. It follows that n_{i+1} − n_i = (k_{i+1} − k_i)σ, whence Σ_{i=0}^{p−1}(n_{i+1} − n_i) = σ Σ_{i=0}^{p−1}(k_{i+1} − k_i); since n_0 = v, n_p = n and k_0 = 0, we get n − v = k_p σ, from which we conclude that σ divides n − v and that E(T) ⊂ {v + kσ; 0 ≤ k ≤ (n−v)/σ}. Finally, setting α_{v+kσ} = 0 whenever v + kσ ∉ E(T), we obtain the expression for T given in the statement.
We also have a factorization result for train polynomials.

Proposition 42. The train polynomials of plenary train F-algebras decompose into products of polynomials irreducible in F_2[X].

Proof. Let T ∈ F[X] be the train polynomial of a plenary train F-algebra of degree n; let us show that there exist integers s ≥ n and m ≥ 1 such that T divides the polynomial X^{s+m} − X^s. For every integer k ≥ n, denote by R_k the remainder of the Euclidean division in F[X] of X^k − X by T. It follows from Theorem 40 that the field F is finite; consequently the set {R_k; k ≥ n} is finite, of cardinality at most |F|^n, so there exist integers s and t with n ≤ s < t and R_s = R_t. Then T divides (X^t − X) − (X^s − X) = X^t − X^s, which can be written X^{s+m} − X^s with m = t − s. Write m = 2^a b with b an odd integer; then X^s(X^m − 1) = X^s (X^b − 1)^{2^a}. Now we have the decomposition X^b − 1 = ∏_{d|b} Φ_d(X), where Φ_d ∈ F_2[X] is the d-th cyclotomic polynomial; moreover Φ_d admits a factorization into monic irreducible polynomials in F_2[X].
The following definitions will be used in the classification of plenary train algebras.

Definition 43. Let A be a plenary train F_{2^p}-algebra satisfying the train polynomial T = X^v f_1^{r_1} ··· f_m^{r_m}, where f_1, ..., f_m are monic polynomials irreducible in F_2[X] such that 1 ≤ deg f_k < deg f_{k+1} and f_1(0) = ··· = f_m(0) = 1.
– A partition {I_1, ..., I_s} of {1, ..., m} is said to be F_{2^p}-compatible if p divides σ(∏_{i∈I_k} f_i^{r_i}) for every 1 ≤ k ≤ s and if the partition {I_1, ..., I_s} satisfying this condition is the finest one.
– Given an F_{2^p}-compatible partition {I_1, ..., I_s} of {1, ..., m}, let S be the map which to I_k associates the set D_k of divisors of degree ≥ 1 of the polynomial ∏_{i∈I_k} f_i^{r_i}, such that D_k is totally ordered for the divisibility relation in

Example 44. 1) Let T = X^{12} + X^{10} + X^4 + 1. We have σ(T) = 2, so an F_{2^p}-algebra can satisfy T only if p divides 2. We have T = (X^6 + X^5 + X^2 + 1)^2 = f_1^2 f_2^2 f_3^2, where f_1 = X + 1, f_2 = X^2 + X + 1 and f_3 = X^3 + X^2 + 1;

2) Let T = X^6 + 1. We have σ(T) = 6, so an F_{2^p}-algebra can satisfy T only if p divides 6. Now T = (X^3 + 1)^2 = f_1(X)^2 f_2(X)^2 with f_1 = X + 1 and f_2 = X^2 + X + 1, hence σ(f_i^2) = 2. Consequently, if p = 1 or p = 2, the partition {{1}, {2}} is F_{2^p}-compatible; in the case p = 1 an F_2-compatible subdivision of this partition is {{f_1, f_1^2}, {f_2, f_2^2}}, while in the case p = 2 the F_4-compatible subdivision of this partition is {{f_1^2}, {f_2^2}}. If p = 3, it is the partition {{1, 2}} that is F_8-compatible, and it has the F_8-compatible subdivision {{f_1 f_2, (f_1 f_2)^2}}.

3) Let T = X^{12} + X^9 + X^3 + 1. We have σ(T) = 3, so an F_{2^p}-algebra can satisfy T only if p = 1 or p = 3. We have T = f_1^2 f_2^2 f_3, where f_1 = X + 1, f_2 = X^2 + X + 1 and f_3 = X^6 + X^3 + 1. If p = 1, the partition {{1}, {2}, {3}} is F_2-compatible and it admits the F_2-compatible subdivision {{f_1, f_1^2}, {f_2, f_2^2}, {f_3}}. In the case p = 3, we have f_1 f_2 = X^3 + 1, f_1^2 f_2 = X^4 + X^3 + X + 1, f_1 f_2^2 = X^5 + X^4 + X^3 + X^2 + X + 1 and f_1^2 f_2^2 = X^6 + 1; consequently the partition {{1, 2}, {3}} is F_8-compatible and the associated F_8-compatible subdivision is {{f_1 f_2, (f_1 f_2)^2}, {f_3}}.
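The striction values quoted in Example 44 can be checked mechanically: by Definition 39, σ(T) is the gcd of all differences of exponents appearing in T. A minimal sketch (the helper name `striction` is illustrative):

```python
from math import gcd
from functools import reduce

def striction(exponents):
    """gcd of all differences of exponents of T (Definition 39)."""
    es = sorted(exponents)
    diffs = [j - i for a, i in enumerate(es) for j in es[a + 1:]]
    return reduce(gcd, diffs)

assert striction({0, 4, 10, 12}) == 2   # T = X^12 + X^10 + X^4 + 1
assert striction({0, 6}) == 6           # T = X^6 + 1
assert striction({0, 3, 9, 12}) == 3    # T = X^12 + X^9 + X^3 + 1
```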
The following results will be used in the classification of plenary train algebras.

Lemma 45. Suppose given:
• P ∈ F_2[X], P(X) = X^n + Σ_{k=v}^{n−1} α_k X^k, a monic polynomial of valuation v ≠ n and striction σ(P);
• an F-vector space A and a basis B = (e_0, ..., e_{n−1}) of A;
• the F-algebra structure defined on A by the data e_i^2 = C_P(e_i) (i = 0, ..., n − 1), where C_P is the endomorphism of A defined by:
C_P(e_i) = e_{i+1} if 0 ≤ i < n − 1,

For x ∈ A, x = Σ_{i=0}^{n−1} β_i e_i with β_i ∈ F, we find:

(3.1) P(V)(x) = Σ_{i=0}^{n−1} ( β_i^{2^n} V^n(e_i) + Σ_{k=v}^{n−1} α_k β_i^{2^k} V^k(e_i) ).

Suppose F ⊂ F_{2^{σ(P)}} and let us show that P(V)(x) = 0 for every x ∈ A. For every k ∈ E(P), the integer σ(P) divides k − v, so with F ⊂ F_{2^{σ(P)}} we deduce that β_i^{2^k} = β_i^{2^v}; consequently, for every v ≤ k ≤ n − 1 we have α_k β_i^{2^k} = α_k β_i^{2^v}, whence

β_i^{2^n} V^n(e_i) + Σ_{k=v}^{n−1} α_k β_i^{2^k} V^k(e_i) = β_i^{2^v} ( V^n(e_i) + Σ_{k=v}^{n−1} α_k V^k(e_i) ) = β_i^{2^v} P(V)(e_i) = 0

for every 0 ≤ i < n, and finally P(V)(x) = 0. Conversely, if P(V)(x) = 0 for every x ∈ A, then taking x = β_0 e_0 in relation (3.1) we obtain β_0^{2^n} V^n(e_0) + Σ_{k=v}^{n−1} α_k β_0^{2^k} V^k(e_0) = 0, or again, using relation (⋆), Σ_{k=v}^{n−1} α_k (β_0^{2^n} + β_0^{2^k}) e_k = 0, whence β_0^{2^n} + β_0^{2^k} = 0 for every k ∈ E(P) and every β_0 ∈ F. This implies β_0^{2^{j−i}} = β_0 for all i, j ∈ E(P) with i < j and all β_0 ∈ F, and hence F ⊂ F_{2^{σ(P)}}.
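Over F = F_2 the inclusion F ⊂ F_{2^{σ(P)}} of Lemma 45 always holds, and on the basis (e_0, ..., e_{n−1}) the operator V acts as the companion-type map C_P, so the identity P(V) = 0 reduces to P(C_P) = 0, which can be verified numerically. A sketch (the helper names `companion` and `poly_eval` are illustrative):

```python
def companion(coeffs):
    """Companion-type matrix over F_2 of P(X) = X^n + sum_k coeffs[k] X^k.
    Column i is the image of e_i: C(e_i) = e_{i+1} for i < n-1 and
    C(e_{n-1}) = sum_k coeffs[k] e_k."""
    n = len(coeffs)
    return [[coeffs[i] if j == n - 1 else (1 if i == j + 1 else 0)
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def poly_eval(coeffs, C):
    """P(C) over F_2, with P encoded as in `companion`."""
    n = len(coeffs)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    powers, Pk = [I], I
    for _ in range(n):
        Pk = matmul(Pk, C)
        powers.append(Pk)          # powers[k] = C^k
    R = powers[n]                  # start from C^n
    for k, a in enumerate(coeffs):
        if a:                      # add alpha_k * C^k (char 2: + is XOR)
            R = [[(R[i][j] + powers[k][i][j]) % 2 for j in range(n)]
                 for i in range(n)]
    return R

# P(X) = X^3 + X^2 + 1 -> coefficients of X^0, X^1, X^2:
coeffs = [1, 0, 1]
C = companion(coeffs)
assert poly_eval(coeffs, C) == [[0] * 3 for _ in range(3)]
```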
Proposition 46. Let D ∈ F_2[X] be a monic polynomial which is not reduced to a monomial, and let D_1, ..., D_m ∈ F_2[X] be divisors of D such that D_m = D and D_k divides D_{k+1} for every 1 ≤ k < m. Let p ≥ 1 be an integer such that p divides σ(D_k) for every 1 ≤ k ≤ m. Then the F-vector space A, with F ⊂ F_{2^p}, with basis ∪_{i=1}^m {e_{i,j}; 0 ≤ j ≤ d_i − 1} where d_i = deg D_i, equipped with the algebra law e_{i,j}^2 = C_{D_i}(e_{i,j}) (1 ≤ i ≤ m, 0 ≤ j ≤ d_i − 1), the unmentioned products being arbitrary in A, satisfies D(V)(x) = 0 for every x ∈ A.

Proof. Denote by A_i the subspace spanned by {e_{i,j}; 0 ≤ j ≤ d_i − 1}; by Lemma 45 we have D_i(V)(x_i) = 0 for every x_i ∈ A_i. Let x ∈ A; we can write x = Σ_{i=1}^m x_i with x_i ∈ A_i, and we have D(V)(x) = Σ_{i=1}^m D(V)(x_i). Now for each 1 ≤ i ≤ m there exists Q_i ∈ F_2[X] such that D = Q_i D_i; it follows that D(V)(x) = Σ_{i=1}^m Q_i(V)(D_i(V)(x_i)) = 0 for every x ∈ A.
We can now give a classification of plenary train algebras.

Proposition 47. Let T ∈ F_2[X] be a monic polynomial of degree n and valuation v ≠ n, and let p ≥ 1 be an integer dividing σ(T). Let {I_1, ..., I_s} be an F_{2^p}-compatible partition and {S(I_1), ..., S(I_s)} an F_{2^p}-compatible subdivision associated with this partition.

If for each k = 1, ..., s we write:
S(I_k) = {T_{(k,1)}, ..., T_{(k,n_k)}}, where 1 ≤ n_k, T_{(k,i+1)} | T_{(k,i)}, 1 ≤ deg T_{(k,n_k)} and p | σ(T_{(k,i)});
δ_{(k,i)} = deg T_{(k,i)}, for 1 ≤ i ≤ n_k;
q_k = (q_{(k,1)}, ..., q_{(k,r_k)}), where 1 ≤ r_k ≤ n_k and 1 ≤ q_{(k,1)}, ..., q_{(k,r_k)};
and if we set v = (v_1, ..., v_m) with v = v_1 ≥ ... ≥ v_m ≥ 1 and q = (q_1, ..., q_s),
then an F_{2^p}-algebra of dimension d which is plenary train with train polynomial T is semi-isomorphic to the F_{2^p}-algebra A(v, q) defined by the data of:
– a basis B = ∪_{i=1}^m {a_{i,j}; 1 ≤ j ≤ v_i} ∪ ∪_{k=1}^s ∪_{i=1}^{r_k} ∪_{j=1}^{q_{(k,i)}} {b_{(k,i),j,p}; 0 ≤ p ≤ δ_{(k,i)} − 1} such that Σ_{i=1}^m v_i + Σ_{k=1}^s Σ_{i=1}^{r_k} δ_{(k,i)} q_{(k,i)} = d,
– and the multiplication table: a_{i,j}^2 = a_{i,j+1} if 1 ≤ j ≤ v_i − 1, a_{i,j}^2 = 0 if j = v_i (i = 1, ..., m); (b_{(k,i),j,p})^2 = C_{T_{(k,i)}}(b_{(k,i),j,p}) for 1 ≤ k ≤ s, 1 ≤ i ≤ r_k, 1 ≤ j ≤ q_{(k,i)}, 0 ≤ p ≤ δ_{(k,i)} − 1.

Proof. Let A be an F_{2^p}-algebra, plenary train with train polynomial T. For every 1 ≤ k ≤ s and every 1 ≤ i ≤ n_k − 1 we have deg T_{(k,i+1)} ≤ deg T_{(k,i)}. We have T = X^v ∏_{k=1}^s T_{(k,1)}; by definition of an F_{2^p}-compatible subdivision the polynomials T_{(1,1)}, ..., T_{(s,1)} are pairwise coprime, so, restricting scalars to F_2, A becomes an F_2-space and A = ker V^v ⊕ ⊕_{k=1}^s ker T_{(k,1)}(V). The restriction of V to ker V^v is nilpotent of degree v; the existence and the properties of the basis ∪_{i=1}^m {a_{i,j}; 1 ≤ j ≤ v_i} of ker V^v follow from Proposition 30.

Let us construct a basis of ⊕_{k=1}^s ker T_{(k,1)}(V) satisfying the multiplication table given in the statement. For every 1 ≤ k ≤ s we have ker T_{(k,1)}(V) ≠ {0}, for otherwise A would satisfy a train polynomial of degree < n. Consider the restriction of V to ker T_{(k,1)}(V); since for every 1 ≤ i ≤ n_k − 1 the polynomial T_{(k,i+1)} divides T_{(k,i)}, we have ker T_{(k,i+1)}(V) ⊂ ker T_{(k,i)}(V). Let E_{(k,i)} be a complement of ker T_{(k,i+1)}(V) in ker T_{(k,i)}(V); setting E_{(k,n_k)} = ker T_{(k,n_k)}(V), we get the decomposition ker T_{(k,1)}(V) = ⊕_{i=1}^{n_k} E_{(k,i)}. For each integer 1 ≤ k ≤ s, denote by r_k ≥ 1 the largest integer such that E_{(k,r_k)} ≠ {0}; for each 1 ≤ i ≤ r_k there exists an element b_{(k,i),1,0} ∈ E_{(k,i)} with b_{(k,i),1,0} ≠ 0. The polynomial ∏_{k=1}^s T_{(k,1)} being of valuation 0, the polynomial T_{(k,i)} is also of valuation 0, and hence the system {V^j(b_{(k,i),1,0}); 0 ≤ j < δ_{(k,i)}} is free. Writing b_{(k,i),1,j} = V^j(b_{(k,i),1,0}), we have (b_{(k,i),1,j})^2 = V^{j+1}(b_{(k,i),1,0}) = b_{(k,i),1,j+1} for 0 ≤ j ≤ δ_{(k,i)} − 2, and if we write T_{(k,i)}(X) = X^{δ_{(k,i)}} + Σ_{p=0}^{δ_{(k,i)}−1} λ_p X^p, we obtain (b_{(k,i),1,δ_{(k,i)}−1})^2 = V^{δ_{(k,i)}}(b_{(k,i),1,0}) = Σ_{p=0}^{δ_{(k,i)}−1} λ_p V^p(b_{(k,i),1,0}) = Σ_{p=0}^{δ_{(k,i)}−1} λ_p b_{(k,i),1,p}; consequently (b_{(k,i),1,δ_{(k,i)}−1})^2 = C_{T_{(k,i)}}(b_{(k,i),1,δ_{(k,i)}−1}). For each pair of integers 1 ≤ k ≤ s and 1 ≤ i ≤ r_k, denote by S_{(k,i),1} the subspace spanned by the system {b_{(k,i),1,j}; 0 ≤ j < δ_{(k,i)}}. If S_{(k,i),1} ≠ E_{(k,i)}, there is a nonzero element b_{(k,i),2,0} in a complement of S_{(k,i),1} in E_{(k,i)}, from which one constructs, as above, the free system {b_{(k,i),2,j}; 0 ≤ j < δ_{(k,i)}}. Continuing in this way, one obtains q_{(k,i)} elements b_{(k,i),j,0} (1 ≤ j ≤ q_{(k,i)}) of E_{(k,i)} which generate the basis ∪_{j=1}^{q_{(k,i)}} {b_{(k,i),j,p}; 0 ≤ p < δ_{(k,i)}} of E_{(k,i)}, whose elements satisfy the multiplication table given in the proposition.

Finally, it follows from Proposition 46 that the algebra A(v, q) obtained above satisfies the train polynomial T.
Evolution algebras over a finite field of characteristic 2

In what follows we shall see that evolution algebras over the field F_2 are in certain cases nilpotent, and in all cases ultimately periodic and plenary train.
Evolution algebras were studied and popularized under this name by J.P. Tian [21]. They were first introduced and used by Etherington [[8], p. 34] in the study of the dynamics of a diallelic system under strict self-fertilization and in the absence of genetic mutation. Introducing the vector space morphism S : A → A defined on the natural basis (e_i)_{i∈I} by S(e_i) = Σ_{j∈I} α_{ji} e_j, we have e_i^2 = S(e_i), and the evolution operator on A is written, for every x ∈ A with x = Σ_{i∈I} λ_i e_i, in the form

(4.1) V(x) = Σ_{i∈I} λ_i^2 S(e_i).

In particular, if K = F_2, one easily shows that

(4.2) V^k(x) = Σ_{i∈I} λ_i S^k(e_i), (k ≥ 1).

In finite dimension, the matrix of the endomorphism S in the natural basis is called the matrix of structure constants of the evolution algebra relative to the natural basis. In the case K = F_{2^q} (q ≥ 2), the situation is less simple. From relation (4.1) we obtain V^2(x) = Σ_{k∈I} Σ_{i,j∈I} α_{kj} α_{ji}^2 λ_i^4 e_k, and by induction:

V^{k+1}(x) = Σ_{j∈I} Σ_{i_0,...,i_k∈I} α_{j i_k} α_{i_k i_{k−1}}^2 ··· α_{i_1 i_0}^{2^k} λ_{i_0}^{2^{k+1}} e_j, (k ≥ 1).

In finite dimension this is written more simply using the Hadamard product of matrices, (a_{ij})_{i,j} ⊙ (b_{ij})_{i,j} = (a_{ij} b_{ij})_{i,j}, in the form

V^{k+1}(x) = Σ_{j=1}^d λ_j^{2^{k+1}} (S S^{⊙2} S^{⊙4} ··· S^{⊙2^k})(e_j).

With this we immediately have the following result.

Proposition 50. Let A be a finite-dimensional evolution algebra over the field F_{2^q} (q ≥ 2) and let S be the matrix of structure constants relative to a natural basis of A. The evolution operator V of A is nilpotent if and only if there exists an integer k ≥ 1 such that S S^{⊙2} S^{⊙4} ··· S^{⊙2^k} = 0.
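A numerical sketch of the criterion of Proposition 50 over F_4 (the structure matrices below are invented for illustration). F_4 = {0, 1, a, a+1} is encoded as {0, 1, 2, 3}; addition is XOR, and the Hadamard square squares each entry (the Frobenius map, which swaps 2 and 3):

```python
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]  # multiplication table of F_4 with a = 2, a+1 = 3

def gf4_matmul(A, B):
    n = len(A)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = 0
            for k in range(n):
                s ^= MUL[A[i][k]][B[k][j]]  # addition in F_4 is XOR
            row.append(s)
        out.append(row)
    return out

def hadamard_square(A):
    sq = [0, 1, 3, 2]  # x -> x^2 on F_4
    return [[sq[v] for v in row] for row in A]

def nilpotency_chain(S, kmax=10):
    """Return the first k with S * S^{(.)2} * ... * S^{(.)2^k} = 0, else None."""
    prod, had = S, S
    for k in range(1, kmax + 1):
        had = hadamard_square(had)
        prod = gf4_matmul(prod, had)
        if all(v == 0 for row in prod for v in row):
            return k
    return None

# Strictly triangular structure matrix: e1^2 = a*e2, e2^2 = 0 -> V nilpotent.
S_nil = [[0, 0], [2, 0]]
# A permutation-like structure matrix: V is not nilpotent.
S_per = [[0, 1], [1, 0]]

assert nilpotency_chain(S_nil) is not None
assert nilpotency_chain(S_per) is None
```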
And with Propositions 23 and 37 we have:

Proposition 51. Let A be a finite-dimensional evolution algebra over the field F_{2^q} (q ≥ 2) and let S be the matrix of structure constants relative to a natural basis of A. The evolution operator V of A is ultimately periodic, of period p dividing q, and plenary train.

Proof. Let us prove the statement about the period of V. If V is nilpotent of degree n, we know that in this case V is (n, 1)-periodic. Suppose V is not nilpotent; let (e_1, ..., e_d) be a natural basis of A. There exists 1 ≤ i ≤ d such that V^k(e_i) ≠ 0 for every k ≥ 1. By Proposition 23 there is a minimal pair (p, n) such that V is (n, p)-periodic; then, for λ ∈ F_{2^q}, from V^{n+p}(e_i) = V^n(e_i) and V^{n+p}(λe_i) = V^n(λe_i) we deduce that (λ^{2^{n+p}} − λ^{2^n}) V^n(e_i) = 0, hence λ^{2^p} − λ = 0, from which we conclude that p divides q.

Definition 52. Let (A, ω) be a weighted F-algebra. We say that A is quasi-constant of degree n if there exists e ∈ A, e ≠ 0, such that V^n(x) = ω(x)

Proposition 53. For a weighted F-algebra (A, ω), the following statements are equivalent:
(i) there exists e ∈ A such that (A, ω, e) is a quasi-constant algebra of degree n;

(ii) there exists e ∈ A such that V^n(x) = e for every x ∈ H_ω and V^{n−1}(y) ≠ e for some y ∈ H_ω;

(iii) there exists e ∈ H_ω such that e^2 = e, V^n(z) = 0 for every z ∈ ker ω, and V^{n−1}(z′) ≠ 0 for some z′ ∈ ker ω.

Proof. (i) ⇒ (ii) Immediate. (ii) ⇒ (iii) Let e ∈ A be such

e = e. For every z ∈ ker ω we have e + z ∈ H_ω, hence V^n(e + z) = e; now V^n(e + z) = V^n(e) + V^n(z) = e + V^n(z), and consequently V^n(z) = 0. There exists y ∈ H_ω such that V^{n−1}(y) ≠ e; there exists z′ ∈ ker ω with y = e + z′, and V^{n−1}(y) = e + V^{n−1}(z′), so necessarily V^{n−1}(z′) ≠ 0.

(iii) ⇒ (i) For every α ∈ F and z ∈ ker ω we have ω(αe + z) = α and V^n(αe + z) = V^n(αe) + V^n(z) = α^{2^n} e,

Proof. Let (A, ω, e) be a weighted F-algebra of dimension d, quasi-constant of degree n. By Proposition 53 we have ω(e) = 1, hence A = Fe ⊕ ker ω; by assertion (iii) of Proposition 53 the ideal ker ω is nil-plenary of degree n, and consequently, by Proposition 10, it is semi-isomorphic to an algebra of type (ker ω)(s).
5.2. Ultimately periodic weighted algebras (or periodic Bernstein algebras).

Definition 56. [20] A weighted F-algebra (A, ω) is a Bernstein algebra of order n and period p, in short a B(n, p)-algebra, if it satisfies

V^{n+p}(x) = ω(x)^{2^n(2^p − 1)} V^n(x)

for every x ∈ A, with (p, n) ∈ N* × N minimal for the lexicographic order. More generally, we say that (A, ω) is periodic Bernstein if there exist two integers n and p such that A is a B(n, p)-algebra.

Theorem 57. A weighted F-algebra is periodic Bernstein if and only if it is quasi-constant.

Proof. For the necessary condition, let (A, ω) be a B(n, p)-algebra; there exist integers a ≥ 0 and q ≥ 1 odd such that p = 2^a q. Fix x ∈ H_ω and set e = Σ_{k=0}^{q−1} V^{n+2^a k}(x); then ω(e) = q, so ω(e) ∈ N, and V^{2^a}(e) = Σ_{k=1}^q V^{n+2^a k}(x) = V^{n+p}(x) + Σ_{k=1}^{q−1} V^{n+2^a k}(x) = e. Applying the form ω to this result we obtain ω(e)^{2^{2^a}} = ω(e); but the polynomial X^{2^{2^a}} − X has only two roots in N, namely 0 and 1, and consequently ω(e) = 1. Next, V^{2^a(q+n)}(e) = e and V^{2^a(q+n)}(z) = V^{p+2^a n}(z) = V^{(2^a−1)n}(V^{n+p}(z)) = 0 for every z ∈ ker ω; we deduce that V^{2^a(q+n)}(αe + z) = α^{2^{2^a(q+n)}} e, in other words V^{2^a(q+n)}(x) = ω(x)^{2^{2^a(q+n)}} e for every x ∈ A. It follows that the set {k ∈ N; V^k(x) = ω(x)^{2^k} e, for all x ∈ A} is nonempty; it has a least element r ≤ 2^a(q + n), for which the algebra A is quasi-constant of degree r.

For the sufficient condition, let A be a quasi-constant F-algebra of degree n; for every x ∈ A we have V^n(x) = ω(x)^{2^n} e (⋆), so V^{n+1}(x) = ω(x)^{2^n · 2} V(e) = ω(x)^{2^{n+1}} e = ω(x)^{2^n} V^n(x), since e^2 = e. Let us show by contradiction that the pair (1, n) is minimal for the lexicographic order. If there existed m < n such that V^{m+1}(x) = ω(x)^{2^m} V^m(x) for every x ∈ A, then we would have V^{m+1}(z) = 0 for every z ∈ ker ω; consequently, by Proposition 53, we would have m + 1 = n and hence V^n(x) = ω(x)^{2^{n−1}} V^{n−1}(x). Together with relation (⋆) this gives ω(x)^{2^{n−1}} V^{n−1}(x) = ω(x)^{2^n} e, from which we deduce that for every x ∈ H_ω we would have V^{n−1}(x) = e, which by (ii) of Proposition 53 is impossible.
Plenary train weighted algebras.

Definition 58. Given a weighted F-algebra (A, ω), we say that A satisfies a plenary train identity of degree n (n ≥ 1) if there exists (α_0, ..., α_{n−1}) ∈ K^n \ {(0, ..., 0)} such that for every x ∈ A:

V^n(x) + Σ_{k=0}^{n−1} α_k ω(x)^{2^n − 2^k} V^k(x) = 0.

A weighted F-algebra (A, ω) is plenary train of degree n if it satisfies a plenary train identity of degree n and satisfies no plenary train identity of degree < n.

Applying the form ω to the plenary train identity for x ∈ H_ω, we obtain Σ_{k=0}^{n−1} α_k + 1 = 0.

Theorem 59. A weighted F-algebra is a plenary train algebra if and only if it is quasi-constant.

Proof. Let (A, ω) be a plenary train algebra of degree n. If A has dimension 1 the result is trivial, so suppose that A has dimension ≥ 2. The ideal ker ω is nil-plenary; let p ≤ n be its degree. Fix a ∈ H_ω; for every x ∈ H_ω with x ≠ a we have a − x ∈ ker ω, hence V^p(a − x) = 0, and it follows that V^p(x) = V^p(a). Thus, setting e = V^p(a), we have V^p(x) = e for every x ∈ H_ω; we deduce that V^p(x) = ω(x)^{2^p} e for every x ∈ A with ω(x) ≠ 0, and since V^p(z) = 0 for every z ∈ ker ω, by additivity of V we finally get V^p(x) = ω(x)^{2^p} e for every x ∈ A. The converse follows from Theorem 57.
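A toy illustration of the quasi-constant condition over F_2 (the two-dimensional algebra below is invented for illustration): with basis {e, z}, e^2 = e, z^2 = 0, ez = ze = 0 and weight ω(αe + βz) = α, one has V(x) = x^2 = α^2 e = ω(x)^2 e, so the algebra is quasi-constant of degree 1, matching Proposition 53 (iii): z ∈ ker ω satisfies V(z) = 0 and e is an idempotent of weight 1.

```python
def V(x):
    """Squaring in A over F_2; x = (alpha, beta) in the basis (e, z)."""
    a, b = x
    return (a * a % 2, 0)   # e^2 = e, z^2 = 0, and the cross terms vanish

def omega(x):
    return x[0]

for a in (0, 1):
    for b in (0, 1):
        x = (a, b)
        # V(x) = omega(x)^2 * e, and omega(x)^2 = omega(x) in F_2
        assert V(x) == (omega(x), 0)
```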
Proposition 1. Let A be a K-algebra such that A^2 ≠ {0}. The map V : A → A, x ↦ x^2, is additive if and only if char(K) = 2.

and the other products being defined arbitrarily. And two algebras A(s) and A(t) are semi-isomorphic if and only if s = t.
Proposition 14. The element e of a quasi-constant F-algebra (A, e) is unique and satisfies e^2 = e.

Definition and examples.

Definition 17. Let A be an F-algebra. We say that:
Example 29. Let A be the commutative F_2-algebra with basis (e_1, e_2, e_3, e_4) defined by V(e_i) = e_{i+1} for i = 1, 2, 3 and V(e_4) = e_1, the unmentioned products being arbitrary. The algebra A is (0, 4)-periodic. a) The elements x′ = e_1, x′′ = e_2 are (0, 4)-periodic, and so is x′ + x′′ = e_1 + e_2. b) The elements x′ = e_1, x′′ = e_3 are (0, 4)-periodic, while x′ + x′′ = e_1 + e_3 is (0, 2)-periodic. c) The elements x′ = e_1 + e_2, x′′ = e_3 + e_4 are (0, 4)-periodic; the element x′ + x′′ = e_1 + e_2 + e_3 + e_4 is idempotent, hence (0, 1)-periodic.

Concerning ultimately periodic algebras we have the following properties.

Proposition 30. The following statements are equivalent:
(i) A is (n, p)-periodic;
(ii) there exists a basis (e_i)_{i∈I} of A such that V^{n+p}(e_i) = V^n(e_i) for every i ∈ I and |O(e_i)| = n + p for at least one i ∈ I;
(iii) there exists a basis (a_i)_{i∈I} ∪ (b_j)_{j∈J} of A satisfying the four conditions: V^n(a_i) = 0, V^p(b_j) = b_j for every (i, j) ∈ I × J, and |O(a_i)| = n, |O(b_j)| = p for at least one (i, j) ∈ I × J.
in the first case |O(x)| = n and one completes {x} to a basis (a_i)_{i∈I} of ker(V^n); in the second case |O(y)| = p and one completes {y} to a basis (b_j)_{j∈J} of ker(V^p − id). And after extending scalars to F, we have obtained a basis (a_i)_{i∈I} ∪ (b_j)_{j∈J} of A fulfilling the conditions of the statement.

consequently V^{n+p}(x) = V^n(x) for every x ∈ A.
now, by Proposition 27, the element b_j is (0, p)-periodic, which by Proposition 26 entails |O(V^m(b_j))| = |O(b_j)|, whence a contradiction. It follows at once from result (iii) above that:

Corollary 31. For an (n, p)-periodic algebra A we have dim(A) ≥ n + p.

Lemma 32. Let m = 2^s q with s ≥ 0 and q odd, let σ be the cycle (1, ..., m), and let A be an F-vector space with basis (e_1, ..., e_m), where F ⊂ F_{2^m}.

b_{i,j}^2 = b_{i,σ_i(j)}, i = 1, ..., N; j = 1, ..., 2^{t_i} q, where σ_i denotes the cycle (1, ..., 2^{t_i} q) and the unmentioned products are taken arbitrarily in A. Finally, two algebras A_{(s,t)} and A_{(s′,t′)} are semi-isomorphic if and only if (s, t) = (s′, t′).
a contradiction. We deduce that for every divisor D of T of degree < n we have D(V)(x̃) ≠ 0: indeed, if there existed a divisor D of T of degree < n with D(V)(x̃) = 0, then, since D divides one of the polynomials R_k, we would have R_k(V)(x̃) = 0. Finally, suppose there exists S ∈ F[X] of degree < n such that S(V)(x̃) = 0, and let D = gcd(T, S). If D ≠ 1, then D would be a divisor of T of degree < n with D(V)(x̃) = 0; if D = 1, then we would have x̃ = 0. In both cases we obtain a contradiction; consequently we have shown that S(V)(x̃) ≠ 0 for every nonzero S ∈ F[X] of degree < n.

For an F-algebra to be plenary train, a condition on the field F is required.

Definition 39. Given a polynomial T ∈ K[X], T(X) = Σ_{k=0}^n α_k X^k, which is not a monomial, let E(T) = {k; α_k ≠ 0} be the set of exponents of T. The striction of the polynomial T is the number σ(T) defined by σ(T) = gcd{j − i; i, j ∈ E(T), i < j}.

Theorem 40. Let T ∈ F[X] be a polynomial of striction σ(T). If A is a plenary train F-algebra with train polynomial T, then F ⊂ F_{2^{σ(T)}}.

F_2[X], and p divides σ(D) for every D ∈ D_k. The set {S(I_1), ..., S(I_s)} is called an F_{2^p}-compatible subdivision associated with the partition {I_1, ..., I_s}.

the partition {{1}, {2}, {3}} is F_{2^p}-compatible for p = 1 or p = 2. In the case p = 1, the F_2-compatible subdivision of this partition is {{f_1, f_1^2}, {f_2, f_2^2}, {f_3, f_3^2}}; for p = 2 the F_4-compatible subdivision of this partition is {{f_1^2}, {f_2^2}, {f_3^2}}.
α_k e_k if i = n − 1, the unmentioned products being taken arbitrarily in A. Then P(V)(x) = 0 for every x ∈ A if and only if F ⊂ F_{2^{σ(P)}}.

Proof. For every 1 ≤ k ≤ n − 1 we have V^k(e_0) = e_k and V^n(e_0) = V(e_{n−1}) = Σ_{k=v}^{n−1} α_k V^k(e_0). (⋆) It follows that P(V)(e_0) = V^n(e_0) + Σ_{k=v}^{n−1} α_k V^k(e_0) = 0. Next, applying the operator V to relation (⋆), we obtain V^n(e_1) = V^{n+1}(e_0) = Σ_{k=v}^{n−1} α_k V^k(e_1); in other words, P(V)(e_1) = 0. Continuing in this way, one shows that P(V)(e_i) = 0 for every 0 ≤ i ≤ n − 1.
Definition 48. A K-algebra A is an evolution algebra if there exists a basis (e_i)_{i∈I} satisfying e_i² = Σ_{j∈I} α_{ji} e_j and e_i e_j = 0 if i ≠ j. A basis of A satisfying this property is called a natural basis.
Proposition 49. Let A be a finite-dimensional evolution F_2-algebra and let S be the matrix of structure constants relative to a natural basis of A. Concerning the evolution operator V of A, we have:
(a) V is nilpotent if and only if S is nilpotent.
(b) V is ultimately periodic.
(c) V is a plenary train operator of degree equal to the degree of the minimal polynomial of S.
Proof. (a) This follows immediately from relation (4.2). Let d be the dimension of A. (b) Let us show that the operator V is ultimately periodic. Consider the set {S^k; k ≥ 1}; this set is finite, of cardinality at most 2^{d²}, so there exist two integers n < m such that S^n = S^m, and then from (4.2) we get V^{n+(m−n)} = V^n. It follows that the set {(s, r); V^{r+s} = V^r} is nonempty; it has a smallest element (p, n) for the lexicographic order, and therefore V is (n, p)-periodic. (c) The operator V is a plenary train operator. Indeed, if T is the minimal polynomial of S, from T(S) = 0 and from (4.2) we deduce that T(V) = 0.
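Part (a) of Proposition 49 can be checked numerically. The sketch below is not part of the original text: it assumes, as relation (4.2) (not shown in this excerpt) suggests, that over F_2 the evolution operator acts on coordinates as x ↦ Sx (mod 2), since cross products vanish in a natural basis and x_i² = x_i in F_2; the 3-dimensional algebra and the matrix S are arbitrary illustrative choices.

```python
import numpy as np
from itertools import product

# Hypothetical 3-dimensional evolution algebra over F_2 with natural basis
# e_0, e_1, e_2.  Column i of S holds the coordinates of e_i^2.
S = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=int)   # strictly lower triangular, so nilpotent

def V(x, S=S):
    """Evolution operator in coordinates over F_2: V(x) = S x (mod 2)."""
    return (S @ x) % 2

# S^3 = 0 (mod 2), hence V^3 annihilates every element of the algebra,
# in agreement with Proposition 49(a).
assert not np.any(np.linalg.matrix_power(S, 3) % 2)
for coords in product([0, 1], repeat=3):
    assert not np.any(V(V(V(np.array(coords)))))
```

Conversely, a non-nilpotent S (for instance one with a 1 on the diagonal) yields a fixed basis vector of V, so V is not nilpotent either.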
5. Weighted quasi-constant, ultimately periodic and plenary train algebras

Usually, a weighted algebra (A, ω) is the datum of a K-algebra A and of a nonzero algebra morphism ω : A → K called a weighting of A. For a given weighting ω of A, the image ω(x) of an element x ∈ A is called the weight of x, and we denote by H_ω = {x ∈ A; ω(x) = 1} the affine hyperplane of elements of weight 1 of A; since ω ≠ 0 we have H_ω ≠ Ø. It is immediate that there is no weighted nil-plenary algebra. The weighting makes it possible to study evolution operators satisfying polynomial identities without any hypothesis on the field F as in Theorems 15, 22 and 40.

5.1. Weighted quasi-constant algebras.
that V^n(x) = e for every x ∈ H_ω. We have ω(e) ≠ 0, otherwise we would have ω(V^n(x)) = 0, hence ω(x) = 0 for every x ∈ A. We have ω(e)^{−1} e ∈ H_ω, so V^n(ω(e)^{−1} e) = e, i.e. V^n(e) = ω(e)^{2^n} e; applying the weighting ω to this relation we obtain ω(e)^{2^n} = ω(e), hence ω(e)(ω(e)^{2^n − 1} − 1) = 0, whence ω(e) = 1. Next we have:
e² = V(e) = V(V^n(e)) = V^n(V(e)) = ω(e²)^{2^n} e = ω(e)^{2^{n+1}} e = e,
and the integer n satisfying this identity is minimal, for if there existed m < n such that V^m(x) = ω(x)^{2^m} e for every x ∈ A, then we would have V^m(z) = 0 for every z ∈ ker ω, a contradiction.

The definition of a quasi-constant algebra A depends on the datum of a weighting ω and of an element e ∈ A.

Proposition 54. The weighting ω and the element e of a quasi-constant F-algebra (A, ω, e) are unique.

Proof. Let (A, ω, e) be a quasi-constant F-algebra of order n. Suppose there exist a weighting ω′ of A and e′ ∈ A such that (A, ω′, e′) is quasi-constant of degree n. From V^n(x) = ω′(x)^{2^n} e′ and V^n(x) = ω(x)^{2^n} e (⋆) for every x ∈ A, since e′, e ≠ 0 we deduce that ker ω′ = ker ω; this, together with the fact that, by result (iii) of Proposition 53, ω(e) = ω′(e′) = 1, allows us to conclude that ω′ = ω. Next, taking x = e′ in (⋆) and taking into account, by (iii) of Proposition 53, that ω′(e′) = 1, we get e′ = ω(e′)^{2^n} e, and since ω(e) = 1 we deduce that ω(e′) = ω(e′)^{2^n}, consequently e′ = ω(e′) e. Then, with (iii) of Proposition 53, we have e′ = e′² = ω(e′)² e² = ω(e′)² e = ω(e′) e′, whence ω(e′) = 1 and therefore e′ = e.

Proposition 55. A weighted F-algebra (A, ω, e) of dimension d ≥ 2 which is quasi-constant of degree n is semi-isomorphic to an F-algebra, denoted A(s), defined by the data:
- an m-tuple of integers s = (s_1, . . . , s_m) where m ≥ 1, n = s_1 ≥ s_2 ≥ . . . ≥ s_m ≥ 1 and s_1 + s_2 + · · · + s_m = d − 1;
- a basis {e} ∪ ∪_{i=1}^m {e_{i,j}; 1 ≤ j ≤ s_i} such that ω(e) = 1 and e_{i,j}² = e_{i,j+1} if 1 ≤ j ≤ s_i − 1, e_{i,j}² = 0 if j = s_i (1 ≤ i ≤ m),
the other products being defined arbitrarily on ker ω.
the Frobenius morphism Frob : F → F, α ↦ α², is surjective; hence there exists μ ∈ F such that λ = Frob^k(μ), so λ = μ^{2^k} and λV^k(x) = V^k(μx), whence λV^k(x) ∈ V^k(A). For the classification of evolution operators in characteristic 2 we will use the following weakened form of isomorphism:
V^n(x) = ω(x)^{2^n} e for every x ∈ A, with n ∈ N* minimal. We denote such an algebra by (A, ω, e).
| [] |
[
"On some estimators of the Hurst index of the solution of SDE driven by a fractional Brownian motion",
] | [
"K Kubilius \nInstitute of Mathematics and Informatics\nVilnius University\nAkademijos 4, LT-08663 Vilnius, Lithuania\n",
"V Skorniakov \nFaculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania\n"
] | [
"Institute of Mathematics and Informatics\nVilnius University\nAkademijos 4, LT-08663 Vilnius, Lithuania",
"Faculty of Mathematics and Informatics\nVilnius University\nNaugarduko 24, LT-03225 Vilnius, Lithuania"
] | [] | Strongly consistent and asymptotically normal estimators of the Hurst parameter of solutions of stochastic differential equations are proposed. The estimators are based on discrete observations of the underlying processes. | 10.1016/j.spl.2015.11.013 | [
"https://arxiv.org/pdf/1507.07180v1.pdf"
] | 119,596,896 | 1507.07180 | 36761644f5627f9e240d5a8027f27cf78cd70fe0 |
On some estimators of the Hurst index of the solution of SDE driven by a fractional Brownian motion
26 Jul 2015
K Kubilius
Institute of Mathematics and Informatics
Vilnius University
Akademijos 4, LT-08663 Vilnius, Lithuania
V Skorniakov
Faculty of Mathematics and Informatics
Vilnius University
Naugarduko 24, LT-03225 Vilnius, Lithuania
Keywords: fractional Brownian motion, stochastic differential equation, Hurst index
Strongly consistent and asymptotically normal estimators of the Hurst parameter of solutions of stochastic differential equations are proposed. The estimators are based on discrete observations of the underlying processes.
Introduction
Recently, long range dependence (LRD) became one of the most researched phenomena in statistics. It appears in various applied fields and inspires new models to account for it. Stochastic differential equations (SDEs) are widely used to model continuous time processes. Within this framework, LRD is frequently modeled with the help of SDEs driven by a fractional Brownian motion (fBm). It is well known that the latter Gaussian process is governed by a single parameter H ∈ (0, 1) (called the Hurst index) and that values of H in (1/2, 1) correspond to LRD models. In applications, the estimation of H is a fundamental problem. Its solution depends on the theoretical structure of a model under consideration. Therefore, particular models usually deserve separate analysis. In this paper, we concentrate on the estimation of H under the assumption that an observable continuous time process (X_t)_{t∈[0,T]} satisfies the SDE
X_t = ξ + ∫_0^t f(X_s) ds + ∫_0^t g(X_s) dB^H_s, t ∈ [0, T], (1.1)
where T > 0 is fixed, ξ is an initial r.v., f and g are continuous functions satisfying some regularity conditions, and (B^H_t)_{t∈[0,T]} is an fBm with Hurst index 1/2 < H < 1. Our goal is to construct a strongly consistent and asymptotically normal estimator of H from discrete observations X_{t_1}, . . . , X_{t_n} of the trajectory X_t, t ∈ [0, T].
We consider two cases. First, we examine the case when g is completely specified. Next, we relax this restriction and allow g to be unknown. Such a situation may appear when g depends on additional nuisance parameters. In both cases, boundedness of 1/g plays an important role and is assumed to hold.
To the best of our knowledge, so far only a few studies have investigated this question. The pioneering work was done by Berzin and León [3], as well as in the lecture notes [4] and the references therein. [12], [14] and [15] were also devoted to problems of the same nature. However, all of these works focused on strong consistency only. The present paper is a generalization of [13], where a special case of (1.1) was considered.
The paper is organized in the following way. In Section 2 we present the main results of the paper. Section 3 is devoted to several results needed for the proofs. Sections 4-5 contain the proofs of the main results. Finally, in Section 6 two examples are given in order to illustrate the obtained results.
Main results
To avoid cumbersome expressions, we introduce the symbols O_ω, o_ω. Let (Y_n) be a sequence of r.v.s, let ς be an a.s. non-negative r.v. and let (a_n) ⊂ (0, ∞) vanish. Y_n = O_ω(a_n) means that |Y_n| ≤ ς · a_n; Y_n = o_ω(a_n) means that |Y_n| ≤ ς · b_n with b_n = o(a_n). In particular, Y_n = o_ω(1) corresponds to a sequence (Y_n) which tends to 0 a.s. as n → ∞.
Let π_n = {τ^n_k, k = 0, . . . , i_n}, n ≥ 1, N ∋ i_n ↑ ∞, be a sequence of partitions of the interval [0, T]. If the partition π_n is uniform, then τ^n_k = kT/i_n for all k ∈ {0, . . . , i_n}. If i_n ≡ n, we write t^n_k instead of τ^n_k. In order to formulate our main results, we state two hypotheses:
(H) ΔX_{τ^n_k} = X_{τ^n_k} − X_{τ^n_{k−1}} = O_ω(d_n^{H−ε}), k = 1, . . . , i_n;
(H1) Δ^{(2)}X_{τ^n_k} = X_{τ^n_k} − 2X_{τ^n_{k−1}} + X_{τ^n_{k−2}} = g(X_{τ^n_{k−1}}) Δ^{(2)}B^H_{τ^n_k} + O_ω(d_n^{2(H−ε)}), (2.1)
k = 2, . . . , i_n,
for all ε ∈ (0, H − 1/2), where d_n = max_{1≤k≤i_n}(τ^n_k − τ^n_{k−1}).

Theorem 2.1. Assume that the solution of Eq. (1.1) satisfies hypothesis (H1). Moreover, assume that the function g is known, Lipschitz-continuous, and that there exists a random variable ς such that P(ς < ∞) = 1 and
sup_{t∈[0,T]} 1/|g(X_t)| ≤ ς a.s. (2.2)
Then
Ĥ^{(1)}_n → H a.s., 2√n ln(n/T) (Ĥ^{(1)}_n − H) →d N(0; σ²_H) for H ∈ (1/2, 1),
where σ²_H is a known variance defined in Subsection 3.1,
Ĥ^{(1)}_n = ϕ^{−1}_{n,T}((1/n) Σ_{i=2}^n (Δ^{(2)}X_{t^n_i} / g(X_{t^n_{i−1}}))²) for n > T, ϕ_{n,T}(x) = (T/n)^{2x}(4 − 2^{2x}),
and ϕ^{−1}_{n,T} denotes the inverse of ϕ_{n,T}, x ∈ (0, 1), n > T.
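As a purely numerical illustration (not part of the paper), the outer step of Ĥ^{(1)}_n, the inversion of ϕ_{n,T}, can be carried out by bisection, since ϕ_{n,T} is continuous and strictly decreasing on (0, 1) for n > T. The values n = 1000, T = 1 below are arbitrary.

```python
def phi(x, n, T):
    # phi_{n,T}(x) = (T/n)^{2x} * (4 - 2^{2x}), x in (0, 1), n > T.
    return (T / n) ** (2 * x) * (4.0 - 2.0 ** (2 * x))

def phi_inv(y, n, T):
    # phi_{n,T} is strictly decreasing on (0, 1) for n > T, so a simple
    # bisection recovers x from y = phi_{n,T}(x).
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid, n, T) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: phi_inv recovers the argument of phi.
n, T = 1000, 1.0
assert abs(phi_inv(phi(0.7, n, T), n, T) - 0.7) < 1e-6
```

Given real data, the estimate would then be `phi_inv` applied to the empirical mean of the squared normalized second-order differences.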
It is natural to try to drop the restriction that g is known. For this purpose we need several additional definitions. Assume that the process X is observed at time points (i/m_n)T, i = 1, . . . , m_n, where m_n = n k_n, and k_n grows faster than n ln n, but the growth does not exceed polynomial rate, e.g. k_n = n ln^θ n, θ > 1, or k_n = n². Denote
W_{n,k} = Σ_{j=−k_n+2}^{k_n} (Δ^{(2)}X_{s^n_j + t^n_k})² = Σ_{j=−k_n+2}^{k_n} (X_{s^n_j + t^n_k} − 2X_{s^n_{j−1} + t^n_k} + X_{s^n_{j−2} + t^n_k})²,
where 1 ≤ k ≤ n − 1 and s^n_j = (j/m_n)T.

Theorem 2.2. Assume that the solution of Eq. (1.1) satisfies hypotheses (H) and (H1). Moreover, assume that the function g is Lipschitz-continuous and that there exists a random variable ς such that P(ς < ∞) = 1 and inequality (2.2) holds. Then
Ĥ^{(2)}_n → H a.s., 2√n ln(n/T) (Ĥ^{(2)}_n − H) →d N(0; σ²_H) for H ∈ (1/2, 1), where
Ĥ^{(2)}_n = 1/2 + (1/(2 ln k_n)) ln((2/n) Σ_{k=2}^n (Δ^{(2)}X_{t^n_k})² / W_{n,k−1}),
and σ²_H is a known variance defined in Subsection 3.1.
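To make the formula for Ĥ^{(2)}_n concrete, the following direct transcription (mine; the variable names are invented) includes an algebraic sanity check rather than a simulation:

```python
import math

def h2_hat(d2X_sq, W_prev, k_n):
    # \hat H^{(2)}_n = 1/2 + (1/(2 ln k_n)) * ln( (2/n) * sum_k d2X_k^2 / W_{n,k-1} ).
    # d2X_sq: squared coarse second differences (k = 2..n); W_prev: W_{n,k-1}.
    n = len(d2X_sq) + 1
    s = 2.0 / n * sum(d / w for d, w in zip(d2X_sq, W_prev))
    return 0.5 + math.log(s) / (2.0 * math.log(k_n))

# Sanity check: if the averaged ratio equals k_n^{2H-1} -- its limiting
# value suggested by the proof of Theorem 2.2 -- the formula returns H.
k_n, H, n = 1000, 0.7, 51
d2 = [k_n ** (2 * H - 1)] * (n - 1)
W = [2.0 * (n - 1) / n] * (n - 1)     # makes (2/n) * sum equal k_n^{2H-1}
assert abs(h2_hat(d2, W, k_n) - H) < 1e-9
```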
Preliminaries
Several results on fBm
Recall that an fBm (B^H_t)_{t≥0} with Hurst index H ∈ (0, 1) is a real-valued continuous centered Gaussian process with covariance given by
E(B^H_t B^H_s) = (1/2)(s^{2H} + t^{2H} − |t − s|^{2H}).
For consideration of strong consistency and asymptotic normality of the given estimators we need several facts regarding B H .
Limit results. Let
V_{n,T} = (n^{2H−1}/(T^{2H}(4 − 2^{2H}))) Σ_{k=2}^n (Δ^{(2)}B^H_{t^n_k})², H ≠ 1/2.
Then (see [2], [10], [1])
V_{n,T} → 1 a.s. as n → ∞, √n (V_{n,T} − 1) →d N(0, σ²_H),
where
σ²_H = 2(1 + 2 Σ_{j=1}^∞ ρ²_H(j)),
ρ_H(j) = −(|j − 2|^{2H} − 4|j − 1|^{2H} + 6|j|^{2H} − 4|j + 1|^{2H} + |j + 2|^{2H}) / (2(4 − 2^{2H})).
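These statements are easy to probe by simulation. The sketch below is mine, not the authors': it builds an exact fBm sample via a Cholesky factorization of the covariance (accuracy may degrade for H close to 1 or large n), with n, H and the random seed chosen arbitrarily, and uses a generous tolerance since V_{n,T} is random.

```python
import numpy as np

def rho(j, H):
    # Correlation rho_H(j) of the normalized second-order increments.
    num = (abs(j - 2) ** (2 * H) - 4 * abs(j - 1) ** (2 * H) + 6 * abs(j) ** (2 * H)
           - 4 * abs(j + 1) ** (2 * H) + abs(j + 2) ** (2 * H))
    return -num / (2 * (4 - 2 ** (2 * H)))

def sigma2(H, terms=10_000):
    # Truncation of sigma_H^2 = 2 (1 + 2 sum_{j>=1} rho_H(j)^2).
    return 2 * (1 + 2 * sum(rho(j, H) ** 2 for j in range(1, terms)))

def fbm(n, H, T=1.0, seed=0):
    # Exact fBm sample on a uniform grid via Cholesky factorization of the
    # covariance matrix (O(n^3); fine for the small n used here).
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    z = np.random.default_rng(seed).standard_normal(n)
    return np.concatenate([[0.0], np.linalg.cholesky(cov) @ z])

H, n, T = 0.7, 400, 1.0
B = fbm(n, H, T)
d2 = B[2:] - 2 * B[1:-1] + B[:-2]                    # second-order increments
V = n ** (2 * H - 1) / (T ** (2 * H) * (4 - 2 ** (2 * H))) * np.sum(d2 ** 2)

assert abs(rho(0, H) - 1.0) < 1e-12   # rho_H(0) = 1: unit variance by construction
assert abs(V - 1.0) < 0.3             # V_{n,T} should be close to 1
```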
Hölder-continuity of B^H. It is known that almost all sample paths of an fBm B^H are locally Hölder of order strictly less than H, H ∈ (0, 1). To be more precise, for all 0 < ε < H and T > 0 there exists a nonnegative random variable G_{ε,T} such that E(|G_{ε,T}|^p) < ∞ for all p ≥ 1, and
|B^H_t − B^H_s| ≤ G_{ε,T} |t − s|^{H−ε} a.s. (3.1)
for all s, t ∈ [0, T].
Concentration inequality
Let
Y_{k,n} = (n^H/(T^H √(4 − 2^{2H}))) Δ^{(2)}B^H_{t^n_k}, d^{(2)n}_{jk} = E(Y_{j,n} Y_{k,n}), j, k = 2, . . . , n.
Note that d^{(2)n}_{jk} = ρ_H(j − k).
In the sequel we make use of the following modified version of an inequality of concentration from [6].
Lemma 3.1. For all z > 0 and any H ∈ (0, 1),
P(|(n − 1)^{−1/2} Σ_{k=2}^n (Y²_{k,n} − 1)| > z) ≤ 2 exp(− z² / ((32/3)(z/√(n−1) + 1))).
Proof. Let κ = sup_{H∈(0,1)} Σ_{j∈Z} |ρ_H(j)|. Following an argument of the paper [6], one gets the bound
2(n − 1)^{−1} |Σ_{k,j=2}^n Y_{k,n} Y_{j,n} d^{(2)n}_{kj}| ≤ 2(n − 1)^{−1} Σ_{k,j=2}^n |Y_{k,n}| · |Y_{j,n}| · |ρ_H(j − k)| ≤ 2(n − 1)^{−1} Σ_{k,j=2}^n Y²_{k,n} |ρ_H(j − k)| ≤ 2(n − 1)^{−1} κ Σ_{k=2}^n Y²_{k,n} = 2(n − 1)^{−1} κ Σ_{k=2}^n (Y²_{k,n} − 1) + 2κ = (2κ/√(n − 1)) (n − 1)^{−1/2} Σ_{k=2}^n (Y²_{k,n} − 1) + 2κ.
Thus (see [6]),
P(|(n − 1)^{−1/2} Σ_{k=2}^n (Y²_{k,n} − 1)| > z) ≤ 2 exp(− z² / (4κ(z/√(n−1) + 1))).
In the paper [5] it was proved that
Σ_{j∈Z} |ρ_H(j)| = 1 + (10 − 7·2^{2H} + 2·3^{2H})/(4 − 2^{2H}) for H ≤ 1/2,
Σ_{j∈Z} |ρ_H(j)| = 1 + (4 − 2^{2H})/(4 − 2^{2H}) = 2 for H ≥ 1/2,
and
κ = sup_{H∈(0,1)} Σ_{j∈Z} |ρ_H(j)| = lim_{H→0+} Σ_{j∈Z} |ρ_H(j)| = 8/3.
This yields the required inequality.
The function ϕ_{n,T}(x) = (T/n)^{2x}(4 − 2^{2x}), x ∈ (0, 1), is continuous and strictly decreasing for n > T. Thus, it has an inverse ϕ^{−1}_{n,T} for n > T. By hypothesis (H1),
ϕ_{n,T}(Ĥ^{(1)}_n)/ϕ_{n,T}(H) = [(T/n)^{2H}(4 − 2^{2H})]^{−1} ϕ_{n,T}(ϕ^{−1}_{n,T}((1/n) Σ_{i=2}^n (Δ^{(2)}X_{t^n_i}/g(X_{t^n_{i−1}}))²))
= [(T/n)^{2H}(4 − 2^{2H})]^{−1} (1/n) Σ_{i=2}^n (Δ^{(2)}X_{t^n_i}/g(X_{t^n_{i−1}}))²
= (n^{2H−1}/(T^{2H}(4 − 2^{2H}))) Σ_{i=2}^n (Δ^{(2)}B^H_{t^n_i})² + O_ω(1/n^{H−3ε})
= V_{n,T} + O_ω(1/n^{H−3ε}) for 3ε < H.
Therefore ϕ_{n,T}(Ĥ^{(1)}_n)/ϕ_{n,T}(H) → 1 a.s. as n → ∞.
Using the same arguments as in [13] it is possible to prove that the estimator Ĥ^{(1)}_n is strongly consistent and asymptotically normal.
Proof of the main Theorem
Before presenting the proof of this theorem, we give two auxiliary lemmas.

Lemma 5.1. The following relation holds:
max_{1≤k≤n−1} |V_{n,T}(k) − 1| = O_ω(√(ln n/k_n)).
Proof. By self-similarity and stationarity of increments of fBm,
V_{n,T}(k) =d (2^{2H−1} m_n^{2H}/(2^{2H} k_n(4 − 2^{2H}))) Σ_{j=−k_n+2}^{k_n} (B^H_{(j+k_n)/m_n} − 2B^H_{(j+k_n−1)/m_n} + B^H_{(j+k_n−2)/m_n})²
=d ((2k_n)^{2H−1}/(4 − 2^{2H})) Σ_{j=2}^{2k_n} (B^H_{j/(2k_n)} − 2B^H_{(j−1)/(2k_n)} + B^H_{(j−2)/(2k_n)})² =d V_{2k_n,1}.
Therefore,
P(max_{1≤k≤n−1} |V_{n,T}(k) − 1| > δ) ≤ Σ_{k=1}^{n−1} P(|V_{n,T}(k) − 1| > δ) ≤ n P(|V_{2k_n,1} − 1| > δ) for all δ > 0.
Put Ṽ_{n,T} = (n/(n − 1)) V_{n,T}. Note that
|V_{2k_n,1} − 1| ≤ |Ṽ_{2k_n,1} − 1| + 1/(2k_n).
Let (δ_n) be a sequence of positive numbers such that δ_n ↓ 0 as n → ∞ and k_n^{−1} < δ_n. By Lemma 3.1,
P(|V_{2k_n,1} − 1| > 2δ_n) ≤ P(|Ṽ_{2k_n,1} − 1| + 1/(2k_n) > 2δ_n) ≤ P(|Ṽ_{2k_n,1} − 1| > δ_n)
= P(√(2k_n − 1) |Ṽ_{2k_n,1} − 1| > δ_n √(2k_n − 1)) ≤ 2 exp(− δ²_n (2k_n − 1) / ((32/3)(δ_n + 1))).
Set δ_n = √(a ln n/(2k_n − 1)). Since k_n ≥ n ln n, then
P(|V_{2k_n,1} − 1| > 2δ_n) ≤ 2 exp(− 3a ln n / (32(√(a ln n/(2n ln n − 1)) + 1))).
If a ≥ 3 and n ≥ 2, then δ_n > k_n^{−1}. Moreover, P(|V_{2k_n,1} − 1| > 2δ_n) ≤ 2n^{−3} for a > 32 and n large enough. Therefore the series Σ_n P(max_{1≤k≤n−1} |V_{n,T}(k) − 1| > δ_n) converges and, by the Borel–Cantelli lemma, max_{1≤k≤n−1} |V_{n,T}(k) − 1| = O_ω(√(ln n/k_n)).
Lemma 5.2. Assume that the function g is Lipschitz-continuous. If ε < (H − 1/2)/3, then for each k = 1, . . . , n − 1,
W_{n,k} = g²(X_{t^n_k}) (2k_n T^{2H}(4 − 2^{2H}))/m_n^{2H} + O_ω(√(k_n ln n)/m_n^{2H}) + O_ω(k_n/(n^{H−ε} m_n^{2(H−ε)})).
Proof. Step 1. By hypothesis (H1),
Δ^{(2)}X_{s^n_j + t^n_k} = g(X_{s^n_{j−1} + t^n_k}) Δ^{(2)}B^H_{s^n_j + t^n_k} + O_ω(1/m_n^{2(H−ε)}), j = −k_n + 2, . . . , k_n.
Next, note that
|g²(X_t) − g²(X_s)| = |g(X_t) − g(X_s)| |g(X_t) + g(X_s)| ≤ L|X_t − X_s| |g(X_s) + g(X_t)| ≤ 2L sup_{u∈[0,T]} |g(X_u)| |X_t − X_s|,
where t > s and L is the Lipschitz constant. Thus, hypothesis (H) together with a.s. continuity of t ↦ g(X_t) leads to
g²(X_{s^n_{j−1} + t^n_k}) − g²(X_{t^n_k}) = O_ω(1/n^{H−ε}).
Step 2. Assume that ε < (H − 1/2)/3. Then
W_{n,k} = Σ_{j=−k_n+2}^{k_n} g²(X_{t^n_k})(Δ^{(2)}B^H_{s^n_j + t^n_k})² + Σ_{j=−k_n+2}^{k_n} (g²(X_{s^n_{j−1} + t^n_k}) − g²(X_{t^n_k}))(Δ^{(2)}B^H_{s^n_j + t^n_k})² + O_ω(k_n/m_n^{3(H−ε)})
= g²(X_{t^n_k}) (2k_n T^{2H}(4 − 2^{2H})/m_n^{2H}) V_{n,T}(k) + O_ω(k_n/(n^{H−ε} m_n^{2(H−ε)})) + O_ω(k_n/m_n^{3(H−ε)})
= g²(X_{t^n_k}) (2k_n T^{2H}(4 − 2^{2H})/m_n^{2H}) + O_ω(√(k_n ln n)/m_n^{2H}) + O_ω(k_n/(n^{H−ε} m_n^{2(H−ε)})).
Consequently, the proof of the lemma is completed.
Proof of Theorem 2.2. Put
S_{n,T} := (2/(n k_n^{2H−1})) Σ_{k=2}^n (Δ^{(2)}X_{t^n_k})² / W_{n,k−1}.
It follows from (2.1)–(2.2) and Lemma 5.2 that
S_{n,T} = (2/(n k_n^{2H−1})) Σ_{k=2}^n [g²(X_{t^n_k})(Δ^{(2)}B^H_{t^n_k})² + O_ω(n^{−3(H−ε)})] / [g²(X_{t^n_k})(2k_n T^{2H}(4 − 2^{2H})/m_n^{2H}) + O_ω(√(k_n ln n)/m_n^{2H}) + O_ω(k_n/(n^{H−ε} m_n^{2(H−ε)}))]
= (2/(n k_n^{2H−1})) Σ_{k=2}^n [(Δ^{(2)}B^H_{t^n_k})² + O_ω(n^{−3(H−ε)})] / [(2k_n T^{2H}(4 − 2^{2H})/m_n^{2H}) + O_ω(√(k_n ln n)/m_n^{2H}) + O_ω(k_n/(n^{H−ε} m_n^{2(H−ε)}))]
= (m_n^{2H}/(n k_n^{2H} T^{2H}(4 − 2^{2H}))) Σ_{k=2}^n (Δ^{(2)}B^H_{t^n_k})² (1 + o_ω(n^{−1/2})) = V_{n,T} + o_ω(n^{−1/2})
for ε > 0 small enough. To prove asymptotic normality of the estimator Ĥ^{(2)}_n, now apply Slutsky's theorem and the limit results of Subsection 3.1.
Examples
As mentioned previously, in this section we present two examples of applications of the obtained results. The first one deals with a general form of equation (1.1) and relies on certain restrictions on functions f and g. The second one describes a particular model which formally does not fit into the scope of the first one.
Example 1
In order to present an example, we need several facts on variation. To make the paper more self-contained and its structure clearer, the mentioned facts are briefly recalled in Subsection 6.1.1. For details we refer the reader to [9].
Variation
Fix p > 0 and −∞ < a < b < ∞. Let κ = {{x_0, . . . , x_n} | a = x_0 < · · · < x_n = b, n ≥ 1} denote the set of all possible partitions of [a, b]. For any f : [a, b] → R define
v_p(f; [a, b]) = sup_κ Σ_{k=1}^n |f(x_k) − f(x_{k−1})|^p, V_p(f; [a, b]) = v_p^{1/p}(f; [a, b]),
W_p([a, b]) = {f : [a, b] → R | v_p(f; [a, b]) < ∞}, CW_p([a, b]) = {f ∈ W_p([a, b]) | f is continuous}.
Recall that v_p is called the p-variation of f on [a, b], and any f in W_p([a, b]) is said to have bounded p-variation on [a, b]. For short we omit the interval [a, b] in the notations introduced above whenever there is no ambiguity. Below we list several facts used further on.
• f ↦ V_p(f) is a seminorm on W_p; V_p(f) = 0 if and only if f is a constant.
• f ∈ W_p ⇒ sup_{x∈[a,b]} |f(x)| < ∞.
• f, g ∈ W_p ⇒ fg ∈ W_p.
• q > p ≥ 1 ⇒ W_p ⊂ W_q.
• Let f ∈ W_q, h ∈ W_p with p, q ∈ (0, ∞) such that 1/p + 1/q > 1. Then the integral ∫_a^b f dh exists as a Riemann–Stieltjes integral provided f and h have no common discontinuities. If the integral exists, the Love–Young inequality
|∫_a^b f dh − f(y)(h(b) − h(a))| ≤ C_{p,q} V_q(f) V_p(h)
holds for all y ∈ [a, b], where C_{p,q} = ζ(p^{−1} + q^{−1}) and ζ(s) = Σ_{n≥1} n^{−s}.
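A small illustration of these definitions (mine, not the authors'): the sum defining v_p over any one fixed partition is a lower bound for the supremum v_p(f; [a, b]), and for a monotone f with p = 1 every partition already attains the exact value |f(b) − f(a)|, since the sum telescopes.

```python
def grid_variation(ys, p):
    # p-th power variation of sampled values ys over one fixed partition;
    # a lower bound for v_p(f; [a, b]).
    return sum(abs(ys[i] - ys[i - 1]) ** p for i in range(1, len(ys)))

ys = [(i / 100) ** 2 for i in range(101)]   # f(x) = x^2, increasing on [0, 1]
# For monotone f and p = 1 the sum telescopes to f(1) - f(0) = 1.
assert abs(grid_variation(ys, 1) - 1.0) < 1e-12
```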
Assumptions and properties of solution of SDE
Let (B^H_t)_{t∈[0,T]}, H ∈ (1/2, 1), be a fixed fBm defined on some probability space (Ω, F, P, F). Let α ∈ (1/H − 1, 1) and
C^{1+α}(R) = {h : R → R | h′ exists and sup_x |h′(x)| + sup_{x≠y} |h′(x) − h′(y)|/|x − y|^α < ∞}.
Assume that f is Lipschitz and g ∈ C^{1+α}(R). In such a case there exists a unique solution of (1.1) having the following properties: i) (X_t)_{t∈[0,T]} is F-adapted and almost all sample paths are continuous; ii) X_0 = ξ a.s.; iii) ∀p > 1/H, P(V_p(X; [0, T]) < ∞) = 1 (see [8], [16], [17] and [11]).
Proof. Let a sample path t ↦ X_t be continuous. We first prove (6.3). Note that
ΔX_{τ^n_k} = X_{τ^n_k} − X_{τ^n_{k−1}} = ∫_{τ^n_{k−1}}^{τ^n_k} f(X_s) ds + ∫_{τ^n_{k−1}}^{τ^n_k} (g(X_s) − g(X_{τ^n_{k−1}})) dB^H_s + g(X_{τ^n_{k−1}}) ΔB^H_{τ^n_k}.
Lemma 6.1. Let X satisfy (1.1) and let ε ∈ (0, H − 1/2). There exists an a.s. finite r.v. L_{ε,T} such that
V_{H_ε}(X; [s, t]) ≤ L_{ε,T} (t − s)^{H−ε}, 0 ≤ s < t ≤ T. (6.2)
Proof. Since V_p is a seminorm and a non-increasing function of p ≥ 1, the inequalities of Subsection 6.1.1 give the bound
V_{H_ε}(X; [s, t]) ≤ sup_{u∈[0,T]} |f(X_u)|(t − s) + C_{H_ε,H_ε}(V_{H_ε}(g ∘ X·; [s, t]) + sup_{u∈[0,T]} |g(X_u)|) V_{H_ε}(B^H; [s, t]) ≤ L_{ε,T} (t − s)^{H−ε}
with L_{ε,T} = sup_{u∈[0,T]} |f(X_u)| T^{1−H+ε} + C_{H_ε,H_ε}(V_{H_ε}(g ∘ X·; [s, t]) + sup_{u∈[0,T]} |g(X_u)|) G_{ε,T} and G_{ε,T} of (6.1). A.s. continuity of t ↦ X_t together with continuity of f and g implies a.s. boundedness of sup_{u∈[0,T]} |f(X_u)| and sup_{u∈[0,T]} |g(X_u)|. It is not difficult to show that V_{H_ε}(g ∘ X·; [s, t]) ≤ sup_u |g′(u)| V_{H_ε}(X; [s, t]). Hence, continuity of g′ and a.s. boundedness of V_{H_ε}(X; [s, t]) guarantee a.s. boundedness of V_{H_ε}(g ∘ X·; [s, t]) along with that of L_{ε,T}.
Lemma 6.2. Let X satisfy (1.1), let ε ∈ (0, H − 1/2), and assume that the conditions stated above hold. Then relations (6.3) and (6.4) hold.
Finally, the explicit form of X_t implies the existence of an a.s. finite r.v. ς such that sup_{t∈[0,T]} 1/|X_t| ≤ ς a.s. Thus, one can use Theorem 2.1 if the constant σ is known and Theorem 2.2 in the general situation.
Moreover,
V_p(∫_a^· f dh; [a, b]) ≤ C_{p,q} V_{q,∞}(f) V_p(h),
where V_{q,∞}(f) = V_q(f) + sup_{x∈[a,b]} |f(x)|. Also note that V_{q,∞} is a norm on W_q, q ≥ 1.
Remark 6.1. The left-hand side of (3.1) can be replaced by V_{H_ε}(B^H; [s, t]), ∀ε ∈ (0, H − 1/2), i.e.
V_{H_ε}(B^H; [s, t]) ≤ G_{ε,T} |t − s|^{H−ε} a.s. (6.1)
… t^n_k + O_ω(1/n), k = 1, . . . , n,
Δ^{(2)}X_{t^n_k} = X_{t^n_{k−1}} σ Δ^{(2)}B^H_{t^n_k} + O_ω(1/n^{2(H−ε)}), k = 2, . . . , n.
An application of inequalities (6.1)–(6.2) together with continuity of f, g, g′ and the mean value theorem yields
|f(X_u)| (τ^n_k − τ^n_{k−1}) = O_ω(d_n),
and therefore (6.3) holds. Next we prove (6.4). Since …, then by the mean value theorem, with K equal to the Lipschitz constant of f, … We conclude that (6.4) also holds.

Application

Results of subsection 6.1.2 imply the proposition constituting the basis of the first example.

Example 2

Consider the Verhulst equation
[1] A. Bégyn, Asymptotic development and central limit theorem for quadratic variations of Gaussian processes, Bernoulli, 13(3) (2007), 712-753.
[2] A. Benassi, S. Cohen, J. Istas, S. Jaffard, Identification of filtered white noises, Stochastic Processes and their Applications, 75 (1998), 31-49.
[3] C. Berzin, J.R. León, Estimation in models driven by fractional Brownian motion, Annales de l'Institut Henri Poincaré, 44 (2008), 191-213.
[4] C. Berzin, A. Latour, J.R. León, Inference on the Hurst Parameter and the Variance of Diffusions Driven by Fractional Brownian Motion, Lecture Notes in Statistics 216, Springer, 2014.
[5] J.-C. Breton, J.-F. Coeurjolly, Confidence intervals for the Hurst parameter of a fractional Brownian motion based on finite sample size, Stat. Inference Stoch. Process., 15 (2012), 126.
[6] J.-C. Breton, I. Nourdin, G. Peccati, Exact confidence intervals for the Hurst parameter of a fractional Brownian motion, Electron. J. Stat., 3 (2009), 416-425.
[7] J.-F. Coeurjolly, Estimating the parameters of a fractional Brownian motion by discrete variations of its sample paths, Statistical Inference for Stochastic Processes, 4 (2001), 199-227.
[8] R.M. Dudley, Picard iteration and p-variation: the work of Lyons (1994), Mini-proceedings: Workshop on Product Integrals and Pathwise Integration, MaPhySto, Aarhus, 1999.
[9] R.M. Dudley, R. Norvaiša, Concrete Functional Calculus, Springer Monographs in Mathematics, Springer, New York, 2011.
[10] J. Istas, G. Lang, Quadratic variations and estimation of the local Hölder index of a Gaussian process, Ann. Inst. Henri Poincaré, Probab. Stat., 33 (1997), 407-436.
[11] K. Kubilius, The existence and uniqueness of the solution of the integral equations driven by fractional Brownian motion, Liet. mat. rink., 40 (Spec. Iss.) (2000), 104-110.
[12] K. Kubilius, D. Melichov, Quadratic variations and estimation of the Hurst index of the solution of SDE driven by a fractional Brownian motion, Lithuanian Mathematical Journal, 50(4) (2010), 401-417.
[13] K. Kubilius, V. Skorniakov, D. Melichov, Estimation of parameters of SDE driven by fractional Brownian motion with polynomial drift, Arxiv.
[14] K. Kubilius, D. Melichov, On comparison of the estimators of the Hurst index of the solutions of stochastic differential equations driven by the fractional Brownian motion, Informatica, 22(1) (2011), 97-114.
[15] K. Kubilius, Y. Mishura, The rate of convergence of estimate for Hurst index of fractional Brownian motion involved into stochastic differential equation, Stochastic Processes and their Applications, 122(11) (2012), 3718-3739.
[16] T. Lyons, Differential equations driven by rough signals (I): An extension of an inequality of L.C. Young, Mathematical Research Letters, 1 (1994), 451-464.
[17] T. Lyons, M. Caruana, T. Lévy, Differential equations driven by rough paths, Ecole d'Eté de Probabilités de Saint-Flour XXXIV-2004 (J. Picard, ed.), Lecture Notes in Math., vol. 1908, Springer, Berlin, 2007.
| [] |
[
"NOTES ON FINITE TOTALLY 2-CLOSED PERMUTATION GROUPS",
] | [
"Chen Gang ",
"Qing Ren "
] | [] | [] | Let N be a normal subgroup of a finite group G. For a faithful N-set ∆, applying the universal embedding theorem one can construct a faithful G-set Ω. In this short note, it is proved that if the 2-closure of N in Ω is equal to N, then the 2-closure of N in ∆ is also equal to N; in addition, it is proved that any abelian normal subgroup of a finite totally 2-closed group is cyclic; finally, it is proved that if a finite nilpotent group G is a direct product of two nilpotent subgroups of coprime orders, both of which are totally 2-closed, then G is totally 2-closed. As corollaries, several well-known results on finite totally 2-closed groups are reproved in simpler ways. | null | [
"https://arxiv.org/pdf/2202.09765v2.pdf"
] | 247,011,387 | 2202.09765 | 1071f7d078c1318a24b9103a2856dd23f941ca99 |
NOTES ON FINITE TOTALLY 2-CLOSED PERMUTATION GROUPS
22 Feb 2022
Gang Chen
Qing Ren
arXiv:2202.09765v2 [math.GR]. Keywords: totally 2-closed group, 2-closure, normal abelian subgroup. MSC Classification: 20B05, 20D10, 20D25.
Let N be a normal subgroup of a finite group G. For a faithful N-set ∆, applying the universal embedding theorem one can construct a faithful G-set Ω. In this short note, it is proved that if the 2-closure of N in Ω is equal to N, then the 2-closure of N in ∆ is also equal to N; in addition, it is proved that any abelian normal subgroup of a finite totally 2-closed group is cyclic; finally, it is proved that if a finite nilpotent group is the direct product of two totally 2-closed subgroups of coprime orders, then it is itself totally 2-closed. As corollaries, several well-known results on finite totally 2-closed groups are reproved in simpler ways.
Introduction
Let G be a finite group acting faithfully on a finite set Ω. For a positive integer k, G acts naturally on the Cartesian product Ω^k := Ω × · · · × Ω. In 1969, Wielandt [10, Definition 5.3] introduced the k-closure G^{(k),Ω} of G ≤ Sym(Ω), defined as the largest subgroup of Sym(Ω) leaving each orbit of G on Ω^k invariant. A finite group G is said to be totally k-closed if G = G^{(k),Ω} for every faithful G-set Ω.
In recent years, finite totally k-closed groups have been studied in several papers. In [1], it is proved that the center of every finite totally 2-closed group is cyclic; and finite nilpotent totally 2-closed groups are characterized. In [5], for a finite abelian group G, the minimal positive integer k for which G is totally k-closed is given. Actually, the minimal positive integer k is proved to be 1 plus the number of invariant factors of G. In [3], it is proved that finite soluble totally 2-closed groups must be nilpotent; the Fitting subgroup of a totally 2-closed group is shown to be totally 2-closed. In [2], the nontrivial finite totally 2-closed groups with trivial Fitting subgroups are classified.
In this short note, we will focus on finite totally 2-closed groups. Our first result, Theorem A, is a property of 2-closures; the second result, Theorem B, is a property of finite totally 2-closed groups; and the third result, Theorem C, concerns the construction of finite nilpotent totally 2-closed groups. Applying our main results, several results on finite totally 2-closed groups will be reproved in self-contained and simpler ways.
Let G be a finite group and N a normal subgroup of G. Assume ∆ is a faithful N-set. By the universal embedding theorem (see Theorem 3.1), we may construct a faithful G-set Ω. Our first result is the following:
Theorem A. Keep the above notation. If N^{(2),Ω} = N, then N^{(2),∆} = N.
Theorem A captures the common features of the Fitting subgroup and of the centralizer of any normal subgroup in a finite totally 2-closed group. These subgroups are proved to be totally 2-closed in a totally 2-closed group; see Corollary 3.1.
Based on Theorem A and the known result that every finite totally 2-closed abelian group is cyclic, we have the second main result of this paper.
Theorem B. Every normal abelian subgroup of a finite totally 2-closed group is cyclic.
Since the classification of finite p-groups in which every normal abelian subgroup is cyclic is well known, applying this classification it is proved that a finite nilpotent totally 2-closed group is either cyclic or a direct product of a generalized quaternion group with a cyclic group of odd order.
Additionally, based on the known result that the 2-closure of any finite nilpotent permutation group is still nilpotent, we have the following result, which is the third main result of the paper.
Theorem C. Let G = H × K be a finite nilpotent group with (|H|, |K|) = 1 and both H and K totally 2-closed. Then G is totally 2-closed.
By Theorem B and Theorem C, the characterization of finite nilpotent totally 2-closed groups in [1] is reproved in a simpler way.
In the final section of this paper, we discuss some known results on 2-closures of permutation groups in contrast with results on coherent configurations.
Notation. Throughout the paper, Ω denotes a finite set and the symmetric group of Ω is denoted by Sym(Ω). The Cartesian product Ω × Ω is written as Ω^2.
All groups are assumed to be finite nontrivial groups; if a finite group G acts on Ω faithfully, Ω is also called a faithful G-set.
If a group G acts on Ω, the image of α ∈ Ω under the action of g ∈ G is denoted by α^g.
The center of a group G is denoted by Z(G), the centralizer of a subgroup N by C_G(N), and the Fitting subgroup of G by F(G).
If H is a subgroup of a group G, the core of H in G is denoted by H_G, i.e., H_G is the intersection of all conjugates of H in G.
For a prime p, the finite p-group M_{p^{n+1}} is defined as

M_{p^{n+1}} = ⟨a, b | a^{p^n} = b^p = 1, b^{-1}ab = a^{1+p^{n-1}}⟩.
The cyclic group of order n is denoted by C_n.
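The defining relation of M_{p^{n+1}} is consistent only because, for n ≥ 2, the map a ↦ a^{1+p^{n-1}} is an automorphism of ⟨a⟩ ≅ C_{p^n} of order exactly p, i.e. (1 + p^{n-1})^p ≡ 1 (mod p^n) while 1 + p^{n-1} ≢ 1 (mod p^n). A quick sanity check of this congruence (our own illustration, not part of the paper):

```python
# Verify that conjugation by b in the presentation of M_{p^(n+1)} is well
# defined: the exponent 1 + p^(n-1) must have multiplicative order exactly p
# modulo p^n, so that <a> admits an automorphism of order p.
def order_is_p(p, n):
    e = 1 + p ** (n - 1)
    return pow(e, p, p ** n) == 1 and e % p ** n != 1

for p in (2, 3, 5, 7):
    for n in (2, 3, 4):
        assert order_is_p(p, n), (p, n)
print("exponent 1 + p^(n-1) has order p mod p^n for all tested cases")
```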
Preliminaries
Let G be a finite group and Ω a faithful G-set. The 2-closure of G is defined as the largest subgroup of Sym(Ω) that leaves every orbit of G on Ω^2 invariant.

Lemma 2.1 ([10, Theorem 5.6]). Let Ω be a faithful G-set. Then x ∈ G^{(2),Ω} if and only if for any α, β ∈ Ω there exists g ∈ G such that (α, β)^x = (α, β)^g.
Note that in the above lemma, for a fixed element x ∈ G^{(2),Ω}, the element g depends on the choice of the pair (α, β) ∈ Ω^2. Also, it is easily seen that G^{(2),Ω} contains G as a subgroup.
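For tiny Ω, Lemma 2.1 reduces membership in G^{(2),Ω} to a finite check, so the 2-closure can be computed by brute force over Sym(Ω). The following sketch (our own illustration; the group and action are chosen by us) confirms that the cyclic group of order 6, acting faithfully on 5 points as ⟨(0 1)(2 3 4)⟩, coincides with its 2-closure in this particular representation — total 2-closedness would require the same for every faithful representation:

```python
from itertools import permutations, product

def generate(gens, n):
    """Subgroup of Sym(n) generated by gens (permutations as image tuples)."""
    group = {tuple(range(n))} | set(gens)
    while True:
        new = {tuple(g[h[i]] for i in range(n))
               for g in group for h in group} - group
        if not new:
            return group
        group |= new

def two_closure(group, n):
    """Brute-force 2-closure via Lemma 2.1: s lies in the closure iff it
    sends every pair (a, b) into the G-orbit of (a, b) on Omega^2."""
    orbit = {(a, b): {(g[a], g[b]) for g in group}
             for a, b in product(range(n), repeat=2)}
    return {s for s in permutations(range(n))
            if all((s[a], s[b]) in orbit[a, b]
                   for a, b in product(range(n), repeat=2))}

# C_6 acting faithfully on 5 points as (0 1)(2 3 4), written as an image tuple.
G = generate([(1, 0, 3, 4, 2)], 5)
assert len(G) == 6
closure = two_closure(G, 5)
assert closure == G  # this representation of C_6 equals its own 2-closure
print(len(G), len(closure))
```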
Lemma 2.2 ([1, Lemma 2.2]). Let G ≤ Sym(Ω) and H ≤ Sym(Γ), and let (θ, λ) be a permutation isomorphism from (G, Ω) to (H, Γ). Then (θ, λ) can be extended to a permutation isomorphism (θ̄, λ̄) from (G^{(2),Ω}, Ω) to (H^{(2),Γ}, Γ), where λ̄ = λ and the restriction of θ̄ to G is equal to θ.
Lemma 2.3 ([1, Lemma 2.1], [3, Lemma 1.1]). Let G be a finite totally 2-closed group and N a subgroup of G. Then C_G(N)^{(2),Ω} = C_G(N) for any finite faithful G-set Ω.
The next lemma, Lemma 2.4, is a special case of [5, Lemma 3.2]. For completeness, we give a proof.

Proof of Lemma 2.4. Let Ω = {1, 2, . . . , 2n + m}. Choose two permutations in Sym(Ω) in the following way:
x 1 = (1 . . . n)(n + 1, . . . , 2n), x 2 = (n, n − 1, . . . , 1)(2n + 1, . . . , 2n + m).
One can easily see that the subgroup H = ⟨x_1, x_2⟩ of Sym(Ω) is isomorphic to the direct product C_n × C_m. However, the permutation (1 . . . n) belongs to H^{(2),Ω} but not to H. The lemma then follows.

Proof of Lemma 2.6. It suffices to show that for every prime divisor p of |G|, the Sylow p-subgroup S_p of G is cyclic.
Suppose on the contrary that the Sylow p-subgroup S_p of G is not cyclic for some prime divisor p of |G|. Then there exist positive integers e_1 ≥ e_2 and a subgroup K of G such that G = (C_{p^{e_1}} × C_{p^{e_2}}) × K. By Lemma 2.5, C_{p^{e_1}} × C_{p^{e_2}} must be totally 2-closed. This contradicts Lemma 2.4.

The following lemma is based on the fact that, as permutation groups, cyclic groups of prime power order and generalized quaternion groups have trivial one-point stabilizers.
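The construction in the proof of Lemma 2.4 can be checked by brute force for the smallest admissible parameters n = m = 2 (our own illustrative script, with points labelled 0, ..., 5): the transposition of the first two points preserves every H-orbit on Ω^2, hence lies in H^{(2),Ω} by Lemma 2.1, but not in H.

```python
from itertools import product

N = 6                                   # |Omega| = 2n + m with n = m = 2

def compose(g, h):                      # apply h first, then g
    return tuple(g[h[i]] for i in range(N))

x1 = (1, 0, 3, 2, 4, 5)                 # (0 1)(2 3)
x2 = (1, 0, 2, 3, 5, 4)                 # (0 1)(4 5)
e = tuple(range(N))
H = {e, x1, x2, compose(x1, x2)}        # H = <x1, x2> ~ C_2 x C_2

def in_two_closure(s):
    """Lemma 2.1: (a, b)^s must lie in the H-orbit of (a, b) for all pairs."""
    return all((s[a], s[b]) in {(h[a], h[b]) for h in H}
               for a, b in product(range(N), repeat=2))

t = (1, 0, 2, 3, 4, 5)                  # the transposition (0 1)
assert t not in H and in_two_closure(t)
print("(0 1) lies in the 2-closure of H but not in H")
```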
Main Results
The following result plays a key role in the whole theory; its full statement is given as Theorem 3.1 below. In its notation, f_x(u) = t_u x t_{uψ(x)}^{-1} for all u ∈ K. Then φ(x) := (f_x, ψ(x)) defines an embedding φ of G into the wreath product N ≀ K. Furthermore, if N acts faithfully on a set ∆, then G acts faithfully on Ω := ∆ × K by the rule (δ, k)^x = (δ^{f_x(k)}, kψ(x)).
In the first paragraph on page 9 of [3], it was shown that the action of N on Ω is permutation isomorphic to the natural action of N on a disjoint union of |G : N| copies of ∆. Here, for each g ∈ K, the corresponding copy may be denoted by ∆_g := {(δ, g) : δ ∈ ∆}, and the permutation isomorphism is given by
(θ_g, λ_g) : (N, ∆_g) → (N, ∆),  θ_g : n ↦ t_g n t_g^{-1},  λ_g : (δ, g) ↦ δ.

Then N acts on Ω in the following way: for any δ ∈ ∆, g ∈ K, and x ∈ N, (δ, g)^x = (δ^{θ_g(x)}, g).
By Lemma 2.2, one can obtain a permutation isomorphism (µ_g, λ_g) : (N^{(2),∆_g}, ∆_g) → (N^{(2),∆}, ∆) for each g ∈ K. Also, the restriction of µ_g to N is equal to θ_g. Furthermore, N^{(2),∆_g} acts faithfully on Ω in the following way:

(δ, g)^x = (δ^{µ_g(x)}, g),  for all δ ∈ ∆, g ∈ K, and x ∈ N^{(2),∆_g}.
For any x ∈ N^{(2),∆}, denote the image of x in Sym(Ω) by x̃. For any (δ, g) ∈ Ω,

(δ, g)^{x̃} = (δ, g)^{µ_g^{-1}(x)} = (δ^x, g).   (3.1)
The following theorem, which is Theorem A in this paper, is a generalization of [3, Proposition 3.6].
Theorem 3.2. In the context of Theorem 3.1, if we keep the above notation and assume that N^{(2),Ω} = N, then N^{(2),∆} = N.
Proof. It suffices to show that, as permutation groups on Ω, the cardinality of the image of N^{(2),∆} in Sym(Ω) is not greater than that of N.
For any x ∈ N^{(2),∆} and any (δ_1, g_1), (δ_2, g_2) ∈ Ω, there exists n ∈ N such that (δ_1, δ_2)^x = (δ_1, δ_2)^n. Denote the images of x and n in Sym(Ω) by x̃ and ñ, respectively. By formula (3.1), one can see that

((δ_1, g_1), (δ_2, g_2))^{x̃} = ((δ_1^x, g_1), (δ_2^x, g_2)) = ((δ_1^n, g_1), (δ_2^n, g_2)) = ((δ_1, g_1), (δ_2, g_2))^{ñ}.

We conclude that x̃ ∈ N^{(2),Ω} = N. Thus, the cardinality of the image of N^{(2),∆} is not greater than that of N, as required.

Proof of Lemma 3.1. By Corollary 3.1, the center is totally 2-closed and thus is cyclic by Lemma 2.6.
The following result is the second main result, Theorem B, in this paper.
Theorem 3.3. Let G be a finite totally 2-closed group and N a normal abelian subgroup of G. Then N is cyclic.
Proof. By Corollary 3.1, C_G(N) is totally 2-closed. Thus the center of C_G(N) is cyclic by Lemma 3.1. Note that N is a central subgroup of C_G(N). It then follows that N is cyclic.
Theorem 3.4. Let p be a prime and G a finite totally 2-closed nontrivial p-group. Then either G is cyclic or G is a generalized quaternion group.
Proof. Assume that G is a finite totally 2-closed p-group and that G is not cyclic.
By Theorem 3.3, every normal abelian subgroup of G is cyclic. By Lemma 2.7, we see that p = 2 and G has a cyclic subgroup of index 2. By [4, Theorem 1.2], G is either a dihedral group, a generalized quaternion group, M_{2^{n+1}}, or a semidihedral group.
However, except for the generalized quaternion group, the other three groups are semidirect products of two proper subgroups. By Lemma 2.5, if G is equal to a semidirect product of two proper subgroups, then G must be the direct product of these two proper subgroups, which is a contradiction. We thus conclude that G is a generalized quaternion group.
As a consequence of Lemma 2.5 and Theorem 3.4, we obtain the following.
Corollary 3.2. If G is a finite nilpotent totally 2-closed group, then G is cyclic or a direct product of a generalized quaternion group with a cyclic group of odd order.
The following theorem is the third main result, Theorem C, of this paper.

Proof of Theorem 3.5. Let Ω be a faithful G-set. By Lemma 2.9, G^{(2),Ω} is nilpotent. We may write G^{(2),Ω} = J × L, where (|J|, |L|) = 1, H ≤ J, and K ≤ L.
Set j := |J| and l := |L|. For any x ∈ G^{(2),Ω} and any (α, β) ∈ Ω^2, there exists g ∈ G such that (α, β)^x = (α, β)^g by Lemma 2.1. By the assumption, one can write x = yz with y ∈ J and z ∈ L, and g = hk with h ∈ H and k ∈ K. Thus,

(α, β)^{x^l} = (α, β)^{g^l}  ⟹  (α, β)^{y^l} = (α, β)^{h^l}.
This implies that y^l ∈ H^{(2),Ω} by Lemma 2.1. As we are assuming that H is totally 2-closed, y^l ∈ H. In addition, the fact that (l, |H|) = 1 yields that y ∈ H. Similarly, one can show that z ∈ K.
It follows that x = yz ∈ HK = G and therefore G^{(2),Ω} = G. Since this is true for any faithful G-set, G is totally 2-closed.

Corollary 3.3. Let G be a finite cyclic group or a direct product of a generalized quaternion group with a cyclic group of odd order. Then G is totally 2-closed.
Proof. By the assumption we may decompose G into a direct product of its Sylow subgroups, where each Sylow subgroup is totally 2-closed by Lemma 2.8. Thus, G is totally 2-closed by Theorem 3.5.
As consequences of Corollaries 3.2 and 3.3, we obtain Corollary 3.4 below.
Concluding Remarks
Let G ≤ Sym(Ω). Then the action of G on Ω^2 generates a combinatorial structure, the coherent configuration associated with G, denoted by Inv(G). The 2-closure of (G, Ω) is then the automorphism group of this coherent configuration; in other words, G^{(2),Ω} = Aut(Inv(G)), see [6, Definition 2.2.14].
Several known results on 2-closures of finite permutation groups have counterparts in the theory of coherent configurations. For example, Theorem 3 in [3] corresponds to Theorem 3.2.21 and Theorem 3.4.6 in [6], and Corollary 3.2 in [3] corresponds to Exercise 2.7.17(4) in [6]. Finally, Lemma 1.4 in [3] (or Lemma 2.9 in [1]) is a consequence of Theorem 3.2.5 in [6]. More precisely, if G_i ≤ Sym(Ω_i) and Ω is the disjoint union of the Ω_i, i = 1, . . . , n, then G = G_1 × · · · × G_n acts faithfully on Ω and, by the cited theorem, G^{(2),Ω} = G_1^{(2),Ω_1} × · · · × G_n^{(2),Ω_n}.
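The product formula just cited can be illustrated by brute force on a toy example (our own script, not from the paper): for two copies of C_2 acting on disjoint 2-point sets, the 2-closure of the product on the 4-point union is the product of the factors' 2-closures.

```python
from itertools import permutations, product

def two_closure(group, n):
    """Brute-force 2-closure of a subgroup of Sym(n), via the orbit test."""
    orbit = {(a, b): {(g[a], g[b]) for g in group}
             for a, b in product(range(n), repeat=2)}
    return {s for s in permutations(range(n))
            if all((s[a], s[b]) in orbit[a, b]
                   for a, b in product(range(n), repeat=2))}

G1 = {(0, 1), (1, 0)}                   # C_2 on {0, 1}
G2 = {(0, 1), (1, 0)}                   # C_2 on a second pair of points

def join(a, b):                         # act on the disjoint union {0,1} u {2,3}
    return tuple(a) + tuple(x + 2 for x in b)

G = {join(a, b) for a in G1 for b in G2}
c1, c2, c = two_closure(G1, 2), two_closure(G2, 2), two_closure(G, 4)
assert c == {join(a, b) for a in c1 for b in c2}  # (G1 x G2)^(2) = G1^(2) x G2^(2)
print(len(c1), len(c2), len(c))
```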
Lemma 2.4. Let m, n be positive integers with m divisible by n. Then C_n × C_m is not totally 2-closed.
Lemma 2.5 ([3, Lemma 2.1]). Let G = HK be a totally 2-closed group, where H and K are proper subgroups of G. If H_G ∩ K_G = 1, then G = H_G × K_G. Furthermore, H_G and K_G are totally 2-closed. In particular, if H ∩ K = 1, then G = H × K and both H and K are totally 2-closed.

Lemma 2.6 ([1, Theorem 3]). If G is a finite totally 2-closed abelian group, then G is cyclic.
Lemma 2.7 ([8, Chap. 3, Theorem 7.6]). Let p be a prime and G a finite nontrivial p-group such that every normal abelian subgroup of G is cyclic. Then (i) if p > 2, then G is cyclic; (ii) if p = 2, then G has a cyclic subgroup of index 2.
Lemma 2.8. Finite cyclic groups of prime power order and generalized quaternion groups are totally 2-closed.

Lemma 2.9 ([3, Corollary 3.4]). Let G ≤ Sym(Ω). Then G is nilpotent if and only if G^{(2),Ω} is nilpotent.
Theorem 3.1 (Universal embedding theorem, [7, Theorem 2.6A]). Let G be an arbitrary group with a normal subgroup N and put K := G/N. Let ψ : G → K be a homomorphism of G onto K with kernel N. Let T := {t_u | u ∈ K} be a set of right coset representatives of N in G such that ψ(t_u) = u for each u ∈ K. Let x ∈ G and let f_x : K → N be the map with f_x(u) = t_u x t_{uψ(x)}^{-1} for all u ∈ K. Then φ(x) := (f_x, ψ(x)) defines an embedding φ of G into the wreath product N ≀ K. Furthermore, if N acts faithfully on a set ∆, then G acts faithfully on Ω := ∆ × K by the rule (δ, k)^x = (δ^{f_x(k)}, kψ(x)).
Corollary 3.1. Let G be a finite totally 2-closed group and N a normal subgroup of G. Then C_G(N), Z(G), and F(G) are totally 2-closed groups.

Proof. The corollary is a consequence of Lemma 2.
Lemma 3.1. The center of every totally 2-closed group is cyclic.
Theorem 3.5. Let G = H × K be a finite nilpotent group with (|H|, |K|) = 1 and both H and K totally 2-closed. Then G is totally 2-closed.
Corollary 3.4 ([1, Theorem 2]). A finite nilpotent group is totally 2-closed if and only if it is cyclic or a direct product of a generalized quaternion group with a cyclic group of odd order.
Acknowledgements. The authors would like to thank the referee for valuable comments.
[1] A. Abdollahi and M. Arezoomand, Finite nilpotent groups that coincide with their 2-closures in all of their faithful permutation representations, J. Algebra Appl. 17(4) (2018), 1850065.
[2] M. Arezoomand, M. A. Iranmanesh, C. E. Praeger and G. Tracey, Totally 2-closed finite groups with trivial Fitting subgroup, arXiv preprint arXiv:2111.02253, 2021.
[3] A. Abdollahi, M. Arezoomand and G. Tracey, On finite totally 2-closed groups, arXiv preprint arXiv:2001.09597, 2021.
[4] Y. Berkovich, Groups of Prime Power Order, Vol. 1, Expositions in Mathematics 46, Walter de Gruyter, Berlin, 2008.
[5] D. Churikov and C. E. Praeger, Finite totally k-closed groups, Tr. Inst. Mat. Mekh. 27(1) (2021), 240-246.
[6] G. Chen and I. Ponomarenko, Coherent Configurations, Central China Normal University Press, Wuhan, 2019.
[7] J. D. Dixon and B. Mortimer, Permutation Groups, Springer, New York, 1996.
[8] B. Huppert, Endliche Gruppen I, Die Grundlehren der Mathematischen Wissenschaften, Band 134, Springer-Verlag, Berlin-New York, 1967.
[9] H. Wielandt, Finite Permutation Groups, Academic Press, New York, 1964.
[10] H. W. Wielandt, Permutation groups through invariant relations and invariant functions, Lecture Notes, Ohio State University, 1969. Also published in: Helmut Wielandt, Mathematische Werke (Mathematical Works), Vol. 1: Group Theory, Walter de Gruyter & Co., Berlin, 1994, pp. 237-296.
School of Mathematics and Statistics, and Hubei Key Laboratory of Mathematical Sciences, Central China Normal University, P.O. Box 71010, Wuhan 430079, P. R. China. Email addresses: [email protected], [email protected].
| [] |
[
"Artificial boundary conditions for stationary Navier-Stokes flows past bodies in the half-plane",
"Artificial boundary conditions for stationary Navier-Stokes flows past bodies in the half-plane"
] | [
"Christoph Boeckle [email protected] ",
"Peter Wittwer [email protected] ",
"\nTheoretical Physics Department\nUniversity of Geneva\nSwitzerland\n",
"\nUniversity of Geneva\nSwitzerland\n"
] | [
"Theoretical Physics Department\nUniversity of Geneva\nSwitzerland",
"University of Geneva\nSwitzerland"
] | [] | We discuss artificial boundary conditions for stationary Navier-Stokes flows past bodies in the half-plane, for a range of low Reynolds numbers. When truncating the half-plane to a finite domain for numerical purposes, artificial boundaries appear. We present an explicit Dirichlet condition for the velocity at these boundaries in terms of an asymptotic expansion for the solution to the problem. We show a substantial increase in accuracy of the computed values for drag and lift when compared with results for traditional boundary conditions. We also analyze the qualitative behavior of the solutions in terms of the streamlines of the flow. The new boundary conditions are universal in the sense that they depend on a given body only through one constant, which can be determined in a feed-back loop as part of the solution process. | 10.1016/j.compfluid.2013.04.023 | [
"https://arxiv.org/pdf/1208.3648v1.pdf"
] | 14,737,945 | 1208.3648 | 3fbf6f7effd9717530d0e9acd06d4f63fec07871 |
Artificial boundary conditions for stationary Navier-Stokes flows past bodies in the half-plane
May 2, 2014
Christoph Boeckle [email protected]
Peter Wittwer [email protected]
Theoretical Physics Department
Theoretical Physics Department
University of Geneva
Switzerland
University of Geneva
Switzerland
Keywords: Navier-Stokes, exterior domain, fluid-structure interaction, computational fluid dynamics, artificial boundary conditions.
We discuss artificial boundary conditions for stationary Navier-Stokes flows past bodies in the half-plane, for a range of low Reynolds numbers. When truncating the half-plane to a finite domain for numerical purposes, artificial boundaries appear. We present an explicit Dirichlet condition for the velocity at these boundaries in terms of an asymptotic expansion for the solution to the problem. We show a substantial increase in accuracy of the computed values for drag and lift when compared with results for traditional boundary conditions. We also analyze the qualitative behavior of the solutions in terms of the streamlines of the flow. The new boundary conditions are universal in the sense that they depend on a given body only through one constant, which can be determined in a feed-back loop as part of the solution process.
Introduction
We numerically solve the Navier-Stokes equations for the flow past a body moving at constant velocity parallel to the boundary of a half-plane. We are particularly interested in the computation of the hydrodynamic forces acting on the body in the case where the body is small, and the flow is laminar.
We first introduce the viscous length scale,
ℓ_v = ν / u_∞ = ν / v_B ,   (1)
with ν the kinematic viscosity of the fluid, and u_∞ and v_B the magnitude of the velocity field at infinity (as viewed from the moving body) and the magnitude of the body's translation velocity (as viewed from the fluid at rest), respectively. In addition to this dynamic length, there are two geometrical lengths which are important in this problem: the body-center-to-wall (hereafter just "body-wall") distance d and the body size 2r, d > r. In terms of the viscous length, we define the Reynolds number Re as
Re = 2r v_B / ν = 2r / ℓ_v .   (2)
We shall keep to small but non-negligible values of the Reynolds number throughout this work, i.e., Re = 0.5, . . . , 25. In this regime, it is expected that the flow remains stationary and that viscous and inertial phenomena have similar importance, i.e., the flow is neither creeping nor turbulent.
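To fix ideas, Eqs. (1) and (2) can be evaluated for concrete values (our own illustrative numbers, not taken from the paper): in water, ν ≈ 10⁻⁶ m²/s, so a body of radius r = 0.5 mm translating at v_B = 1 cm/s has a viscous length of 10⁻⁴ m and Re = 10, inside the range considered here.

```python
# Illustrative evaluation of Eqs. (1)-(2); the numerical values are our own
# example, not data from the paper.
def viscous_length(nu, v_body):
    return nu / v_body                    # l_v = nu / v_B, Eq. (1)

def reynolds(r, v_body, nu):
    return 2.0 * r * v_body / nu          # Re = 2 r v_B / nu = 2 r / l_v, Eq. (2)

nu, v_body, r = 1.0e-6, 0.01, 0.5e-3      # water [m^2/s], speed [m/s], radius [m]
l_v = viscous_length(nu, v_body)
Re = reynolds(r, v_body, nu)
assert abs(Re - 2.0 * r / l_v) < 1e-12    # the two forms of Eq. (2) agree
assert 0.5 <= Re <= 25                    # within the regime studied here
print(f"l_v = {l_v:.1e} m, Re = {Re:.1f}")
```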
Since we truncate the unbounded domain to finite sub-domains for the numerical treatment, the question of boundary conditions at the resulting artificial boundaries arises. We show that, when compared to traditional methods of "open" boundary conditions (see for example [14]), a significant gain in accuracy in the computed values of drag and lift can be obtained when using the asymptotic expansion for the velocity field constructed in [2] as Dirichlet boundary conditions. Our method of adaptive boundary conditions is universal in the sense that it depends on the boundary conditions at the body surface and the shape of the body only through one constant. We mainly concentrate on drag and lift for the quantitative comparison of different boundary conditions because they are important quantities in engineering and theoretical work alike. Besides increased accuracy, the qualitative behavior of the flow within the computational domain is also improved with our boundary conditions in the sense that the streamlines are not significantly influenced by the artificial boundary, contrary to the cases where traditional boundary conditions are used.
We would like to emphasize that the aim of this paper is not to constitute a benchmark for the considered problem nor to achieve the highest possible precision, but rather to highlight the fundamental importance of the choice of boundary conditions when precision and qualitative correctness of the flow patterns are desired alongside a decrease in hardware requirements (especially in memory due to smaller computational domains). In order to make the results easily accessible and useful for applications, we use for the numerical implementation a widely used commercial code targeted for industrial applications, namely COMSOL Multiphysics. See [10,33] for recent research in CFD using COMSOL Multiphysics in low Reynolds number regimes.
The present work is part of an ongoing project in a bottom-up approach to problems with fluid-structure interaction, starting with the mathematical analysis of the equations and going all the way to numerical applications. Conceptually, we make heavy use of the equivalence between the present problem, which we shall refer to as the "body-problem", and a problem without a body, but with a force term of compact support, which we shall refer to as the "force-problem". For the mathematical treatment of the Oseen-linearized force-problem, see [21], for an existence theorem for the nonlinear force-problem see [22], and for the proof of uniqueness of solutions and the equivalence of the body-problem and the force-problem see [23]. A precise bound on the vorticity for the force-problem was derived in [3], which is the key input for the extraction, in [2], of the asymptotic expansion for the velocity field up to second order, which is used in the current work to define the adaptive boundary conditions. For a general introduction to the mathematical method used for the derivation of adaptive boundary conditions like the ones presented here, see [19].
This work is part of an ongoing effort to achieve higher numerical efficiency in the simulation of exterior problems. Adaptive boundary conditions of the type described here have been used with success for the problem in the full plane, see [4], [5] and [24], and the three dimensional case is discussed in [18]. In three dimensions, other approaches for the numerical treatment of motions near a wall at low Reynolds numbers have been put into practice. See for example [7], where the authors use what we shall refer to as classic boundary conditions, and [36], which involves what we shall call simple boundary conditions. Other types of artificial boundary conditions for incompressible viscous flows in exterior domains have been developed in the past for various cases. For the time-dependent Oseen-linearized Navier-Stokes flow in the full plane, artificial boundary conditions for the normal constraint were proposed in the form of differential equations in [16]. For the nonlinear Navier-Stokes equations in the full space, a boundary condition based on the leading asymptotic terms of the solution was obtained in [26]. For high Reynolds number (Re > 10 7 ) flows in aerodynamics (mainly compressible, but including the incompressible limit) artificial boundary conditions based on the Calderón-Ryaben'kii method involving difference potentials and pseudo-differential boundary operators has culminated in the work in [32], which includes numerical applications. Another artificial boundary condition based on a detailed mathematical analysis of bounded domains successively approximating an unbounded one is derived in [9] with error estimates given in [8]. A heuristic approach to artificial boundary conditions, the so-called "no-boundary" or "do nothing" boundary conditions, was pioneered in [25,27,20]. 
It consists in extending the governing equations to the artificial boundary ([15] showed that this method implicitly imposes the (p+1)-th derivative of the solution to vanish at a point close to the outflow, where p is the degree of the finite elements), but this approach was applied for domains where only the outflow is an artificial boundary, together with prescribed inflow, and its validity in the case of full exterior domains is unclear. Finally, in [17], a procedure based on the conservation of mass and vorticity considerations is used to extrapolate the radial velocity at the artificial boundaries for the problem of the flow past a body placed in a channel, with improved behavior of the streamlines at the outlet even in non-stationary cases.
Experimental work on single bodies moving slowly parallel to a wall has been a recurring topic for over fifty years, with works such as [13] and [1]. More recently, in [31] and [30], experiments were reported that show that the transverse force on small spheroidal bubbles moving close to a recipient wall appears to change sign for particular parameters of the flow, the bubble-fluid interface type and small deformations of the bubbles. Two types of bubble-fluid interfaces were studied, so-called "clean" and "contaminated" bubbles, which are modeled mathematically as "slip" and "noslip" boundary conditions, respectively. The authors suggest that the change in sign of the transverse force is due to two competing mechanisms. On the one hand, vorticity generated at the bubble surface is advected and diffused downstream, creating a wake whose symmetry is broken by the wall, resulting in a force pushing the bubble away from the wall. On the other hand, one expects that a contribution to the lift related to the Bernoulli effect would attract the bubble towards the wall. The question is which mechanism dominates. For clean bubbles, which generate less vorticity at the bubble-liquid interface than contaminated bubbles, the sign of the wall-induced lift changes at Re ≈ 35. The bubble shape is known to play a more important role than the Reynolds number for the loss of stability of the paths of rising bubbles [37]. The greater the aspect ratio between the axis which is perpendicular to the bubble's trajectory and the axis parallel to it, the sooner the instability arises, and this is suggested to be related to an increased vorticity generation on the bubble surface (see [37]). While this result concerns unsteady flows, we will nevertheless use this finding on the bubble shape, as well as the other experimental insights reported in this paragraph, to guide us in our own numerical work in Section 4.
We now introduce the basic mathematical notions for the current work. The Navier-Stokes equations in the time-dependent domain Ω^+ \ B(t), with Ω^+ = R × [0, ∞) and B(t) = {x ∈ Ω^+ | (x + v_B t)^2 + (y − d)^2 ≤ r^2}, are

∂_t U + U · ∇U + ∇P − ν∆U = 0 ,   (3)
∇ · U = 0 ,   (4)

where, with x = (x, y)^T, U = U(x, t) is the velocity field and P = P(x, t) the pressure field. The boundary conditions at the wall (placed at y = 0) and at infinity are

U|_{y=0} = 0 ,   (5)
lim_{|x|→∞} U = 0 ,   (6)
whereas on the body we may consider either noslip boundary conditions

U|_{∂B} = v_B = (−v_B, 0)^T ,

or slip boundary conditions

U · n|_{∂B} = 0 ,
t · [−P I + νD(u)] · n|_{∂B} = 0 ,

where n and t are respectively the normal and tangential unit vectors at the surface of the body, I is the identity matrix and D(u) = (∇u) + (∇u)^T. We are interested in solutions which are stationary when viewed in a reference frame comoving with the body. In terms of the velocity field u relative to the body, we have
U(x, t) = u(x − v_B t) + v_B ,   (7)

and u satisfies, in the time-independent domain Ω = Ω^+ \ B, with B = {x ∈ Ω^+ | x^2 + (y − d)^2 ≤ r^2}, the time-independent equations

u · ∇u + ∇p − ν∆u = 0 ,   (8)
∇ · u = 0 ,   (9)
with boundary conditions on the wall and at infinity

u|_{y=0} = u_∞ ,   (10)
lim_{|x|→∞} u = u_∞ ,   (11)

with u_∞ = −v_B = (v_B, 0)^T. In terms of u, the noslip boundary conditions on the body become
u|_{∂B} = 0 ,   (12)
and the slip boundary conditions become

u · n|_{∂B} = 0 ,   (13)
t · [−pI + νD(u)] · n|_{∂B} = 0 .   (14)
For the numerical treatment of these equations, we solve (8) and (9) in the bounded domain Ω̃ = Ω̃^+ \ B, where Ω̃^+ = {(x, y) ∈ (−l, l) × (0, l)} and l > d + r is an arbitrary truncation length. The choice to truncate the domain at equal lengths upstream and downstream is motivated by technical reasons; see Appendix B.
Other choices of domain can be considered, but we do not want to indulge here in questions of domain optimization. The truncation introduces artificial boundaries at x = ±l and y = l. The main focus here will be on the choice of boundary conditions on these boundaries, which will be discussed in Section 2. For convenience later on, we note that the drag and the lift on the body are given by
F = (F_D, F_L)^T, where

F = ∫_{∂B} (−pI + νD(u)) n ds .   (15)
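Once a discrete solution (u, p) is available on ∂B, formula (15) can be evaluated by quadrature along the circle. The sketch below (our own illustration, not the solver used in the paper) applies the midpoint rule and, as a consistency check, uses the hydrostatic field u = 0, p = −cy, for which the divergence theorem gives the exact force (0, cπr²):

```python
import math

def body_force(p, grad_u, d=2.0, r=0.5, nu=1.0, nseg=2000):
    """Midpoint-rule approximation of F = \\oint_{dB} (-p I + nu D(u)) n ds
    over the circle x^2 + (y - d)^2 = r^2, with D(u) = grad u + (grad u)^T."""
    Fx = Fy = 0.0
    ds = 2.0 * math.pi * r / nseg
    for k in range(nseg):
        th = 2.0 * math.pi * (k + 0.5) / nseg
        nx, ny = math.cos(th), math.sin(th)          # outward unit normal
        x, y = r * nx, d + r * ny                    # quadrature point on dB
        g = grad_u(x, y)                             # 2x2 matrix g[i][j] = du_i/dx_j
        d11, d22 = 2.0 * g[0][0], 2.0 * g[1][1]
        d12 = g[0][1] + g[1][0]
        pr = p(x, y)
        Fx += ((-pr + nu * d11) * nx + nu * d12 * ny) * ds
        Fy += (nu * d12 * nx + (-pr + nu * d22) * ny) * ds
    return Fx, Fy

# Consistency check: u = 0 and p = -c*y give F = (0, c*pi*r^2) exactly.
c, r = 3.0, 0.5
Fx, Fy = body_force(lambda x, y: -c * y,
                    lambda x, y: ((0.0, 0.0), (0.0, 0.0)), r=r)
assert abs(Fx) < 1e-9 and abs(Fy - c * math.pi * r * r) < 1e-6
print(Fx, Fy)
```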
The rest of this paper is organized as follows: in Section 2 we present the different boundary conditions which we will implement on the artificial boundaries, in Section 3 we discuss the numerical aspects of the work and in Section 4 we present results which validate the adaptive boundary conditions. In Section 5, we show that the theoretical behavior of the flow described in [23] is numerically verified in the range of simulations we have run. In Section 6 we present the hydrodynamic forces as a function of the body-wall distance for different sizes of the body. The appendix, finally, contains technical points concerning the adaptive boundary conditions.
Artificial boundaries
Theoretically, the correct way to treat the edges of a domain Ω̌ obtained by truncating the half-plane would be to use the solution of the original problem in the half-plane, evaluated along those edges, as a Dirichlet boundary condition. Of course, the solution of the original problem is unknown. One must therefore find boundary conditions which represent a good approximation to the solution of the original problem. We shall define and investigate three choices: simple boundary conditions (s.b.c.), classic (or open) boundary conditions (c.b.c.) and adaptive boundary conditions (a.b.c.), i.e., the new scheme which we propose here. More precisely:
• The s.b.c. simply prescribe u ∞ on all the artificial edges.
• The c.b.c. prescribe u ∞ on the upstream vertical boundary in order to fix the inflow, and impose (14) on the remaining artificial boundaries, allowing in-and outflow.
• The a.b.c. use expressions (16) and (17), which are based on the asymptotic expansion of the solution of the original problem, to prescribe Dirichlet boundary conditions. The s.b.c., while a reasonable starting point since they are in particular also used in the construction of weak solutions, are nevertheless problematic, as they do not allow fluid to cross the artificial boundary parallel to the wall, effectively turning the problem into a channel flow. This impacts flow-rate conservation in two problematic ways: first, the velocity of the fluid must increase artificially above and below the body, since the excess cannot be relieved by fluid exiting through the boundary parallel to the wall; second, the flow rate should in fact be lower in the truncated domain than in the flow without a body, yet imposing u_∞ at the inlet boundary (i.e., the upstream artificial boundary) prescribes the same flow rate as without a body. See [36] for an example where such boundary conditions are used in the three-dimensional version of the problem considered here. A recent work using these boundary conditions, albeit for a flow in the full plane around two side-by-side cylinders, is [34], where the authors ran simulations in domains measuring 750 by 500 cylinder radii to ensure that perturbations due to the boundary conditions are small enough.
The c.b.c. are mixed Dirichlet (pressure) and Neumann (velocity) boundary conditions, and are a standard feature of the COMSOL program. They have been used in problems with artificial boundaries, see for example [11] and [29] for the case of an outflow of a channel with a backward-facing step, or again as mentioned before, [7], for a three-dimensional implementation of the problem considered here. While the c.b.c. are less restrictive on the flow rate than the s.b.c., the inlet boundary conditions still prescribe a flow rate which is too large.
We now present our adaptive boundary conditions. As already mentioned, they are Dirichlet boundary conditions on the velocity, based on the asymptotic expansion of the solution of the original problem. Our adaptive boundary conditions are given by u* = (u*, v*), with

u*(x, y) = u_∞ [ 1 + c*_1 ϕ_1(x/y)/(y/ℓ_v)^{3/2} + c*_1 ϕ_{2,1}(x/y)/(y/ℓ_v)^2 + c*_2 ϕ_{2,2}(x/y)/(y/ℓ_v)^2
    − c*_1 η_1(ℓ_v x/y^2)/(y/ℓ_v)^2 − c*_1 η_2(ℓ_v x/y^2)/(y/ℓ_v)^3 ] ,  (16)

v*(x, y) = u_∞ [ c*_1 ψ_1(x/y)/(y/ℓ_v)^{3/2} + c*_1 ψ_{2,1}(x/y)/(y/ℓ_v)^2 + c*_2 ψ_{2,2}(x/y)/(y/ℓ_v)^2
    + c*_1 ω_1(ℓ_v x/y^2)/(y/ℓ_v)^3 + c*_1 ω_2(ℓ_v x/y^2)/(y/ℓ_v)^4 ] ,  (17)
where
ϕ_1(z) = −(1/(4√π)) (r + 1 − z^2 + zr + 2z) / (r^3 √(r + 1)) ,  (18)
ψ_1(z) = −(1/(4√π)) (r + 1 − z^2 − zr − 2z) / (r^3 √(r + 1)) ,  (19)
ϕ_{2,1}(z) = −(1/π) (2z / r^4) ,  (20)
ϕ_{2,2}(z) = (1/(2π)) ((1 − z^2) / r^4) ,  (21)
ψ_{2,1}(z) = −(1/π) ((1 − z^2) / r^4) ,  (22)
ψ_{2,2}(z) = −(1/(2π)) (2z / r^4) ,  (23)
where
r = √(1 + z^2) ,
and where
η_1(z) = η_W(z) ,  (24)
ω_1(z) = ω_W(z) ,  (25)
η_2(z) = η_B(z) − 2η_W(z) − 2z η′_W(z) ,  (26)
ω_2(z) = ω_B(z) − 3ω_W(z) − 2z ω′_W(z) ,  (27)

where the prime in (26) and (27) denotes the derivative, and
η_W(z) = −(1/(2√(π z^3))) e^{−1/(4z)} for z ≥ 0, and 0 for z < 0 ,  (28)
ω_W(z) = (1/(4√(π z^5))) (1 − 2z) e^{−1/(4z)} for z ≥ 0, and 0 for z < 0 ,  (29)
η_B(z) = −(1/(4π z^3)) [ 2z + √(π|z|) ( (1 − 2z) e^{−1/(4z)} − D(1/(2|z|^{1/2})) ) ] ,  (30)
ω_B(z) = (1/(8π z^4)) [ 2z(1 − 4z) + √(π|z|) ( (1 − 6z) e^{−1/(4z)} − D(1/(2|z|^{1/2})) ) ] ,  (31)
where
D(z) = e^{−z^2} erfi(z) for z ≥ 0, and e^{z^2} erf(z) for z < 0 ,

is the Dawson function, erf is the Gauss error function

erf z = (2/√π) ∫_0^z e^{−ζ^2} dζ ,
and erfi(z) = −i erf(iz) is the imaginary error function. For a derivation of the adaptive boundary conditions, see Appendix A. In particular, note that we set c*_2 = 0 throughout the present work. The adaptive boundary conditions therefore reference the characteristics of the body (size, position, shape, boundary conditions at the interface) only through the constant c*_1 ∈ R. This constant should equal the constant c_1 associated with the solution in the original unbounded domain, defined in [2]. A good approximation can be determined as part of the solution process, for example by the algorithm described in the next paragraph. In essence, our algorithm for determining c*_1 searches for the root of the function
g(x) = (1/n_1) ∫_Ω̌ ∇ · (T(u, p)V) dω − x ,  (32)
where
T(u, p) = −u ⊗ u + νD(u) − pI
is calculated with u and p taken from the numerical solution obtained with the a.b.c., used with the value c*_1 for which g(c*_1) = 0, where ⊗ represents the dyadic product (i.e., (a ⊗ b)_{ij} = a_i b_j), and

V = (√y χ_Ω̌(x, y), 0)^T ,

with χ_Ω̌(x, y) = χ_x(x) · χ_y(y) a user-defined differentiable cut-off function that cuts out a channel perpendicular to the wall centered on the body, and two strips, one adjacent to the wall and one adjacent to the artificial boundary parallel to the wall. The scalar n_1 in (32), given by
n 1 = ∞ 1 χ y (l/z) 3/2 v ϕ 1 (z) − ψ 1 (z) z + 2 2 v ϕ 2,1 (z) √ zl dz − ∞ v /l χ y ( v l/z) 2 v η 1 (z) ( v lz 3 ) 1/4 + 3 v η 2 (z) − η 2 (−z) ( 3 v l 3 z) 1/4 dz ,
is independent of the solution. Note that c*_1 and n_1 depend on l, and that for l → ∞ the function g(x) vanishes at x = c_1 (i.e., the exact constant associated with the solution of the original problem); see Appendix B for more details.
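As a numerical sanity check, the piecewise profile D entering the expansion above can be compared against SciPy's Dawson integral; note that the text's D uses a nonstandard normalization, differing from scipy.special.dawsn by a factor 2/√π on z ≥ 0. This is only an illustrative sketch, not part of the original computation:

```python
import numpy as np
from scipy import special

def D(z):
    """Piecewise profile from the text: D(z) = exp(-z^2) * erfi(z) for z >= 0,
    and D(z) = exp(z^2) * erf(z) for z < 0."""
    z = np.asarray(z, dtype=float)
    return np.where(z >= 0.0,
                    np.exp(-z**2) * special.erfi(z),
                    np.exp(z**2) * special.erf(z))

# SciPy's dawsn(z) = (sqrt(pi)/2) * exp(-z^2) * erfi(z), so for z >= 0 the
# text's D equals (2/sqrt(pi)) * dawsn(z).
z = np.linspace(0.0, 3.0, 50)
assert np.allclose(D(z), 2.0 / np.sqrt(np.pi) * special.dawsn(z))
```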
The root-finding algorithm, in our case Brent's method (see [6]), operates on a sequence of simulations, refining c*_1 at each step. Since Brent's method starts with the bisection method, we must first bracket the root. We do this by imposing the heuristically derived values c^{(i),*}_1 = 10 Re (i − 1) d with i = 1, 2, . . ., so that for the first run the simulation coincides with the one for the s.b.c. As soon as the root is bracketed, we run Brent's method until the desired tolerance is achieved.
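The bracket-then-Brent outer loop described above can be sketched as follows. In the actual scheme every evaluation of g costs one full Navier-Stokes solve; here a cheap monotone surrogate with a known (hypothetical) root at 2.5 stands in, and the trial sequence is schematic:

```python
import math
from scipy import optimize

def g(c1_star):
    """Stand-in for the residual g of Eq. (32).  In the real algorithm this
    evaluation runs one simulation with the a.b.c. using c1_star; here a
    smooth, monotone surrogate with root at c1 = 2.5 is used instead."""
    c1_true = 2.5  # hypothetical value of the constant
    return c1_true - c1_star + 0.1 * math.tanh(c1_star - c1_true)

# Bracket the root with an increasing sequence of trial values (schematic;
# the first trial c1* = 0 reproduces the s.b.c. run), then refine with Brent.
trials = [10.0 * i for i in range(10)]
lo = trials[0]
for hi in trials[1:]:
    if g(lo) * g(hi) < 0.0:
        break
    lo = hi
c1_star = optimize.brentq(g, lo, hi, xtol=1e-12)
```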
We finally discuss the regime of applicability of our adaptive boundary conditions with regard to the size l of the computational domain. Three conditions have to be respected. First, l_min has to be large enough for the asymptotic expansion to be evaluated in its domain of validity, i.e., y should be large enough for the ratio ℓ_v x/y^2 to be small. Thus, when x ∼ y (see Appendix A for a motivation why large x is also a valid region), we must have l_min = y ∼ C_1 ℓ_v. Second, the artificial boundary must lie beyond any standing eddies, which are not taken into account by our asymptotic expansion. On the basis of measurements by Gerrard (see [12, p. 362]), standing eddies typically extend up to three times the body diameter at Re ≈ 40, so we choose l_min = x ∼ C_2 r. Third, the wake should have interacted with the wall by the time it reaches the artificial boundary, because the asymptotic expansion is made under that assumption. Since the wake scales as ℓ_v x/y^2 and the body is at y = d, we must have l_min = x ∼ C_3 ℓ_v^{−1} d^2.
Collecting the requirements, one gets

l_min ∼ max{C_1 ℓ_v, C_2 r, C_3 ℓ_v^{−1} d^2} .  (33)
Of course, these requirements are qualitative only, since we have no way to estimate the constants C 1 , C 2 and C 3 . They do however explain the observation that the adaptive boundary conditions are particularly useful in an intermediate range of parameters, to be specified in what follows.
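The heuristic (33) is cheap to evaluate; a minimal sketch follows, with purely illustrative values for the unknown constants C_1, C_2, C_3 (C_2 = 3 loosely echoes Gerrard's standing-eddy measurements):

```python
def l_min(ell_v, r, d, C1=1.0, C2=3.0, C3=1.0):
    """Qualitative lower bound (33) on the truncation length.  The constants
    C1, C2, C3 are unknown; the defaults here are illustrative assumptions."""
    return max(C1 * ell_v, C2 * r, C3 * d**2 / ell_v)

# Validation setup of Section 4 (2r = 1, d = 1, Re = 1/ell_v):
# at Re = 25 the wake/wall-interaction term d^2/ell_v = 25 dominates,
# while at Re = 0.5 the viscous-length term ell_v = 2 does.
```

This makes explicit why both ends of the Reynolds-number range push toward larger domains.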
Numerics
To numerically solve (8)-(10) with all the different choices of boundary conditions, we use COMSOL Multiphysics 3.5a, controlled through a Matlab 2009a script. The linear system is solved using a direct linear solver, PARDISO, and the nonlinear solver is a damped Newton fixed point solver. The mesh elements are a mix of triangles and quadrilaterals of Lagrange type, of degree two for the velocity and degree one for the pressure. For all the simulations presented below, a workstation with 36 GB RAM and a 12-core processor was used. The algorithms mentioned above are features of the COMSOL program, and we refer to the product documentation for additional information.
Validation
The first set of numerical tests is used to validate the adaptive boundary conditions. For the sake of convenience, we have set 2r = 1 for all simulations of this validation run, so that the Reynolds number is given by the inverse of the viscous length. The body is at a fixed distance d = 1 from the wall. We show the effect of the domain size for all choices of boundary conditions on the computation of the drag and lift.
Mesh
We now present our method for generating meshes for the domain Ω̌. Figure 1 represents the coarsest and smallest mesh used, from which all others are constructed. The smallest mesh has a domain with truncation length l = 10 in the same units as the body size. We adopt the same philosophy as [17]: in order to minimize possible effects from one mesh to another due to element placement, we define a small rectangular zone (of size 10 by 5) around the body, meshed with a constant node distance h on the top, left and right boundaries, and using triangles placed according to an advancing front algorithm provided by COMSOL. Since the body size is fixed, this mesh is the same for all simulations. The rest of the domain is paved with strictly identical square elements with sides of length h. Increasing the domain size then consists in adding supplementary square elements of the same size, guaranteeing that the meshes always have the same structure regardless of domain size. The base mesh for l = 10 has h_0 = 1.25, 367 triangles and 96 squares, for a total of 2718 degrees of freedom. A step of mesh refinement simply consists in dividing all mesh cell lengths by a factor m_r = 2, effectively dividing each element into four identical smaller ones; the mesh sizes thus obey the recurrence relation h_i = h_{i−1}/m_r.
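Since one refinement step splits every element into four congruent copies, the element counts of the refined meshes follow directly from the base-mesh counts quoted above; a small sketch:

```python
def refine_counts(n_tri, n_quad, steps, m_r=2):
    """Element counts under uniform refinement: one step divides every cell
    length by m_r, splitting each element into m_r**2 congruent copies."""
    f = m_r**2
    return [(n_tri * f**i, n_quad * f**i) for i in range(steps + 1)]

# Base mesh of Figure 1: 367 triangles and 96 squares at h0 = 1.25.
counts = refine_counts(367, 96, steps=3)
```

(Degree-of-freedom counts grow more slowly than element counts, since refined elements share nodes; only element counts are reproduced here.)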
We compute simulations with mesh sizes h 0 to h 3 on domains with truncation lengths l = 10, 20, . . . , 90. All figures in this article representing the velocity fields are obtained from simulations computed on meshes with elements of size h 3 . Since the drag and the lift are given by integrals, we take advantage of the fact that each of our successively refined meshes is simply a subdivision of the preceding one and use a Richardson extrapolation scheme (see [28]) to accelerate convergence. We base our scheme on the hypothesis that the error of the drag and the lift (15) is of the form
I_{h_n} = I + C_2 h_n^2 + C_3 h_n^3 + C_4 h_n^4 + O(h_n^5) ,  (34)
with C i unknown constants. This yields the extrapolation formula
I_R = [ m_r^9 I_{h_3} − (m_r^7 + m_r^6 + m_r^5) I_{h_2} + (m_r^4 + m_r^3 + m_r^2) I_{h_1} − I_{h_0} ] / [ m_r^9 − (m_r^7 + m_r^6 + m_r^5) + (m_r^4 + m_r^3 + m_r^2) − 1 ] + O(h_0^5)
    = (512 I_{h_3} − 224 I_{h_2} + 28 I_{h_1} − I_{h_0}) / 315 + O(h_0^5) ,  (35)
for the Richardson extrapolate I_R of I. The error terms in (34) start at h_n^2 because the Lagrange elements are of degree two for the velocity and one for the pressure. Note that if our Ansatz (34) should miss intermediary terms of order h_n^m, m > 2, then they are not eliminated by the scheme, but only somewhat damped. Thus, the extrapolation scheme will always diminish the magnitude of the error, even though not necessarily to order O(h_0^5).
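The extrapolation formula (35) can be checked on synthetic data carrying exactly the error Ansatz (34); a sketch, where the coefficients C_2, C_3, C_4 below are arbitrary test values:

```python
def richardson(I_h0, I_h1, I_h2, I_h3, m_r=2):
    """Richardson extrapolate (35) from values on meshes h_i = h0 / m_r**i.
    Exact for errors of the form C2 h^2 + C3 h^3 + C4 h^4."""
    a = m_r**9
    b = m_r**7 + m_r**6 + m_r**5
    c = m_r**4 + m_r**3 + m_r**2
    return (a * I_h3 - b * I_h2 + c * I_h1 - I_h0) / (a - b + c - 1)

# Synthetic "integral" with exactly the assumed error terms, truncated at h^4:
I_exact, C2, C3, C4 = 1.0, 0.3, -0.2, 0.05
h = [1.25 / 2**i for i in range(4)]
vals = [I_exact + C2 * hi**2 + C3 * hi**3 + C4 * hi**4 for hi in h]
extrapolated = richardson(*vals)   # recovers I_exact up to rounding
```

For m_r = 2 the weights reduce to (512, −224, 28, −1)/315, as in (35), and the h^2, h^3 and h^4 contributions cancel identically.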
Results
Qualitative aspects
In a first step, we show that the choice of boundary conditions has an important impact on the quantities being calculated. To showcase this, we represent in Figure 2 the streamlines for the velocity field as seen from the body, i.e., the velocity field from which the constant flow u ∞ has been subtracted. In the case of the simple boundary conditions (s.b.c.), we observe an artificial backflow (moving clockwise on our figure). A very important proportion of the computational domain is thus used to compute this non-physical flow. In the case of the classic boundary conditions (c.b.c.), this backflow is not present, but the flow is still significantly influenced by the artificial boundaries, as can be seen by looking at the streamlines. The adaptive boundary conditions (a.b.c.) are the only ones that yield qualitatively satisfying flows all the way up to the boundary, and the streamlines exhibit almost no distortion near the boundary. Figure 3 presents the streamlines for Reynolds numbers Re = 0.5, 5, 10, 25.
We now discuss the qualitative behavior of the flow computed with the a.b.c. in more detail. The case of Re = 25 shows signs of artificial behavior at the right-hand boundary, which could be due to the fact that the asymptotic expansion ignores the actual position of the body. More precisely, the a.b.c. assume that the wake has already interacted with the wall upstream of the artificial boundary. The wake is asymptotically governed by the first order in the ℓ_v x/y^2 scaling of the asymptotic expansion, and larger Reynolds numbers therefore yield narrower wakes which interact with the wall farther downstream, see (33). In order to simulate higher-Re flows, one would have to bring the body closer to the wall, which would change the actual problem being solved, or use a larger domain, which was outside our computational power. This limitation to Re ≤ 25 is not so stringent, because in the absence of any wall, the flow behind a circular body becomes unsteady at Re ≈ 30, see [35, p. 150], and we do not expect the situation to be radically different in our problem.
The case Re = 0.5 also shows distortions in the streamlines, especially downstream. Since Re = ℓ_v^{−1} in the validation run, flows with Reynolds numbers smaller than unity need a large domain too, see (33). Indeed, since the wake scales like ℓ_v x/y^2, a low Reynolds number actually delays the distance after which the solution may be represented by the asymptotic expansion (which is obtained for y → ∞, i.e., ℓ_v x/y^2 → 0), so that the a.b.c. should be used in conjunction with larger numerical domains. As can be seen in Figure 4, increasing the domain size does indeed improve the behavior of the streamlines. This case is different from the large Reynolds number case, since the wake does interact with the wall, but the flow region which is represented is too small compared to the viscous length for the asymptotic expansion to be accurate (the flow would probably be more accurately represented under the Stokes approximation). It is nevertheless possible to simulate flows at lower Reynolds numbers, but one must respect particular scaling conditions. This procedure will be presented in Section 5, where we will investigate a sequence of simulations with progressively smaller bodies.
Quantitative analysis
We present in Figures 5-9 the drag and lift for a sequence of simulations with Reynolds numbers Re = 0.5, 1, 5, 10, 25, performed with the three boundary conditions. As the simulations with adaptive boundary conditions overall present the least variation with domain size, we use the largest simulation (l = 90) feasible on our workstation using these boundary conditions as a reference for the computation of relative errors. In Figures 10-11 we summarize the relative errors for the drag and lift obtained for a body with slip boundary conditions.
In Tables 1-2 we present the extrapolated values of the drag and lift for circular objects with noslip and slip boundary conditions at Reynolds numbers Re = 0.5, 1, 5, 10, 25. The extrapolation was done using formula (35) on the values obtained from simulations with mesh sizes h 0 to h 3 , for the largest domain (l = 90).
We observe that, for both boundary conditions on the body, the simple boundary conditions overestimate the values of the forces compared to the classic boundary conditions, which in turn overestimate the drag and lift compared to the adaptive boundary conditions, except for the drag in the case Re = 0.5 on the smallest domain. This was to be expected, since both the s.b.c. and c.b.c. impose a flow rate only appropriate in the absence of a body. In a bounded domain, this flow rate is higher than when there is a body, leading to an inevitable overestimation of the hydrodynamic forces. However, the modification of the flow rate due to the body decays as the inverse of the square root of the size of the domain, so that in the limit of a complete half-plane, the difference vanishes. This is different from the case of a body in the whole space, where the presence of a body imposes a non-vanishing modification of the flow rate (see [4] for the asymptotic expansion of the velocity field in this case). Nevertheless, in the case of truncated domains we show that the difference in flow rate implied by the various boundary conditions has a non-negligible effect.
Body shape
We next demonstrate the range of applicability of the a.b.c. All simulations were performed using a triple refinement of the mesh shown in Figure 12, obtained according to the advancing front algorithm provided by COMSOL (maximum triangle side is 1.0). We recall that one step of refinement is defined as the division of all triangles into four smaller ones of equal size. The characteristic body size is defined by the projection of the shape on the y-axis. We present four shapes in Figure 13: an ellipse with its large axis perpendicular to the flow of length 1.0 and height 2.0, a square of side 9/8, a "marmite" composed of half a circle and half an ellipse whose longer axis is twice the smaller one, of total height 1.0, and a "bone" of length 1 + √ 2 and height 1.0. These bodies serve to illustrate that the a.b.c. work well for shapes with concavities and asymmetries, as well as non-smooth boundaries, even though the theoretical developments are expressly derived for smooth bodies.
It is also possible to simulate collections of bodies. We tested arrangements of two and three circles, contained in a circular area of radius r = 1.0, see Figure 14. These two simulations were done on a smaller domain, with l = 10 (the mesh was generated in the same way as for the other shapes).
Conformance to expected theoretical behavior
A result of [23, Section 2.3] states that solutions of (8) and (9) tend to zero when the obstacle size tends to zero. This theoretical result may seem obvious at first, but because of the Stokes paradox this statement requires a proof. Since the FEM resolution scheme used by COMSOL is based on a Galerkin method, it is natural to test this result. In the present work we examine the relative velocity, so that the result shows that for vanishing body size, the flow tends to u_∞. Having validated our scheme, we exclusively use the adaptive boundary conditions from now on. In what follows, the viscous length is fixed for all simulations at ℓ_v = 1, all the bodies have noslip boundary conditions, and their sizes are chosen as 2r = 0.005, 0.01, 0.05, 0.1, 0.5, 1, 2. For reasons of computational tractability, and because the artificial boundary must be placed far enough away for the boundary conditions to be evaluated in the asymptotic regime, we choose the truncation length of the domain to be l = 20 for all simulations. This domain is already very large compared to the smallest bodies. Since we work with one fixed domain size only, the meshes are for simplicity chosen to be triangular pavements of the fluid domains, as given by COMSOL and similar to that of Figure 12. We however go as far as quadruple refinement (leading to approximately 3 million d.o.f.) and then apply an appropriate Richardson extrapolation, similar to (35). Meshing difficulties arise with smaller bodies, in the sense that the mesh elements become smaller near the body, thus posing a problem with regard to memory or mesh quality (ratio of the smallest to the largest element).
Following the proof in [23], we wish to provide evidence that the norm of the velocity gradient integrated over the fluid domain vanishes as the body disappears. The norm of the velocity gradient is given by
‖∇u‖ = √( (∂_x u)^2 + (∂_y u)^2 + (∂_x v)^2 + (∂_y v)^2 ) .  (36)
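The domain average of (36) can be approximated on a uniform grid with finite differences; the following sketch stands in for the FEM quadrature actually used in the computation:

```python
import numpy as np

def mean_grad_norm(u, v, dx, dy):
    """Domain average of the pointwise norm (36) of the velocity gradient,
    approximated with second-order finite differences on a uniform grid."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    norm = np.sqrt(du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2)
    return norm.mean()

# Sanity check on a linear shear u = y, v = 0, where the norm is exactly 1.
y, x = np.meshgrid(np.linspace(0, 1, 41), np.linspace(0, 1, 41), indexing="ij")
u, v = y.copy(), np.zeros_like(y)
avg = mean_grad_norm(u, v, dx=0.025, dy=0.025)
```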
For the sake of comparison, we also compute the norm of the velocity field as seen by the body, i.e.,
‖u − u_∞‖ = √( (u − u_∞)^2 + v^2 ) ,  (37)
and both norms are integrated over the whole computational domain and normalized by its surface. Figure 15 shows the evolution of these norms as a function of body diameter (2r). Over the range of body sizes that we investigated, the results suggest that the norm of the velocity gradient vanishes as |log(r)|^{−1} as r → 0. This investigation shows that, given the right choice of parameters, the adaptive boundary conditions may be used to simulate flows with smaller Reynolds number (in this run Re = 2r) than what was expected at first from the results of the validation. Indeed, here the body size is decreased and the viscous length remains the same (resulting in a decrease of the Reynolds number), and thus the a.b.c., which explicitly depend on ℓ_v, are always evaluated at some well-adapted distance regardless of body size, whose effect is only reflected through the value of c*_1. This is in contrast to the simulations presented in Section 4, where the body size was kept fixed and the Reynolds number (and thus the viscous length) changed, which would have required adapting the domain size appropriately for the a.b.c. to be useful.

6 Force as a function of wall distance
Mesh
In this investigation, we keep to the triangle-only family of meshes, similar to the ones already used in Sections 4.3 and 5. We choose a domain truncation length l = 40 for all simulations, and the most refined meshes are composed of almost 370'000 elements and approaching 1.7 million degrees of freedom.
Results
We only treat the case of bodies with slip boundary conditions because, according to [30], it is this type of body which is susceptible to experience a zero transverse force at a certain body-wall distance. We present the results of simulations for bodies with circular and elliptical shapes. For the elliptic bodies, we choose a low ratio of axes a_x/a_y, since this is the shape with the smallest lift for a given body-wall distance (see Figure 16). An ellipse with its longer axis perpendicular to the flow direction may also act as a simple model for bubbles in flows of Reynolds number Re ∼ O(100), where real bubbles flatten, see [37].
Again we tested the range of Reynolds numbers Re = 0.5, 1, 5, 10, 25 (obtained numerically by changing the dynamic viscosity ν), for the range d = 0.6, . . . , 2.5 (with increments in steps of 0.1). Figure 17 shows the hydrodynamic forces in the case of a circular body at Re = 1, and Figure 18 shows the same forces for the elliptic body with a_x/a_y = 0.1 at Re = 25. In none of these combinations of parameters do we observe a change of sign in the lift, in contrast to what is predicted by the experiments for the three-dimensional case.

Figure 18: Hydrodynamic forces versus body-wall distance, for an elliptical object with aspect ratio a_x/a_y = 0.1, at Re = 25.
Conclusion
We have presented simulations of flow around bodies in a half-plane for a wide range of parameters, including a range of three orders of magnitude of Reynolds numbers for some cases. We have shown that thanks to the adaptive boundary conditions, the dependence of the computed values for drag and lift on the size of the computational domain is drastically reduced, achieving accuracy better by one to two orders of magnitude when compared to simulations with simple or classic boundary conditions. Therefore, with the a.b.c. a given accuracy can be obtained on much smaller domains, thus bringing down the hardware requirements (CFD on a laptop). We have also shown a substantial qualitative improvement of the physical behavior of the flow, in the sense that the adaptive boundary conditions have a minimal influence on the streamlines, in particular close to the artificial boundaries.
The results obtained in the numerical simulations seem to indicate that the lift monotonically decreases with the distance to the wall, but no point where the transverse force vanishes could be found. It is an open question if this experimental result depends on dimensionality, on the exact geometry of the body or fluid container, or if it is the result of the flow being unsteady.
Finally, the adaptive boundary conditions have been shown to work well for a variety of bodies, including non-smooth shapes and collections of bodies.
A Deriving the asymptotic boundary condition
We show how to derive the adaptive boundary conditions (16) and (17). In [23] it was shown that the non-dimensional system

∂_x̃ ũ + ũ · ∇̃ũ + ∇̃p̃ − ∆̃ũ = 0 ,  (38)
∇̃ · ũ = 0 ,  (39)
in the domain Ω̃ = {(x̃, ỹ) ∈ R × [1, ∞)} \ B̃, is equivalent to the same problem without the body, but with some force term of compact support, in the sense that the solutions coincide outside some compact set containing the body. For the system without a body, an asymptotic expansion of the velocity field (with ỹ^{−1} playing the role of the small parameter) was obtained in [2], given by
ũ_as(x̃, ỹ) = c_1 ϕ_1(x̃/ỹ)/ỹ^{3/2} + c_1 ϕ_{2,1}(x̃/ỹ)/ỹ^2 + c_2 ϕ_{2,2}(x̃/ỹ)/ỹ^2 − c_1 η_W(x̃/ỹ^2)/ỹ^2 − c_1 η_B(x̃/ỹ^2)/ỹ^3 ,  (40)
ṽ_as(x̃, ỹ) = c_1 ψ_1(x̃/ỹ)/ỹ^{3/2} + c_1 ψ_{2,1}(x̃/ỹ)/ỹ^2 + c_2 ψ_{2,2}(x̃/ỹ)/ỹ^2 + c_1 ω_W(x̃/ỹ^2)/ỹ^3 + c_1 ω_B(x̃/ỹ^2)/ỹ^4 ,  (41)
where (x,ỹ) ∈ R × [1, ∞), and where the functions ϕ 1 , ϕ 2,1 , ϕ 2,2 , ψ 1 , ψ 2,1 , ψ 2,2 , η B , η W , ω B and ω W are as defined in (18)-(23) and (28)- (31). The constants c 1 and c 2 depend on the solution and are defined more precisely in [2]. The equations (8) and (9) are obtained from (38) and (39) by setting
ũ(x̃) = u_∞^{−1} u(x) − e_1 ,  (42)
(x̃, ỹ) = (ℓ_v^{−1} x, ℓ_v^{−1} y + 1) .  (43)
Inserting this into (40) and (41) we get

u_∞^{−1} u_as(x, y) = 1 + ũ_as(x/ℓ_v, 1 + y/ℓ_v)
  = 1 + c_1 ϕ_1(x̃/ỹ)/(1 + y/ℓ_v)^{3/2} + c_1 ϕ_{2,1}(x̃/ỹ)/(1 + y/ℓ_v)^2 + c_2 ϕ_{2,2}(x̃/ỹ)/(1 + y/ℓ_v)^2
    − c_1 η_W(x̃/ỹ^2)/(1 + y/ℓ_v)^2 − c_1 η_B(x̃/ỹ^2)/(1 + y/ℓ_v)^3 .  (44)
For y → ∞ we have, for example,

1/(1 + y/ℓ_v)^2 ∼ ℓ_v^2/y^2 − 2 ℓ_v^3/y^3 + . . .

and

η_W( (x/ℓ_v)/(1 + y/ℓ_v)^2 ) = η_W( (x/ℓ_v)/(y/ℓ_v)^2 · (1 + ℓ_v/y)^{−2} )
  ∼ η_W( (x/ℓ_v)/(y/ℓ_v)^2 − (2ℓ_v/y) (x/ℓ_v)/(y/ℓ_v)^2 + . . . )
  ∼ η_W(ℓ_v x/y^2) − (2ℓ_v/y) (ℓ_v x/y^2) η′_W(ℓ_v x/y^2) + . . . ,
where we have used the Taylor series of the function η_W for small ℓ_v x/y^2. Applying this to each term in (44), then discarding terms decaying faster than y^{−2} obtained from the x̃/ỹ scale and faster than y^{−3} obtained from the x̃/ỹ^2 scale, and remembering that any explicit x must be grouped with scaling-appropriate powers of y^{−1}, we get

u_∞^{−1} u_as(x, y) ∼ 1 + ũ_as(x/ℓ_v, y/ℓ_v) + 2 ℓ_v^3 (c_1/y^3) η_W(ℓ_v x/y^2) + 2 ℓ_v^3 (c_1/y^3) (ℓ_v x/y^2) η′_W(ℓ_v x/y^2) ,
and similarly, this time discarding terms decaying faster than y^{−4} obtained from the x̃/ỹ^2 scale,

u_∞^{−1} v_as(x, y) ∼ ṽ_as(x/ℓ_v, y/ℓ_v) − 3 ℓ_v^4 (c_1/y^4) ω_W(ℓ_v x/y^2) − 2 ℓ_v^4 (c_1/y^4) (ℓ_v x/y^2) ω′_W(ℓ_v x/y^2) .
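The elementary expansion used in this step (for the prefactor 1/(1 + y/ℓ_v)^2) can be verified symbolically; a sketch with SymPy, writing it in the small variable t = ℓ_v/y:

```python
import sympy as sp

t = sp.symbols("t", positive=True)  # t = ell_v / y, small as y -> infinity
# Prefactor 1/(1 + y/ell_v)^2 rewritten in t: t^2/(1 + t)^2.
expansion = sp.series(t**2 / (1 + t)**2, t, 0, 4).removeO()
# Leading terms (ell_v/y)^2 - 2 (ell_v/y)^3, as used in the resummation step.
assert sp.simplify(expansion - (t**2 - 2 * t**3)) == 0
```

The correction to the argument of η_W can be checked the same way by expanding t^2 (1 + t)^{-2} z for fixed z.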
This procedure is a resummation technique, which happens to yield a new expansion that respects the boundary condition (10) if we set c_2 = 0, i.e., lim_{y→0} u_as(x, y) = u_∞ for x ∈ R \ {0}.

Remark 1 It is the fact that the asymptotic expansions involve scale-invariant functions which makes this exchange of limits possible, allowing to extend the validity of the expansion, originally valid for large y only, also to large x. A detailed mathematical proof of this feature has however not been carried out yet. Expressions (16) and (17) now follow from these results, with the constants c*_1 and c*_2 replacing c_1 and c_2, since the domain is finite in the numerical implementation (see Appendix B for more details).
B Approximating the unknown constant of the asymptotic expansion
We motivate the choice of the function g given in (32). First, we must find an asymptotic description for the pressure, valid in the domain Ω. Using the Ansatz
p = −(1/2) u_E · u_E + ρ ,

where the index E means that we only retain functions with x/y as the argument in the asymptotic expansion of the velocity. Inserting this into (8) we get

u · ∇u − u_E · ∇u_E + ∇ρ − ν∆u = 0 ,

and then taking the divergence yields

∆ρ = −(∇u)^T : (∇u) + (∇u_E)^T : (∇u_E) ,
where " : " denotes the tensorial scalar product (i.e., A : B = Σ_{i,j} a_{ij} b_{ij}). The dominant terms on the r.h.s. are products of functions of x/y and x/y^2, or of x/y^2 and x/y^2, decaying at least as fast as y^{−11/2}. Since for large y the dominant terms on the l.h.s. are those obtained from ∂_y^2, integration shows that ρ ∼ y^{−7/2}, which is beyond the terms we retain, so that to leading order the pressure is given by

p ∼ −u_∞^2 [ 1/2 + c_1 ϕ_1(x/y)/(y/ℓ_v)^{3/2} + c_1 ϕ_{2,1}(x/y)/(y/ℓ_v)^2 ] .  (45)
Second, we obtain a formula which gives the constant c*_1 as a ratio between an integral over the domain Ω̌ and an integral over its boundaries. We define the tensor T by
T = −u ⊗ u + νD(u) − pI ,
where ⊗ represents the dyadic product (i.e., (a ⊗ b)_{ij} = a_i b_j) and D(u) = ∇u + (∇u)^T, so that ∇ · T yields (8). We choose the vector

V = (√y χ_Ω̌(x, y), 0)^T ,

where the factor √y is used for reasons that will become clear later on, and where χ_Ω̌(x, y) = χ_x(x) · χ_y(y) is a cutoff function which cuts out a channel perpendicular to the wall centered on the body of radius r, as well as two strips, one adjacent to the wall and one adjacent to the artificial boundary parallel to the wall of the domain Ω̌ = Ω̌+ \ B, where Ω̌+ = (−l, l) × (0, l) and B = {x ∈ Ω̌ | x^2 + (y − d)^2 ≤ r^2} (d is the distance between the body center and the wall), with l > d + r and d > r > 0. The factor 1/4 in the arguments of χ_x and χ_y is arbitrary and is chosen so as to have numerically reasonable gradients. The power 4 in the definition of χ is arbitrary as well and is simply chosen to yield a sufficiently smooth expression. We have
TV = √y χ_Ω̌(x, y) ( −u^2 + 2ν∂_x u − p ,  −uv + ν(∂_x v + ∂_y u) )^T .
Integrating over the whole domain, we get, by Gauss's theorem,

I_Ω̌ := ∫_Ω̌ ∇ · (TV) dω = ∫_Ω̌ χ_x(x) [ χ_y(y)/(2√y) + √y ∂_y χ_y(y) ] (−uv + ν∂_x v + ν∂_y u) dω
        + ∫_Ω̌ √y χ_y(y) ∂_x χ_x(x) (−u^2 + 2ν∂_x u − p) dω ,  (47)

where the integral over the surface of the body ∂B vanishes by the choice of the cutoff function.
On ∂Ω we use (16) and (17) to represent the velocity field. We do not consider terms which decay faster than 1/y 2 in the x/y scaling, since these would be of the same order as those we neglect in our asymptotic expansion. In the x/y 2 scaling, we neglect those terms which decay faster than 1/y 3 for the same reasons. Mixed terms (comprised of a product of functions of either scaling behavior) are neglected if they decay
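The explicit formula for χ_x and χ_y does not survive in this copy of the text; the following sketch is a purely hypothetical construction with the qualitative properties described (differentiable, built from fourth powers, vanishing on a channel around the body and on two strips), with made-up transition widths:

```python
import numpy as np

def ramp(s):
    """C^1 monotone ramp: 0 for s <= 0, 1 for s >= 1, built from fourth
    powers in between (hypothetical; the paper's exact chi is not given)."""
    s = np.clip(s, 0.0, 1.0)
    return s**4 / (s**4 + (1.0 - s)**4)

def chi(x, y, r=0.5, d=1.0, l=40.0):
    """Sketch of chi = chi_x * chi_y: vanishes on a vertical channel around
    the body, on a strip along the wall and on a strip near y = l.  The
    channel half-width and transition widths below are illustrative only."""
    chi_x = ramp((np.abs(x) - 2.0 * r) / r)
    chi_y = ramp((y - d / 4.0) / (d / 4.0)) * ramp((l - d / 4.0 - y) / (d / 4.0))
    return chi_x * chi_y
```

With this choice, χ vanishes identically on the body and on the wall, so that the boundary terms discussed above drop out as stated.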
Figure 1: The coarsest mesh (h = 1.25) on the smallest domain (l = 10) used in the simulations of Section 4.

Figure 2: Streamlines for a domain with truncation length l = 10 and Reynolds number Re = 1, for a body with noslip boundary conditions. The colored map represents the velocity norm. Streamlines (top left). Streamlines as seen from the body: s.b.c. (top right), c.b.c. (bottom left) and a.b.c. (bottom right).

Figure 3: Streamlines for a domain with truncation length l = 10 with adaptive boundary conditions, and for a body with noslip boundary conditions. The colored map represents the velocity norm. Re = 0.5, 5, 10, 25 (top to bottom, left to right).

Figure 4: At Re = 0.5, a larger domain diminishes the distortions in the streamlines.

Figure 5: Re = 0.5 simulations, for a body with noslip boundary conditions. Top: Drag (left) and lift (right) as a function of domain size for the three boundary conditions. Bottom: relative error on drag (left) and lift (right) as a function of domain size (log-log scales).

Figure 6: Re = 1 simulations, for a body with noslip boundary conditions. Top: Drag (left) and lift (right) as a function of domain size for the three boundary conditions. Bottom: relative error on drag (left) and lift (right) as a function of domain size (log-log scales).

Figure 7: Re = 5 simulations, for a body with noslip boundary conditions. Top: Drag (left) and lift (right) as a function of domain size for the three boundary conditions. Bottom: relative error on drag (left) and lift (right) as a function of domain size (log-log scales).

Figure 8: Re = 10 simulations, for a body with noslip boundary conditions. Top: Drag (left) and lift (right) as a function of domain size for the three boundary conditions. Bottom: relative error on drag (left) and lift (right) as a function of domain size (log-log scales).

Figure 9: Re = 25 simulations, for a body with noslip boundary conditions. Top: Drag (left) and lift (right) as a function of domain size for the three boundary conditions. Bottom: relative error on drag (left) and lift (right) as a function of domain size (log-log scales).

Figure 10: Simulations for a body with slip boundary conditions. Relative error for the drag as a function of domain size (log-log scales), for Re = 0.5, 1, 10 and 25.

Figure 11: Simulations for a body with slip boundary conditions. Relative error for the lift as a function of domain size (log-log scales), for Re = 0.5, 1, 10 and 25.

Figure 12: Mesh composed of triangles obtained from COMSOL's advancing front algorithm on a domain with truncation length l = 20, a circular body with r = 0.5 and d = 1.0.

Figure 13: Top left: ellipse; top right: square; bottom left: "marmite"; bottom right: "bone".

Figure 14: Collections of two and three circles.

Figure 15: The norms of the velocity gradient and the velocity field as seen by the body, as a function of body size normalized by the fluid domain surface.

Figure 17: Hydrodynamic forces versus body-wall distance, for a circular object, at Re = 1.
χ_x(x) = χ(−(r + x)/4) · χ((x − r)/4) and χ_y(y) = χ(y/4) · χ((l − y)/4), where χ is an arbitrary smooth cutoff function which we have chosen as a smoothed step: χ(z) = 0 for z ≤ 0, χ(z) = 1 for z ≥ 1, and in between χ(z) is the normalized primitive of η⁴(1 − η)⁴ dη.
which is computed numerically from the FEM solution. Due to the choice of the cutoff function, we also have

I_∂Ω := ∫_∂Ω̄ (TV) · n dσ = ∫₀^l [ √y χΩ̄(x, y) (−u² + 2ν∂_x u − p) ]_{x=−l} dy + ∫₀^l [ √y χΩ̄(x, y) (u² − 2ν∂_x u + p) ]_{x=+l} dy .
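The cutoff construction above can be sketched in code. The following Python snippet is an illustration only, not the authors' implementation: the explicit formula for χ is only partially legible here, so a standard smoothed step (the normalized primitive of η⁴(1 − η)⁴, matching the "power 4" discussed in the text) is assumed; and the two shifted copies of χ in χ_x are combined by a sum, since with a monotone step a product would vanish identically, whereas χ_x must vanish only on the channel |x| ≤ r. All function names are chosen for this sketch.

```python
def chi(z):
    # Smooth step: 0 for z <= 0, 1 for z >= 1 (assumed form).
    # In between: normalized primitive of eta^4 * (1 - eta)^4
    #   = eta^4 - 4 eta^5 + 6 eta^6 - 4 eta^7 + eta^8.
    z = min(max(z, 0.0), 1.0)
    F = z**5 / 5 - 2 * z**6 / 3 + 6 * z**7 / 7 - z**8 / 2 + z**9 / 9
    return 630.0 * F  # the integral over [0, 1] equals 1/630

def chi_x(x, r):
    # Vanishes on the channel |x| <= r through the body,
    # equals 1 for |x| >= r + 4.
    return chi(-(r + x) / 4) + chi((x - r) / 4)

def chi_y(y, l):
    # Vanishes on the strips adjacent to the wall (y = 0) and to the
    # artificial boundary (y = l); equals 1 for 4 <= y <= l - 4.
    return chi(y / 4) * chi((l - y) / 4)

def chi_domain(x, y, r, l):
    # chi_Omega-bar(x, y) = chi_x(x) * chi_y(y)
    return chi_x(x, r) * chi_y(y, l)
```

With these definitions the weight √y · χΩ̄ in V vanishes on ∂B and on the wall, so only the upstream and downstream boundaries x = ±l contribute to I_∂Ω, as used above.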
              Re = 0.5   Re = 1    Re = 5    Re = 10   Re = 25
drag: s.b.c.  20.020     10.601    2.9595    1.9164    1.1896
      c.b.c.  20.011     10.597    2.9582    1.9155    1.1888
      a.b.c.  20.007     10.594    2.9571    1.9145    1.1880
lift: s.b.c.  1.4480     1.3912    1.0454    0.77384   0.37497
      c.b.c.  1.4460     1.3898    1.0448    0.77344   0.37481
      a.b.c.  1.4450     1.3890    1.0442    0.77304   0.37464

Table 1: Extrapolated values for drag and lift computed with the largest domain (l = 90) for a body with noslip boundary conditions.
"Slip body"   Re = 0.5   Re = 1    Re = 5    Re = 10   Re = 25
drag: s.b.c.  14.505     7.6636    2.0929    1.3023    0.71367
      c.b.c.  14.500     7.6611    2.0922    1.3018    0.71338
      a.b.c.  14.497     7.6597    2.0916    1.3014    0.71307
lift: s.b.c.  0.88168    0.84854   0.59285   0.37608   0.10188
      c.b.c.  0.88065    0.84779   0.59250   0.37585   0.10176
      a.b.c.  0.88013    0.84732   0.59219   0.37563   0.10162

Table 2: Extrapolated values for drag and lift computed with the largest domain (l = 90) for a body with slip boundary conditions.
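The "extrapolated values" in Tables 1 and 2 are obtained from computations on a sequence of truncated domains (up to l = 90). As an illustration of how such a limit can be estimated, the following Python sketch performs a Richardson-type extrapolation under the model F(l) ≈ F_inf + a·l^(−p), evaluated at three geometrically spaced domain sizes. This is a generic sketch with toy data, not the procedure or the data of the tables.

```python
def extrapolate(F1, F2, F3):
    # F1, F2, F3: force values computed at domain sizes l, q*l, q*q*l.
    # Under F(l) = F_inf + a * l**(-p), successive differences shrink by
    # the constant factor q**(-p), which can be read off and eliminated.
    ratio = (F3 - F2) / (F2 - F1)            # equals q**(-p)
    F_inf = F3 + (F3 - F2) * ratio / (1.0 - ratio)
    return F_inf, ratio

# Toy data: F(l) = 2.0 + 5.0 / l sampled at l = 10, 30, 90 (so q = 3, p = 1).
F1, F2, F3 = (2.0 + 5.0 / l for l in (10.0, 30.0, 90.0))
F_inf, ratio = extrapolate(F1, F2, F3)       # recovers F_inf = 2.0, ratio = 1/3
```

The same three values also yield the observed convergence rate, p = −log(ratio)/log(q), which can be compared with the slopes visible in the log-log plots of Figures 5-9.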
Figure 16: Hydrodynamic forces at Re = 1 versus ellipse aspect ratio, for a body-to-wall distance d = 1.0. [Plot: drag and lift versus body aspect ratio (ax/ay).]

[Plots: lift force as a function of body position (slip bubble, Re = 1); lift and drag force as a function of body position (slip bubble, Re = 25).]
Acknowledgments

We would like to thank Julien Guillod and Matthieu Hillairet for fruitful discussions on the topic of this article and related matters.

where we use that ϕ₁(−z) = ψ₁(z) and that ϕ₂,₁ is an odd function. We insert this approximation into (48), where χΩ̄(±l, y) = χ_y(y) (i.e., the fact that the computational domain is of equal length upstream and downstream simplifies the expressions), and treat the integral separately according to the two scalings. We get

where we see that the factor √y in V is chosen in order to obtain a non-vanishing expression for large l, and I^(x/y²)_∂Ω

Note that in the derivation of the integrals on the upstream and downstream boundary of the truncated domain Ω̄, we assumed that the asymptotic expansion may be extended, for large fixed x, to all y, although the asymptotic expansion is a priori only valid for large y. See Remark 1 in A for a motivation. We then define n₁(l, v, χΩ̄) := (I^(x/y)_∂Ω + I^(x/y²)_∂Ω)/c₁*, which can be computed numerically. We then have that

and we expect that the constant c₁ associated to the solution of the original problem is given by

This motivates the definition of (32).
| [] |
[
"Improved absolute calibration of LOPES measurements and its impact on the comparison with REAS 3.11 and CoREAS simulations"
] | [
"W D Apel \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"J C Arteaga-Velázquez \nInstituto de Física y Matemáticas\nUniversidad Michoacana\nMoreliaMichoacánMexico\n",
"L Bähren \nASTRON\nDwingelooThe Netherlands\n",
"K Bekk \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"M Bertaina \nDipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly\n",
"P L Biermann \nMax-Planck-Institut für Radioastronomie\nBonnGermany\n",
"J Blümer \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n\nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"H Bozdog \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"I M Brancus \nNational Institute of Physics and Nuclear Engineering\nBucharest-MagureleRomania\n",
"E Cantoni \nDipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly\n\nOsservatorio Astrofisico di Torino\nINAF Torino\nTorinoItaly\n",
"A Chiavassa \nDipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly\n",
"K Daumiller \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"V De Souza \nInstituto de Física de São Carlos\nUniversidade de São Paulo\nSão CarlosBrasil\n",
"F Di Pierro \nDipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly\n",
"P Doll \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"R Engel \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"H Falcke \nASTRON\nDwingelooThe Netherlands\n\nMax-Planck-Institut für Radioastronomie\nBonnGermany\n\nDepartment of Astrophysics\nRadboud University Nijmegen\nNijmegenAJThe Netherlands\n",
"B Fuchs \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"H Gemmeke \nInstitut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"C Grupen \nFaculty of Natural Sciences and Engineering\nUniversität Siegen\nSiegenGermany\n",
"A Haungs \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"D Heck \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"R Hiller \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"J R Hörandel \nDepartment of Astrophysics\nRadboud University Nijmegen\nNijmegenAJThe Netherlands\n",
"A Horneffer \nMax-Planck-Institut für Radioastronomie\nBonnGermany\n",
"D Huber \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"T Huege \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"P G Isar \nInstitute for Space Sciences\nBucharest-MagureleRomania\n",
"K.-H Kampert \nFachbereich C\nBergische Universität Wuppertal\nPhysik, WuppertalGermany\n",
"D Kang \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"O Krömer \nInstitut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"J Kuijpers \nDepartment of Astrophysics\nRadboud University Nijmegen\nNijmegenAJThe Netherlands\n",
"K Link \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"P Luczak \nDepartment of Astrophysics\nNational Centre for Nuclear Research\nLódźPoland\n",
"M Ludwig \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"H J Mathes \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"M Melissas \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"C Morello \nOsservatorio Astrofisico di Torino\nINAF Torino\nTorinoItaly\n",
"S Nehls \nStudsvik Scandpower GmbH\nHamburgGermany\n",
"J Oehlschläger \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"N Palmieri \nInstitut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"T Pierog \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"J Rautenberg \nFachbereich C\nBergische Universität Wuppertal\nPhysik, WuppertalGermany\n",
"H Rebel \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"M Roth \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"C Rühle \nInstitut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"A Saftoiu \nNational Institute of Physics and Nuclear Engineering\nBucharest-MagureleRomania\n",
"H Schieler \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"A Schmidt \nInstitut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"S Schoo \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"F G Schröder \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"O Sima \nDepartment of Physics\nUniversity of Bucharest\nBucharestRomania\n",
"G Toma \nNational Institute of Physics and Nuclear Engineering\nBucharest-MagureleRomania\n",
"G C Trinchero \nOsservatorio Astrofisico di Torino\nINAF Torino\nTorinoItaly\n",
"A Weindl \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"J Wochele \nInstitut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"J Zabierowski \nDepartment of Astrophysics\nNational Centre for Nuclear Research\nLódźPoland\n",
"J A Zensus \nMax-Planck-Institut für Radioastronomie\nBonnGermany\n"
] | [
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Instituto de Física y Matemáticas\nUniversidad Michoacana\nMoreliaMichoacánMexico",
"ASTRON\nDwingelooThe Netherlands",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Dipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly",
"Max-Planck-Institut für Radioastronomie\nBonnGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"National Institute of Physics and Nuclear Engineering\nBucharest-MagureleRomania",
"Dipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly",
"Osservatorio Astrofisico di Torino\nINAF Torino\nTorinoItaly",
"Dipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Instituto de Física de São Carlos\nUniversidade de São Paulo\nSão CarlosBrasil",
"Dipartimento di Fisica\nUniversità degli Studi di Torino\nTorinoItaly",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"ASTRON\nDwingelooThe Netherlands",
"Max-Planck-Institut für Radioastronomie\nBonnGermany",
"Department of Astrophysics\nRadboud University Nijmegen\nNijmegenAJThe Netherlands",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Faculty of Natural Sciences and Engineering\nUniversität Siegen\nSiegenGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Department of Astrophysics\nRadboud University Nijmegen\nNijmegenAJThe Netherlands",
"Max-Planck-Institut für Radioastronomie\nBonnGermany",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institute for Space Sciences\nBucharest-MagureleRomania",
"Fachbereich C\nBergische Universität Wuppertal\nPhysik, WuppertalGermany",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Department of Astrophysics\nRadboud University Nijmegen\nNijmegenAJThe Netherlands",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Department of Astrophysics\nNational Centre for Nuclear Research\nLódźPoland",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Osservatorio Astrofisico di Torino\nINAF Torino\nTorinoItaly",
"Studsvik Scandpower GmbH\nHamburgGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Experimentelle Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Fachbereich C\nBergische Universität Wuppertal\nPhysik, WuppertalGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"National Institute of Physics and Nuclear Engineering\nBucharest-MagureleRomania",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Prozessdatenverarbeitung und Elektronik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Department of Physics\nUniversity of Bucharest\nBucharestRomania",
"National Institute of Physics and Nuclear Engineering\nBucharest-MagureleRomania",
"Osservatorio Astrofisico di Torino\nINAF Torino\nTorinoItaly",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Institut für Kernphysik\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Department of Astrophysics\nNational Centre for Nuclear Research\nLódźPoland",
"Max-Planck-Institut für Radioastronomie\nBonnGermany"
] | [] | LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of 2.6 ± 0.2, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published in Astroparticle Physics 50-52 (2013) 76-91 [1]: With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data. | 10.1016/j.astropartphys.2015.09.002 | [
"https://arxiv.org/pdf/1507.07389v3.pdf"
] | 11,845,151 | 1507.07389 | a7bc83cd76366c9891abb46887a802326b45e9ce |
Improved absolute calibration of LOPES measurements and its impact on the comparison with REAS 3.11 and CoREAS simulations
18 Dec 2015
W D Apel
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
J C Arteaga-Velázquez
Instituto de Física y Matemáticas
Universidad Michoacana
Morelia, Michoacán, Mexico
L Bähren
ASTRON
Dwingeloo, The Netherlands
K Bekk
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
M Bertaina
Dipartimento di Fisica
Università degli Studi di Torino
Torino, Italy
P L Biermann
Max-Planck-Institut für Radioastronomie
Bonn, Germany
J Blümer
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
H Bozdog
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
I M Brancus
National Institute of Physics and Nuclear Engineering
Bucharest-Magurele, Romania
E Cantoni
Dipartimento di Fisica
Università degli Studi di Torino
Torino, Italy
Osservatorio Astrofisico di Torino
INAF Torino
TorinoItaly
A Chiavassa
Dipartimento di Fisica
Università degli Studi di Torino
TorinoItaly
K Daumiller
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
V De Souza
Instituto de Física de São Carlos
Universidade de São Paulo
São CarlosBrasil
F Di Pierro
Dipartimento di Fisica
Università degli Studi di Torino
TorinoItaly
P Doll
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
R Engel
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
H Falcke
ASTRON
DwingelooThe Netherlands
Max-Planck-Institut für Radioastronomie
BonnGermany
Department of Astrophysics
Radboud University Nijmegen
NijmegenAJThe Netherlands
B Fuchs
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
H Gemmeke
Institut für Prozessdatenverarbeitung und Elektronik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
C Grupen
Faculty of Natural Sciences and Engineering
Universität Siegen
SiegenGermany
A Haungs
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
D Heck
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
R Hiller
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
J R Hörandel
Department of Astrophysics
Radboud University Nijmegen
NijmegenAJThe Netherlands
A Horneffer
Max-Planck-Institut für Radioastronomie
BonnGermany
D Huber
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
T Huege
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
P G Isar
Institute for Space Sciences
Bucharest-MagureleRomania
K.-H Kampert
Fachbereich C
Bergische Universität Wuppertal
Physik, WuppertalGermany
D Kang
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
O Krömer
Institut für Prozessdatenverarbeitung und Elektronik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
J Kuijpers
Department of Astrophysics
Radboud University Nijmegen
NijmegenAJThe Netherlands
K Link
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
P Luczak
Department of Astrophysics
National Centre for Nuclear Research
LódźPoland
M Ludwig
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
H J Mathes
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
M Melissas
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
C Morello
Osservatorio Astrofisico di Torino
INAF Torino
TorinoItaly
S Nehls
Studsvik Scandpower GmbH
HamburgGermany
J Oehlschläger
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
N Palmieri
Institut für Experimentelle Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
T Pierog
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
J Rautenberg
Fachbereich C
Bergische Universität Wuppertal
Physik, WuppertalGermany
H Rebel
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
M Roth
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
C Rühle
Institut für Prozessdatenverarbeitung und Elektronik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
A Saftoiu
National Institute of Physics and Nuclear Engineering
Bucharest-MagureleRomania
H Schieler
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
A Schmidt
Institut für Prozessdatenverarbeitung und Elektronik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
S Schoo
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
F G Schröder
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
O Sima
Department of Physics
University of Bucharest
BucharestRomania
G Toma
National Institute of Physics and Nuclear Engineering
Bucharest-MagureleRomania
G C Trinchero
Osservatorio Astrofisico di Torino
INAF Torino
TorinoItaly
A Weindl
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
J Wochele
Institut für Kernphysik
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
J Zabierowski
Department of Astrophysics
National Centre for Nuclear Research
LódźPoland
J A Zensus
Max-Planck-Institut für Radioastronomie
BonnGermany
18 Dec 2015; Preprint submitted to Astroparticle Physics, December 21, 2015; arXiv:1507.07389v3 [astro-ph.HE]; Keywords: cosmic rays, extensive air showers, radio emission, LOPES, absolute calibration
LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of 2.6 ± 0.2, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published in Astroparticle Physics 50-52 (2013) 76-91 [1]: With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data.
Improved calibration of LOPES
LOPES was the radio extension of the KASCADE-Grande particle-detector array for cosmic-ray air showers [2,3]. Triggered by KASCADE-Grande, it detected the radio emission of the same air showers as measured by the particle-detector array in the effective frequency band of 43 − 74 MHz.
For calibration of LOPES, we used an externally calibrated reference source consisting of a signal generator and a biconical antenna [4]. This reference source emits a train of equidistant pulses, which in the frequency domain corresponds to a comb with 1 MHz spacing. The manufacturer provided reference values for this source with an overall uncertainty of 2.5 dB, which was the main contribution to the total two-sigma uncertainty of roughly 35 % for the LOPES amplitude scale as published earlier [4,1]. We also provided the reference source to other experiments, namely LOFAR [5] and Tunka-Rex [6], in order to have a consistent absolute amplitude scale between these experiments. Their results can now be compared on an absolute level, which was a problem for historic experiments [7].
In this context, the reference source has been remeasured by the manufacturer [8]. The old calibration values used for previous LOPES publications characterized the reference source for free-field conditions. This means that a reflective ground in a horizontal setup was used in the manufacturer's calibration measurement of the reference source. Such a setup is useful for ground-based communication applications, but leads to significant interference effects. The manufacturer's measurement was performed at several heights above ground, finally taking the maximum value. Consequently, the effect of constructive interference was significantly enhanced. Because ground effects are already taken into account in the simulation of the LOPES antennas used for the evaluation of air-shower measurements, this led to a significant overestimation of the amplitudes measured with LOPES. To first approximation, a factor of two difference is expected between free-field conditions with constructive interference of ground reflections and the now-used free-space conditions corresponding to no reflections.
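The "factor of two" statement can be illustrated with a toy phasor sum (our sketch; the perfect reflection coefficient and zero phase difference are idealized assumptions, not the manufacturer's procedure): a ground-reflected wave arriving in phase with the direct wave doubles the field amplitude relative to free-space conditions, i.e. about +6 dB.

```python
import math

# Toy phasor model: total field = direct wave + ground reflection with
# reflection coefficient rho arriving with relative phase phi (radians).
def total_amplitude(direct=1.0, rho=0.0, phi=0.0):
    # coherent (phasor) sum of the two waves
    return abs(direct + rho * complex(math.cos(phi), math.sin(phi)))

free_space = total_amplitude(rho=0.0)             # only the direct wave: 1.0
constructive = total_amplitude(rho=1.0, phi=0.0)  # perfect in-phase reflection: 2.0
gain_db = 20.0 * math.log10(constructive / free_space)  # about +6 dB
```

This is only the first-approximation picture; the manufacturer's measurement at several heights selects the most constructive configuration.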
Based on a new measurement of our reference source (not just of a source of the same type), the new calibration values have now been determined for free-space conditions, which better match the situation of air showers. The two-sigma scale uncertainty of the amplitude is still given as 2.5 dB by the manufacturer. This corresponds to a one-sigma uncertainty of 16 % for the amplitude (field strength) scale. This uncertainty covers potential repeated measurements under equal conditions, not the change between different conditions. At the more sensitive instrument LOFAR it has been checked that these new free-space reference values are consistent within the scale uncertainty with an independent calibration on the galactic background [9]. It turns out that the new reference values lead to a significant change of the LOPES amplitude scale.

* Corresponding author: [email protected]

Figure 1: Change in amplitude (electric-field strength) of LOPES measurements due to the new, improved absolute calibration, for the height of the interferometric cross-correlation beam and for ǫ100 (fitted factor shown in the figure: 2.6 ± 0.1). The change is not exactly equal for all events, because the calibration is slightly frequency-dependent, and each LOPES measurement might have a different frequency spectrum, since this depends on the shower geometry.
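The dB-to-percent correspondence quoted above can be checked with a few lines of arithmetic (a sketch; 20 dB per amplitude decade is the standard convention for field strengths):

```python
# Convert the 2.5 dB two-sigma scale uncertainty into a relative amplitude
# (field-strength) uncertainty: amplitudes use 20 dB per decade.
def db_to_amplitude_factor(db):
    return 10.0 ** (db / 20.0)

two_sigma_factor = db_to_amplitude_factor(2.5)        # ~1.33
two_sigma_percent = (two_sigma_factor - 1.0) * 100.0  # ~33 %
one_sigma_percent = two_sigma_percent / 2.0           # ~16-17 %
```

Halving the two-sigma percentage is the simple symmetric-Gaussian reading of the quoted one-sigma value of 16 %.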
Impact on shower reconstruction
We have analyzed the impact of the improved calibration on the reconstruction of the radio emission measured with LOPES for a data set of about 500 events recorded with east-west aligned antennas, which was used in reference [1]. The number of selected events is slightly lower, since due to the improved calibration a few events close to threshold no longer pass the quality cuts. Moreover, we have made a few smaller improvements in the analysis pipeline [10,11] which, however, have no significant impact on the results reported here. For the selection of events we apply the digital interferometric technique of cross-correlation beamforming, and then use pulse measurements in individual antennas for further analyses. As in references [12,13,1] we fit the measured lateral distributions of amplitude versus distance to the shower axis d with an exponential function ǫ(d) = ǫ100 exp[−η(d − 100 m)]. This function has two parameters: ǫ100, the amplitude (electric-field strength) at 100 m distance from the shower axis, and η, describing the slope of the lateral distribution. Fig. 1 shows that the amplitude scale of both the cross-correlation beam and of ǫ100 is lowered by an average factor of 2.6 ± 0.2 due to the improved calibration (combining both values in Fig. 1). The shape of the lateral distribution remains practically unchanged: the average value of η changes by less than its systematic uncertainty of about 1 km⁻¹ (not shown here). Consequently, previously published LOPES results remain valid, but all field-strength values have to be divided by 2.6 ± 0.2, which affects, e.g., the proportionality factor in published formulas for energy reconstruction [14,15,16].
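The exponential lateral-distribution fit described above can be sketched as a linear least-squares fit in log-space; the amplitudes below are synthetic, illustrative values, not LOPES data:

```python
import math

def lateral(d, eps100, eta):
    """epsilon(d) = eps100 * exp(-eta * (d - 100 m))."""
    return eps100 * math.exp(-eta * (d - 100.0))

# Synthetic "measurements" (illustrative values only)
true_eps100, true_eta = 10.0, 0.01   # arbitrary units; eta in 1/m
dists = [40.0, 80.0, 120.0, 160.0, 200.0]
amps = [lateral(d, true_eps100, true_eta) for d in dists]

# Linear least squares in log-space: ln(eps) = intercept + slope * d, slope = -eta
n = len(dists)
sx, sy = sum(dists), sum(math.log(a) for a in amps)
sxx = sum(d * d for d in dists)
sxy = sum(d * math.log(a) for d, a in zip(dists, amps))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
eta_fit = -slope
eps100_fit = math.exp(intercept + slope * 100.0)
```

With noise-free synthetic data the fit recovers ǫ100 and η exactly; in a real analysis the fit would of course be weighted by the per-antenna uncertainties.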
Significance for comparison to simulations
The lowered amplitude scale has important consequences for the comparison of simulated and measured lateral distributions. In reference [1], we compared LOPES data to the now obsolete predictions by REAS 3.11 [17] and those of its state-of-the-art successor CoREAS [18], which predicts roughly two times lower amplitudes ǫ100 than REAS 3.11. Both REAS 3.11 and CoREAS are microscopic simulation codes based on CORSIKA [19], and implicitly include all emission mechanisms known to be relevant. The main difference is that CoREAS calculates the radio emission directly during the simulation of the particle shower, while REAS 3.11 calculates it afterwards based on histograms which neglect some correlations of the particles in the shower development.
With the improved calibration the amplitude parameter ǫ100 of the CoREAS simulations almost perfectly matches the measured data (event-by-event comparison and histograms in Fig. 2). The mean deviation is only 2 % with protons as primary particles, and 9 % with iron nuclei as primary particles. Both deviations are much smaller than the remaining one-sigma uncertainty of the LOPES amplitude scale of approximately 16 %.
For REAS 3.11 the situation is different (event-by-event comparison in Fig. 3, histograms not shown). The scaling factor needed to bring REAS 3.11 simulations into perfect agreement with LOPES measurements now is f = 0.43 and f = 0.46 for proton and iron primaries, respectively. Even including an additional 20 % uncertainty for the KASCADE-Grande energy scale [20] used as input for the simulations, this is in tension with the measurements. In other aspects tested in reference [1], in particular with respect to the slope parameter η and its dependence on the geometry, LOPES measurements continue to be compatible with the simulations. However, for the dependence of the amplitude on the zenith angle, we observe a slight difference between measurements and CoREAS simulations reported in reference [14]. This difference remains, i.e., the scaling factor f varies by several tens of percent over the zenith-angle range [10,11]. Nevertheless, this is not necessarily a problem of the simulations, since the effect could be due to systematic uncertainties of the antenna model used for the conversion of LOPES measurements. Hence, this should be checked with other experiments using different antennas. Consequently, all tested aspects of CoREAS are compatible with LOPES measurements, which is remarkable because CoREAS does not feature any free parameters tuned against the data.
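The scaling-factor and pull logic used in such data-simulation comparisons can be sketched as follows; the numbers are synthetic stand-ins, not the LOPES/REAS data, and the weighted least-squares form of f is our assumption:

```python
# Synthetic sketch: find the factor f that scales simulated amplitudes onto
# measured ones, then form the pull (deviation / uncertainty) of each event.
measured  = [1.0, 2.2, 3.1, 4.0, 5.2]   # toy measured amplitudes
simulated = [2.4, 5.0, 7.5, 9.6, 12.0]  # toy simulation, roughly 2.4x too high
sigma     = [0.2, 0.3, 0.4, 0.4, 0.5]   # toy measurement uncertainties

# weighted least squares: minimize sum(((m - f*s)/sigma)^2) over f
num = sum(m * s / o**2 for m, s, o in zip(measured, simulated, sigma))
den = sum(s * s / o**2 for s, o in zip(simulated, sigma))
f = num / den

pulls = [(m - f * s) / o for m, s, o in zip(measured, simulated, sigma)]
mean_pull = sum(pulls) / len(pulls)
```

A pull histogram of width ~1, as in Fig. 2 (right), indicates that the residual spread is explained by the measurement uncertainties.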
Figure 2: Comparison of the amplitude at 100 m, ǫ100, between LOPES measurements and CoREAS simulations. Left: scatter plot for all 502 LOPES events compared to CoREAS simulations with proton primaries. The result for iron nuclei as primaries is very similar. Right: deviation of each event divided by the uncertainty of the event, after multiplying the simulated amplitudes with a scaling factor f (0.98 for proton, and 1.09 for iron simulations) such that the distributions are centered around 0. A Gaussian fitted to the histogram has a width of approximately 1 (0.95 ± 0.04 for proton, and 0.98 ± 0.04 for iron simulations), which means that the spread visible in the left figure corresponds to the spread expected from the measurement uncertainties, except for the few outliers of unknown origin.
Figure 3: Comparison of ǫ100 between LOPES measurements and REAS 3.11 simulations.
[1] Apel, W. D., et al. - LOPES Collaboration, Astropart. Phys. 50-52 (2013) 76-91.
[2] Apel, W. D., et al. - KASCADE-Grande Collaboration, Nucl. Instr. Meth. A 620 (2010) 202-216.
[3] Falcke, H., et al. - LOPES Collaboration, Nature 435 (2005) 313-316.
[4] Nehls, S., et al., Nucl. Instr. Meth. A 589 (2008) 350-361.
[5] Schellart, P., et al. - LOFAR Collaboration, Astronomy and Astrophysics 560 (2013) A98.
[6] Bezyazeekov, P. A., et al. - Tunka-Rex Collaboration, Nucl. Instr. Meth. A 802 (2015) 89-96.
[7] Atrashkevich, V. B., et al., Sov. J. Nucl. Phys. 28 (1978) 366.
[8] The calibration is done in line with EN ISO/IEC 17025.
[9] Nelles, A., et al. - LOFAR Collaboration, JINST 10 (2015) P11005.
[10] Link, K., et al. - LOPES Collaboration, Proceedings of Science ICRC2015 (2015) 311.
[11] Schröder, F. G., et al. - LOPES Collaboration, Proceedings of Science ICRC2015 (2015) 317.
[12] Apel, W. D., et al. - LOPES Collaboration, Astropart. Phys. 32 (2010) 294-303.
[13] Apel, W. D., et al. - LOPES Collaboration, Phys. Rev. D 85 (2012) 071101(R).
[14] Apel, W. D., et al. - LOPES Collaboration, Phys. Rev. D 90 (2014) 062001.
[15] Horneffer, A., et al. - LOPES Collaboration, in: Proc. of the 30th International Cosmic Ray Conference 2007, Merida, Mexico, volume 4, pp. 83-86.
[16] Schröder, F. G., et al. - LOPES Collaboration, AIP Conference Proceedings 1535 (2013) 78-83.
[17] Ludwig, M., Huege, T., Astropart. Phys. 34 (2011) 438-446.
[18] Huege, T., Ludwig, M., James, C., AIP Conference Proceedings 1535 (2013) 128-132.
[19] Heck, D., et al., FZKA Report 6019, Forschungszentrum Karlsruhe (1998).
[20] Apel, W. D., et al. - KASCADE-Grande Collaboration, Astropart. Phys. 36 (2012) 183-194.
| [] |
[
"Photon-Photon and Pomeron-Pomeron Processes in Peripheral Heavy Ion Collisions",
"Photon-Photon and Pomeron-Pomeron Processes in Peripheral Heavy Ion Collisions"
] | [
"C G Roldão \nInstituto de Física Teórica\nUniversidade Estadual Paulista Rua Pamplona\n14501405-900São PauloSPBrazil\n",
"A A Natale \nInstituto de Física Teórica\nUniversidade Estadual Paulista Rua Pamplona\n14501405-900São PauloSPBrazil\n"
] | [
"Instituto de Física Teórica\nUniversidade Estadual Paulista Rua Pamplona\n14501405-900São PauloSPBrazil",
"Instituto de Física Teórica\nUniversidade Estadual Paulista Rua Pamplona\n14501405-900São PauloSPBrazil"
] | [] | We estimate the cross sections for the production of resonances, pion pairs and a central cluster of hadrons in peripheral heavy-ion collisions through two-photon and double-pomeron exchange, at energies that will be available at RHIC and LHC. The effect of the impact parameter in the diffractive reactions is introduced, and imposing the condition for realistic peripheral collisions we verify that in the case of very heavy ions the pomeron-pomeron contribution is indeed smaller than the electromagnetic one. However, they give a non-negligible background in the collision of light ions. This diffractive background will be more important at RHIC than at LHC. PACS number(s): 25.75.Dw, Typeset using REVT E X | 10.1103/physrevc.61.064907 | [
"https://arxiv.org/pdf/nucl-th/0003038v1.pdf"
] | 35,333,132 | nucl-th/0003038 | ecce0b48c5793a22a54aa572e2829952a543d795 |
Photon-Photon and Pomeron-Pomeron Processes in Peripheral Heavy Ion Collisions
arXiv:nucl-th/0003038v1 17 Mar 2000
C G Roldão
Instituto de Física Teórica
Universidade Estadual Paulista Rua Pamplona
14501405-900São PauloSPBrazil
A A Natale
Instituto de Física Teórica
Universidade Estadual Paulista Rua Pamplona
14501405-900São PauloSPBrazil
We estimate the cross sections for the production of resonances, pion pairs and a central cluster of hadrons in peripheral heavy-ion collisions through two-photon and double-pomeron exchange, at energies that will be available at RHIC and LHC. The effect of the impact parameter in the diffractive reactions is introduced, and imposing the condition for realistic peripheral collisions we verify that in the case of very heavy ions the pomeron-pomeron contribution is indeed smaller than the electromagnetic one. However, they give a non-negligible background in the collision of light ions. This diffractive background will be more important at RHIC than at LHC. PACS number(s): 25.75.Dw, Typeset using REVT E X
I. INTRODUCTION
Collisions at relativistic heavy-ion colliders like the Relativistic Heavy Ion Collider RHIC/Brookhaven and the Large Hadron Collider LHC/CERN (operating in its heavy-ion mode) are mainly devoted to the search for the Quark Gluon Plasma. However, peripheral heavy-ion collisions also open up a broad area of studies, as advocated by Baur and collaborators [1,2]. Examples are the possible discovery of an intermediate-mass Higgs boson [3,4] or of physics beyond the standard model [5] using peripheral ion collisions, which have been discussed at length in the literature. More promising than these may be the study of hadronic physics, which will appear quite similarly to the two-photon hadronic physics at e⁺e⁻ machines, with the advantage of a huge photon luminosity peaked at small energies [1,2,6]. Due to this large photon luminosity it will become possible to discover resonances that couple weakly to photons [7]. Double-pomeron exchange will also occur in peripheral heavy-ion collisions, and its contribution is similar to the two-photon one, as discussed by Baur [1] and Klein [6]. A detailed calculation of Higgs boson production performed by Müller and Schramm has shown that the diffractive contribution is much smaller than the electromagnetic one [8].
We can easily understand this result by remembering that the coupling between the Higgs boson and the pomerons is mediated by quarks, and, according to the pomeron model of Donnachie and Landshoff [9], when any of the quark legs in the pomeron-quark-quark vertex goes far off shell the coupling to the pomeron decreases. Therefore, we do not need to worry about the pomeron-pomeron contribution in peripheral heavy-ion collisions when heavy (or far off-shell) quarks are present. However, this is not what happens in the case of light resonances [7], where double diffraction was claimed to be as important as photon-initiated processes. In particular, Engel et al. [10] have shown that at the LHC the diffractive production of hadrons may be a background for the photonic one.
In Ref. [1] it was remarked that the removal of "central collisions" should also be performed in the double-pomeron calculation, implying a considerable reduction of the background calculated in Ref. [10]. This claim is the same as that presented by Baur [2] and by Cahn and Jackson [4] in the case of early calculations of peripheral heavy-ion collisions. Roughly speaking, we must enforce that the minimum impact parameter (b_min) be larger than (R₁ + R₂), where R_i is the nuclear radius of ion i, in order to have both ions coming out intact after the interaction.
In this work we compute the production of resonances, pion pairs and a hadron cluster with invariant mass M_X through photon-photon and pomeron-pomeron fusion in peripheral heavy-ion collisions at the energies of RHIC and LHC. We take into account the effect of the impact parameter, as discussed in the previous paragraph, for photons as well as for pomerons. We also compare this approach of cutting the central collisions with the use of an absorption factor in the Glauber approximation. The inclusion of pion-pair production is important because pion pairs certainly will be studied at these colliders, and they also represent a background for glueball (and other hadron) detection. The pomeron physics within the ion will be described by the Donnachie and Landshoff model [9,11]. We focus on the values of the cross sections that shall be measured in the already-quoted ion colliders, and point out when pomeron-pomeron processes can be considered competitive with photon-photon collisions. The arrangement of our paper is the following: Section 2 contains a discussion of the photon and pomeron distributions in the nuclei. In Sect. 3 we introduce the cross sections for the elementary processes. Finally, Sect. 4 contains the results and conclusions.
II. PHOTONS AND POMERONS DISTRIBUTION FUNCTIONS
A. Photons in the nuclei
The photon distribution in the nucleus can be described using the equivalent-photon or Weizsäcker-Williams approximation in the impact parameter space. Denoting by F (x)dx the number of photons carrying a fraction between x and x + dx of the total momentum of a nucleus of charge Ze, we can define the two-photon luminosity through
$\frac{dL}{d\tau} = \frac{1}{\tau} \int \frac{dx}{x}\, F(x)\, F(\tau/x), \qquad (2.1)$
where τ = ŝ/s, ŝ is the square of the center-of-mass (c.m.s.) energy of the two photons and s that of the ion-ion system. The total cross section of ZZ → ZZγγ → ZZX, where X is the particle produced within the rapidity gap, is
$\sigma(s) = \int d\tau\, \frac{dL}{d\tau}\, \hat{\sigma}(\hat{s}), \qquad (2.2)$
where $\hat{\sigma}(\hat{s})$ is the cross section of the subprocess γγ → X.
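Equation (2.2) is a one-dimensional convolution that can be evaluated numerically. The sketch below uses toy inputs of our own choosing (a 1/τ luminosity and a constant subprocess cross section above a threshold), picked only so that the integral has a simple closed form; integrating on a logarithmic grid in τ handles the steep small-τ behavior:

```python
import math

# sigma = integral over tau of (dL/dtau) * sigma_hat(tau * s), cf. Eq. (2.2),
# with toy inputs: dL/dtau = c/tau and a constant subprocess cross section.
s = 200.0 ** 2   # toy squared ion-ion c.m. energy (GeV^2)
c = 1.0e-3       # toy luminosity normalization

def dL_dtau(tau):
    return c / tau

def sigma_hat(shat):
    return 1.0 if shat > 0.5 else 0.0   # 1 (arb. units) above a toy threshold

# trapezoidal rule on a logarithmic grid u = ln(tau), so dtau = tau * du
tau_min, n = 1.0 / s, 10000
u_min, u_max = math.log(tau_min), 0.0
h = (u_max - u_min) / n
total = 0.0
for i in range(n + 1):
    tau = math.exp(u_min + i * h)
    w = 0.5 if i in (0, n) else 1.0
    total += w * dL_dtau(tau) * sigma_hat(tau * s) * tau
sigma = total * h

expected = c * math.log(s)   # closed form for these toy inputs
```

With the realistic, non-factorizable luminosity of Ref. [4] the same quadrature applies; only dL/dτ changes.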
There remains only to determine F (x). In the literature there are several approaches for doing so, and we choose the conservative and more realistic photon distribution of Ref.
[4]. Cahn and Jackson [4], using a prescription proposed by Baur [1], obtained a photon distribution which is not factorizable. However, they were able to give a fit for the differential luminosity which is quite useful in practical calculations:
$\frac{dL}{d\tau} = \left(\frac{Z^2\alpha}{\pi}\right)^2 \frac{16}{3\tau}\, \xi(z), \qquad (2.3)$
where z = 2MR√τ, M is the nucleus mass, R its radius, and ξ(z) is given by
$\xi(z) = \sum_{i=1}^{3} A_i\, e^{-b_i z}, \qquad (2.4)$
with constants $A_i$ and $b_i$ fitted in Ref. [4].
(2.5)
The condition for realistic peripheral collisions (b min > R 1 + R 2 ) is present in the photon distributions showed above, and the applications of Sect. 4 are straightforward once we determine the cross sections for the elementary processes.
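For orientation, the argument z = 2MR√τ can be evaluated with the standard estimates R ≈ 1.2 A^(1/3) fm and M ≈ 0.9315 A GeV (these inputs are our assumptions, not values quoted in the paper). Since ξ(z) falls exponentially, the luminosity is suppressed once z exceeds order one:

```python
# z = 2 * M * R * sqrt(tau) with standard estimates for nuclear mass and radius.
FM_TO_GEV_INV = 5.068  # 1 fm ~ 5.068 GeV^-1 (hbar*c ~ 0.1973 GeV fm)

def z_parameter(A, sqrt_tau):
    M = A * 0.9315                               # nuclear mass in GeV
    R = 1.2 * A ** (1.0 / 3.0) * FM_TO_GEV_INV   # nuclear radius in GeV^-1
    return 2.0 * M * R * sqrt_tau

# at the same sqrt(tau), heavier ions reach z ~ 1 (strong suppression) earlier
z_pb = z_parameter(208, 1.0e-4)  # lead:    z ~ 1.4
z_ca = z_parameter(40, 1.0e-4)   # calcium: z ~ 0.15
```

This makes explicit why the effective two-photon luminosity of heavy ions is peaked at small τ.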
B. Pomerons in the nuclei
In the case where the intermediary particles exchanged in the nucleus-nucleus collision are pomerons instead of photons, we can follow closely the work of Müller and Schramm [8] and generalize the equivalent-photon approximation to this new situation. The cross section for particle production via two-pomeron exchange can then be written as
$\sigma^{PP}_{AA} = \int dx_1\, dx_2\, f_P(x_1)\, f_P(x_2)\, \sigma_{PP}(s_{PP}), \qquad (2.6)$
where f_P(x) is the distribution function describing the probability of finding a pomeron in the nucleus with energy fraction x, and σ_PP(s_PP) is the subprocess cross section at squared energy s_PP. In the case of inclusive particle production we use the form given by Donnachie and Landshoff [12],
$f_P(x) = \frac{1}{4\pi^2 x} \int_{-\infty}^{-(xM)^2} dt\, |\beta_{AP}(t)|^2\, |D_P(t; s')|^2, \qquad (2.7)$
where D_P(t; s') is the pomeron propagator [11],
$D_P(t; s) = \frac{(s/m^2)^{\alpha_P(t)-1}}{\sin\!\left(\frac{1}{2}\pi\alpha_P(t)\right)}\, \exp\!\left[-\frac{1}{2} i\pi\alpha_P(t)\right],$
with s the total squared c.m. energy. The Regge trajectory obeyed by the pomeron is $\alpha_P(t) = 1 + \varepsilon + \alpha'_P t$, where ε = 0.085, α′_P = 0.25 GeV⁻², and t is a small exchanged four-momentum squared, |t| = k² ≪ 1 GeV², so the pomeron behaves like a spin-one boson. The term in the denominator of the pomeron propagator, $[\sin(\frac{1}{2}\pi\alpha_P(t))]^{-1}$, is the signature factor that expresses the different properties of the pomeron under C and P conjugation. At very high c.m. energies the propagator falls very rapidly with k², with an exponential slope given by α′_P ln(s/m²), m being the proton mass, and it is possible to neglect the k² dependence of the signature factor,
$\sin\!\left[\frac{1}{2}\pi(1 + \varepsilon - \alpha'_P k^2)\right] \approx \cos\!\left(\frac{1}{2}\pi\varepsilon\right) \approx 1.$
If we define the pomeron range parameter r₀ as
$r_0^2 = \alpha'_P \ln(s/m^2), \qquad (2.8)$
the pomeron propagator can be written as
$|D_P(t = -k^2; s)| = (s/m^2)^{\varepsilon}\, e^{-r_0^2 k^2}. \qquad (2.9)$
Since we are interested in the spatial distribution of the virtual quanta in the nuclear rest frame we are using t = −k 2 .
The nucleus-pomeron coupling has the form [12]
$\beta_{AP}(t) = 3A\beta_0\, F_A(-t),$
where β₀ = 1.8 GeV⁻¹ is the pomeron-quark coupling, A is the atomic number of the colliding nucleus, and F_A(−t) is the nuclear form factor, for which a Gaussian expression is usually assumed (see, e.g., Drees et al. in [3]),
$F_A(-t) = e^{t/2Q_0^2}, \qquad (2.10)$
where Q 0 = 60 MeV.
Performing the t integration of the distribution function in Eq. (2.7) we obtain
$f_P(x) = \frac{(3A\beta_0)^2}{(2\pi)^2}\, x \left(\frac{s'}{m^2}\right)^{2\varepsilon} \int_{-\infty}^{-(xM)^2} dt\, e^{t/Q_0^2} = \frac{(3A\beta_0 Q_0)^2}{(2\pi)^2}\, x \left(\frac{s'}{m^2}\right)^{2\varepsilon} \exp\!\left[-\left(\frac{xM}{Q_0}\right)^2\right].$
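The closed form of f_P(x) obtained above is straightforward to evaluate; the sketch below codes it with the parameters quoted in the text (β₀ = 1.8 GeV⁻¹, Q₀ = 60 MeV, ε = 0.085) and a toy value for s′ of our choosing, showing the Gaussian cutoff at x of order Q₀/M:

```python
import math

# f_P(x) after the t-integration, with the parameters quoted in the text.
beta0 = 1.8    # pomeron-quark coupling, GeV^-1
Q0 = 0.060     # Gaussian form-factor width, GeV
eps = 0.085
m = 0.938      # proton mass, GeV

def pomeron_flux(x, A, M, s_prime):
    """f_P(x) for a nucleus with mass number A and mass M (GeV); s_prime in GeV^2."""
    pref = (3.0 * A * beta0 * Q0) ** 2 / (2.0 * math.pi) ** 2
    return pref * x * (s_prime / m**2) ** (2.0 * eps) * math.exp(-(x * M / Q0) ** 2)

# Example: A = 40, M ~ 37.3 GeV, toy s_prime; cutoff sets in for x >~ Q0/M ~ 1.6e-3
f_small = pomeron_flux(1.0e-4, 40, 37.3, 1.0e4)
f_large = pomeron_flux(5.0e-3, 40, 37.3, 1.0e4)
```

The exponential factor restricts the pomeron flux to very small momentum fractions, in close analogy with the photon case.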
The total cross section for inclusive particle production is obtained with the above distribution together with the expression for the subprocess PP → X, as prescribed in Eq. (2.6). However, Eq. (2.6) does not exclude the cases where the two nuclei overlap.
To enforce the realistic condition of a peripheral collision it is necessary to perform the calculation taking into account the impact-parameter dependence b. It is straightforward to verify that in the collision of two identical nuclei the total cross section of Eq. (2.6) is modified to [8]
$\frac{d^2\sigma^{PP\to X}_{AA}}{d^2b} = \frac{Q'^2}{2\pi}\, e^{-Q'^2 b^2/2}\, \sigma^{PP}_{AA}, \qquad (2.11)$
where $(Q')^{-2} = (Q_0)^{-2} + 2 r_0^2$. The total cross section for inclusive processes is obtained after integration of Eq. (2.11) with the condition b_min > 2R in the case of identical ions.
For exclusive particle production the determination of the pomeron distribution function in the nuclei is slightly modified, because in this case some specific assumption about the pomeron internal structure is necessary [13]. Following Ref. [8] the distribution function of pomerons is
$f_P(x) = \frac{(3A\beta_0)^2}{(2\pi)^2}\, x \int_{-\infty}^{-(xM)^2} dt\, (-t - x^2 M^2)\, [F_A(-t)]^2\, |D(t)|^2, \qquad (2.12)$
and the cross section for resonance production as a function of the impact parameter is [14]
$\frac{d^2\sigma^{PP\to R}_{AA}}{d^2b} = 2\pi \left(\frac{3A\beta_0}{2\pi^2}\right)^4 \int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\, Q_1^4 Q_2^4 \tilde{Q}^2\, e^{-x_1^2 M^2/Q_1^2}\, e^{-x_2^2 M^2/Q_2^2} \times \left(\frac{x_1 x_2 s^2}{m^4}\right)^{2\varepsilon} \sigma^{PP\to R}(x_1 x_2 s)\, b^2 \tilde{Q}^2\, e^{-b^2 \tilde{Q}^2/2},$
with $\sigma^{PP\to R}(x_1 x_2 s)$ indicating the subprocess cross section (double-pomeron fusion producing a resonance), and where $\tilde{Q}^{-2} = \frac{1}{2}(Q_1^{-2} + Q_2^{-2})$, with $Q_i^{-2} \equiv Q_0^{-2} + 2 r_0^2$ for identical ions.
In the calculations we are going to perform we noticed that the approximation $Q_i^{-2} \approx Q_0^{-2}$ is quite reasonable, because for the energies that we shall consider the pomeron range parameter (Eq. (2.8)) is smaller than the width of the Gaussian form factor, and consequently $\tilde{Q}^2 \approx Q_0^2$. Therefore, we obtain the final expression
$\frac{d^2\sigma^{PP\to R}_{AA}}{d^2b} = 2\pi \left(\frac{3A\beta_0 Q_0^2}{2\pi^2}\right)^4 \int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\, e^{-x_1^2 M^2/Q_0^2}\, e^{-x_2^2 M^2/Q_0^2} \times \left(\frac{x_1 x_2 s^2}{m^4}\right)^{2\varepsilon} \sigma^{PP\to R}(x_1 x_2 s)\, b^2 Q_0^4\, e^{-b^2 Q_0^2/2}. \qquad (2.13)$
As discussed previously, to enforce the condition of peripheral collisions we integrate Eq.(2.13) with the condition b min > 2R.
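The quality of the approximation Q̃² ≈ Q₀² can be checked numerically: with r₀² = α′_P ln(s/m²), the correction 2r₀² remains a few-percent addition to Q₀⁻² even at LHC-like energies (the per-nucleon energies below are illustrative choices, not values fixed by the paper):

```python
import math

# Check that 2*r0^2 stays small compared to Q0^-2, so Qtilde^2 ~ Q0^2.
alpha_prime = 0.25  # GeV^-2
m = 0.938           # proton mass, GeV
Q0 = 0.060          # GeV

def r0_squared(sqrt_s):
    """Pomeron range parameter r0^2 = alpha'_P * ln(s / m^2), Eq. (2.8)."""
    return alpha_prime * math.log(sqrt_s ** 2 / m ** 2)

for sqrt_s in (200.0, 5500.0):   # RHIC-like and LHC-like toy energies, GeV
    ratio = 2.0 * r0_squared(sqrt_s) / Q0 ** -2   # correction relative to Q0^-2
# even at the higher energy the correction is only ~3 %
```

This supports the statement that the pomeron range parameter stays below the width of the Gaussian form factor in the energy range of interest.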
Another way to exclude events due to inelastic central collisions is through the introduction of an absorption factor computed in the Glauber approximation [15]. This factor modifies the cross section in the following form:
$\frac{d\sigma^{gl}_{AA}}{d^2b} = \frac{d\sigma^{PP\to R}_{AA}}{d^2b}\, \exp\!\left[-A^2 \sigma_0 \int \frac{d^2Q}{(2\pi)^2}\, F_A^2(Q^2)\, e^{i\mathbf{Q}\cdot\mathbf{b}}\right] = \frac{d\sigma^{PP\to R}_{AA}}{d^2b}\, \exp\!\left[-A^2 \sigma_0\, \frac{Q_0^2}{4\pi}\, e^{-Q_0^2 b^2/4}\right], \qquad (2.14)$
where σ 0 is the nucleon-nucleon total cross section, whose value for the different energy domains that we shall consider is obtained directly from the fit of Ref. [16]
$\sigma_0 = X s^{\epsilon} + Y_1 s^{-\eta_1} + Y_2 s^{-\eta_2},$ with X = 18.256, Y₁ = 60.19, Y₂ = 33.43, ǫ = 0.34, η₁ = 0.34, η₂ = 0.55, and $F_A(Q^2) = e^{-Q^2/2Q_0^2}$;
we exemplified Eq. (2.14) for the case of resonance production, i.e., $\sigma^{PP\to R}_{AA}$ is the total cross section for resonance production, to be discussed in the next section. The integration in Eq. (2.14) is over the whole impact-parameter space, and in the last section we discuss the differences between the two approaches shown above for removing central collisions.
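The Glauber survival probability appearing in Eq. (2.14) can be sketched directly. For definiteness a toy σ₀ of about 40 mb is used below (converted with 1 mb ≈ 2.568 GeV⁻²) rather than the energy-dependent fit quoted above:

```python
import math

# Survival probability exp[-A^2 * sigma0 * (Q0^2/(4*pi)) * exp(-Q0^2 b^2/4)],
# cf. the second form of Eq. (2.14).
Q0 = 0.060           # GeV
MB_TO_GEV2 = 2.568   # 1 mb ~ 2.568 GeV^-2

def survival(b, A, sigma0):
    """Glauber absorption factor at impact parameter b (GeV^-1)."""
    t_overlap = Q0 ** 2 / (4.0 * math.pi) * math.exp(-Q0 ** 2 * b ** 2 / 4.0)
    return math.exp(-A ** 2 * sigma0 * t_overlap)

sigma0 = 40.0 * MB_TO_GEV2                # toy ~40 mb nucleon-nucleon cross section
surv_central = survival(1.0, 40, sigma0)  # deep overlap: strongly absorbed
surv_far = survival(300.0, 40, sigma0)    # b ~ 60 fm: essentially no absorption
```

The factor interpolates smoothly between full absorption at small b and unity at large b, in contrast with the sharp b_min > 2R cut.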
III. SUBPROCESSES INITIATED BY PHOTONS AND POMERONS
A. Resonances
The main motivation to study resonance production in peripheral heavy-ion collisions is that the high photon luminosity will allow us to observe resonances that couple very weakly to photons. The simplicity of this calculation also enables us to test the methods for removing central collisions, as well as to check to which degree double-pomeron exchange is or is not a background for two-photon physics.
To estimate the production of single spin-zero resonances, we note that these states can be formed by photon-photon fusion with a coupling strength that is measured by their photonic width
$$\sigma_{\gamma\gamma\to R} = \frac{8\pi^2}{M_R\,\hat{s}}\,\Gamma_{R\to\gamma\gamma}\,\delta\!\left(\tau-\frac{M_R^2}{s}\right), \qquad (3.1)$$
where $M_R$ is the resonance mass and $\Gamma_{R\to\gamma\gamma}$ its decay width into two photons. Inserting this expression into Eq. (2.2) we obtain the total cross section for the production of pseudoscalar mesons.
To compute the cross section of the subprocess $PP \to R$ we can use the pomeron model of Donnachie and Landshoff [11]. In this model it is assumed that the pomeron couples to the quarks like an isoscalar photon [11]. This means that the cross sections of $PP \to X$ subprocesses can be obtained from suitable modifications of the cross section for $\gamma\gamma \to X$.
Another aspect to be considered is that the pomeron-quark-quark vertex is not point-like, and when either or both of the two quark legs in this vertex go far off shell the coupling is known to decrease. So the quark-pomeron coupling $\beta_0$ must be replaced by
$$\tilde{\beta}_0(q^2) = \beta_0\,\frac{\mu_0^2}{\mu_0^2+Q^2}, \qquad (3.2)$$
where $\mu_0^2 = 1.2$ GeV$^2$ is a mass scale characteristic of the pomeron; in the case of resonance production $Q = M_R/2$ measures how far one of the quark legs is off mass shell, and $M_R$ is the resonance mass. Therefore, the process $PP \to R$ is totally similar to the one initiated by photons, apart from an appropriate change of factors. The cross section we are looking for is obtained by changing the fine-structure constant $\alpha$ into $9\tilde{\beta}/16\pi^2$, where $\tilde{\beta}$ is given by Eq. (3.2) and $9 = 3^2$ is a color factor, leading to
$$\sigma_{PP}^{R} = \frac{9^2\tilde{\beta}^4}{\alpha^2}\,\frac{\Gamma(R\to\gamma\gamma)}{M_R}\,\delta(x_1 x_2 s - M_R^2).$$
Using this expression in Eq.(2.13) the total cross section is equal to
$$\sigma_{AA}^{PP\to R} = \frac{9\pi}{8}\,\frac{(\tilde{\beta}Q_0)^4}{\alpha^2}\left(\frac{3A\beta_0 Q_0}{2\pi}\right)^4\frac{\Gamma(R\to\gamma\gamma)}{M_R}\left(\frac{M_R^2 s}{m^4}\right)^{2\epsilon}\frac{Q_0^4}{M_R^2}\int\frac{dx}{x}\,\exp\!\left[-\left(\frac{M_R^2 M}{s Q_0 x}\right)^2-\frac{(xM)^2}{Q_0^2}\right]\int_{b_{min}}^{\infty} db\,2\pi b^3\, e^{-Q_0^2 b^2/2}, \qquad (3.3)$$
where b min = 2R.
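Two ingredients of Eq. (3.3) can be checked numerically with a short sketch. First, the coupling suppression $\tilde{\beta}_0/\beta_0$ of Eq. (3.2) evaluated at $Q = M_R/2$: since the cross section scales as $\tilde{\beta}^4$, this drives the fast decrease of diffractive production with resonance mass discussed later; the meson masses below are standard PDG values. Second, the impact-parameter integral, whose closed form (derived here, not quoted from the paper) is verified against direct numerical integration; the values of $Q_0$ and $b_{min}$ are hypothetical samples.

```python
import math

def coupling_suppression(MR, mu0_sq=1.2):
    """beta~_0/beta_0 = mu0^2/(mu0^2 + Q^2) with Q = M_R/2, Eq. (3.2); masses in GeV."""
    return mu0_sq / (mu0_sq + (MR / 2.0)**2)

def b_integral(bmin, Q0, n=100000):
    """Midpoint-rule evaluation of the integral of 2*pi*b^3*exp(-Q0^2 b^2/2) from bmin to infinity."""
    a = Q0**2 / 2.0
    bmax = bmin + 12.0 / math.sqrt(a)  # integrand is negligible beyond this point
    h = (bmax - bmin) / n
    return sum(2.0 * math.pi * (bmin + (i + 0.5) * h)**3
               * math.exp(-a * (bmin + (i + 0.5) * h)**2) for i in range(n)) * h

def b_integral_closed(bmin, Q0):
    """Closed form (2*pi/Q0^2)(bmin^2 + 2/Q0^2) exp(-Q0^2 bmin^2/2) of the same integral."""
    a = Q0**2 / 2.0
    return (math.pi / a) * (bmin**2 + 1.0 / a) * math.exp(-a * bmin**2)

for name, m in [("pi0", 0.135), ("eta", 0.548), ("eta'", 0.958)]:
    print(name, coupling_suppression(m))
```

The suppression factor decreases monotonically with the resonance mass, consistent with the trend of the diffractive cross sections in Table I.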
B. Pion pair production
The continuous production of pion pairs ($\pi^+\pi^-$) is also an interesting signal to be observed in peripheral heavy ion collisions, mostly because they are a background for glueball and other resonance decays. Here we discuss the subprocess cross sections for two-photon exchange, $ZZ \to \gamma\gamma \to ZZ\pi^+\pi^-$, and two-pomeron exchange, $ZZ \to PP \to ZZ\pi^+\pi^-$.
The cross section for pion pair production by two photons can be calculated approximately using a low-energy theorem derived from the partially-conserved-axial-vector-current hypothesis and current algebra, and is equal to [17]
$$\sigma(\gamma\gamma\to\pi^+\pi^-) \simeq \frac{2\pi\alpha^2}{s}\left(1-\frac{4m_\pi^2}{s}\right)^{1/2}\frac{m_V^4}{\left(\tfrac{1}{2}s+m_V^2\right)\left(\tfrac{1}{4}s+m_V^2\right)^2}, \qquad (3.4)$$
where $m_\pi$ is the pion mass, $s$ is the squared center-of-mass energy, and $m_V$ is a free parameter whose best-fit value to the experimental data is $m_V \simeq 1.4$ GeV. This expression shows nice agreement with the experimental data [18]. For large values of $s$ it deviates from the Brodsky and Lepage formula [19]. However, since most of the photon distribution is concentrated in the small-$x$ region, i.e., the photons carry a small fraction of the momentum of the incoming ion, the difference is negligible.
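A minimal numerical sketch of Eq. (3.4), with $m_V = 1.4$ GeV and the charged-pion mass: the cross section rises from zero at the $2m_\pi$ threshold and falls off at larger $s$.

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_PI = 0.13957          # charged pion mass, GeV

def sigma_gg_to_pipi(s, mV=1.4):
    """Low-energy-theorem cross section of Eq. (3.4), in GeV^-2; s in GeV^2."""
    if s <= 4.0 * M_PI**2:
        return 0.0
    beta = math.sqrt(1.0 - 4.0 * M_PI**2 / s)
    return (2.0 * math.pi * ALPHA**2 / s) * beta * mV**4 / (
        (0.5 * s + mV**2) * (0.25 * s + mV**2)**2)

print(sigma_gg_to_pipi(0.3), sigma_gg_to_pipi(2.0))
```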
Using Eqs. (3.4) and (2.2) we obtain
$$\sigma(s) = \frac{2\pi\alpha^2}{s}\int_{\tau_{min}}^{1}\frac{d\tau}{\tau}\left(1-\frac{4m_\pi^2}{s\tau}\right)^{1/2}\frac{m_V^4}{\left(\tfrac{1}{2}s\tau+m_V^2\right)\left(\tfrac{1}{4}s\tau+m_V^2\right)^2}\,\frac{dL}{d\tau}.$$
In the case of double pomeron exchange producing a pion pair we use once again the Donnachie and Landshoff model for the pomeron, obtaining the cross section for $PP \to \pi^+\pi^-$ from the photonic one by changing $\alpha^2 \to 9\beta_0^4/16\pi^2$ in $\sigma(\gamma\gamma\to\pi^+\pi^-)$; the resulting expression replaces $\sigma_{AA}^{PP\to R}(x_1 x_2 s)$ in Eq. (2.13). The total cross section follows after performing the integration in the impact-parameter representation:
$$\sigma_{AA}^{PP\to\pi^+\pi^-} = \frac{\pi^2}{8}\,\frac{9}{4}\,\frac{(\beta_0 Q_0)^4}{s}\left(\frac{3A\beta_0 Q_0}{2\pi^2}\right)^4\int\frac{dx_1}{x_1^2}\int\frac{dx_2}{x_2^2}\; e^{-(x_1 M)^2/Q_0^2}\, e^{-(x_2 M)^2/Q_0^2}\left(\frac{x_1 x_2 s^2}{m^4}\right)^{2\varepsilon}\left(1-\frac{4m_\pi^2}{x_1 x_2 s}\right)^{1/2}\frac{m_V^4}{\left(\tfrac{x_1 x_2 s}{2}+m_V^2\right)\left(\tfrac{x_1 x_2 s}{4}+m_V^2\right)^2}\int_{b_{min}}^{\infty}db\,2\pi Q_0^4\, b^3\, e^{-b^2 Q_0^2/2}.$$
C. Multiple particle production
The elementary cross section for multiple-particle production via two-photon fusion can be described by the parametrization [20]
$$\sigma_{\gamma\gamma\to hadrons} = C_1\left(\frac{s}{s_0}\right)^{\epsilon} + C_2\left(\frac{s}{s_0}\right)^{-\eta}, \qquad (3.5)$$
where $C_1 = 173$ nb, $C_2 = 519$ nb, $s_0 = 1$ GeV$^2$, $\epsilon = 0.079$ and $\eta = 0.4678$. The total cross section comes out from Eq. (2.2).
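The parametrization (3.5) has the familiar shape of a slowly rising pomeron-like term plus a falling Regge-like term, with a shallow minimum at intermediate $s$. A minimal sketch using the quoted constants (cross section in nb, $s$ in GeV$^2$):

```python
def sigma_gg_to_hadrons(s, C1=173.0, C2=519.0, s0=1.0, eps=0.079, eta=0.4678):
    """Two-photon hadronic cross section of Eq. (3.5), in nb; s in GeV^2."""
    return C1 * (s / s0)**eps + C2 * (s / s0)**(-eta)

# the falling term dominates near threshold, the rising one at large s
for s in (10.0, 100.0, 1.0e5):
    print(s, sigma_gg_to_hadrons(s))
```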
Within the Donnachie and Landshoff model it is straightforward to see that with the above parametrization the differential cross section to produce a cluster of particles with mass M X through double pomeron exchange is
$$\frac{d\sigma}{dM_X} = \frac{(3A\beta_0\tilde{\beta}_0\mu_0)^4}{(2\pi)^4 R_N^4\,(16\pi^2\alpha^2)}\; 2M_X \int\frac{ds'}{s'}\left[C_1\left(\frac{s'}{s_0}\right)^{\epsilon}+C_2\left(\frac{s'}{s_0}\right)^{-\eta}\right]\exp\!\left[-\left(\frac{s' M R_N}{s}\right)^2-\left(\frac{M_X^2 M R_N}{s'}\right)^2\right]\int_{b_{min}}^{\infty}db\;\frac{b}{R_N^2}\, e^{-b^2/2R_N^2}.$$
To obtain this expression we used the pomeron distribution function in the nucleus for inclusive processes (Eq. (2.7)).
To allow a comparison with the work of Engel et al. [10], we also make use of the Ter-Martirosyan [21] model for diffractive multiparticle production. In this model the subprocess $PP \to hadrons$ is characterized by the cross section
$$\sigma_{PP}^{tot}\bigl(\ln(M_X^2/m^2), t_1, t_2\bigr) \approx 8\pi\, r(t_1)\, r(t_2), \qquad (3.6)$$
which is a function of the triple-pomeron vertex $r(t)$, where $t$ is the exchanged momentum.
Using the value of $r(0)$ from Ref. [22], $\sigma_{PP}^{tot} = 8\pi r^2(0) \approx 140$ $\mu$barn. Note that there are clear differences between the approaches described above. Eq. (3.5) is a parametrization valid for a wide range of momenta, and with it we naively apply the model of Ref. [9] to compute the total cross section for multiparticle production. On the other hand, Eq. (3.6) is obtained within another specific model and is not expected to be valid over the same range of energies as Eq. (3.5). This difference is discussed in the last section.
Streng [23] applied the model of Ref. [21] to proton-proton collisions in which the initial protons are scattered almost elastically, emerging with a very large fraction of the initial energy, $|x_1|, |x_2| \ge c$, with $c \ge 0.9$. The double pomeron exchange produces a particle cluster within a large rapidity gap and with a mass of the order
$$M_X^2 \approx s\,(1-|x_1|)(1-|x_2|), \qquad (3.7)$$
where s is the reaction energy squared. As the scattering is almost elastic, i.e., the emerging beam has approximately the same energy as the incident one, the following kinematical boundaries can be introduced
$$M_0 \le M_X \le (1-c)\sqrt{s}, \qquad \frac{M_X^2}{1-c} \le s_1 \le (1-c)s. \qquad (3.8)$$
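With the representative values $M_0 = 2$ GeV and $c = 0.9$ adopted in the text, Eqs. (3.7)-(3.8) fix the accessible diffractive-mass window. A small illustrative sketch:

```python
import math

def mx_window(sqrt_s, c=0.9, M0=2.0):
    """Allowed diffractive mass window of Eq. (3.8): M0 <= M_X <= (1-c) sqrt(s)."""
    return M0, (1.0 - c) * sqrt_s

def mx_from_x(sqrt_s, x1, x2):
    """Cluster mass of Eq. (3.7): M_X^2 ~ s (1-|x1|)(1-|x2|)."""
    s = sqrt_s**2
    return math.sqrt(s * (1.0 - abs(x1)) * (1.0 - abs(x2)))

lo, hi = mx_window(6300.0)            # LHC per-nucleon energy, GeV
print(lo, hi, mx_from_x(6300.0, 0.95, 0.95))
```

For nearly elastic protons with $x_1 = x_2 = 0.95$ the cluster mass lands comfortably inside the allowed window.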
IV. RESULTS AND CONCLUSIONS
Peripheral collisions at relativistic heavy ion colliders provide an arena for interesting studies of hadronic physics. Resonances coupling weakly to photons can be studied thanks to the large photon luminosity, and the continuous production of pion pairs will be observed not only as a reaction of interest in itself but also as a possible background for some resonance decays.
A hadron cluster produced within a large rapidity gap will give information about photon-photon and double pomeron exchange processes. In this work we estimated the cross sections for these processes. One of the main points is to verify whether double pomeron exchange is a background for the purely electromagnetic process. We discussed double pomeron exchange according to the Donnachie and Landshoff [11] model and calculated the cross sections in impact-parameter space. The condition for a realistic peripheral collision is imposed by integrating the cross section with $b_{min} > 2R$ in the case of two identical ions with radius $R$.
We considered the production of pseudoscalar resonances in collisions of $^{238}$U at energies available at RHIC ($\sqrt{s} = 200$ GeV/nucleon), and collisions of $^{206}$Pb at energies available at the LHC ($\sqrt{s} = 6300$ GeV/nucleon). Our results are shown in Table I. Contrary to the result of Ref. [7], double pomeron exchange is not important when the cut in the impact parameter is introduced. For a realistic peripheral collision, in the case of resonance production the pomeron-pomeron process is at least two orders of magnitude below the photon-photon one.
The decays of $\pi^0$, $\eta$, etc., will be dominated by two- (or three-) body decays in the central region of rapidity, easily separated from the larger multiplicity common to inelastic collisions. It is interesting that in the case of $\pi^0$ and $\eta$ production we may focus on the $2\gamma$ decay; even if it is possible to separate the background from inelastic nuclear reactions, we still have the background of photon-photon scattering through the QED box diagram producing the same final state. The box diagram will be dominated by light quarks, the electron and the muon, and for these we can use the asymptotic expression of $\gamma\gamma$ scattering ($\sigma(s) \sim 20/s$).
Integrating this expression over a bin of energy (proportional to the resonance partial width into two photons) centered at the mass of the resonance, we obtain a cross section smaller than the resonant one with subsequent decay into two photons. We did not consider the interference between the box and resonant diagrams because on resonance the two processes are out of phase. It is opportune to mention that the decay products will fill the central region of rapidity, which is also one of the conditions proposed in Ref. [6] to isolate the peripheral collisions.
As discussed in Sect. 2 we have two different ways to enforce a realistic peripheral heavy ion collision. One is a geometrical cut in impact-parameter space, where $b_{min} > 2R$ is imposed; the other is the introduction of the absorption factor in the Glauber approximation as given by Eq. (2.14). In Table II we compare the ratios between the total cross sections for diffractive resonance production computed with Eq. (2.14) and with the cut on the impact parameter (given by Eq. (3.3)). The values of Table II show that the geometrical cut is less restrictive than the one given by the Glauber absorption factor. However, which one is more realistic also depends on the energy and on the ion that we are considering. In Table III we present the cross section for $\pi^0$ production for different ions and at different energies. From Eq. (2.14) we notice that small variations in $\sigma_0$ (the nucleon-nucleon total cross section) are promptly transmitted to the total cross section, and modify the ratios between the different methods to exclude inelastic collisions. Table III shows that the difference between the methods also becomes less important for light ions, but the most striking fact in this table is that for light ions double pomeron exchange starts becoming a background for photon-photon processes! According to Table III, for $^{28}$Si the diffractive $\pi^0$ production is only a factor of 2 below the electromagnetic one (assuming the geometrical cut). This is not surprising, because we know that for proton-proton collisions the double pomeron exchange process should be larger than the electromagnetic one for producing a light resonance.
In Table IV we show the pion pair cross section for different ions. The values were obtained using the geometrical cut, and even if with this procedure the diffractive cross section is slightly overestimated for heavy ions, we verify that photon-photon dominates. For light ions the diffractive process is already of the order of 10% of the electromagnetic one.
The simulations discussed in the last paper of Ref. [6] have shown that the $\gamma\gamma$ interactions produce final states with small summed transverse momentum ($|\vec{p}_T|$). Therefore, a cut of $|\vec{p}_T| \le 40-100$ MeV/$c$ can considerably reduce the background of non-peripheral collisions. In Table IV we present the cross section for pion pair production through two-photon interactions with $|\vec{p}_T| \le 100$ MeV/$c$. With this cut the cross section is reduced by almost a factor of 4. The electromagnetic process with the restriction on $p_T$ is still larger than the double pomeron exchange one without this cut, and the introduction of this cut in the diffractive process produces a similar reduction.
The results for the production of a hadron cluster with invariant mass $M_X$ are depicted in Fig. 1. The figure shows the cross section for four different ions (Pb, Au, Ag, Ca) at energies that will be available at RHIC and the LHC. The results were obtained by integrating the cross sections with the condition $b_{min} > 2R$. At the LHC the photon-photon process will dominate the cross sections for heavy ions, whereas for light ions and small invariant mass the two contributions become of the same order. For heavy ions the diffractive process is indeed negligible. Note that our result for photons is similar to the one of Engel et al. [10], but our diffractive cross section is slightly smaller than the one of Ref. [10]. We credit this deviation to the differences in our approaches to calculating the subprocess cross section, mainly the use of Eq. (3.5) with the changes prescribed by Donnachie and Landshoff [11] instead of Eq. (3.6) given by the Ter-Martirosyan [21] model. They also use a value for $\sigma_0$ that is smaller than the one we considered here, which gives a smaller cut of the central collisions. We believe that the use of Eq. (3.5) and the model of Ref. [11] is more appropriate for the full range of momenta. Actually, diffractive models are plagued by uncertainties, and the measurement of double pomeron exchange in heavy ion colliders will provide useful information to distinguish between different models.
For multiple particle production we will not have the criterion of low multiplicity to help us select the truly peripheral collisions, and it is far from clear whether the cut in transverse momentum will be effective in selecting the $\gamma\gamma$ events. However, we can separate the peripheral events on the basis of a clustering in the central region of rapidity, although an extensive and detailed simulation of the background processes will be necessary in order to set the precise interval of rapidity needed to cut the inelastic nuclear collisions.
As verified by Drees, Ellis and Zeppenfeld [3], Eq.(2.10) is a reasonable approximation for the form factor obtained from a Fermi or Woods-Saxon density distribution. However, their result shows that for heavy final states the photon-photon luminosity is slightly underestimated, and we can expect the same for the Pomeron one. A simple form factor expression consistent with the Fermi distribution has been recently obtained in Ref. [25], and its use would yield a few percent larger cross section in the case of a very heavy hadron cluster production.
In the case of peripheral heavy ion collisions at RHIC we surely cannot neglect the diffractive contribution; for light ions and a hadron cluster with low invariant mass it even dominates over photon-photon collisions. Notice that these results may change if we use the Glauber absorption factor to compute the cross section (depending on the energy, the ion and the invariant mass), but the actual fact is that double pomeron exchange cannot be neglected at RHIC.
In conclusion, we estimated the production of resonances, pion pairs and a cluster of hadrons with invariant mass $M_X$ in peripheral heavy ion collisions at energies that will be available at RHIC and the LHC. The condition for a realistic peripheral collision was studied with the use of a geometrical cut, where the minimum impact parameter was forced to be larger than twice the radius of the (identical) nuclei. The introduction of an absorption factor in the Glauber approximation to eliminate central collisions was also studied. We found that the most restrictive method to account for inelastic collisions depends on the energy, the ion, as well as on the value of $\sigma_0$ (the nucleon-nucleon total cross section). The geometrical cut is not always the most restrictive way to enforce peripheral collisions, and this is a topic that should be settled by the future experiments. In both cases we noticed that at the energies of the LHC operating in the heavy ion mode, and for very heavy ions, double pomeron exchange is not a background for the two-photon process.

TABLE IV. Cross sections for $\pi^+\pi^-$ production. The energies are in GeV/nucleon and the cross sections in mbarn. $\sigma_{\gamma\gamma}(p_T < 100\ \mathrm{MeV})$ is the pion pair production through photon-photon interaction with the cut $p_T < 100$ MeV.
In Eq. (3.8) we take $M_0 = 2$ GeV and $c = 0.9$. These limits have been translated to the case of heavy ions by Engel et al. [10], and we proceed as they do. If we consider Eq. (3.6), dress it with the pomeron distribution functions within the nuclei, and subtract the central collisions with the absorption factor computed in the Glauber approximation [15], we reproduce the results of Ref. [10].
Table I also shows that the rate of diffractive resonance production decreases as the meson mass increases. The main reason for this behavior lies in the fast decrease of the pomeron-quark coupling, as shown in Eq. (3.2). Note that the results of this table assume 100% efficiency in tagging the peripheral collision; even for a small efficiency, the cross section for light resonances implies approximately billions of events/yr, which easily survive the cuts for background separation proposed by Nystrand and Klein (see the last paper of Ref. [6]). One of the most important cuts to separate inelastic nuclear reactions, associated with grazing collisions, is the small multiplicity of the final state, and this is exactly what we expect in the final states of the particles discussed in Table I.
FIG. 1. Cross section for multiple particle production with invariant mass equal to $M_X$ for different nuclei collisions. The nuclei are indicated in the upper corner of each figure. The solid line is for pomeron-pomeron interaction and the dashed line for double photon exchange at the LHC, $\sqrt{s} = 6300$ GeV/nucleon. In the same figures one can also see the cross sections for RHIC, $\sqrt{s} = 200$ GeV/nucleon: double pomeron exchange is given by the dotted line and the photon interaction by the dotted-dashed line.
TABLE I. Cross sections for resonance production through photon-photon ($\gamma\gamma$) and double-pomeron ($PP$) processes. For RHIC, $\sqrt{s} = 200$ GeV/nucleon, we considered the $^{238}$U ion, and for the LHC, $\sqrt{s} = 6300$ GeV/nucleon, the nucleus is $^{206}$Pb. The cross sections are in mbarn.
Equation (2.4) is a fit resulting from the numerical integration of the photon distribution, accurate to 2% or better for $0.05 < z < 5.0$, where $A_1 = 1.909$, $A_2 = 12.35$, $A_3 = 46.28$, $b_1 = 2.566$, $b_2 = 4.948$, and $b_3 = 15.21$. For $z < 0.05$ we use the expression (see Ref. [4])
$$\frac{dL}{d\tau} = \left(\frac{Z^2\alpha}{\pi}\right)^2\frac{16}{3\tau}\left[\ln\!\left(\frac{1.234}{z}\right)\right]^3.$$
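The small-$z$ expression can be evaluated directly. The scaling variable $z$ is defined earlier in the paper (not reproduced in this excerpt), so the sketch below simply treats $\tau$, $z$ and the ion charge $Z$ as inputs:

```python
import math

ALPHA = 1.0 / 137.036

def dL_dtau_small_z(tau, z, Z):
    """Small-z photon-photon luminosity: (Z^2 alpha/pi)^2 (16/3tau) [ln(1.234/z)]^3."""
    return (Z**2 * ALPHA / math.pi)**2 * (16.0 / (3.0 * tau)) * math.log(1.234 / z)**3

# the luminosity rises steeply as z decreases and scales as 1/tau
print(dL_dtau_small_z(1e-4, 0.01, 82), dL_dtau_small_z(1e-4, 0.04, 82))
```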
The situation changes considerably for light ions, and mostly for the energies available at RHIC, where double pomeron exchange cannot be neglected.

ACKNOWLEDGMENTS

One of us (C.G.R.) thanks Paulo S. R. da Silva for useful discussions, and C. A. Bertulani for a helpful remark. This research was supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) (AAN), Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) (CGR and AAN), and by Programa de Apoio a Núcleos de Excelência (PRONEX).
TABLE II. Ratios of cross sections for diffractive resonance production calculated with the Glauber absorption factor to those obtained with the geometrical cut, in the collision of $^{238}$U at energies available at RHIC ($\sqrt{s} = 200$ GeV/nucleon), and collisions of $^{206}$Pb at energies available at the LHC ($\sqrt{s} = 6300$ GeV/nucleon).

TABLE III. Cross section for $\pi^0$ production for different ions and at different energies. The energies are in GeV/nucleon and the cross sections in mbarn. $\sigma^{PP\to R}$ is the cross section computed with the geometrical cut and $\sigma^{gl}$ is the one with the absorption factor.

Nucleus       √s      σ^{PP→R}_{AA}    σ^{gl}_{PP}      σ_{γγ}
Au (A=197)    100     0.044            0.55 × 10^-3     2.4
Ca (A=40)     3500    0.043            0.39 × 10^-3     0.14
Si (A=28)     200     0.34 × 10^-2     0.15 × 10^-3     0.69 × 10^-2
Si (A=28)     100     0.22 × 10^-2     0.12 × 10^-3     0.39 × 10^-2

TABLE IV:

Nucleus   √s      σ_{γγ}          σ_{γγ}(p_T < 100 MeV)    σ_{PP}
U         200     9               2.15                     7.47 × 10^-3
Pb        6300    81.96           15.98                    1.34 × 10^-2
Au        100     2.3             0.523                    8.11 × 10^-3
Ca        3500    0.28            0.05                     5.85 × 10^-3
Si        200     0.98 × 10^-2    0.21 × 10^-2             1.02 × 10^-3
Si        100     0.49 × 10^-2    0.12 × 10^-2             8.6 × 10^-4
G. Baur, J. Phys. G24, 1657 (1998).
G. Baur, K. Hencken and D. Trautmann, hep-ph/9810418; C. A. Bertulani and G. Baur, Phys. Reports 163, 299 (1988); G. Baur, in Proceedings of the CBPF International Workshop on Relativistic Aspects of Nuclear Physics, Rio de Janeiro, 1989, edited by T. Kodama et al. (World Scientific, Singapore, 1990), p. 127; G. Baur and C. A. Bertulani, Nucl. Phys. A505, 835 (1989).
E. Papageorgiu, Phys. Rev. D40, 92 (1989); Nucl. Phys. A498, 593c (1989); M. Grabiak et al., J. Phys. G15, L25 (1989); M. Drees, J. Ellis and D. Zeppenfeld, Phys. Lett. B223, 454 (1989); M. Greiner, M. Vidovic, J. Rau and G. Soff, J. Phys. G17, L45 (1991); B. Müller and A. J. Schramm, Phys. Rev. D42, 3699 (1990); J. S. Wu, C. Bottcher, M. R. Strayer and A. K. Kerman, Ann. Phys. 210, 402 (1991).
R. N. Cahn and J. D. Jackson, Phys. Rev. D42, 3690 (1990).
J. Rau, B. Müller, W. Greiner and G. Soff, J. Phys. G16, 211 (1990); L. D. Almeida, A. A. Natale, S. F. Novaes and O. J. P. Eboli, Phys. Rev. D44, 118 (1991).
S. Klein and E. Scannapieco, preprint LBNL-40457, June 1997, talk at Photon97 (hep-ph/9706358); preprint LBNL-40495, June 1997, presented at CIPANP97 (nucl-th/9707008); J. Nystrand and S. Klein, preprint LBNL-41111, November 1997, talk at Hadron97 (hep-ex/9711021).
A. A. Natale, Mod. Phys. Lett. A9, 2075 (1994); A. A. Natale, Phys. Lett. B362, 177 (1995).
B. Müller and A. J. Schramm, Nucl. Phys. A523, 677 (1991).
A. Donnachie and P. V. Landshoff, Phys. Lett. B185, 403 (1987); B207, 319 (1988); Nucl. Phys. B311, 509 (1988/89).
R. Engel, M. A. Braun, C. Pajares and J. Ranft, Z. Phys. C74, 687 (1997).
A. Donnachie and P. V. Landshoff, Nucl. Phys. B244, 322 (1984); Phys. Lett. B191, 309 (1987); Nucl. Phys. B303, 634 (1988).
A. Schäfer, O. Nachtmann and R. Schöpf, Phys. Lett. B249, 331 (1990).
A. J. Schramm and D. H. Reeves, Phys. Rev. D55, 7312 (1997).
V. Franco and R. J. Glauber, Phys. Rev. 142, 1195 (1966).
C. Caso et al. (Particle Data Group), Eur. Phys. J. C3, 1 (1998).
H. Terazawa, Phys. Rev. D51, R954 (1995).
Mark II Collaboration, J. Boyer et al., Phys. Rev. Lett. 56, 207 (1986); TPC/Two-Gamma Collaboration, H. Aihara et al., Phys. Rev. Lett. 57, 404 (1986); CLEO Collaboration, J. Dominick et al., Phys. Rev. D50, 3024 (1994).
S. J. Brodsky and G. P. Lepage, Phys. Rev. D24, 1808 (1981).
L3 Collaboration, M. Acciarri et al., Phys. Lett. B408, 450 (1997).
K. A. Ter-Martirosyan, Phys. Lett. 44B, 179 (1973).
R. L. Cool et al., Phys. Rev. Lett. 47, 701 (1981).
K. H. Streng, Phys. Lett. 166B, 443 (1986).
A. B. Kaidalov and K. A. Ter-Martirosyan, Nucl. Phys. B75, 471 (1974).
S. Klein and J. Nystrand, Phys. Rev. C60, 014903 (1999).
DOI: 10.1103/PhysRevLett.97.116801; arXiv: cond-mat/0608146 (https://arxiv.org/pdf/cond-mat/0608146v1.pdf)
Cycloaddition Functionalizations to Preserve or Control the Conductance of Carbon Nanotubes

Young-Su Lee and Nicola Marzari
Department of Materials Science and Engineering, and Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
(6 Aug 2006)

We identify a class of covalent functionalizations that preserves or controls the conductance of single-walled metallic carbon nanotubes. [2+1] cycloadditions can induce bond cleaving between adjacent sidewall carbons, recovering in the process the sp2 hybridization and the ideal conductance of the pristine tubes. This is radically at variance with the damage permanently induced by other common ligands, where a single covalent bond is formed with a sidewall carbon. Chirality, curvature, and chemistry determine bond cleaving, and in turn the electrical transport properties of a functionalized tube. A well-defined range of diameters can be found for which certain addends exhibit a bistable state, where the opening or closing of the sidewall bond, accompanied by a switch in the conductance, could be directed with chemical, optical or thermal means.
Chemical functionalizations of carbon nanotubes (CNTs) are the subject of intensive research [1,2], and could offer new and promising avenues to process and assemble tubes, add sensing capabilities, or tune their electronic properties (e.g., doping levels, Schottky barriers, work functions, and electron-phonon couplings). However, the benefits of functionalizations are compromised by the damage to the conduction channels that follows sp 3 rehybridization of the sidewall carbons [3,4], as evidenced by absorption spectra and electrical transport measurements [5,6,7]. We report here on a class of cycloaddition functionalizations that preserves instead the remarkable transport properties of metallic CNTs. In addition, we identify a subclass of addends that displays a reversible valence tautomerism that can directly control the conductance. We focus here on [2+1] cycloaddition reactions, where the addition of a carbene or a nitrene group saturates a double-bond between two carbon atoms, forming a cyclopropane-like three-membered ring. Such functionalizations have been reported extensively in the literature [8,9,10]. All our calculations are performed using density-functional theory in the Perdew-Burke-Ernzerhof generalized gradient approximation (PBE-GGA) [11], ultrasoft pseudopotentials [12], a planewave basis set with a cutoff of 30 Ry for the wavefunctions and 240 Ry for the charge density, as implemented in quantum-espresso [13]. We examine first the simplest members in this class of addends, CH 2 and NH. Fig. 1(a) and (b) show the two inequivalent choices available on a (5,5) metallic CNT; for convenience, we label these as "S" (skewed) and "O" (orthogonal), reminiscent of the relative positions of the sidewall carbons with respect to the tube axis. Our simulation cells include 12 n carbons for a given (n,n) CNT, plus one functional group. 
We use a 1×1×4 Monkhorst-Pack mesh (including Γ) for structural optimizations, and a 1×1×8 mesh for single-point energy calculations, with a cold smearing of 0.03 Ry [14].
First, and for (n,n) metallic tubes, we highlight how strongly the reaction energies of these functionalizations depend on the curvature of the nanotubes, and on their attachment sites S and O. We plot in Fig. 2(a) the reaction energies ΔE_CNT (defined as ΔE_CNT = E_CNT-func − E_CNT − E_func, func = CH2 or NH), taking as a zero reference the same reaction on graphene. The reaction energies have a well-defined linear dependence on curvature, clearly demonstrating the higher reactivity of smaller-diameter tubes. The O site is always more stable, and significantly so for all diameters considered; at room temperature all small-diameter armchair CNTs strongly favor the O configuration. The energy difference between the O and the S form of the (5,5) CNT is 1.24 eV, which is in good agreement with other plane-wave basis calculations (1.24 eV, Ref. [15]) and localized-basis calculations (1.4 eV, Ref. [16]). Second, we find that the C1-C6 distance for this O configuration (d16 in Fig. 1(c)) is much larger than the usual C-C distance (1.54 Å in diamond and 1.42 Å in graphite), a clear indication that the sidewall bond is broken [15,16,17,18]. For the CH2 and NH cycloadditions, we observe bond cleaving for all nanotubes studied (up to (12,12)); on the other hand, in graphene the bond is intact. We estimate the critical diameter that separates the two regimes by bending a graphene sheet: we can see in Fig. 2(b) that such a model closely reproduces the nanotube results, and a sharp transition from the bond-intact to the bond-broken form takes place around a diameter of 2.4 nm, i.e. a (18,18) tube, for CH2 functionalizations. Broken or intact sidewall bonds play a fundamental role in the electronic transport properties of a metallic nanotube. We show this in Fig.
3, where the Landauer conductance is calculated for a (5,5) CNT functionalized with dichlorocarbene (CCl2) (first reported experimentally in 1998 [8]), using an approach recently introduced by us that allows us to treat realistic nanostructures with thousands of atoms while preserving full first-principles accuracy [4]. For this diameter the sidewall bond is broken, and the scattering by a single CCl2 group is found to be remarkably weak, with the conductance approaching its ideal value. Even after adding 30 groups on the central 43 nm segment of an infinite nanotube, the conductance is only reduced by 25%. This is in sharp contrast with the case of a hydrogenated tube, where the conductance drops practically to zero when the tube is functionalized with a comparable number of ligands.

This result is easily rationalized. Hydrogen and other single-bond covalent ligands induce sp3 hybridization of the sidewall carbons, and these chemical defects act as very strong scatterers [4]. Such a dramatic decrease in the conductance has also been recently confirmed experimentally [7]. On the other hand, after bond cleavage C1 and C6 recover a graphite-like bonding environment (Fig. 1(c)), with three covalent bonds to their nearest neighbors. Their electronic orbitals go back to sp2 hybridization, allowing the pz orbitals of C1 and C6 to be recovered (as confirmed by inspection of the maximally-localized Wannier functions [19]) and to contribute again to the graphitic π manifold. The net result is that the conductance approaches again that of a pristine CNT, highlighting the promise of cycloadditions in preserving the conductance of metallic nanotubes.

In principle, around the critical diameter shown in Fig. 2(b), a functional group could be found that displays both the open and the closed configuration as stable states on the same tube. If that were the case, interconversion between the two valence tautomers would have a direct effect on the conductance.
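The contrast between the weak CCl2 scatterers and the strong sp3-type hydrogen defects can be rationalized with a back-of-the-envelope estimate: if the N defects scatter incoherently, their resistances add in series (Ohmic regime), so (1 − T_N)/T_N = N(1 − T_1)/T_1 for a single-defect transmission T_1. The per-defect transmissions below are illustrative values chosen to reproduce the quoted ~25% reduction, not numbers from the ab initio calculation:

```python
def series_transmission(t1, n):
    """Total transmission of n identical scatterers combined incoherently,
    via Ohmic series addition: (1 - T_n)/T_n = n * (1 - t1)/t1."""
    rho = n * (1.0 - t1) / t1   # dimensionless series "resistance"
    return 1.0 / (1.0 + rho)

# weak, bond-breaking cycloaddend: ~1% backscattering per defect
t_weak = series_transmission(0.989, 30)    # ~0.75, i.e. ~25% reduction
# strong sp3-type defect (hydrogen-like): ~50% backscattering per defect
t_strong = series_transmission(0.5, 30)    # ~0.03, conductance essentially gone
```

In this picture, a ~1% per-defect backscattering accumulates to only a ~25% loss over 30 defects, while a 50% per-defect backscattering wipes the conductance out, mirroring the CCl2 versus hydrogen comparison.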
We illustrate this paradigm with the dichlorocarbene example of Fig. 3: if we force the tube into its closed configuration, with all the sidewall bonds frozen in their ideal pristine-tube geometry, the conductance decreases by a factor of 2 or 3 with respect to the case of the relaxed tube [20]. The configuration where the sidewall bonds are intact is not stable in this case but, depending on the chemistry of the addends and the diameter considered, optimal ligands could be found for which a double-well stability is present.

In order to identify the factors that determine stability in the open or closed form, we first screen several addends on small molecules. We find that the bridged 1,6-X-[10]annulene (inset of Fig. 4(a)) is an excellent molecular homologue of a functionalized CNT [17]. It is well known that the substitutional group X dictates the preference for the annulene (henceforth labelled 1o) or for its valence tautomer, a bisnorcaradiene derivative (1c), corresponding to the open and closed configurations of a functionalized CNT [22,23,24,25,26,27]. A similar tautomerization between an open (2o) and a closed (2c) form takes place in a pyrene derivative [28] (inset of Fig. 4(b)). We thus tested on these molecules the substitutional groups X = CH2, NH, SiH2, C(NO2)2, C(CN)2, C(CCH)2, C(CH3)2, C(COOH)2, CCl2, C(NH2)2, C6H4O [29], and C13H8 [29].

To assess the accuracy of our PBE-GGA approach, we compare in Table I our results for d16 in 1 with those obtained from experiments or other theoretical methods (second-order Møller-Plesset perturbation theory (MP2) [30] and the Becke three-parameter Lee-Yang-Parr hybrid functional (B3LYP) [31]), finding excellent agreement. Hydrogen and halogens have been reported to stabilize 1o both experimentally and theoretically [24,25,32]. The cyano group favors 1c experimentally (the possibility of coexistence with 1o is discussed) [27], and two minima have actually been predicted theoretically [32,35].
We show the potential energy profile of selected groups in Fig. 4. All carbenes stabilizing the closed form in both 1 and 2 share a common feature: partially-occupied p orbitals parallel to the in-plane pσ orbital of the bridgehead C11 atom [36] (the plane considered is that of C1-C11-C6 of Fig. 1(c)). This conclusion is strongly supported by examining the energy-minimum conformation for X=C(NO2)2. At equilibrium, the two oxygen atoms in the NO2 group lie on a line parallel to the C1-C6 bond, and the open form is stable (solid red in Fig. 4). The sidewall bond will switch from open to closed upon rotation of the two NO2 groups by 90°, thereby placing the p orbitals of the nitrogens parallel to the pσ orbital of C11 (dashed red in Fig. 4).

TABLE I: Experimental and theoretical d16 of 1. Note that the calculations assume isolated molecules at 0 K, while experimental data are obtained from crystalline systems at finite temperature. Theory predicts two stable minima for X=C(CN)2. The long d16 of X=C(CH3)2 indicates that the potential energy surface would be very flat, which is also predicted in Fig. 4, especially for 2. a Ref. [25], b Ref. [24], c Ref. [27], d Ref. [22], e Ref. [33], f Ref. [32], g Ref. [34], h Ref.

Among all the substituents screened, we find that SiH2, C(CCH)2 and C(CN)2 show most clearly the presence of two minima in their potential energy surface; we choose here X=C(CN)2 as the most promising candidate since both 1c [27] and a C60 derivative [37] have already been synthesized. We explored, therefore, the potential energy landscape for armchair CNTs in the case of C(CN)2 cycloadditions. The results shown in Fig. 5 closely reflect those found in the molecular homologues. A unique minimum in the open form is found in small-diameter tubes, as is generally the case in these [2+1] cycloadditions.
As the diameter is increased, the signature of the closed minimum starts to appear, first as an inflection point ((5,5) CNT), then as a local minimum for the (10,10) CNT (φ=1.36 nm, as in 1c), and finally as a global minimum for the (12,12) CNT (φ=1.63 nm, as in 2c).

As discussed before, the conductance is controlled by the bonding and hybridization of the sidewall carbons. We compare in Fig. 6 the two stable open and closed forms for the (10,10) CNT functionalized with C(CN)2. The scattering induced by a single group is negligible, especially in the open form, and the conductance around the Fermi energy is extremely close to its ideal value (Fig. 6(a)). As the number of functional groups is increased, the difference between the two minima rapidly becomes apparent (Fig. 6(b)). Two conclusions can be drawn. First, even with a large number of functional groups, the conductance of the tube is well preserved whenever cycloaddition breaks the sidewall bond. Second, a subclass of substituents can be found (e.g., C(CN)2) that stabilize two tautomeric forms on the same tube, separately displaying high and low conductance.

Several mechanisms, including photochemical, electrochemical, and thermal, could then direct interconversion between the two tautomeric forms. Photochemical and electrochemical interconversion rely on the fact that the energy levels of the frontier orbitals are affected by d16, depending on their symmetries and charge distributions (e.g., the bond weakens as a filled orbital that has bonding character along C1-C6 is emptied) [32,36]. Both photochemical excitation and electrochemical reduction or oxidation can populate or depopulate those frontier orbitals that favor the open or closed form; as a result, they would modulate the bond distance, and ultimately the conductance.
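The tube diameters quoted in this discussion follow from the standard chiral-vector formula d = a√(n² + nm + m²)/π with graphene lattice constant a ≈ 0.246 nm, which for an armchair (n,n) tube reduces to d = √3 n a/π. A quick check reproduces φ ≈ 1.36 nm for (10,10), φ ≈ 1.63 nm for (12,12), and the ≈ 2.4 nm critical diameter of the (18,18) tube:

```python
import math

def armchair_diameter_nm(n, a=0.246):
    """Diameter (nm) of an (n,n) armchair CNT: d = sqrt(3) * n * a / pi,
    with a = 0.246 nm the graphene lattice constant."""
    return math.sqrt(3.0) * n * a / math.pi

for n in (5, 10, 12, 18):
    print(f"({n},{n})  d = {armchair_diameter_nm(n):.2f} nm")
# (5,5) -> 0.68 nm, (10,10) -> 1.36 nm, (12,12) -> 1.63 nm, (18,18) -> 2.44 nm
```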
As a proof of principle, time-dependent density-functional calculations in 2 for X=C(CN)2 show that the first singlet excitation (S1) drives the system from the open to the closed form [38]. A similar conclusion is drawn from experimental observations in 2o for X=CH2, where this open form is stable in the ground state while the closed 2c is presumed to be stable on the S1 energy surface [28]. Temperature also plays an important role, and 13C NMR spectroscopy and X-ray data have captured the temperature-dependent equilibrium of similar fluxional systems: a higher temperature stabilizes 1o for X=C(CN)(CH3), while destabilizing it for X=C(CH3)2 [23,26].

In conclusion, our calculations predict that 1) a broad class of cycloaddition functionalizations on narrow-diameter nanotubes recovers, as a consequence of bond cleaving, the conductance of the original pristine tubes, allowing for organic-chemistry approaches to manipulation and assembly that preserve the remarkable electronic properties of these materials, and 2) a subclass of addends, exemplified in this work by dicyanocarbene, exhibits fluxional behavior that could be controlled with optical or electrochemical means. Such conductance control, if realized, could have practical applications in nano- and opto-electronics, chemical sensing, and imaging.

The authors would like to thank Francesco Stellacci (MIT) and Maurizio Prato (University of Trieste) for helpful discussions. This research has been supported by the MIT Institute for Soldier Nanotechnologies (ISN-ARO DAAD 19-02-D-0002) and the National Science Foundation (DMR-0304019); computational facilities have been provided through NSF (DMR-0414849) and PNNL (EMSL-UP-9597).
FIG. 1: The three different configurations for a functional group on an armchair nanotube are shown (CH2 on a (5,5) CNT): (a) skewed S, (b) orthogonal O with an intact ("closed") sidewall bond, and (c) orthogonal O with a broken ("open") sidewall bond.
FIG. 2: (a) Energy change ∆E_CNT upon functionalization as a function of curvature, for (n,n) armchair CNTs; the zero reference is for graphene. (b) Sidewall equilibrium bond distance C1-C6 (d16) for (n,n) CNTs and for a bent graphene sheet, functionalized at the stable O site. The sidewall bond is broken in all the (n,n) CNTs considered. Continuous bending of a graphene sheet shows that a well-defined transition from the closed to the open form takes place as the curvature increases.
FIG. 3: Quantum conductance for a (5,5) CNT functionalized with one CCl2 group, with 30 CCl2 groups, and with 30 hydrogen pairs (dashed line: pristine (5,5) CNT). A CCl2 addend will choose the O-open configuration, while hydrogens prefer to pair in the S configuration; the results correspond to these stable choices. The conductance for the energetically unstable O-closed configuration is also plotted. The 30 functional groups are arranged randomly on the central 43-nm segment of an otherwise infinite tube; the conductance is then averaged over 10 different configurations.
FIG. 4: Potential energy surface as a function of d16 for select cases of (a) 1 and (b) 2. C(CN)2 (violet) and C(CCH)2 (blue) show a double-well minimum in both 1 and 2. C(NO2)2-rot (dashed red) indicates the unstable conformation where the two NO2 groups are rotated from their equilibrium position (solid red) by 90°.
FIG. 5: Potential energy surface for (n,n) CNTs functionalized with C(CN)2. Both (10,10) and (12,12) CNTs display a double-well minimum.
FIG. 6: Quantum conductance for a (10,10) CNT functionalized with C(CN)2, in the O-open and O-closed stable configurations (dashed line: pristine (10,10) CNT). (a) Single group. (b) 30 functional groups arranged randomly on the central 32-nm segment of an infinite tube; the conductance is then averaged over 10 different configurations.
X         Expt.      MP2             B3LYP           This work
CH2       2.235 a    2.251 f         2.279 g         2.278
CF2       2.269 b    2.268 f         2.296 f         2.300
C(CN)2    1.542 c    1.599, 2.237 f  1.558, 2.253 h  1.572, 2.245
C(CH3)2   1.836 d    2.156 f         2.168 f         2.151
NH        (open) e                   2.237 g         2.239
C. A. Dyke and J. M. Tour, J. Phys. Chem. A 108, 11151 (2004).
X. Lu and Z. Chen, Chem. Rev. 105, 3643 (2005).
K. Kamaras et al., Science 301, 1501 (2003).
Y.-S. Lee, M. B. Nardelli, and N. Marzari, Phys. Rev. Lett. 95, 076804 (2005).
M. S. Strano et al., Science 301, 1519 (2003).
C. Wang et al., J. Am. Chem. Soc. 127, 11460 (2005).
C. Klinke et al., Nano Lett. 6, 906 (2006).
J. Chen et al., Science 282, 95 (1998).
M. Holzinger et al., Angew. Chem. Int. Ed. 40, 4002 (2001).
K. S. Coleman, S. R. Bailey, S. Fogden, and M. L. H. Green, J. Am. Chem. Soc. 125, 8722 (2003).
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
D. Vanderbilt, Phys. Rev. B 41, 7892 (1990).
S. Baroni et al., http://www.quantum-espresso.org.
N. Marzari et al., Phys. Rev. Lett. 82, 3296 (1999).
J. Lu et al., J. Mol. Struct. (Theochem) 725, 255 (2005).
H. F. Bettinger, Chem. Eur. J. 12, 4372 (2006).
Z. Chen et al., Angew. Chem. Int. Ed. 43, 1552 (2004).
J. Zhao et al., ChemPhysChem 6, 598 (2005).
N. Marzari and D. Vanderbilt, Phys. Rev. B 56, 12847 (1997).
A recent study that considered a (6,6) CNT functionalized with CCl2 did not find that the open configuration displays higher conductance [21]; we ascribe this discrepancy to the lack of chemical accuracy in the model Hückel Hamiltonian used.
H. Park, J. Zhao, and J. P. Lu, Nano Lett. 6, 916 (2006).
R. Bianchi et al., Acta Crystallogr. B29, 1196 (1973).
H. Günther and H. Schmickler, Pure Appl. Chem. 44, 807 (1975).
T. Pilati and M. Simonetta, Acta Crystallogr. B32, 1912 (1976).
R. Bianchi, T. Pilati, and M. Simonetta, Acta Crystallogr. B36, 3146 (1980).
R. Bianchi, T. Pilati, and M. Simonetta, J. Am. Chem. Soc. 103, 6426 (1981).
E. Vogel et al., Angew. Chem. Int. Ed. Engl. 21, 869 (1982).
J. Wirz et al., Helv. Chim. Acta 67, 305 (1984).
M. Eiermann et al., Angew. Chem. Int. Ed. Engl. 34, 1591 (1995).
C. Møller and M. S. Plesset, Phys. Rev. 46, 618 (1934).
A. D. Becke, J. Chem. Phys. 98, 1372 (1993);
C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B 37, 785 (1988).
C. H. Choi and M. Kertesz, J. Phys. Chem. A 102, 3429 (1998).
E. Vogel, W. Pretzer, and W. A. Böll, Tetrahedron Lett. 6, 3613 (1965).
H. J. Jiao, N. J. R. v. E. Hommes, and P. v. R. Schleyer, Org. Lett. 4, 2393 (2002).
C. Gellini, P. R. Salvi, and E. Vogel, J. Phys. Chem. A 104, 3110 (2000).
C. Mealli et al., Chem. Eur. J. 3, 958 (1997).
M. Keshavarz-K et al., Tetrahedron 52, 5149 (1996).
Y.-S. Lee et al., in preparation.
| [] |
[
"Algebraic Theory of Quantum Synchronization and Limit Cycles under Dissipation",
"Algebraic Theory of Quantum Synchronization and Limit Cycles under Dissipation"
] | [
"Berislav Buča *[email protected] \nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom\n",
"Cameron Booker \nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom\n",
"Dieter Jaksch \nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom\n\nInstitut für Laserphysik\nUniversität Hamburg\n22761HamburgGermany\n"
] | [
"Clarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom",
"Clarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom",
"Clarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom",
"Institut für Laserphysik\nUniversität Hamburg\n22761HamburgGermany"
] | [] | Synchronization is a phenomenon where interacting particles lock their motion and display non-trivial dynamics. Despite intense efforts studying synchronization in systems without clear classical limits, no comprehensive theory has been found. We develop such a general theory based on novel necessary and sufficient algebraic criteria for persistently oscillating eigenmodes (limit cycles) of timeindependent quantum master equations. We show these eigenmodes must be quantum coherent and give an exact analytical solution for all such dynamics in terms of a dynamical symmetry algebra. Using our theory, we study both stable synchronization and metastable/transient synchronization. We use our theory to fully characterise spontaneous synchronization of autonomous systems. Moreover, we give compact algebraic criteria that may be used to prove absence of synchronization. We demonstrate synchronization in several systems relevant for various fermionic cold atom experiments. | 10.21468/scipostphys.12.3.097 | [
"https://arxiv.org/pdf/2103.01808v5.pdf"
] | 232,092,969 | 2103.01808 | 74ce610ab133cb8b8410be804d38a6c4e9937ecb |
Algebraic Theory of Quantum Synchronization and Limit Cycles under Dissipation
Berislav Buča *[email protected]
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUnited Kingdom
Cameron Booker
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUnited Kingdom
Dieter Jaksch
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUnited Kingdom
Institut für Laserphysik
Universität Hamburg
22761HamburgGermany
SciPost Physics Submission
Synchronization is a phenomenon where interacting particles lock their motion and display non-trivial dynamics. Despite intense efforts studying synchronization in systems without clear classical limits, no comprehensive theory has been found. We develop such a general theory based on novel necessary and sufficient algebraic criteria for persistently oscillating eigenmodes (limit cycles) of timeindependent quantum master equations. We show these eigenmodes must be quantum coherent and give an exact analytical solution for all such dynamics in terms of a dynamical symmetry algebra. Using our theory, we study both stable synchronization and metastable/transient synchronization. We use our theory to fully characterise spontaneous synchronization of autonomous systems. Moreover, we give compact algebraic criteria that may be used to prove absence of synchronization. We demonstrate synchronization in several systems relevant for various fermionic cold atom experiments.
Introduction to Quantum Synchronization
Understanding complex dynamics is arguably one of the main goals of all science from biology (e.g. [1,2]) to economics (e.g. [3,4]). Synchronization is a remarkable phenomenon where multiple bodies adjust their motion and rhythms to match by mutual interaction. It is one of the most ubiquitous behaviours in nature and can be found in diverse systems ranging from simple coupled pendula, neural oscillations in the human brain [5], epidemic disease spreading in the general population, power grids and many others [6]. Understanding this phenomenon has been one of the major successes of dynamical systems theory and deterministic chaos [7,8].
Intense work in the last decade has extended these non-linear results to the semi-classical domain of certain quantum systems with an infinite or very large local Hilbert space, e.g. quantum van der Pol oscillators, bosons, or large spin-S systems. These systems are usually understood successfully through mean-field methods or related procedures that neglect the full quantum correlations. By contrast, strongly interacting systems that have finite-dimensional local Hilbert spaces, such as finite spins, qubits or electrons on a lattice, most often cannot be adequately treated with mean-field methods [32], and semi-classical limits cannot be directly formulated for them [33][34][35]. For the sake of brevity, we will call such systems "quantum" and the previous ones "semi-classical". Very recently, these systems with finite-dimensional local Hilbert spaces have attracted much attention as possible platforms for quantum synchronization [36][37][38][39][40][41][42][43][44][45][46][47][48][49]. Crucially, the subsystems that are to be synchronized in such quantum systems do not have easily accessible ℏ → 0 or equivalent large-parameter (semi-classical) limits.
Quantum synchronization holds promise for technical applications. For instance, synchronizing spins in a quantum magnet would allow for homogeneous and coherent time-dependent magnetic field sources; such sources have significant potential to improve the resolution of MRI images [50]. Additionally, recent studies have explored the role of synchronization in the security of quantum key distribution (QKD) protocols [51][52][53]. It is foreseeable that a better understanding of quantum synchronization could help improve security against specialist attacks that exploit the dependence of several QKD schemes on synchronization during calibration. Despite the intense recent study and the promise it holds, a general theory of quantum synchronization has been lacking until now.
An important theoretical concept in physics is that of symmetries and algebraic structures. Using algebraic methods, it is often possible to analytically understand the dynamics of systems without providing full solutions. This is vitally important because obtaining full solutions is, in general, impossible for the many-body and strongly correlated systems of interest. In this paper, we develop an algebraic theory for quantum synchronization in such systems. This extends the algebraic approach of dynamical symmetries [54], which has been successfully used for studying the emergence of long-time dynamics in various quantum many-body systems, such as time crystals, quantum scarred models, Stark many-body localization [108], and models with quantum many-body attractors [109]. Our work provides necessary and sufficient conditions for quantum synchronization to occur in terms of a compact set of algebraic criteria that may be checked based on the underlying symmetry structure of the system. We also rigorously demonstrate that if synchronization occurs, it must be due to the presence of quantum coherence. We fully solve these dynamics in terms of an elegant algebraic framework. These results provide a general theory for studying quantum synchronization and characterize the phenomenon. Through this, we have provided a framework for future efforts in the field.
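To make the notion concrete: in closed-system language, a dynamical symmetry is an operator A satisfying [H, A] = ωA, so that the Heisenberg-picture operator A(t) = e^{iHt} A e^{-iHt} = e^{iωt} A oscillates at a single frequency forever (the full open-system theory adds compatibility conditions with the dissipators). A minimal single-qubit illustration of the commutator condition (our toy example, not a system from this paper):

```python
import numpy as np

# single-qubit Hamiltonian H = (omega/2) * sigma_z and ladder operator sigma^+
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sp = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)  # sigma^+

omega = 2.0
H = 0.5 * omega * sz

# dynamical-symmetry condition: [H, sigma^+] = omega * sigma^+
comm = H @ sp - sp @ H
assert np.allclose(comm, omega * sp)

# consequence: exp(iHt) sigma^+ exp(-iHt) = exp(i omega t) sigma^+,
# a coherence oscillating persistently at frequency omega
t = 0.37
U = np.diag(np.exp(1j * np.diag(H) * t))  # exp(iHt); H is diagonal here
assert np.allclose(U @ sp @ U.conj().T, np.exp(1j * omega * t) * sp)
```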
In the rest of the introduction, we give a general discussion and brief history of the study of synchronization before focusing more closely on the details of quantum synchronization. We finally highlight the key results of this article.
An Overview of Synchronization
General Synchronization
Before presenting our theory of quantum synchronization, it is instructive to first discuss synchronization in general, with some historical background, to demonstrate the numerous subtleties surrounding the concept and make it clear what we will be referring to as synchronization.
The term synchronization, or more specifically synchronous, has its roots in the ancient Greek words 'syn-' meaning "together with" and 'kronous' meaning "time". Thus events are described as synchronous if they occur at the same time. The scientific study of synchronous (or synchronized) dynamical systems dates back to 1673, when Huygens studied the motions of two weakly coupled pendula [110]. He observed that pendulum clocks hanging from the same bar had matched their frequency and phase exactly. Following this observation, he explored both synchronization and anti-synchronization, where pendula with approximately equal (or opposite, for anti-synchronization) initial conditions would, over time, synchronize so that their motions were identical (resp. opposite). Since Huygens, synchronization has been a topic of significant interest in an ever-expanding range of scientific and technological fields. From a theoretical point of view, synchronization is a central issue in the study of dynamical and chaotic systems and is currently an area of very active research.
Broadly speaking, studies into classical synchronization of multiple, often chaotic, systems can be split into synchronization within driven and autonomous systems. Throughout the remainder of this work, we will consider exclusively synchronization within autonomous systems, often called spontaneous synchronization, as this is the form of synchronization most often studied in quantum systems and is where our theory is applicable. For more details regarding synchronization in driven systems, especially the most commonly studied master-slave systems, we direct the reader to [111][112][113]. We should also mention the related topic of Synergetics, first introduced by Haken to study lasers and fluid instabilities [114][115][116]. Synergetics studies how circular causality between microscopic systems and macroscopic order parameters can lead to self-organization within open systems that have been driven far from equilibrium [117,118]. In the years since its inception, the theory has been applied to a wide range of disciplines, such as the study of human ECG activity and machine learning [119].
We now focus on spontaneous synchronization (from now on, all synchronization will mean spontaneous synchronization in undriven systems), which can be further divided into two main classes that intuitively capture how the two subsystems are related.

(i) Identical/Complete synchronization

For a system comprising identical coupled subsystems: corresponding variables in each of the subsystems become equal as the subsystems evolve in time.

(ii) Phase synchronization

For a system comprising non-identical coupled subsystems: the phase differences between given variables in the different subsystems lock while the amplitudes remain uncorrelated.
One important additional constraint, which applies to both classes, is that synchronization must be robust to small perturbations in the initial conditions. This is important to exclude many cases of trivially synchronized systems that arise simply through the fine-tuning of initial conditions. In many ways, identical synchronization is the most fundamental and is what Huygens originally studied with his coupled pendula. It is also what many people call to mind when thinking about synchronization. As expected, it has been extensively studied in the past, both theoretically [8,[120][121][122][123][124], and for its applications [125][126][127][128]. Much of this progress, which has been primarily focused on controlling synchronization within chaotic systems, has built upon the seminal work of Pecora and Carroll [8], who formulated criteria based on the signs of Lyapunov exponents. We also note that identical synchronization can be extended to non-identical subsystems by choosing appropriate, often de-dimensionalized, variables or coordinates for the different subsystems.
A useful order parameter for identical synchronization is the Pearson indicator [129], defined for two time-dependent signals f(t), g(t) as

C_{f,g}(t; \Delta t) = \frac{\int_t^{t+\Delta t} \delta f \, \delta g \, dt'}{\sqrt{\int_t^{t+\Delta t} (\delta f)^2 \, dt' \int_t^{t+\Delta t} (\delta g)^2 \, dt'}},    (1)

where ∆t is some fixed parameter and

\bar{X} = \frac{1}{\Delta t} \int_t^{t+\Delta t} X(t') \, dt', \qquad \delta X = X - \bar{X}.    (2)
This measures the correlation between the two variables in an intuitive way and has been widely applied to classical [7,130] and in some cases quantum [36,42,131,132] synchronization. We can see that C_{f,g} takes the value +1 for 'perfect' synchronization and −1 for 'perfect' anti-synchronization. Notably, the Pearson indicator is effective even when the signals f and g are not periodic. While synchronization is often understood through periodic systems, such as pendula, this is not strictly necessary for identical synchronization: the notion of two variables that become equal over the course of the evolution could just as well apply to two particles following unbound trajectories as to two particles that remain within some finite region. We can also easily present examples of quasi-periodic behaviour, that is, a superposition of Fourier modes with non-commensurate frequencies. In practice, most systems that we are studying exhibit periodic or at least quasi-periodic behaviour.
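A discrete version of the Pearson indicator is straightforward: for uniformly sampled signals the time-step factors cancel between numerator and denominator, leaving an ordinary sample correlation over the window (the function name is ours; this sketch uses a single averaging window):

```python
import numpy as np

def pearson_indicator(f, g):
    """Pearson indicator C_{f,g} of two uniformly sampled signals over
    one averaging window; the sampling step dt cancels out."""
    df = f - f.mean()   # delta f = f - fbar
    dg = g - g.mean()
    return np.sum(df * dg) / np.sqrt(np.sum(df**2) * np.sum(dg**2))

t = np.linspace(0.0, 10.0, 1000)
f = np.sin(t)
assert np.isclose(pearson_indicator(f, np.sin(t)), 1.0)    # synchronized
assert np.isclose(pearson_indicator(f, -np.sin(t)), -1.0)  # anti-synchronized
```

Sliding this window along the signals recovers the time-resolved indicator C_{f,g}(t; ∆t) of Eq. (1).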
In contrast to identical synchronization, phase synchronization generally requires periodic motion to meaningfully define a relative phase difference between the two subsystems. However, extensions can be considered with simply a time delay rather than an actual phase difference. This class of synchronization is weaker than identical synchronization, even under the natural extension to non-identical subsystems, as it requires no direct correlation between the variables describing the subsystems. Following work on cryptography using chaotic maps [133], phase synchronization was applied as a method for secure communication [134,135]. Phase synchronization has also been studied in the quantum setting through the use of Arnold tongues and Husimi Q-functions [37,129]. These methods are usually aimed at probing the response of the system to driving rather than mutual synchronization between subsystems. We should also mention the interesting related case of amplitude-envelope synchronization in chaotic systems [136], where there is no correlation between the amplitudes and phases of the motion within the two systems, but instead they both develop periodic envelopes at a matched frequency. In the remainder of our work, we will focus on studying identical synchronization between subsystems of an extended quantum system, although many of the results presented are just as applicable to phase synchronization.
Before discussing quantum synchronization in more detail, we make a short remark about the language used in the rest of this article. We will use the term synchronized to refer to any pair of time-dependent signals that are equal or sufficiently similar, usually after some transient time. The term synchronization will refer to the non-trivial process of two subsystems becoming synchronized in a way that is stable to perturbations in the initial conditions. In systems with multiple observable quantities within each subsystem, we will say the subsystems are completely synchronized if all observable quantities are synchronized, and at least one is non-constant. Consequently, we will use identical synchronization for the identical/complete synchronization discussed above to avoid confusion. These definitions will be formalized in Section 2.
Quantum Synchronization
Let us now focus more closely on synchronization in many-body quantum systems.
Synchronization is usually studied in open quantum systems. This is because, in closed systems, a generic initial state will excite a large number of eigenfrequencies with random phase differences, which will oscillate indefinitely and lead to observables that behave like noise. Only in sufficiently large systems does the Eigenstate Thermalisation Hypothesis [137] predict that these oscillations will rapidly dephase, causing observables to become effectively stationary. As a result, previous studies have focused on the open quantum system regime, where interactions with the environment cause decoherence and decay within the density operator describing the state, leaving only a handful of long-lived oscillating modes. These systems have previously been studied in a case-by-case manner, with various measures of quantum synchronization having been introduced. These measures have been based primarily on phase-locking of correlations, or Husimi Q-functions [36,37]. However, such measures are often unscalable to many-body problems.
The generic set-up we will consider is an extended system made up of arbitrarily, but finitely, many individual sites that can interact with each other and with an external bath that we often call the environment. This scenario is depicted in Fig. 1. The restriction of having only finitely many sites is reasonable because if two subsystems are infinitely far apart, then locality will prevent them from ever being causally connected, thus rendering synchronization impossible. Alternatively, sites infinitely far apart could be redefined as belonging to the environment and thus taken into account in that manner. For our purposes, these individual sites will have finite-dimensional Hilbert spaces and thus have no well defined semi-classical limit. Importantly, these sites will also have identical local Hilbert space dimensions. We will then consider synchronizing the signals of observables measured on different sites or groups of sites of the system. Following our discussion above regarding synchronization in general, we will classify whether the sites are synchronized based on the dynamics of expectation values of these observables on each site over time.

Figure 1: An arbitrary interacting quantum many-body system of N sites (illustrated as yellow spheres) which may interact with each other along the blue bonds. The sites also interact with the background environment, illustrated by the red arrows. The sites each have a finite local Hilbert space, illustrated as dimension d on one of the sites (i.e. each site has exactly d levels). This is the general system of interest we will be focusing on. The goal will be to synchronize observables between different sites in the system. Crucially, the sites that are to be synchronized do not have ℏ → 0 or equivalent large-parameter (semi-classical) limits. Rather, we make no assumptions on the size of the local Hilbert space.
In this way, we have captured the fundamental essence of synchronization as discussed above intuitively within the quantum regime.
Overview of the Paper, Summary of Key Results and Relation to Previous Work
In this final part of the introduction, we will outline the structure of the rest of the article and draw attention to the key results which we present. In Sec. 2 we will introduce the Lindblad formalism as the most general type of smooth Markovian quantum evolution. This formalism is particularly useful since we recover an appealing and intuitive interpretation of the evolution equation in the limit of weak system bath interactions and under assumptions that the bath is Markovian. This section will also formalize our earlier discussion and explicitly define what we mean by synchronization in a quantum system. This will be the natural extension of the ideas we presented above and will distinguish between stable and meta-stable synchronization, which may further be robust to our choice of initial conditions or measurements.
In order to develop an algebraic theory of quantum synchronization, we will in Sec. 3.1 first develop a complete algebraic theory of the purely imaginary eigenvalues of quantum Liouvillians and their corresponding oscillating eigenmodes that we relate to limit cycles. This theory will be closely related to dynamical symmetries [54], thus demonstrating their importance for understanding long-time non-stationarity in open quantum systems. Previous works on Liouvillian eigenvalues have been based on finding the support of the stationary subspaces or diagonalizing the Hamiltonian [138-141]. However, in the case of many-body systems, both of these calculations are generally impossible except in very specific cases. Although in some cases analytic tools can be used to find non-equilibrium steady states, it remains an open problem to efficiently find the support of a generic stationary state [142]. Therefore, our criteria, based purely on the symmetry structures within the Liouvillian, are easier to work with for a many-body quantum system and can further be used to find systems that present persistent oscillations and limit cycles, an absence of relaxation, and synchronization. Additionally, being necessary, our conditions can also be used to prove the absence of such phenomena. The results of this section, which completely characterize all possible purely imaginary eigenvalues of quantum Liouvillians, are far more wide-reaching than just quantum synchronization. In particular, they are also directly relevant for dissipative time crystals [57] and, more generally, any study which is concerned with long-time non-stationarity in an open system.
In Sec. 3.2 we will apply the above theory to stable synchronization, i.e. that exists for infinitely-long times. We will again show that symmetries can be used to provide conditions under which synchronization can occur. These results will highlight the importance of unital evolutions, i.e. those which preserve the identity operator, for quantum synchronization. Further, within the Lindblad formalism, these unital evolutions are easy to construct by considering dephasing or independent particle loss and gain. This can also explain why several previously studied cases of quantum synchronization are in systems where the evolution is indeed unital.
Having considered infinitely-long lived synchronization, we then proceed in Sec. 4 to provide a perturbative analysis of purely imaginary eigenvalues, again with a focus on the consequences for quantum synchronization. We classify the resulting cases into ultra-low frequency, quantum Zeno and dynamical metastable synchronization. As we show, ultralow frequency metastable synchronization is rather unsatisfactory for experimental purposes since the time period of the resulting oscillations is exceptionally long. We further prove that dynamical metastable synchronization has the longest lifetime and has oscillations that occur on relevant experimental timescales. Thus we conclude this is the most desirable form of metastable synchronization. Arguably, we have at this point covered all possible cases of quantum synchronization.
In Sec. 5 by taking the converse approach, we extend our algebraic theory so that it can be used to prove the absence of quantum synchronization in a given system. This is useful for easily identifying which systems should not be considered when looking for candidates for quantum synchronization.
Having completely characterized quantum synchronization through an analysis of Liouvillian eigenvalues, in Sec. 6 we apply our theory in a simple example where we show how two qubits may be anti-synchronized, complementing the recent claim that qubits cannot synchronize [143]. We further use our results to demonstrate that the regularly studied example of quantum synchronization in the Fermi-Hubbard model with spin-agnostic heating [36] is, in fact, one specific example of a wide range of systems that can be expected to exhibit quantum synchronization. We will relate this discussion to more experimentally relevant set-ups with a less strict symmetry structure. Our theory then allows us to give simple predictions to explain experimental results previously found by [144].
Finally, we present a conclusion. Detailed proofs of our results together with additional analysis of existing examples of quantum synchronization are presented in the appendices.
We note that criteria for purely imaginary eigenvalues of quantum Liouvillians have been given in other papers, e.g. [138,139]. However, they require either diagonalizing the Hamiltonian or the stationary state to check and use. Both of these steps are prohibitively difficult for many systems once their size becomes sufficiently large. Moreover, they do not provide the exact form of the corresponding eigenmodes (quantum limit cycles). Our work will rely on dynamical symmetries and immediately gives the form of the corresponding eigenmodes if the symmetries are known. Dynamical symmetries, first introduced in the context of quantum Liouvillians [54], have been applied before to studying certain cases of quantum synchronization [36]. In these works, dynamical symmetries were shown to be sufficient for the existence of purely imaginary eigenvalues, and synchronization enabled by these symmetries was studied. In this work, we will give both necessary and sufficient algebraic conditions for the existence of purely imaginary eigenvalues which can then be directly related to sufficient criteria for quantum synchronization. Moreover, we show that the previously identified compact sufficient condition is also necessary for quantum synchronization in unital quantum Liouvillians, which, for example, includes those that have thermal stationary states.
Definitions and Methods
In this section, we will first present the natural definitions of synchronization in quantum systems, as follow from our previous discussion and then introduce the Lindblad Master equation framework as the most general description of the evolution of a quantum state.
Definitions of Quantum Synchronization
To recap our discussion above, for two systems to be identically synchronized, we require that their matching motion be non-stationary and long-lasting. For an extended quantum system, we will interpret the 'motion' via the behaviour of some local observable, O_j, which is measured in the same basis on every site. Note that we will use subscripts to denote the subspaces/sites on which the operator acts non-trivially, i.e. O_j = 1_{1...j−1} ⊗ O_j ⊗ 1_{j+1...N}, with N being the number of subsystems/sites. We will also consider only the strictest notion of identically synchronized signals whereby after synchronization has occurred, the two synchronized signals, ⟨O_j⟩(t) and ⟨O_k⟩(t), are identical and do not differ by any overall phase, scale factor, or constant. This is stricter than the definition provided by considering the Pearson indicator from Eq. (1). The results we present can be suitably adapted to consider these alternative cases, in particular phase synchronization. We will indicate in Sec. 3.4 how alternate considerations can be made, but as the technical discussions do not provide any additional insight or understanding, we will work largely with these very strict definitions given below.
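To make the comparison concrete, here is a minimal pure-Python sketch (the sampled signals are illustrative choices of our own, and the `pearson` helper is a standard implementation rather than code from this work). It shows that a Pearson-type indicator scores two signals differing only by scale and offset as perfectly correlated, even though such signals fail the stricter identical-synchronization requirement used here.

```python
import math

def pearson(f, g):
    """Standard Pearson correlation of two equal-length sampled signals."""
    n = len(f)
    fm = sum(f) / n
    gm = sum(g) / n
    cov = sum((a - fm) * (b - gm) for a, b in zip(f, g))
    var_f = sum((a - fm) ** 2 for a in f)
    var_g = sum((b - gm) ** 2 for b in g)
    return cov / math.sqrt(var_f * var_g)

# Sample three signals over one full period.
ts = [2 * math.pi * k / 1000 for k in range(1000)]
f = [math.sin(t) for t in ts]
g = [2 * math.sin(t) + 0.5 for t in ts]   # same phase, different scale and offset
h = [math.cos(t) for t in ts]             # 90-degree phase shift

print(pearson(f, g))  # ~1: "synchronized" by the Pearson indicator
print(pearson(f, h))  # ~0: uncorrelated over a full period
print(max(abs(a - b) for a, b in zip(f, g)))  # yet f != g pointwise
```

Under the strict definitions below, f and g above would not count as identically synchronized despite their perfect Pearson score.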
In line with previous works, we consider stable and metastable synchronization to be where the signals remain synchronized and non-stationary for infinitely long or finitely long times, respectively. In the case of metastable synchronization, we require some perturbative parameter within the system that controls the lifetime of the synchronized behaviour. Finally, we must allow some initial transient time period during which the process of synchronization can take place. These considerations lead naturally to the following definitions.
Note that equality in these definitions, and in the remainder of this work, is understood as equality up to terms which are exponentially small in time and, for brevity, we do not continuously write "+O(e^{−γt})". These terms will always be present for finite-dimensional systems but are negligible on the longer time-scales we will be concerned with.
Definition 1 (Stably synchronized). We say that the subsystems j and k are stably synchronized in the observable O if for some initial state and after some transient time period τ we have ⟨O_j⟩(t) = ⟨O_k⟩(t) for all t ≥ τ. Further, we require that ⟨O_j⟩(t) does not become constant, i.e. ∂_t⟨O_j⟩(t) ≢ 0.

Definition 2 (Metastably synchronized). We say that the subsystems j and k are metastably synchronized in the observable O if for some initial state during the interval t ∈ [τ, T], where T ≫ τ, we have ⟨O_j⟩(t) = ⟨O_k⟩(t) and again ∂_t⟨O_j⟩(t) ≢ 0. Further, the cut-off time T must be controllable by some perturbative parameter in the system.

These definitions do not require that the system has an internal mechanism that synchronizes the two sites. Since we simply require the existence of some initial state for which the observables are synchronized, this initial state can be finely tuned so that, in fact, the observables on the two subsystems are initially equal, and their evolutions are identical. To characterize synchronization that is robust to variation in initial conditions, we make the further definition.

Definition 3 (Robustly synchronized). If the subsystems j and k are stably or metastably synchronized in the observable O, this synchronization is robust if O_j(t) = P_{j,k} O_k(t) P_{j,k} and ∂_t O_j(t) ≢ 0 on the operator level (i.e. in the Heisenberg picture). Here P_{j,k} is a permutation operator exchanging subsystems j and k. As with the definitions of stable and metastable synchronization, these requirements must hold for the same respective time periods.
This definition ensures that after the transient dynamics have decayed, regardless of the system's initial state, the observable on subsystems j and k will be equal and for a large class of initial states will be non-stationary. Robustly synchronized systems are of the most significant interest since these are the ones where some internal mechanism causes the synchronization process. However, as demonstrated in the appendices, several previously studied examples of quantum synchronization are, in fact, not robust and require fine-tuning of the initial state. This highlights the importance of making these considerations and definitions when studying quantum synchronization. We also remark that we have restricted ourselves to the case of identical subsystems so that robustness can be defined in this way. If one were to study synchronization between a spin-1/2 and a spin-1, for example, it is unclear how robustness could be defined in a similarly straightforward manner.
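The permutation operator P_{j,k} used in Definition 3 is easy to realize explicitly. Below is a minimal pure-Python sketch (no external libraries; the observable O is an arbitrary Hermitian matrix of our own choosing) for two qubits, verifying that the SWAP matrix squares to the identity and maps a single-site observable O ⊗ 1 to 1 ⊗ O, which is exactly the property used in the definition of robust synchronization.

```python
def mul(A, B):
    """Matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker product: (A ⊗ B)[i*p+k][j*q+l] = A[i][j] * B[k][l]."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

I2 = [[1, 0], [0, 1]]
I4 = kron(I2, I2)

# SWAP exchanges the two qubit factors: P |a>|b> = |b>|a>.
P = [[1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]

O = [[0.3, 1 - 2j], [1 + 2j, -0.7]]  # an arbitrary Hermitian single-site observable
O1 = kron(O, I2)   # O acting on site 1
O2 = kron(I2, O)   # O acting on site 2

P_sq = mul(P, P)
conjugated = mul(P, mul(O1, P))  # P O_1 P

print(P_sq == I4)        # True: P^2 = 1
print(conjugated == O2)  # True: P O_1 P = O_2
```

Since the permutation entries are exactly 0 and 1, both identities hold exactly, not just up to floating-point error.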
Notice that we have defined synchronization with respect to some observable. For most applications of synchronized motion, it is sufficient for just one observable to be synchronized. We can, however, say that the two subsystems are completely synchronized if they are robustly synchronized, whether stably or metastably, for all non-stationary observables [16].
These definitions are visualized in Fig. 2 and characterize the possible cases of identical synchronization within a quantum system. As earlier remarked, the results we present can be straightforwardly adapted to also consider phase synchronization provided the long-lived oscillations are periodic. Moreover, we can easily extend our analysis to other measures of synchronization [146]. For instance, we may define limit cycle dynamics over a phase space composed of ⟨O_k⟩(t) and another observable ⟨Q_k⟩(t) that obeys [Q_k, O_k] = 0.

Figure 2: Visualization of the different definitions of synchronization. If we have equality in the expectation value of some non-stationary observable on different sites for some initial state, then we classify the signals as either stably or metastably synchronized. If, in fact, we have equality on the operator level, then they are robustly synchronized. Finally, if this holds for all non-stationary operators, then we say the subsystems are completely synchronized.
Having now made explicit what we mean when referring to quantum synchronization and introduced some of the notation, we can begin to study these systems in more detail.
Lindblad Formalism
We describe the quantum state by a time dependent density operator, ρ(t), acting on the Hilbert space of the system, H. The most general quantum evolution is described by a completely positive, trace-preserving channel T̂_t so that after a time t the state has evolved via

ρ(t) = T̂_t[ρ(0)].  (3)

It is known [147] that any smooth, time-homogeneous, completely positive, trace-preserving quantum channel T̂_t[ρ] which obeys the natural semi-group property

T̂_t · T̂_s = T̂_{t+s},  (4)

can be expressed as

T̂_t[ρ] = e^{tL̂}[ρ],  (5)

and thus

dρ/dt = L̂[ρ].  (6)

Here L̂ is a quantum Liouvillian of the form

L̂[ρ] = −i[H, ρ] + Σ_µ (2 L_µ ρ L_µ† − {L_µ† L_µ, ρ}),  (7)
for some Hermitian operator H. If we are considering a system which weakly interacts with a Markovian environment then Eqs. (6)-(7) are usually referred to as the Lindblad master equation and we can interpret the operator H as the system's Hamiltonian and the L_µ operators, now called Lindblad jump operators, model the influence of the environment's noise on our system. Such operators may, for example, be particle creation/annihilation operators to describe random particle gain/loss, or number operators to describe dephasing [148]. By formulating our theory of quantum synchronization with respect to evolutions described by Eqs. (6)-(7) the resulting theory is as general as possible and can be intuitively understood in the weak coupling, Markovian limit. However, evolution of the quantum state according to Eq. (7) is not restricted exclusively to the weak coupling limit, although at strong coupling the jump operators lose their intuitive interpretation. By formulating our theory within the Lindblad formalism we are able to capture the widest class of possible quantum evolutions which are relevant to undriven systems which interact with an environment. We remark that it is possible to also include some non-Markovian effects in this formalism by considering an enlarged Hilbert space containing part of the environment, as we outline further in Appx. K. Although this is usually impractical for large many-body systems, the example in Sec. 6.1 will in effect use this idea when anti-synchronizing two spin-1/2s. With this one exception, however, our work considers exclusively Markovian dynamics.
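As an illustration of the formalism, the following sketch (pure Python, toy parameters of our own choosing) integrates Eq. (7) for a single qubit with Hamiltonian H = (ω/2)σ_z and dephasing jump operator L = √γ σ_z by explicit Euler steps. In this convention the coherence obeys ρ_01(t) = ρ_01(0) e^{(−iω−4γ)t}, which the numerics reproduce.

```python
import cmath

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(a, A, b, B):
    """a*A + b*B for 2x2 matrices."""
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

omega, gamma = 1.0, 0.25
H = [[omega / 2, 0], [0, -omega / 2]]
L = [[gamma ** 0.5, 0], [0, -gamma ** 0.5]]   # sqrt(gamma) * sigma_z
LdL = mul(dag(L), L)

def liouvillian(rho):
    """drho/dt = -i[H, rho] + 2 L rho L† - {L†L, rho}, as in Eq. (7)."""
    unitary = lin(-1j, mul(H, rho), 1j, mul(rho, H))
    jump = mul(L, mul(rho, dag(L)))
    anti = lin(1, mul(LdL, rho), 1, mul(rho, LdL))
    return lin(1, unitary, 1, lin(2, jump, -1, anti))

rho = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|: maximal initial coherence
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):
    rho = lin(1, rho, dt, liouvillian(rho))

exact = 0.5 * cmath.exp((-1j * omega - 4 * gamma) * T)
print(abs(rho[0][1] - exact))          # small Euler discretization error
print(abs(rho[0][0] + rho[1][1] - 1))  # trace is preserved
```

The populations are stationary here while the coherence spirals into the origin, a minimal picture of decoherence under a unital (pure-dephasing) Liouvillian.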
A formal solution to the Lindblad equation can be obtained by finding the eigensystem of the Liouvillian. This is defined as the set {ρ_k, σ_k, λ_k}, where λ_k are the eigenvalues of L̂ and ρ_k, σ_k are the corresponding right and left eigenstates respectively. The eigensystem obeys the relations

L̂[ρ_k] = λ_k ρ_k,   L̂†[σ_k] = λ_k* σ_k,   ⟨σ_k|ρ_k'⟩ = δ_{k,k'},  (8)

where ⟨σ|ρ⟩ = Tr(σ†ρ) is the Hilbert-Schmidt inner product. Note that for operators L̂ which generate a CPTP map and thus describe physical quantum evolutions, the eigenvalues, λ_k, can lie only in the left half of the complex plane with Re(λ_k) ≤ 0. In the familiar way, assuming diagonalizability of L̂ we can express the evolution of the expectation value of some observable Ô, given that the system is initialized in the state |ρ_0⟩, as

⟨Ô⟩(t) = Σ_k e^{tλ_k} ⟨Ô|ρ_k⟩⟨σ_k|ρ_0⟩.  (9)
We now see that in order to achieve long-lived oscillations in some observable, there must exist eigenvalues λ_k with non-zero imaginary part and vanishingly small real part. Note that in general, L̂ may not be diagonalizable, but its restriction to the asymptotic subspace always is (Appendix F). Therefore in the long-time limit, we can assume that (9) is the evolution equation for all cases. The above observation motivates the next section, where we present algebraic results for the existence of purely imaginary eigenvalues in quantum Liouvillians. Following these results, we shall use them to provide spectral and symmetry-based conditions for quantum synchronization as described by our previous definitions. We will then demonstrate these conditions in a variety of paradigmatic and insightful examples before concluding.
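The spectral picture behind Eq. (9) can be checked directly on a small example. The sketch below (pure Python, using the same toy dephased qubit as above, with our own parameter values) vectorizes the Liouvillian via the column-stacking identity vec(AρB) = (Bᵀ ⊗ A) vec(ρ). Because H and L are diagonal, the resulting superoperator is diagonal, so its eigenvalues {0, 0, −4γ ± iω} can be read off the diagonal, and all have Re(λ_k) ≤ 0 as required for a CPTP generator.

```python
def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def conj(A):
    return [[x.conjugate() for x in row] for row in A]

def mat_sum(mats, coeffs):
    n = len(mats[0])
    return [[sum(c * M[i][j] for c, M in zip(coeffs, mats)) for j in range(n)]
            for i in range(n)]

omega, gamma = 1.0, 0.5
H = [[omega / 2, 0], [0, -omega / 2]]
L = [[gamma ** 0.5, 0], [0, -gamma ** 0.5]]
I2 = [[1, 0], [0, 1]]
LdL = [[gamma, 0], [0, gamma]]  # L†L = gamma * identity

# Superoperator for drho/dt = -i[H, rho] + 2 L rho L† - {L†L, rho}.
M = mat_sum(
    [kron(I2, H), kron(transpose(H), I2),       # -i(H rho - rho H)
     kron(conj(L), L),                          # 2 L rho L†
     kron(I2, LdL), kron(transpose(LdL), I2)],  # -{L†L, rho}
    [-1j, 1j, 2, -1, -1])

# M is diagonal here, so its eigenvalues sit on the diagonal.
off_diag = max(abs(M[i][j]) for i in range(4) for j in range(4) if i != j)
eigs = [M[i][i] for i in range(4)]
print(off_diag)                              # 0: superoperator is diagonal
print(sorted(eigs, key=lambda z: (z.real, z.imag)))
print(all(z.real <= 1e-12 for z in eigs))    # spectrum in the closed left half-plane
```

The two zero eigenvalues correspond to the stationary populations; the complex pair −4γ ± iω carries the damped oscillation of the coherences, which is exactly why a nonzero real part precludes long-lived oscillations.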
Imaginary Eigenvalues of Quantum Liouvillians and Stable Synchronization
In this section, we present a series of algebraic results on the existence of purely imaginary eigenvalues, give the structure of the corresponding eigenmodes, and relate them to stable quantum synchronization. These are important not only for quantum synchronization, but also for dissipative time crystals [54, 57, 67, 70-72, 149-155] and more generally any study of non-stationarity in open quantum systems. All proofs for the following results can be found in the Appendices. As mentioned previously, we will focus on systems with an arbitrarily large but strictly finite number of local levels (i.e. a finite local Hilbert space).
Necessary and Sufficient Conditions for Purely Imaginary Eigenvalues and the Structure of the Oscillating Coherent Eigenmodes
Our first theorem completely characterizes all purely imaginary eigenvalues of Liouvillian super-operators in terms of a unitary operator A and a proper non-equilibrium steady state (NESS) of the system ρ_∞, which obeys L̂[ρ_∞] = 0. Note that by 'proper', we mean that ρ_∞ is a density operator and can thus represent an actual quantum state of the system.

Theorem 1. The following condition is necessary and sufficient for the existence of an eigenstate ρ with purely imaginary eigenvalue iλ, L̂ρ = iλρ, λ ∈ R. We have ρ = Aρ_∞, where ρ_∞ is a NESS and A is a unitary operator which obey

[L_µ, A] ρ_∞ = 0,  (10)

(−i[H, A] − Σ_µ [L_µ†, A] L_µ) ρ_∞ = iλ A ρ_∞,  λ ∈ R.  (11)

The necessity direction of the proof in Appx. A is a consequence of taking the polar decomposition ρ = AR and then showing that the unitary part, A, must satisfy the stated conditions, while the positive-semidefinite part, R, satisfies L̂[R] = 0 and is thus a proper NESS. The sufficiency direction can be obtained by rearranging Eqs. (10) and (11) and does not require that A be unitary. Note that we have solved for the eigenmode in terms of A and ρ_∞. We also emphasize that the necessity of the conditions in Th. 1 does not hold for infinite-dimensional systems (e.g. bosons), but the sufficiency does. From this theorem, we can also obtain alternative necessary conditions for the existence of a purely imaginary eigenvalue:

−i ρ_∞ A† [H, A] ρ_∞ = iλ ρ_∞²,  (12)

ρ_∞ A† [L_µ†, A] L_µ ρ_∞ = 0,  ∀µ,  (13)

ρ_∞ [L_µ†, A†] L_µ ρ_∞ = 0,  ∀µ,  (14)

in which case the eigenstate is given by ρ = Aρ_∞.
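As a concrete sanity check of Th. 1, the sketch below (pure Python, a toy model of our own choosing) takes two qubits with H = (ω/2)(σ_z ⊗ 1 + 1 ⊗ σ_z) and dephasing on site 1 only, L = √γ σ_z ⊗ 1. With the unitary A = 1 ⊗ σ_x and the NESS ρ_∞ = (1 ⊗ |1⟩⟨1|)/2, conditions (10) and (11) hold with λ = −ω, and Aρ_∞ is verified directly to be a Liouvillian eigenmode with purely imaginary eigenvalue −iω (the undamped coherence on the dephasing-free site).

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def dag(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def lc(*terms):
    """Linear combination sum_i c_i * M_i; terms = (c1, M1), (c2, M2), ..."""
    n = len(terms[0][1])
    return [[sum(c * M[i][j] for c, M in terms) for j in range(n)] for i in range(n)]

def comm(A, B):
    return lc((1, mul(A, B)), (-1, mul(B, A)))

def maxabs(A):
    return max(abs(x) for row in A for x in row)

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]
P1 = [[0, 0], [0, 1]]                     # |1><1|

omega, gamma = 1.3, 0.7
H = lc((omega / 2, kron(sz, I2)), (omega / 2, kron(I2, sz)))
L = lc((gamma ** 0.5, kron(sz, I2)), )    # dephasing on site 1 only
A = kron(I2, sx)                          # unitary acting on site 2
rho_inf = lc((0.5, kron(I2, P1)), )       # a NESS: diagonal, unit trace
LdL = mul(dag(L), L)

def liouville(r):
    return lc((-1j, comm(H, r)), (2, mul(L, mul(r, dag(L)))),
              (-1, mul(LdL, r)), (-1, mul(r, LdL)))

lam = -omega
# Condition (10): [L, A] rho_inf = 0.
c10 = maxabs(mul(comm(L, A), rho_inf))
# Condition (11): (-i[H, A] - [L†, A] L) rho_inf = i*lam * A rho_inf.
lhs = lc((-1j, mul(comm(H, A), rho_inf)), (-1, mul(comm(dag(L), A), mul(L, rho_inf))))
rhs = lc((1j * lam, mul(A, rho_inf)), )
c11 = maxabs(lc((1, lhs), (-1, rhs)))
# Direct check: L[A rho_inf] = i*lam * A rho_inf = -i*omega * A rho_inf.
mode = mul(A, rho_inf)
c_eig = maxabs(lc((1, liouville(mode)), (-1j * lam, mode)))

print(c10, c11, c_eig)  # all ~0
```

Note that, in line with the polar-decomposition argument, Aρ_∞ here is exactly the product of a unitary and a (rank-deficient) NESS.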
In Appx. B this is shown by rearranging the conditions in Eqs. (10) and (11). We next make the connection between these results and strong dynamical symmetries [54] which have previously been shown as sufficient for the existence of purely imaginary eigenvalues. Using Thm. 1, we can, under certain conditions, strengthen the results of [54] and demonstrate that strong dynamical symmetries are both necessary and sufficient.
Theorem 2. When there exists a faithful (i.e. full-rank/invertible) stationary state, ρ̃_∞, ρ is an eigenstate with purely imaginary eigenvalue if and only if ρ can be expressed as

ρ = ρ_{nm} = A^n ρ_∞ (A†)^m,  (15)

where A is a strong dynamical symmetry obeying

[H, A] = ωA,  (16)

[L_µ, A] = [L_µ†, A] = 0,  ∀µ,  (17)

and ρ_∞ is some NESS, not necessarily ρ̃_∞. Moreover, the eigenvalue takes the form

λ = −iω(n − m).  (18)

Furthermore, the corresponding left eigenstates, with L̂† σ_{mn} = iω(n − m) σ_{mn}, are given by σ_{mn} = (A′)^m σ_0 (A′†)^n, where A′ is also a strong dynamical symmetry and σ_0 = 1.

The proof of the necessity direction in Appx. C makes extensive use of the results and proofs from [139,156,157] regarding the asymptotic subspace of the Liouvillian, As(H), and the projector, P, into the corresponding non-decaying part of the Hilbert space, H. Importantly, when a full rank stationary state exists, this projector must be the identity operator, P = I. Compared with Thm. 1 there is now no ambiguity about the unitarity of A. The simple manipulation

[H, A] = ωA  ⟹  A⁻¹HA − H = ω1  (assuming A is invertible)  ⟹  0 = ω Tr(1),  (19)

shows that A must not be invertible and thus cannot be unitary when ω ≠ 0. Additionally, taking the trace of Eq. (16) shows A must be traceless for ω ≠ 0. We also remark that the requirement of a faithful stationary state, i.e. that ρ̃_∞ has full rank, is immediately satisfied if the Liouvillian is unital, defined as L̂[1] = 0 where 1 is the identity operator. This simplifies to Σ_µ [L_µ, L_µ†] = 0, which in particular is true for pure dephasing where L_µ = L_µ†. We will give further discussion of unital maps and their relevance for synchronization later. Importantly, the quantum limit cycle eigenmodes ρ_{nm} are, by construction, off-diagonal in the basis of ρ_∞, indicating that they are quantum coherent (cf. also quantum phase synchronization [158]).
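The structure of Th. 2 can likewise be verified on a toy model of our own choosing (pure Python): two qubits with H = (ω/2)(σ_z ⊗ 1 + 1 ⊗ σ_z) and dephasing only on site 1, L = √γ σ_z ⊗ 1, which is unital. Then A = 1 ⊗ σ_+ is a strong dynamical symmetry satisfying Eqs. (16)-(17); it is traceless and nilpotent (A² = 0, hence non-invertible), consistent with Eq. (19); and with the full-rank NESS ρ_∞ = 1/4 the mode Aρ_∞ has eigenvalue −iω, matching Eq. (18) with n = 1, m = 0.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def dag(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def lc(*terms):
    n = len(terms[0][1])
    return [[sum(c * M[i][j] for c, M in terms) for j in range(n)] for i in range(n)]

def comm(A, B):
    return lc((1, mul(A, B)), (-1, mul(B, A)))

def maxabs(A):
    return max(abs(x) for row in A for x in row)

I2 = [[1, 0], [0, 1]]
sz = [[1, 0], [0, -1]]
sp = [[0, 1], [0, 0]]                    # sigma_+ = |0><1| with sz|0> = +|0>

omega, gamma = 1.0, 0.4
H = lc((omega / 2, kron(sz, I2)), (omega / 2, kron(I2, sz)))
L = lc((gamma ** 0.5, kron(sz, I2)), )   # dephasing on site 1: unital evolution
A = kron(I2, sp)                         # candidate strong dynamical symmetry

# Conditions (16)-(17): [H, A] = omega*A and [L, A] = [L†, A] = 0.
c16 = maxabs(lc((1, comm(H, A)), (-omega, A)))
c17 = maxabs(comm(L, A)) + maxabs(comm(dag(L), A))
trA = sum(A[i][i] for i in range(4))
A2 = mul(A, A)                           # nilpotent => non-invertible, cf. Eq. (19)

# Eigenvalue check, Eq. (18) with (n, m) = (1, 0): L[A rho_inf] = -i*omega A rho_inf.
rho_inf = [[0.25 if i == j else 0 for j in range(4)] for i in range(4)]  # full-rank NESS
LdL = mul(dag(L), L)
mode = mul(A, rho_inf)
Lmode = lc((-1j, comm(H, mode)), (2, mul(L, mul(mode, dag(L)))),
           (-1, mul(LdL, mode)), (-1, mul(mode, LdL)))
c18 = maxabs(lc((1, Lmode), (1j * omega, mode)))

print(c16, c17, trA, maxabs(A2), c18)  # all ~0
```

The eigenmode Aρ_∞ = (1 ⊗ σ_+)/4 is purely off-diagonal in the eigenbasis of ρ_∞, illustrating the quantum-coherent character of the limit-cycle modes noted above.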
Stable Quantum Synchronization
The criteria for persistent oscillations we gave in the previous subsection are necessary for stable quantum synchronization because the sites in a system cannot lock persistently into phase, frequency and amplitude if the frequency is trivially 0. The sufficient criteria depend on the precise definition of synchronization, as we will now examine. We will use the terminology introduced in Sec. 2.1. Recall that the crucial feature of quantum synchronization is that the various parts of the system lock into the same phase, frequency and amplitude. Let P_{j,k} be an operator exchanging subsystems j and k and let P̂_{j,k}(x) := P_{j,k} x P_{j,k}. We note that P_{j,k}² = 1. We can then prove the following corollary.

Corollary 2. Fulfilling all the following conditions is sufficient for robust and stable synchronization between subsystems j and k with respect to the local operator O:

• The operator P_{j,k} exchanging j and k is a weak symmetry [159] of the quantum Liouvillian L̂ (i.e. [L̂, P̂_{j,k}] = 0)

• There exists at least one A fulfilling the conditions of Th. 1

• For at least one operator A fulfilling the conditions of Th. 1 and the corresponding ρ_∞ we have tr[O_j A ρ_∞] ≠ 0

• All A fulfilling the conditions of Th. 1 also satisfy [P_{j,k}, A] = 0

The proof detailed in Appx. D follows because, under the assumption that the exchange operator, P_{j,k}, is a weak symmetry, we find that all NESSs, ρ_∞, with L̂[ρ_∞] = 0 commute with P_{j,k}. Thus when we write the general evolution in the long time limit we have

lim_{t→∞} ⟨O_j⟩(t) = tr[ O_j Σ_n c_n A_n ρ_{∞,n} e^{iλ_n t} ]
= tr[ O_j P_{j,k}² Σ_n c_n A_n ρ_{∞,n} e^{iλ_n t} ]
= tr[ P_{j,k} O_j P_{j,k} Σ_n c_n A_n ρ_{∞,n} e^{iλ_n t} ]
= tr[ O_k Σ_n c_n A_n ρ_{∞,n} e^{iλ_n t} ]
= lim_{t→∞} ⟨O_k⟩(t),  (20)

where c_n = ⟨σ_n|ρ(0)⟩ for an arbitrary initial state ρ(0), the operators A_n all satisfy the conditions of Th. 1, and L̂ρ_{∞,n} = 0. The most straightforward example of a weak symmetry is when [H, P_{j,k}] = 0 and P̂_{j,k} maps the set of all the Lindblad operators {L_µ} into itself, though more exotic cases are possible (e.g. [160-164]). Thus systems satisfying the conditions of Cor. 2 are, for instance, those for which P_{j,k} is a reflection operator, the Lindblad operators act on each subsystem individually, and the system Hamiltonian is invariant under reflections.
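The weak-symmetry condition [L̂, P̂_{j,k}] = 0 can be tested at the superoperator level. The sketch below (pure Python, an illustrative model of our own choosing) builds the vectorized Liouvillian for two qubits with the reflection-symmetric Hamiltonian H = σ_x ⊗ 1 + 1 ⊗ σ_x + σ_z ⊗ σ_z and local loss L_1 = √γ σ_- ⊗ 1, L_2 = √γ 1 ⊗ σ_-, and confirms that it commutes with the SWAP superoperator P ⊗ P.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def dag(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def transpose(A):
    return [list(c) for c in zip(*A)]

def conj(A):
    return [[x.conjugate() for x in row] for row in A]

def lc(*terms):
    n = len(terms[0][1])
    return [[sum(c * M[i][j] for c, M in terms) for j in range(n)] for i in range(n)]

def maxabs(A):
    return max(abs(x) for row in A for x in row)

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]
sm = [[0, 0], [1, 0]]                    # sigma_-
I4 = kron(I2, I2)

gamma = 0.3
H = lc((1, kron(sx, I2)), (1, kron(I2, sx)), (1, kron(sz, sz)))
Ls = [lc((gamma ** 0.5, kron(sm, I2)), ), lc((gamma ** 0.5, kron(I2, sm)), )]

# Vectorized (column-stacking) Liouvillian: vec(A rho B) = (B^T ⊗ A) vec(rho).
terms = [(-1j, kron(I4, H)), (1j, kron(transpose(H), I4))]
for Lk in Ls:
    LdL = mul(dag(Lk), Lk)
    terms += [(2, kron(conj(Lk), Lk)), (-1, kron(I4, LdL)),
              (-1, kron(transpose(LdL), I4))]
M = lc(*terms)

# SWAP superoperator rho -> P rho P, i.e. (P^T ⊗ P) = P ⊗ P (P is symmetric).
P = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
PP = kron(P, P)

commutator = lc((1, mul(M, PP)), (-1, mul(PP, M)))
print(maxabs(commutator))   # ~0: exchange is a weak symmetry of this Liouvillian
```

Here H is invariant under the exchange and P̂_{j,k} merely swaps the two jump operators, which is exactly the "most straightforward" weak-symmetry mechanism described above.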
We may further relax these requirements in quite general cases and achieve total synchronization across the system, i.e. synchronization between all pairs of subsystems. In Appx. E we prove such a refinement of Cor. 2 (Cor. 3). This further illustrates the power and utility of unital maps in achieving total synchronization. Examples are 1D models that are reflection symmetric, have only one A operator, and experience dephasing. This includes the cases discussed before in [36,54].
Multiple Frequencies and Commensurability
Before moving on, we make some brief remarks about the issues of multiple frequencies and commensurability. Firstly, since our theory allows for multiple A operators, each of which need not correspond to the same imaginary eigenvalue, the theorems above explicitly include the case of multiple frequencies. In general, if the multiple purely imaginary eigenvalues are not commensurate, i.e. are not all integer multiples of some fixed value, then the resulting oscillations will not be periodic. This alone does not preclude quantum synchronization since we have not required that the long-lived dynamics be periodic.
However, if there are "too many" incommensurable purely imaginary eigenvalues, they will generically dephase, leading to observables that are effectively stationary, at least within experimentally accessible and measurable limits. This is similar to eigenstate dephasing, which is often considered the generic mechanism for thermalization in closed systems [137,165]. Alternatively, the system may display chaotic dynamics. This means that in systems obeying the conditions for synchronization but with a large number of incommensurable purely imaginary eigenvalues, additional analysis must be carried out to determine if the synchronization survives the dephasing process and whether the dynamics become chaotic. Importantly, if all the strong dynamical symmetries or A-operators satisfy the conditions for Cors. 2 or 3, the dynamics will be synchronized at late times even if the evolution is chaotic or relaxes to stationarity. For a more detailed discussion of such spectral problems from a more mathematical perspective, see [165].
It should also be noted that the situation is more straightforward in unital evolutions, where each purely imaginary eigenvalue can be related to a strong dynamical symmetry, A, and thus by Thm. 2 is an integer multiple of some fixed frequency corresponding to A. Generically in open quantum systems with sufficiently many Lindblad jump operators, there are very few, if any, dynamical symmetries, and thus any purely imaginary eigenvalues will be integer multiples of a few fixed values.
Extensions to Weaker Definitions of Synchronization
As we have emphasized throughout, our definitions pertain only to the strictest notion of synchronization, where the two signals must be identical. However, if we were to use the Pearson indicator from Eq. (1), this would instead allow for two signals which differ by an overall additive constant or multiplicative factor. It is immediate from Eq. (20) how Cor. 2 should be adapted to give sufficient conditions for this relaxed notion of synchronization. Firstly, we no longer need the exchange superoperator P̂_{j,k} to be a weak symmetry and we do not need all A-operators to satisfy [P_{j,k}, A] = 0. Instead we require that if ρ_n = A_n ρ_{∞,n} has a non-zero, purely imaginary eigenvalue, then P_{j,k} ρ_n = α ρ_n P_{j,k} for some α which corresponds to the constant multiplicative factor between the two signals. Note that inhomogeneity of the NESSs under exchange, P_{j,k} ρ_{∞,n} P_{j,k} ≠ ρ_{∞,n}, leads to an additive constant between the two signals.

As an extension to this, we can also see how alternate modes of synchronization can occur. For instance, if there were only a single A operator and the condition [P_{j,k}, A] = 0 were replaced by P_{j,k} A − e^{iθ} A P_{j,k} = 0, then we would have

lim_{t→∞} ⟨O_j⟩(t) = lim_{t→∞} ⟨O_k⟩(t + θ/λ),  (21)

corresponding to phase synchronization.
The results in this section have considered only the cases of purely imaginary eigenvalues. Thus they correspond to infinitely long-lived synchronized oscillations, i.e. stable quantum synchronization. In the next section, we will analyze the behaviour of Liouvillian eigenvalues under perturbations and discuss the relation of these results to metastable quantum synchronization.
Almost Purely Imaginary Eigenvalues and Metastable Quantum Synchronization
Let us now suppose that our Liouvillian is perturbed analytically so that we may write

L̂(s) = L̂ + sL̂_1 + s²L̂_2 + O(s³).  (22)
For simplicity and to avoid unwanted technicalities, we will assume that this series expansion has an infinite radius of convergence. This is almost always the case in relevant examples where the Hamiltonian or Lindblad jump operators are generally perturbed only to some finite order. As explained in Sec. 2.1, metastable quantum synchronization requires eigenvalues with vanishingly small real part. Under a perturbation of the form in equation (22), it is known that the eigenvalues, λ(s) vary continuously with s [138]. Therefore in order to obtain vanishingly small real parts, it is clear that we must perturb those eigenvalues already on the imaginary axis. At this point we divide our analysis into two regimes, the first where λ(0) = 0 and the second where λ(0) = iω, ω ∈ R \ {0}.
Ultra-Low Frequency Metastable Synchronization
In the first regime, we consider the case of perturbing a state with zero eigenvalue at s = 0. Since there always exists a Liouvillian eigenstate with zero eigenvalue, this is only possible when the null space ofL is degenerate. This regime has been studied in depth by Macieszczak et al. [166]. It was shown that whenL (s) is perturbed away from s = 0 the degeneracy in the 0 eigenvalue is lifted in such a way that the eigenvalues are at least twice continuously differentiable,
λ(s) = iλ 1 s + λ 2 s 2 + o(s 2 ),(23)
where λ 1 ∈ R and Re(λ 2 ) ≤ 0. This gives rise to periodic oscillations with time period ∼ 1/s and lifetime ∼ 1/s². In many ways this is somewhat unsatisfactory for practical purposes, since as we extend the lifetime of the synchronization, the periodic behaviour becomes harder to observe as a consequence of the extended time period. We also remark that generically the order of s only differs by one between the decay rate and the oscillation frequency.
Quantum Zeno Metastable Synchronization
A more relevant variant of the ultra-low frequency case is when,
L[ρ] = −i[H, ρ] + γD[ρ] + O(1/γ), (24)
for some large γ. In this case the evolution is dominated by the dissipative term
D[ρ] = Σ_µ ( 2L µ ρL † µ − {L † µ L µ , ρ} ). (25)
In this set-up quantum Zeno dynamics are possible. Physically this can, for example, correspond to experimental set-ups in regimes where it is difficult to suppress the effects of dephasing, such as the example considered later in Sec. 6.3. For more detailed derivations and in-depth discussions of the quantum Zeno effect, see [167][168][169][170]. We now re-scale the full quantum Liouvillian as L̃ = (1/γ)L, and consider s = 1/γ as the perturbative parameter. If the 0 eigenvalue is split, then as in the above section we have
λ̃ = iωs + λ 2 s² + o(s²) = iω/γ + λ 2 /γ² + o(s²). (26)
Transforming back to the unscaled Liouvillian we now have
λ = iω + λ 2 /γ + o(1/γ), (27)
and thus we see that the oscillations occur on the relevant Hamiltonian time-scales for all γ.
In this case the purely imaginary eigenvalues must come from stationary phase relations [138] of the dissipative Liouvillian D. These are stationary states that fulfil the conditions of Th. 1 with trivial eigenvalue ω = 0. If all the A operators and D fulfil the conditions of Cor. 2 or 3, then robust metastable synchronization occurs with a lifetime ∼ 1/γ.
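The 1/γ scaling of the decay rate at a fixed, O(1) oscillation frequency can be illustrated numerically on a toy model that is not taken from the text: two qubits coupled by an exchange term, with strong dephasing on the first. The coherence of the second qubit then oscillates near its bare frequency w2 while decaying at a Zeno-suppressed rate. A minimal NumPy sketch, with all parameter values arbitrary choices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def liouvillian(H, Ls):
    """Matrix of rho -> -i[H, rho] + sum_mu (L rho L^+ - {L^+ L, rho}/2),
    using the column-stacking rule vec(A X B) = (B^T kron A) vec(X)."""
    d = H.shape[0]
    Id = np.eye(d)
    out = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
    for L in Ls:
        LdL = L.conj().T @ L
        out += (np.kron(L.conj(), L)
                - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id)))
    return out

w1, w2, J = 0.4, 1.0, 0.3   # arbitrary test parameters
H = (0.5 * w1 * np.kron(sz, I2) + 0.5 * w2 * np.kron(I2, sz)
     + J * np.kron(sx, sx))

def slow_mode(g):
    """Liouvillian eigenvalue describing the coherence of qubit 2
    when qubit 1 is dephased at rate g."""
    eigs = np.linalg.eigvals(liouvillian(H, [np.sqrt(g) * np.kron(sz, I2)]))
    near = eigs[np.abs(eigs.imag - w2) < 0.2]   # modes oscillating near w2
    return near[np.argmax(near.real)]           # the least-damped one

lam25, lam100 = slow_mode(25.0), slow_mode(100.0)
# Oscillation frequency stays close to w2 while the decay rate shrinks ~1/g
print(lam25, lam100)
```

Increasing the dephasing rate g makes the real part of the slow eigenvalue smaller in magnitude while leaving the imaginary part near w2, in line with Eq. (27).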
Dynamical Metastable Synchronization
We now analyse the case where λ(0) = iω, with ω = 0, which we label dynamical metastable synchronization. This is physically distinct from the case where ω = 0 as we are now considering perturbations to a system which already exhibits long-lived dynamics. By adapting the analysis of [166], in Appx. F we prove the following theorem.
Theorem 3. For analytic L(s) with λ(0) = iω, ω ∈ R, we have
λ(s) = iω + iλ 1 s + λ 2 s^{1+1/p} + o(s^{1+1/p}), (28)
for some integer p ≥ 1. We also find that λ 1 ∈ R and λ 2 has non-positive real part.
Crucially, this means that at first order in s the perturbed eigenvalues remain purely imaginary, while the real part, which would contribute to decay, is higher order in s.
When considering the synchronization aspects, we first note that the conditions for robust metastable synchronization remain the same as in Cor. 2 and 3 in the leading order, i.e. for L (see Appx. J). For the sake of simplicity we assume that L 1 is such that it explicitly breaks the exchange symmetry P j,k between the sites we wish to synchronize, i.e. P j,k L 1 P j,k = −L 1 . This is the most relevant and most commonly studied case in quantum synchronization. For instance, this case occurs if the subsystems j, k are perfectly tuned to the same frequency in L in the leading order, and we introduce a small detuning (e.g. [36]). In that case, we find the remarkable fact that,

Corollary 4. For anti-symmetric perturbations L 1 which explicitly break the exchange symmetry, and thus synchronization, between sites j and k, i.e. P j,k L 1 P j,k = −L 1 , the frequency of synchronization ω is stable to next-to-leading order in s, i.e. λ 1 = 0.
This counter-intuitive result, proven in Appx. G implies that the frequency at which synchronized observables oscillate is more stable to perturbations that disturb the synchronization through explicitly breaking the exchange symmetry than those which preserve this symmetry.
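Corollary 4 can be illustrated numerically on the three-qubit anti-synchronization example of Sec. 6.1 (XXZ ring with loss on site 1; the construction is repeated below so the snippet is self-contained, and all parameter values are our own choices). A detuning of sites 2 and 3 that is symmetric under P 2,3 shifts the synchronization frequency at first order in s, while an antisymmetric detuning, which explicitly breaks P 2,3, shifts it only at second order:

```python
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
I2 = np.eye(2, dtype=complex)

def embed(a, site, n=3):
    """Single-site operator `a` placed on `site` of an n-qubit ring."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, a if k == site else I2)
    return out

def liouv_eigs(H, L):
    # vec(A X B) = (B^T kron A) vec(X), column-stacking convention
    d = H.shape[0]
    Id = np.eye(d)
    LdL = L.conj().T @ L
    mat = (-1j * (np.kron(Id, H) - np.kron(H.T, Id))
           + np.kron(L.conj(), L)
           - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id)))
    return np.linalg.eigvals(mat)

Delta, B, gamma = 1.0, 0.5, 2.0
H0 = sum(embed(sp, j) @ embed(sm, (j + 1) % 3)
         + embed(sm, j) @ embed(sp, (j + 1) % 3)
         + Delta * embed(sz, j) @ embed(sz, (j + 1) % 3)
         + B * embed(sz, j) for j in range(3))
L = gamma * embed(sm, 0)                  # loss on site 1
omega = -1 + 2 * B - 4 * Delta            # unperturbed pair at -/+ i*omega

V_anti = 0.5 * (embed(sz, 1) - embed(sz, 2))  # breaks P_{2,3}
V_sym = 0.5 * (embed(sz, 1) + embed(sz, 2))   # preserves P_{2,3}

def freq_shift(V, s):
    eigs = liouv_eigs(H0 + s * V, L)
    lam = eigs[np.argmin(np.abs(eigs - 1j * (-omega)))]
    return abs(lam.imag - (-omega))

print(freq_shift(V_anti, 0.1), freq_shift(V_sym, 0.1))  # ~s^2 vs ~s
```

The symmetry-breaking perturbation produces a frequency shift roughly an order of magnitude smaller than the symmetric one at the same strength, as the corollary predicts.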
The preceding theorem may be strengthened in the case where λ(0) is non-degenerate, or where the perturbation of L(s) corresponds to at least a second-order perturbation of the jump operators, as follows.

Corollary 5. For a Liouvillian of the form in Eq. (7) with
H(s) = H (0) + sH (1) + O(s 2 ), L µ (s) = L (0) µ + O(s 2 ), ∀µ,(29)
the exponent in Theorem 3 has p = 1 so that the eigenvalues are twice continuously differentiable in s.
The proof can be found in Appx. H and follows from noticing that L (1) = −i[H (1) , (·)] generates unitary dynamics. This result should be contrasted with the ultra-low frequency case. Under these types of perturbations we see that the time period of the oscillations does not grow as s → 0 and instead remains O(1). We also observe that there are two orders of s between the decay rate and the oscillation frequency, demonstrating that these oscillations are more stable, and thus easier to observe experimentally and more relevant for utilisation, than in the ultra-low frequency case.
Analogy With Classical Synchronization
Before continuing to the examples, we pause to discuss the relationship between quantum synchronization in our sense and classical synchronization. Although there is no well-defined classical limit for the quantum systems and the microscopic observables we study, certain analogies can be drawn.
Firstly, classical synchronization, by definition, requires stable limit cycles [171]. More specifically, suppose that a finite perturbation in the neighborhood of a limit cycle O(t) leads to a new trajectory Õ(t). The limit cycle is exponentially stable if there exists a finite a > 0 such that |Õ(t) − O(t)| ≲ e^{−at}. In our case, provided the conditions of Th. 2 hold, any perturbation of the state ρ(t) + δρ that does not change the value of the strong dynamical symmetry, i.e. Tr(δρA) = 0, renders the limit cycle of O(t) exponentially stable provided that the Jordan normal form is trivial. Otherwise, it is stable. This follows directly by linearity from Eqs. (8), (9) and Th. 2. We have previously used this fact implicitly to guarantee synchronization for generic initial conditions. Perturbations for which Tr(δρA) ≠ 0 generically change the amplitude of the limit cycle due to the finite-dimensionality of the Hilbert space and linearity. This is an unavoidable implication of our results and constitutes an important fundamental difference between the linear time evolution of a quantum observable O and the time evolution of a classical observable.
Secondly, stability to noise for classical synchronization follows directly from exponential stability -noise may be understood as a random series of perturbations. Quantum mechanically, the influence of fully generic noise is fundamentally different as it induces decoherence and relaxation to stationarity. We have shown in Th. 3 that quantum synchronization is stable to arbitrary noise/dissipation at least to the second-order in perturbation strength. Moreover, as we have shown, in the quantum case symmetry-selective (not arbitrary) noise is fundamentally necessary to induce synchronization.
Thirdly, the stability of the frequency of classical synchronization is neutral, which means that a perturbation can change the frequency, but in a way that neither grows nor decays in time [171]. In our case Th. 3 guarantees precisely this, and Cor. 4 shows that when a perturbation explicitly breaks the synchronization, the frequency is, counter-intuitively, one order more stable than to a perturbation that does not break synchronization. This elucidates quantum synchronization as a cooperative dynamical stabilization phenomenon analogous to classical synchronization.
Proving absence of synchronization and persistent oscillations
We have provided a general theory of quantum synchronization based on necessary and sufficient algebraic conditions that may naturally be applied to quantum many-body systems.
The following question remains: how can one easily show an absence of synchronization in a quantum system? Apart from calculating the long-time dynamics or diagonalizing the Liouvillian, which may be analytically or even numerically intractable for an extensive system, we can use the theory we have developed to provide the following theorem.
Theorem 4.
If there is a full rank stationary state ρ ∞ (i.e. invertible) and the commutant is proportional to the identity, {H, L µ , L † µ }′ = c1, then there are no purely imaginary eigenvalues of L and hence no stable synchronization. If this holds in the leading order, there are no almost purely imaginary eigenvalues and no metastable synchronization.
As shown in Appx. I, this is a consequence of the strong dynamical symmetry of Th. 2 not being unitary for ω ≠ 0. The absence of such a non-trivial commutant is implied by {H, L µ , L † µ } forming a complete algebra of B(H). This can be shown straightforwardly in numerous cases. For instance, if we have a system of N spin-1/2's with L + k = γ k σ + k , L − k = γ k σ − k , and arbitrary H, the map is unital (1 is a full rank stationary state) and e.g. [L + k , (L + k ) † ] = σ z k form a complete algebra of Pauli matrices on B(H). Further, the results of Evans and Frigerio [172][173][174][175] tell us that a trivial commutant is equivalent to the stationary state being unique. As a broad set of examples to which this idea can be applied, under the mild assumption of having a full rank NESS, any 1D spin-1/2 system with nearest-neighbour interaction undergoing Markovian dissipation modelled by on-site non-Hermitian Lindblad operators cannot have purely imaginary eigenvalues. This follows from our result and the construction of the complete algebra given in Sec. 2.1 of [176]. Note that the spin-1/2 example we give later does not have a full rank NESS.
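The spin-1/2 instance of Theorem 4 is small enough to verify directly. The sketch below (plain NumPy; the Hamiltonian is an arbitrary generic choice, not one from the text) builds two qubits with raising and lowering Lindblads of equal rates on each site, so the map is unital and the commutant trivial, and confirms that the only point of the Liouvillian spectrum on the imaginary axis is the single zero eigenvalue of the unique stationary state:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
I2 = np.eye(2, dtype=complex)

# Generic two-qubit Hamiltonian: Heisenberg coupling plus local fields
H = (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
     + 0.7 * np.kron(sx, I2) + 0.3 * np.kron(I2, sz))
# Raising and lowering Lindblads with equal rates on each site -> unital map
Ls = [np.kron(sp, I2), np.kron(sm, I2), np.kron(I2, sp), np.kron(I2, sm)]

Id = np.eye(4)
liouv = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
for L in Ls:   # vec(A X B) = (B^T kron A) vec(X), column stacking
    LdL = L.conj().T @ L
    liouv += (np.kron(L.conj(), L)
              - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id)))

eigs = np.linalg.eigvals(liouv)
on_axis = eigs[np.abs(eigs.real) < 1e-8]
print(len(on_axis), np.abs(on_axis))   # a single eigenvalue, at zero
```

Every other eigenvalue has a strictly negative real part, so no stable synchronization (or any persistent oscillation) is possible in this model, in agreement with the theorem.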
Examples
To demonstrate the application of our theory and to showcase the results of Sections 3 and 4 we now present two examples. The first is a straightforward demonstration of how to anti-synchronize two qubits using a third ancilla qubit. Besides being an interesting result in its own right, since it has previously been implied that two qubits cannot synchronize in the sense of [38], this example should provide a pedagogical explanation of how our theory works. We then move on to discuss the Hubbard model with spin-agnostic heating. This example has been used several times to study long-time non-stationarity in open quantum systems, and here we use it as a springboard to construct a family of more generalized models that will exhibit quantum synchronization. These generalized models are of interest for their relation to experiments involving particles of spin S > 1/2. Note that there are further applications of our theory to analyze additional examples from existing literature in the Appendices.
Perfectly Anti-Synchronizing Two Spin-1/2's
In [38] it was argued that the smallest possible system that can be synchronized is a spin-1, and this was then extended in [37] to two spin-1's. Here, using our theory, we complement this result by showing how it is, in fact, possible to anti-synchronize two spin-1/2's through what is effectively a non-Markovian bath. Let σ α j , α = +, −, z, be the standard Pauli matrices on site j. Take any 3-site Hamiltonian which non-trivially couples the three sites, is reflection symmetric around the first site, P 2,3 HP 2,3 = H, and further conserves total magnetization. To see this explicitly, consider the most general ansatz for an eigenstate with just a single excitation,
|ψ⟩ = Σ j a j |j⟩, (30)
where |j⟩ denotes the state with a spin-up on site j and all other spins down. Then, for an eigenstate satisfying H|ψ⟩ = E|ψ⟩, we see
H(P 2,3 |ψ⟩) = P 2,3 HP 2,3 P 2,3 |ψ⟩ = P 2,3 H|ψ⟩ = EP 2,3 |ψ⟩ = E(P 2,3 |ψ⟩). (31)
Thus
|φ⟩ = |ψ⟩ − P 2,3 |ψ⟩ = (a 2 − a 3 )|2⟩ + (a 3 − a 2 )|3⟩, (32)
is an eigenstate of H with energy E and it has a node on site 1. Provided the Hamiltonian non-trivially couples the three sites, this state will be the unique eigenstate in the S z = −1/2 sector with a node on site 1. We may exploit this by considering site 1 with a pure loss Lindblad L = γσ − 1 as the bath and the sites 2 and 3 as the system. This is illustrated in Fig. 3.
In this case we have two stationary states, the state with all spins down and the dark state,
ρ 1,∞ = |0, 0, 0⟩⟨0, 0, 0|, (33)
ρ 2,∞ = (1/2)(|0, 0, 1⟩ − |0, 1, 0⟩)(⟨0, 0, 1| − ⟨0, 1, 0|). (34)
Note that these are pure states and form a decoherence-free subspace [177]. The A operator satisfying Th. 1 is
A = (|0, 0, 1⟩ − |0, 1, 0⟩)⟨0, 0, 0|. (35)

Figure 4: Evolution of the model described in Sec. 6.1 with parameters ∆ = 1, B = 0.5 and γ = 2. The system is initially described by a randomly chosen density matrix. In the top plot we compare a randomly generated Hermitian observable on sites 2 and 3 (blue and red curves respectively). While they oscillate out of phase they have an offset equilibrium value which disrupts the perfect anti-synchronization. In the bottom plot we now compare the σ x observable on each site and see perfect anti-synchronization since σ z 1 → 0 as t → ∞.
The frequency ω will depend on the specific choice of the Hamiltonian. Taking, for example, the XXZ spin chain,
H = Σ_{j=1}^{3} ( σ + j σ − j+1 + σ − j σ + j+1 + ∆σ z j σ z j+1 + Bσ z j ), (36)
with implied periodic boundary conditions, we have ω = −1 + 2B − 4∆. Note that there are persistent oscillations even in the absence of an external field, B = 0. In that case the interaction term σ z j σ z j+1 (corresponding in the Jordan-Wigner picture to a quartic term) picks out a natural synchronization frequency. Since A is antisymmetric, P 2,3 AP 2,3 = −A, the oscillating coherences will also be antisymmetric. The symmetric stationary state ρ 1,∞ spoils anti-synchronization between sites 2 and 3 by offsetting the equilibrium value. However, an observable that is zero in this state, tr(O k ρ 1,∞ ) = 0, k = 2, 3, will be robustly and stably anti-synchronized, lim t→∞ O 2 (t) = − lim t→∞ O 3 (t), with frequency ω. A possible choice is the transverse spin O k = σ x k , k = 2, 3. This is demonstrated in Figure 4. Using this example, we can further make a link to previous studies of quantum synchronization, which focused on limit cycles in phase space. In Figure 5 we show the limit cycles of the second and third spins in the phase space defined by the usual Bloch sphere representation of a qubit. We see that the two limit cycles are perfectly out of phase, as expected for anti-synchronization. We can also see clearly that the first site is not synchronized to either the second or third site, since its phase space trajectory quickly decays to a fixed point rather than the common limit cycle of sites 2 and 3.

Figure 5: Bloch sphere representation of the evolution of the model described in Sec. 6.1 with parameters ∆ = 1, B = 0.5 and γ = 2 and a randomly chosen initial state. We define the reduced density matrix for each site by taking the partial trace over the other sites, ρ k = Tr l≠k (ρ). We then find and plot the corresponding Bloch sphere representation a (k) for the reduced states as ρ k = (1/2)(I + a (k) · σ), where σ are the usual Pauli matrices. The initial point of each trajectory is marked with a cross.
We see the second and third sites reach a limit cycle which they orbit perfectly out of phase, while the first site rapidly decays to the a = (0, 0, 1) point on the Bloch sphere. This demonstrates the anti-synchronization between only sites 2 and 3. Note that since the reduced states are not pure, the trajectories live within the sphere.
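The claimed spectral structure of this example is easy to verify numerically. The sketch below (plain NumPy; the GKLS normalization of the dissipator is a convention choice that does not affect the purely imaginary pair, since the corresponding eigenstate is annihilated by the dissipator) builds the full 64 × 64 Liouvillian and confirms that ±iω with ω = −1 + 2B − 4∆ appear in the spectrum with vanishing real part:

```python
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^+ (index 0 = spin up)
sm = sp.conj().T                                 # sigma^-
I2 = np.eye(2, dtype=complex)

def embed(a, site, n=3):
    """Place single-site operator `a` on `site` of an n-qubit ring."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, a if k == site else I2)
    return out

Delta, B, gamma = 1.0, 0.5, 2.0
# Periodic XXZ ring, Eq. (36)
H = sum(embed(sp, j) @ embed(sm, (j + 1) % 3)
        + embed(sm, j) @ embed(sp, (j + 1) % 3)
        + Delta * embed(sz, j) @ embed(sz, (j + 1) % 3)
        + B * embed(sz, j) for j in range(3))
L = gamma * embed(sm, 0)                         # pure loss on site 1

# Vectorize using the column-stacking rule vec(A X B) = (B^T kron A) vec(X)
d = H.shape[0]
Id = np.eye(d)
LdL = L.conj().T @ L
liouv = (-1j * (np.kron(Id, H) - np.kron(H.T, Id))
         + np.kron(L.conj(), L)
         - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id)))

eigs = np.linalg.eigvals(liouv)
omega = -1 + 2 * B - 4 * Delta                   # predicted frequency, here -4
# both +i*omega and -i*omega should appear with (numerically) zero real part
print(min(abs(eigs - 1j * omega)), min(abs(eigs + 1j * omega)))
```

The two printed distances are at machine-precision level, confirming a conjugate pair of purely imaginary eigenvalues at the predicted synchronization frequency.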
Lattice Models with Translationally Invariant non-Abelian Symmetries: Stable and Metastable Synchronization
We will now proceed to explore examples of many-body models that exhibit robust and stable/meta-stable synchronization between every site due to their symmetry structure. We first discuss the Fermi-Hubbard model as a base case and then explore its generalizations to multi-band and higher spin versions before commenting on how our theory can be applied to existing experimental set-ups. We will also explain how systems can be engineered using the theory in this work to create long-lived synchronization.
Fermi-Hubbard Model with Spin Agnostic Heating
The standard Fermi-Hubbard model has been previously shown to exhibit synchronization in the presence of spin agnostic heating [36,54,57]. We can now understand the appearance of synchronization in this model as a consequence of Cor. 3 as follows.
Consider a generalized N -site, spin-1/2 Fermi-Hubbard model on any bipartite lattice,
H = − Σ_{⟨i,j⟩} Σ_{s∈{↑,↓}} (c † i,s c j,s + h.c.) + Σ_j U j n j,↑ n j,↓ + Σ_j ε j n j + Σ_j (B j /2)(n j,↑ − n j,↓ ), (37)
where c j,s annihilates a fermion of spin s ∈ {↑, ↓} on site j and ⟨i, j⟩ denotes nearest-neighbour sites. The particle number operators are n j,s = c † j,s c j,s , n j = Σ s n j,s . This model includes on-site interactions U j , a potential ε j and an inhomogeneous magnetic field B j in the z-direction. We further include dominant standard 2-body loss, gain and dephasing processes, naturally realized in optical lattice setups [178][179][180][181][182][183],
L − j = γ − j c j,↑ c j,↓ , (38)
L + j = γ + j c † j,↑ c † j,↓ , (39)
L z j = γ z j n j , (40)
with j = 1, . . . , N . From previous studies relating to time crystals [54,57,59] it is now well appreciated that in a homogeneous magnetic field B j = B the total spin raising operator S + = Σ j c † j,↑ c j,↓ is a strong dynamical symmetry with,
[H, S + ] = BS + , [L α j , S + ] = 0. (41)
For γ − j = γ + j we have
Σ α,j [L α j , (L α j ) † ] = 0, (42)
so we find the map is unital. Provided that S + and its conjugate S − are the only operators satisfying the conditions of Th. 2, as is generically the case unless γ − j = γ + j = γ z j = 0, we may apply Cor. 3 and conclude that every site is stably, robustly synchronized with every other site because S + and S − have complete permutation invariance. We emphasise that the on-site potentials ε j need not be the same in this case. This explicitly demonstrates how our theory can be applied non-perturbatively to non-homogeneous systems.
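The algebra underlying this conclusion, Eqs. (41) and (42), can be verified directly on a small instance. The sketch below is a hypothetical two-site example using a standard Jordan-Wigner encoding (all parameter values are arbitrary choices of ours); it checks that [H, S+] = B S+ even with inhomogeneous U j and ε j, and that S+ commutes with all the loss, gain and dephasing Lindblad operators:

```python
import numpy as np

def fermions(n_modes):
    """Jordan-Wigner annihilation operators for n_modes fermionic modes."""
    a = np.array([[0, 1], [0, 0]], dtype=complex)   # |1> -> |0>
    z = np.diag([1.0, -1.0]).astype(complex)
    ops = []
    for k in range(n_modes):
        out = np.array([[1.0 + 0j]])
        for m in range(n_modes):
            out = np.kron(out, z if m < k else (a if m == k else np.eye(2)))
        ops.append(out)
    return ops

# Two sites with spin up/down: mode index = 2*site + spin (0 = up, 1 = down)
c = fermions(4)
cd = [op.conj().T for op in c]
n = [cd[k] @ c[k] for k in range(4)]

def comm(x, y):
    return x @ y - y @ x

# Inhomogeneous U_j and eps_j, homogeneous field B (Eq. (37), two sites)
U, eps, B, t = [1.3, 0.7], [0.2, -0.4], 0.9, 1.0
H = sum(-t * (cd[s] @ c[2 + s] + cd[2 + s] @ c[s]) for s in (0, 1))
for j in (0, 1):
    up, dn = 2 * j, 2 * j + 1
    H = H + U[j] * n[up] @ n[dn] + eps[j] * (n[up] + n[dn]) \
          + 0.5 * B * (n[up] - n[dn])

Sp = sum(cd[2 * j] @ c[2 * j + 1] for j in (0, 1))   # total spin raising S+

# Strong dynamical symmetry, Eq. (41): [H, S+] = B S+ ...
print(np.allclose(comm(H, Sp), B * Sp))              # True
# ... and S+ commutes with the loss, gain and dephasing Lindblad operators
Ls = ([c[2 * j] @ c[2 * j + 1] for j in (0, 1)]      # pair loss
      + [cd[2 * j + 1] @ cd[2 * j] for j in (0, 1)]  # pair gain
      + [n[2 * j] + n[2 * j + 1] for j in (0, 1)])   # dephasing
print(all(np.allclose(comm(L, Sp), 0 * Sp) for L in Ls))  # True
```

Both checks pass regardless of the values of U j and ε j, illustrating the non-perturbative, non-homogeneous statement above.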
Metastability can be achieved by, for example, allowing detuning between the on-site magnetic fields and considering perturbations from the average value B̄ := (1/N) Σ j B j , i.e. s j = B j − B̄. As per Cor. 4 the synchronization will be stable to second order, i.e. ∼ max j (s j ²). This model hints at a general framework to construct more elaborate examples of systems that exhibit quantum synchronization based on their symmetries. We give details on this general framework in Appx. M, which can in principle be applied to a broad class of models, including fermionic tensor models relevant for high energy physics and gauge/gravity dualities [184], and U (2N ) fermionic oscillators [101]. We will now discuss the case of multi-band and SU (N ) Hubbard models, which have previously been explored in cold atoms [185][186][187][188].
Generalised Fermi-Hubbard Model with SU (N ) Symmetry and Experimental Applications
For concreteness, we will first consider the model studied in [188] describing fermionic alkaline-earth atoms in an optical lattice. These atoms are often studied as they have a meta-stable 3 P 0 excited state, which is coupled to the 1 S 0 ground state through an ultra-narrow doubly-forbidden transition [189]. We will refer to these levels as g (ground) and e (excited). We will further label the nuclear Zeeman levels as m = −I, . . . , I on site i, where N = 2I+1 is the total number of Zeeman levels of the atoms. For example, 87 Sr has N = 10. It is further known that in these atoms, the nuclear spin is almost completely decoupled from the electronic angular momentum in the two states {g, e} [189]. Thus to a good level of approximation, one can describe a system of these atoms in an optical trap using a two-orbital single-band Hubbard Hamiltonian [188],
H = − Σ_{⟨i,j⟩} Σ_{s,m} J s (c † i,s,m c j,s,m + c † j,s,m c i,s,m ) + Σ_{j,s} U ss n j,s (n j,s − 1) + V Σ_j n j,g n j,e + V ex Σ_{j,m,m′} c † j,g,m c † j,e,m′ c j,g,m′ c j,e,m . (43)
Here the c i,s,m operator annihilates a state with nuclear spin m and electronic orbital state s ∈ {g, e} on site i. Further n j,s = m c † j,s,m c j,s,m counts the number of atoms with electronic orbital state s on site j. This model assumes that the scattering and trapping potential are independent of nuclear spin, which gives rise to a large SU (N ) symmetry.
Defining the nuclear spin permutation operators as
S m n = Σ_{j,s} c † j,s,n c j,s,m , (44)
which obey the SU (N ) algebra,
[S m n , S p q ] = δ mq S p n − δ np S m q ,(45)
we find that the Hamiltonian in equation (43) commutes with each of these operators,
[H, S m n ] = 0. (46)
We can also introduce the electron orbital operators as,
T α = Σ_{j,s,s′,m} c † j,s,m σ α ss′ c j,s′,m , (47)
where α = x, y, z and σ α are the Pauli matrices in the g, e basis. These operators obey the usual SU (2) algebra and are independent of the nuclear spin operators, [T α , S m n ] = 0. In the specific case J e = J g , U ee = U gg = V, V ex = 0 these are also a full symmetry of the system, [H, T α ] = 0.
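These operator-algebra statements can be checked directly on a minimal, hypothetical instance: a single site with two orbitals and N = 2 nuclear levels (the Jordan-Wigner encoding and the mode ordering below are our own choices). The sketch verifies the SU(N) commutation relation, Eq. (45), and the independence [T α , S m n ] = 0:

```python
import numpy as np

def fermions(n_modes):
    """Jordan-Wigner annihilation operators."""
    a = np.array([[0, 1], [0, 0]], dtype=complex)
    z = np.diag([1.0, -1.0]).astype(complex)
    ops = []
    for k in range(n_modes):
        out = np.array([[1.0 + 0j]])
        for m in range(n_modes):
            out = np.kron(out, z if m < k else (a if m == k else np.eye(2)))
        ops.append(out)
    return ops

# One lattice site, two orbitals s in {g, e}, N = 2 nuclear levels m
N = 2
mode = lambda s, m: N * s + m
c = fermions(2 * N)
cd = [op.conj().T for op in c]

def comm(x, y):
    return x @ y - y @ x

# Nuclear spin permutation operators, Eq. (44), restricted to one site
S = [[sum(cd[mode(s, n)] @ c[mode(s, m)] for s in (0, 1)) for n in range(N)]
     for m in range(N)]   # S[m][n] = S^m_n

# SU(N) algebra, Eq. (45): [S^m_n, S^p_q] = d_mq S^p_n - d_np S^m_q
ok = all(np.allclose(comm(S[m][n], S[p][q]),
                     (q == m) * S[p][n] - (n == p) * S[m][q])
         for m in range(N) for n in range(N)
         for p in range(N) for q in range(N))
print(ok)  # True

# Electronic orbital operators, Eq. (47), commute with every S^m_n
paulis = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
          'y': np.array([[0, -1j], [1j, 0]]),
          'z': np.diag([1.0, -1.0]).astype(complex)}
T = {k: sum(cd[mode(s, m)] @ c[mode(t, m)] * sig[s, t]
            for s in (0, 1) for t in (0, 1) for m in range(N))
     for k, sig in paulis.items()}
print(all(np.allclose(comm(T[k], S[m][n]), np.zeros_like(T[k]))
          for k in T for m in range(N) for n in range(N)))  # True
```

The same construction extends straightforwardly to more sites and larger N, since both operator families are sums of on-site bilinears acting on disjoint index sets.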
If we now introduce dephasing of the nuclear spin levels via the on-site Lindblad operators
L (m) j = γ (m) j S m m ,(48)
we break the SU (N ) nuclear-spin symmetry, while the electronic orbital symmetry is promoted to a strong symmetry [159] since the operators T α all commute with the Lindblad operators. This may be accomplished by scattering with incoherent light that does not distinguish between various energy levels of the internal degree of freedom s [178,190]. In particular, this represents the fact that the light has transmitted information to the environment about the location of an atom, but not its internal degree of freedom that remains coherent. The rates γ (m) j can be made larger by introducing more light scattering. In the presence of a field that only couples to the electronic degrees of freedom,
H field = ωT z ,(49)
the electronic orbital SU (2) symmetry is broken to a dynamical symmetry with frequency ω. Since the T ± operators are translationally invariant, all the conditions of Cor. 3 are satisfied to guarantee robust, stable synchronization between all pairs of sites.
In realistic set-ups, this strict symmetry structure is unlikely to be perfectly maintained. In particular certain cold atom species may have a finite exchange term V ex between the spin levels, e.g. [191,192]. Thus we would expect metastable synchronization to be present when the conditions J e = J g , U ee = U gg = V, V ex = 0 do not hold perfectly or there are some inhomogeneities in the additional field. More generally, we could consider cases of interacting systems where the nuclear and electronic degrees of freedom are coupled through scattering processes, such as in the experiment conducted by [144]. Generically such systems will lack the symmetries required for robust, stable quantum synchronization, and thus the timescale over which synchronization can be maintained will be a measure of how much the symmetries are broken through these imperfections.
One possible approach for achieving metastable synchronization in more complex systems is to proceed as follows. Ignoring interactions, we can diagonalize the single site Hamiltonians to obtain eigenstates |n j and energy levels E n,j for each site, j. These give trivial onsite dynamical symmetries A n,m j = |n m| j for E n,j = E m,j . We then introduce dephasing operators on each site which break all but a few of these dynamical symmetries. When we reintroduce interactions between neighbouring sites, we search for translationally invariant linear combinations of the remaining on-site strong dynamical symmetries, which are as close as possible to dynamical symmetries of the whole model. These linear combinations can, in principle, be further optimized by adjusting the experimental parameters of the system. These considerations emphasize the importance of our theory when engineering long-lived synchronization in more complex systems. Our theory tells us that to produce quantum systems that exhibit long-lived synchronization, the symmetries must be carefully controlled.
Another aspect of our theory that can be applied to these more complicated systems, even if they do not admit the symmetries required for synchronization, is to give the scalings of decay rates and frequencies of the meta-stable oscillations. As an example, we consider the experimental set-up investigated by [144,193], where fermionic atoms with nuclear spin-9/2 were confined to a deep optical lattice. In their experiment, they initialized the system so that every site contained two atoms, one with m = 9/2 and the second with m = 1/2, and observed oscillations across the whole system between this initial state and the state in which the two atoms had spins m = 7/2 and m = 3/2. These oscillations were most pronounced in the limit where the optical lattice was very deep, corresponding to minimal hopping. It is known that deep optical lattices can often cause dephasing processes to occur, so it is likely that in the limit of no hopping, the system also experiences strong dephasing processes. Thus we apply the results of Sec. 4.2 to predict that as the trap becomes shallower, the frequency should vary as,
ω = ω 0 + λ/γ + o(1/γ), (50)
for some constant λ, where γ = U/J is the ratio between the interaction strength and the hopping. This ratio is known to be related to the lattice depth in a 1D sinusoidal lattice in the deep lattice limit as [194],
γ ∼ exp(√V 0 ), (51)
for V 0 the lattice depth, measured in units of E r . The 1D approximation is valid because the other two dimensions of the optical lattice are kept at very deep values V ⊥ = 35E r [144].
Thus we obtain
ω(V 0 ) = ω 0 + λ exp(−√V 0 ). (52)
Our simple results appear to be in better agreement with the experimental measurements than the numerical simulations in [144].
Conclusions
We have introduced a general theory of quantum synchronization in many-body systems without well-defined semi-classical limits. So far, although a subject of great interest, quantum synchronization has been studied on an ad-hoc basis without a systematic framework. The advantage of our theory is that it provides an algebraic framework based on dynamical symmetries from which to systematically study quantum synchronization in many-body systems. Symmetries are useful in quantum physics, especially many-body physics, as they allow for exact results and statements without resorting to challenging and often infeasible analytical or numerical computations. Our framework also allows for exact solutions of all such dynamics in terms of this dynamical symmetry algebra. We introduced definitions of stable and metastable synchronization, which align with the notion of identical synchronization in classical dynamical systems. Stable synchronization lasts for an infinite amount of time, whereas metastable lasts only for very long times compared to the system's characteristic timescale and is the one usually studied in the literature. In addition, we defined robust synchronization to capture the robustness of the synchronized behaviour to the initial conditions. This is important to identify systems where the synchronization process is performed by some internal mechanism rather than fine-tuning of the initial state. We further divided the cases of metastable synchronization into those for which the observable dynamics take place on relevant time scales or not. We have demonstrated that synchronization is associated with quantum coherence.
We provided several examples, both new ones and from existing literature, and have shown how to use our theory to understand and extend them. Curiously, even though it was implied that the smallest system that can be synchronized is a spin-1 [38], we have used our theory to find an example of two spin-1/2's that anti-synchronize through interaction with a non-Markovian bath without contradicting the results of [38]. We then studied the Fermi-Hubbard model and its generalizations to explore how to generate models which can be expected to exhibit quantum synchronization. We also discussed how these results relate to experimental set-ups and demonstrated why robust quantum synchronization requires careful engineering and controlling experimental imperfections. Apart from higher symmetry fermionic quantum gases we discussed, similar considerations can also be directly applied to other complex cold atom systems with high degrees of symmetry and a large number of degrees of freedom, such as quantum spinor gases [195][196][197]. This demonstrates how our theory provides a guideline for achieving synchronization that would be difficult to predict without using an algebraic perspective.
Our results provide several illuminating insights which help generate models that exhibit synchronization. The first is that the most straightforward way to synchronize quantum many-body systems is to use unital maps, e.g. dephasing. This is because we may then reduce the problem to eliminating dynamical symmetries that lack the required permutation symmetry structure for synchronization. Our results also indicate the importance of interactions. For instance, if a model has only quadratic 'interacting' terms corresponding to hopping or on-site fields, dynamical symmetries that are not translationally invariant are possible [36]. In particular free-fermion models admit a host of non-translationally invariant conservation laws [H, Q k ] = 0, and these ruin the translation invariance in the long-time limit [198]. Adding interactions generally leaves only translationally invariant A operators.
We also note that there has been recent debate about the related phenomenon of limit cycles in driven-dissipative systems with finite local Hilbert space dimensions (in particular spin-1/2 systems). Mean-field methods find evidence of limit cycles (e.g. [199]), whereas including quantum correlations with some numerical methods seems to indicate an absence of limit cycle phases in these models (e.g. [200,201]). However, other methods [202] accounting for quantum correlations do show limit cycles, making the issue controversial. We note that limit cycles correspond to persistent dynamics and purely imaginary eigenvalues of the corresponding quantum Liouvillians. Thus our general algebraic theory should apply to these systems and can be used to prove either presence or absence of limit cycles.
One direct generalization of our work should be to quantum Liouvillians with an explicit time dependence that would allow for the study of synchronization to an external periodic drive. In this case, one could also study discrete time translation symmetry breaking and discrete time crystals under dissipation [203,204], which would correspond to purely imaginary eigenvalues in the Floquet Liouvillian or equivalently to eigenvalues of the corresponding propagator lying on the unit circle. A further direction that should be explored is the synchronization of sites which have differing local Hilbert space dimension, such as synchronizing a spin-1/2 with a spin-1. Our framework should be applied in the future for generating new models that have synchronization. Finally, extensions to more general types of synchronization such as amplitude envelope synchronization and those models with infinite-dimensional Hilbert spaces should also follow.
Appendices
We now provide proofs of the results in Sections 3 and 4 along with additional analysis of previously studied examples of quantum synchronization and a discussion of how to construct more elaborate models which exhibit quantum synchronization using symmetry structures.
A Proof of Theorem 1
Theorem. The following conditions are necessary and sufficient for the existence of an eigenstate ρ with purely imaginary eigenvalue iλ, Lρ = iλρ, λ ∈ R.
We have ρ = Aρ ∞ , where ρ ∞ is a NESS and A is a unitary operator which obey,
[L µ , A]ρ ∞ = 0, (53)
(−i[H, A] − Σ µ [L † µ , A]L µ )ρ ∞ = iλAρ ∞ , λ ∈ R. (54)
Proof. Sufficiency can be checked directly by calculating L[ρ] = L[Aρ ∞ ]. To prove the converse we first observe that ρ is also an eigenstate of the corresponding quantum channel T t = e tL with eigenvalue e iλt . Since this lies on the unit circle we may apply Theorem 5 of [156] to deduce that ρ admits a polar decomposition of the form ρ = AR where A is unitary and R is positive semi-definite with T t R = R. In particular this implies L[R] = 0 so that R is a steady state of L which we now call ρ ∞ = R. Note that this also implies ρ ∞ is Hermitian and may be scaled to have unit trace. Writing the channel in Kraus form as
T t [x] = k M k (t)xM † k (t),(55)
we can apply Theorem 5 of [156] again to find

    M_k(t) A ρ∞ = e^{iλt} A M_k(t) ρ∞.    (56)
Now note that the adjoint channel is given by T†_t[x] = Σ_k M†_k(t) x M_k(t), and so we can compute the adjoint Liouvillian as

    L†[x] = dT†_t/dt |_{t=0} = Σ_k ( Ṁ†_k(0) x M_k(0) + M†_k(0) x Ṁ_k(0) ).    (57)
Using the derivative of equation (56) and the requirement that the Kraus operators satisfy Σ_k M†_k M_k = I, we can calculate

    L†[A] ρ∞ = Σ_k Ṁ†_k(0) A M_k(0) ρ∞ + M†_k(0) A Ṁ_k(0) ρ∞
             = Σ_k Ṁ†_k(0) M_k(0) A ρ∞ + M†_k(0) Ṁ_k(0) A ρ∞ + iλ M†_k(0) A M_k(0) ρ∞
             = Σ_k [ d/dt ( M†_k(t) M_k(t) ) |_{t=0} ] A ρ∞ + iλ Σ_k M†_k(0) M_k(0) A ρ∞
             = iλ A ρ∞.    (58)
A similar calculation using the conjugate equations, noting that ρ ∞ is Hermitian, yields
    ρ∞ L†[A†] = −iλ ρ∞ A†.    (59)
We will now also introduce the dissipation function [147,157], defined for any operator x as

    D[x] = L†[x†x] − L†[x†] x − x† L†[x] = Σ_µ [L_µ, x]† [L_µ, x].    (60)
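As an aside, identity (60) is straightforward to verify numerically for arbitrary operators. The following is a minimal sanity check of our own (random Hermitian H, two random jump operators), using the Heisenberg-picture generator L†[x] = i[H,x] + Σ_µ (L†_µ x L_µ − ½{L†_µ L_µ, x}):

```python
import numpy as np

# Numerical check of the dissipation-function identity
#   D[x] = L†[x†x] − L†[x†]x − x†L†[x] = Σ_µ [L_µ,x]† [L_µ,x]
# for random operators (our own sanity check, not from the paper).
rng = np.random.default_rng(0)
d = 4
rand_c = lambda: rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

H = rand_c(); H = H + H.conj().T        # random Hermitian Hamiltonian
Ls = [rand_c(), rand_c()]               # two arbitrary jump operators
x = rand_c()

def adj_L(y):
    """Adjoint (Heisenberg-picture) generator L†[y]."""
    out = 1j * (H @ y - y @ H)
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        out += Lk.conj().T @ y @ Lk - 0.5 * (LdL @ y + y @ LdL)
    return out

xd = x.conj().T
D = adj_L(xd @ x) - adj_L(xd) @ x - xd @ adj_L(x)
D_direct = np.zeros_like(x)
for Lk in Ls:
    c = Lk @ x - x @ Lk
    D_direct = D_direct + c.conj().T @ c
assert np.allclose(D, D_direct)
```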
Then by unitarity of A and the above results for L†[A], L†[A†] we compute

    0 = ρ∞ D[A] ρ∞ = Σ_µ ρ∞ [L_µ, A]† [L_µ, A] ρ∞.    (61)
Since each term in this sum is positive semi-definite, we must have [L_µ, A] ρ∞ = 0 ∀µ. We now compute L[Aρ∞] = iλ A ρ∞ using L[ρ∞] = 0 and [L_µ, A] ρ∞ = 0 to obtain

    ( −i[H, A] − Σ_µ [L†_µ, A] L_µ ) ρ∞ = iλ A ρ∞,    (62)
as required.
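The structure asserted by the theorem can be checked numerically on a small example. Below is a sketch with a toy model of our own (not from the paper): a 3-level system whose {|0⟩, |1⟩} subspace is untouched by the dissipation. We use the standard GKLS convention dρ/dt = −i[H,ρ] + Σ(LρL† − ½{L†L, ρ}) and row-major vectorisation, vec(AXB) = (A ⊗ Bᵀ)vec(X):

```python
import numpy as np

# Toy model of our own: H = w|1><1|, single jump operator L = |0><2|.
# The coherence |1><0| is an eigenmode with purely imaginary eigenvalue -i*w,
# of the form A rho_inf with rho_inf a NESS and A unitary (Theorem 1).
w = 1.3
H = np.diag([0.0, w, 0.0]).astype(complex)
L = np.zeros((3, 3), complex)
L[0, 2] = 1.0

def liouvillian(H, Ls):
    """GKLS generator as a matrix acting on the row-major vectorised rho."""
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

S = liouvillian(H, [L])
evals, evecs = np.linalg.eig(S)

# the eigenvalue -i*w is present and purely imaginary
k = np.argmin(np.abs(evals - (-1j * w)))
assert abs(evals[k] + 1j * w) < 1e-10

# the eigenmode is A @ rho_inf with rho_inf = |0><0| a NESS and A the
# unitary swapping levels 0 and 1, as in conditions (53)-(54)
rho_inf = np.diag([1.0, 0.0, 0.0]).astype(complex)
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], complex)
v = (A @ rho_inf).reshape(-1)
assert np.allclose(S @ rho_inf.reshape(-1), 0)       # rho_inf is stationary
assert np.allclose(S @ v, -1j * w * v)               # L[A rho_inf] = -i*w A rho_inf
assert np.allclose((L @ A - A @ L) @ rho_inf, 0)     # condition (53)
```

Here ρ∞ = |0⟩⟨0| is a NESS, A is the unitary swapping levels 0 and 1, and Aρ∞ = |1⟩⟨0| reproduces the purely imaginary eigenvalue −iω, in line with conditions (53)-(54).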
B Proof of Corollary 1
Corollary. The Liouvillian super-operator L admits a purely imaginary eigenvalue iλ only if there exists some unitary operator A satisfying

    −i ρ∞ A† [H, A] ρ∞ = iλ ρ∞²,    (63)

    ρ∞ A† [L†_µ, A] L_µ ρ∞ = 0,  ∀µ,    (64)

    ρ∞ [L†_µ, A†] L_µ ρ∞ = 0,  ∀µ.    (65)
In which case the eigenstate is given by ρ = Aρ ∞ .
Proof. We may use the unitarity of A and [L µ , A]ρ ∞ = 0 ∀µ to easily obtain
    ρ∞ A† [L†_µ, A] L_µ ρ∞ = 0,  ∀µ.    (66)
We then left multiply equation (62) by ρ ∞ A † and use (66) to obtain
    −i ρ∞ A† [H, A] ρ∞ = iλ ρ∞²    (67)
as stated. We get ρ∞ [L†_µ, A†] L_µ ρ∞ = 0, ∀µ by using (66) and the unitarity of A, [L†_µ, A†A] = 0.

C Proof of Theorem 2

Theorem. When there exists a faithful (i.e. full-rank/invertible) stationary state ρ̄∞, ρ is an eigenstate with purely imaginary eigenvalue if and only if ρ can be expressed as
    ρ = ρ_{nm} = A^n ρ∞ (A†)^m    (68)
where A is a (not necessarily unitary) strong dynamical symmetry obeying
    [H, A] = ωA,   [L_µ, A] = [L†_µ, A] = 0, ∀µ,    (69)
and ρ∞ is some NESS, not necessarily ρ̄∞. Moreover the eigenvalue takes the form

    λ = −iω(n − m).    (70)
Furthermore, the corresponding left eigenstates, with L† σ_{mn} = iω(n − m) σ_{mn}, are given by σ_{mn} = (A′)^m σ_0 ((A′)†)^n, where A′ is also a strong dynamical symmetry and σ_0 = 1.
Proof. We will make use of [156], [139] and [157]. The asymptotic subspace of the Liouvillian As(H) is defined as a subspace of the space of linear operators B(H) such that all initial states ρ(0) in the long-time limit end up in ρ(t → ∞) ∈ As(H). The projector P (P 2 = P = P † ) to the corresponding non-decaying part of the Hilbert space H is uniquely defined [139,157] as, for all ρ(t → ∞) ∈ As(H),
    ρ(t → ∞) = P ρ(t → ∞) P,   tr(P) = max_{ρ(t→∞)} rank{ρ(t → ∞)}.    (71)
It therefore follows that if there is a full-rank ρ̄∞ then P = 1.
From the proof of Proposition 2 of [139] and Theorem 4 (Eqs. (2.39)-(2.40)) of [157] we know that for a left eigenmode with purely imaginary eigenvalue, A† L = −iω A†, we have

    [P H P, P A P] = ω P A P,   [P L_µ P, P A P] = [P L†_µ P, P A P] = 0.    (72)
This reduces to

    [H, A] = ωA,   [L_µ, A] = [L†_µ, A] = 0,    (73)

since P = I. The left eigenmode A† is also an eigenmode of the dual map T†_t = exp(L†t). Since A† corresponds to a peripheral eigenvalue of T†_t, by Lemma 3 of [156] the corresponding right eigenmode with eigenvalue iω is ρ = Aρ̄∞. By the same Lemma, to every oscillating coherence Aρ∞ there corresponds a left eigenvector of the form Aρ∞ ρ̄∞^{−1} = A′, which must also satisfy the conditions (73). We therefore find that we can write ρ = A′ρ∞, where A′ satisfies the conditions of (73). We now have the same criteria as those given in [54] and the statement of the theorem follows. The converse was shown in [54]. It is a straightforward calculation to show that σ_{nm} = A^n (A†)^m is a left eigenmode with the desired eigenvalue.
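A minimal numerical illustration of a strong dynamical symmetry (our own toy model, not from the paper): qubit 1 evolves unitarily while qubit 2 is damped, so A = σ⁻ ⊗ I satisfies [H, A] = −ωA and commutes with the jump operator and its adjoint; Aρ∞ is then an oscillating coherence:

```python
import numpy as np

# Strong dynamical symmetry (our own toy model): qubit 1 unitary, qubit 2
# damped.  A = sm (x) I obeys [H, A] = -w A and [L, A] = [L^dag, A] = 0,
# so A rho_inf is an oscillating coherence with eigenvalue +i*w.
w, g, gam = 1.0, 0.7, 0.5
I2 = np.eye(2)
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], complex)

H = 0.5 * w * np.kron(sz, I2) + 0.5 * g * np.kron(I2, sz)
L = np.sqrt(gam) * np.kron(I2, sm)
A = np.kron(sm, I2)

assert np.allclose(H @ A - A @ H, -w * A)              # [H, A] = -w A
assert np.allclose(L @ A - A @ L, 0)                   # [L, A] = 0
assert np.allclose(L.conj().T @ A - A @ L.conj().T, 0) # [L^dag, A] = 0

def liouvillian(H, Ls):
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

S = liouvillian(H, [L])
evals = np.linalg.eigvals(S)

# rho_inf = |e><e| (x) |g><g| is a NESS; A rho_inf oscillates at +i*w
rho_inf = np.kron(np.diag([1.0, 0.0]), np.diag([0.0, 1.0])).astype(complex)
v = (A @ rho_inf).reshape(-1)
assert np.allclose(S @ rho_inf.reshape(-1), 0)
assert np.allclose(S @ v, 1j * w * v)
assert np.min(np.abs(evals - 1j * w)) < 1e-10
```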
D Proof of Corollary 2
Corollary. The following conditions are sufficient for robust and stable synchronization between subsystems j and k with respect to the local operator O:

• The operator P_{j,k} exchanging j and k is a weak symmetry [159] of the quantum Liouvillian L (i.e. [L, P̂_{j,k}] = 0)
• There exists at least one A fulfilling the conditions of Th. 1
• For at least one A fulfilling the conditions of Th. 1 and the corresponding ρ∞ we have tr[O_j Aρ∞] ≠ 0
• All such A fulfilling the conditions of Th. 1 also satisfy [P_{j,k}, A] = 0

Proof. In the long-time limit we have
    lim_{t→∞} ⟨O_j⟩(t) = tr[ O_j Σ_n c_n A_n ρ_{∞,n} e^{iλ_n t} ],    (74)
where c_n = ⟨σ_n|ρ(0)⟩ for an initial state ρ(0), which we assume to be arbitrary, and the A_n are all operators satisfying the conditions of Th. 1, with Lρ_{∞,n} = 0, forming a complete basis for the eigenspaces with purely imaginary eigenvalues. Clearly,
    lim_{t→∞} ⟨O_j⟩(t) = tr[ O_j P_{j,k}² Σ_n c_n A_n ρ_{∞,n} e^{iλ_n t} ],    (75)
and we just need to commute the P_{j,k} to the left and use O_k = P_{j,k} O_j P_{j,k}, which results in lim_{t→∞} ⟨O_j⟩(t) = lim_{t→∞} ⟨O_k⟩(t) for any initial state. By assumption, [A_n, P_{j,k}] = 0, ∀n. We just need to show that [ρ_{∞,n}, P_{j,k}] = 0, ∀n. To do so we use the fact that P_{j,k} is a weak symmetry of the Liouvillian, [P̂_{j,k}, L] = 0. The two eigenvalues of P_{j,k} are +1 and −1, and it has orthogonal eigenspaces. This implies that the Liouvillian is block reduced to the eigenspaces
B 1 = {+1, +1} ⊕ {−1, −1} and B 2 = {+1, −1} ⊕ {−1, +1}
corresponding to the +1 and −1 eigenvalues of P̂_{j,k}, respectively [159]. The only possible eigenmodes with eigenvalue 0, as per Theorem 18 of Baumgartner and Narnhofer [138], are either operators that are diagonal in some basis or stationary phase relations. The diagonal subspace B_1 contains all matrices that are diagonal. To see this consider ρ_2 ∈ B_2. It can be written as ρ_2 = P_{+1} ρ_a P_{−1} + P_{−1} ρ_a P_{+1}, where

    P_± = (1/2)(I ± P_{j,k})    (76)

are the corresponding projectors, which commute, [P_{+1}, P_{−1}] = 0, and are mutually diagonalizable. We have either P_{+1}|ψ_α⟩ = |ψ_α⟩, P_{−1}|ψ_α⟩ = 0 or the reverse. Therefore ⟨ψ_α| ρ_2 |ψ_α⟩ = 0. For ρ_{∞,n} ∈ B_1 we immediately have [ρ_{∞,n}, P_{j,k}] = 0. All the eigenmodes with eigenvalue 0 which belong to B_2 are stationary phase relations, because they only contain off-diagonal elements. It remains to show that all stationary phase relations ρ_{∞,n} ∈ B_2 either satisfy [ρ_{∞,n}, P_{j,k}] = 0 or vanish. This directly follows from Proposition 16 of Baumgartner and Narnhofer [138], which states that the existence of a stationary phase relation implies the existence of a unitary U such that [H, U] = [L_µ, U] = 0, ∀µ. However, such a U must intertwine between the +1 and −1 subspaces of P_{j,k}, with projectors P_{+1} and P_{−1} respectively, i.e. U P_{+1} = P_{−1} U, and therefore [U, P_{j,k}] ≠ 0. However, U satisfies all the conditions for an A operator from Th. 1, and by assumption such an A does not exist. Thus there are no non-zero ρ_{∞,n} ∈ B_2.
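The weak-symmetry condition [P̂_{j,k}, L] = 0 can be tested directly at the superoperator level. A sketch with a permutation-symmetric two-qubit model of our own (all parameters and operators here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Weak-symmetry check [P_hat, L_hat] = 0 for a permutation-symmetric
# two-qubit model (our own illustrative example):
#   H = (w/2)(sz1 + sz2) + eps*sx1*sx2,   L = sqrt(g)*(sm1 + sm2)
w, eps, g = 1.0, 0.3, 0.2
I2 = np.eye(2)
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], complex)
sm = np.array([[0, 0], [1, 0]], complex)

H = 0.5 * w * (np.kron(sz, I2) + np.kron(I2, sz)) + eps * np.kron(sx, sx)
L = np.sqrt(g) * (np.kron(sm, I2) + np.kron(I2, sm))

# SWAP operator exchanging the two qubits: |i,j> -> |j,i>
P = np.zeros((4, 4))
for i in (0, 1):
    for j in (0, 1):
        P[2 * j + i, 2 * i + j] = 1.0
assert np.allclose(P @ H @ P, H) and np.allclose(P @ L @ P, L)

def liouvillian(H, Ls):
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

S = liouvillian(H, [L])
Psup = np.kron(P, P)        # superoperator rho -> P rho P (row-major vec)
assert np.allclose(S @ Psup, Psup @ S)    # weak symmetry holds
```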
E Proof of Corollary 3
Corollary. The following conditions are sufficient for robust and stable synchronization between subsystems j and k with respect to the local operator O:

• The Liouvillian L is unital (L(1) = 0)
• There exists at least one A fulfilling the conditions of Th. 1
• For at least one A fulfilling the conditions of Th. 1 and the corresponding ρ∞ we have tr[O_j Aρ∞] ≠ 0
• All such A fulfilling the conditions of Th. 1 also satisfy [P_{j,k}, A] = 0

Furthermore, if all A are translationally invariant, [A, P_{j,j+1}] = 0, ∀j, then every subsystem is robustly and stably synchronized with every other subsystem.

Proof. We return to Eq. (75) and wish to show that [ρ_{∞,n}, P_{j,k}] = 0, but without the assumption of weak symmetry. If L is unital, then all A_n satisfy the conditions of Th. 2 and it therefore straightforwardly follows that A_n A†_n is a strong symmetry [159,205] of the Liouvillian, and that the projectors onto the eigenspaces of A_n A†_n, P_{n,a}, are stationary states, L P_{n,a} = 0. By assumption [P_{j,j+1}, A_n A†_n] = [P_{n,a}, P_{j,j+1}] = 0. By Theorem 3 of [138] the P_{n,a} are projectors onto enclosures. The existence of more minimal enclosures that do not satisfy the symmetry requirement would imply that there are more operators A (as projectors onto enclosures satisfy the conditions of Th. 1 trivially), and this cannot happen by the assumptions of the corollary. Therefore, the P_{n,a} are projectors onto minimal enclosures. By Theorem 18 of [138] all the minimal diagonal blocks P_{n,a} ρ P_{n,a} contain a unique stationary state, which must be, up to a constant, P_{n,a}. The absence of stationary phase relations and oscillating coherences that do not commute with P_{j,k} follows from the fact that, by Theorem 18 of [138], the unique stationary state in each off-diagonal block is of the form U P_{n,a}. This intertwiner, as in the proof of Cor. 2, satisfies the conditions for an A and therefore must commute with P_{j,k} by assumption.
F Proof of Theorem 3
Theorem. Let L(s) generate a CPTP map and depend analytically on s, with Taylor series

    L(s) = L + sL_1 + s²L_2 + O(s³).    (77)
Suppose there exists some ω ∈ R \ {0} and ρ_0 such that Lρ_0 = iωρ_0. Then for s sufficiently small, λ(s) is an eigenvalue of L(s) given by
    λ(s) = iω + λ_1 s + λ_2 s^{1+1/p} + o(s^{1+1/p}),    (78)
where λ 1 is purely imaginary, p is some positive integer and λ 2 has non-positive real part.
Proof. We first note that if λ(0) is a non-degenerate eigenvalue, then it is known that λ(s) is analytic; thus, the above result is trivial. In fact, in this case, we also find that the eigenstate ρ(s) also depends analytically on s.
For the case where iω is an m-fold degenerate eigenvalue of L_0, we define the 'ω-rotating stable manifold' (ω-RSM) as the eigenspace spanned by the m eigenmodes corresponding to iω. We next prove the following lemma.

Lemma 1. The purely imaginary eigenvalues of L have one-dimensional Jordan blocks.
Proof. Let the CPTP map generated by L be T_t = e^{tL}. Note that the eigenspace of L with eigenvalue iω is also an eigenspace of T_t with eigenvalue e^{itω}, which has unit modulus. Thus by Proposition 6.1 of [206] the Jordan blocks corresponding to e^{itω} in T_t are all one-dimensional. It follows that the Jordan blocks of L corresponding to iω are also all one-dimensional.
Since iω is a semi-simple eigenvalue, we can directly apply Theorem 2.3 of [207] to write

    λ(s) = iω + λ_1 s + λ_2 s^{1+1/p} + o(s^{1+1/p})    (79)
for some integer p ≥ 1. Finally we observe that since L(s) always generates a CPTP map, we must have Re(λ(s)) ≤ 0 for all s ∈ R, and thus we can immediately deduce that λ_1 is purely imaginary and λ_2 has non-positive real part.
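The theorem can be observed numerically; the following sketch reuses our own toy model from Appendix A (H₀ = ω|1⟩⟨1|, L₀ = |0⟩⟨2|, with the eigenmode |1⟩⟨0| at −iω) and perturbs both the Hamiltonian and the jump operator. The tracked eigenvalue keeps a non-positive real part and its first-order shift is purely imaginary:

```python
import numpy as np

# Perturbing the toy model of Appendix A (our own example): H0 = w|1><1|,
# L0 = |0><2| has the eigenmode |1><0| with eigenvalue -i*w.  We add a
# Hermitian Hamiltonian perturbation s*V and a jump perturbation s*M and
# track the eigenvalue lambda(s) connected to -i*w.
w = 1.0
H0 = np.diag([0.0, w, 0.0]).astype(complex)
L0 = np.zeros((3, 3), complex); L0[0, 2] = 1.0
V = np.diag([0.0, 0.5, 0.0]).astype(complex)
M = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], complex)

def liouvillian(H, Ls):
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

def lam(s):
    """Eigenvalue of L(s) continuously connected to -i*w."""
    ev = np.linalg.eigvals(liouvillian(H0 + s * V, [L0 + s * M]))
    return ev[np.argmin(np.abs(ev - (-1j * w)))]

s = 1e-3
slope = (lam(s) - lam(0.0)) / s
assert lam(s).real <= 1e-12              # CPTP generator: Re(lambda) <= 0
assert abs(slope.real) < 0.05            # lambda_1 is (numerically) imaginary
# first-order perturbation theory gives lambda_1 = -i(V_11 - V_00) = -0.5i
assert abs(slope.imag + 0.5) < 0.05
```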
G Proof of Corollary 4
Corollary. For perturbations L_1 which explicitly break the exchange symmetry, and thus synchronization, between sites j and k, i.e. P̂_{j,k} L_1 P̂_{j,k} = −L_1, the frequency of synchronization ω is stable to next-to-leading order in s, i.e. λ_1 = 0.
Proof. We begin by noting that the projector onto the λ(s) group is differentiable to leading order (83). We may write the eigenmode equation L(s)ρ(s) = λ(s)ρ(s) at first order as
    L P̂_1 + L_1 P̂_0 = λ_0 P̂_1 + λ_1 P̂_0,    (80)
which we multiply by the corresponding left eigenvector ⟨σ(0)| of L to obtain

    λ_1 = ⟨σ(0)| L_1 P̂_0 ⟩,    (81)

which is clearly zero, as P̂_0 ∈ B(B_1), L_1 ∈ B(B_{−1}) and ⟨σ(0)| ∈ B_1, where the indices of B_p denote the p = ±1 eigenspaces of the exchange superoperator P̂_{j,k}.
H Proof of Corollary 5
Corollary. Suppose our Liouvillian L(s) has Hamiltonian H(s) and jump operators L_µ(s).
If the perturbation is such that
    H(s) = H^{(0)} + sH^{(1)} + O(s²),   L_µ(s) = L^{(0)}_µ + O(s²),    (82)
then we find that p = 1 in the result of Theorem 3.
Proof. We follow the reduction method of [207] and [166]. From [207] we recall that the eigenvalues perturbed away from ω (called the ω-group) are not in general analytic. They are instead branches of analytic functions and the corresponding eigenstates may contain poles. However, the projection onto the span of the ω-group eigenstates is analytic, and thus the restriction of L(s) to this subspace is also analytic. Let us write this projection operator as P̂(s); the restricted Liouvillian is then given by [L(s)]_{P̂(s)} = P̂(s) L(s) P̂(s). We can use the result from Section II.2 of [207] to write

    P̂(s) = P̂ + sP̂_1 + O(s²),    (83)

where

    P̂_1 = −Ŝ L_1 P̂ − P̂ L_1 Ŝ.    (84)
Here P̂ is the zero-order projector onto the ω-RSM and Ŝ is the reduced resolvent of L(s) at iω, which obeys Ŝ P̂ = P̂ Ŝ = 0 and Ŝ(L − λI) = (L − λI)Ŝ = I − P̂. We also write (as on page 78 of [207])

    [L(s)]_{P̂(s)} = iω P̂ + s[L_1]_{P̂} + O(s²).    (85)
Since L_1 = −i[H^{(1)}, ·], we can see that L_1 generates unitary dynamics; thus its projection onto the ω-RSM, [L_1]_{P̂}, also generates unitary dynamics and has purely imaginary eigenvalues which are semi-simple. By the reduction arguments in Section II.3 of [207], we can deduce that the eigenvalue λ(s) is twice differentiable,
    λ(s) = iω + iλ_1 s + λ_2 s² + o(s²),    (86)

corresponding to p = 1 in Thm. 3.
I Proof of Theorem 4
Theorem. If there is a full-rank stationary state ρ̄∞ (i.e. no zero eigenvalues, invertible) and the commutant {H, L_µ, L†_µ}′ = c1, then there are no purely imaginary eigenvalues of L and hence no stable synchronization. If this holds at leading order, there are no almost purely imaginary eigenvalues and no metastable synchronization.
Proof. Suppose that there exists a state ρ with purely imaginary eigenvalue iω. Since ρ̄∞ has full rank, by Thm. 2 we can write ρ = Aρ∞, where [H, A] = ωA, [L_µ, A] = [L†_µ, A] = 0 and L[ρ∞] = 0. We then see that A†A and AA† both belong to the commutant {H, L_µ, L†_µ}′, and so A†A = c_1 I, AA† = c_2 I.    (87)
Taking the trace trivially gives
c_1 = c_2 = c. Notice that c ≠ 0, since c = (1/d) Tr(A†A) = (1/d) ‖A‖², and so c = 0 would correspond to A = 0. Now we compute
    Tr(A† [H, A]) = Tr(ω A† A)    (88a)
    Tr(A† H A − A† A H) = ω Tr(A† A)    (88b)
    0 = ω Tr(A† A)    (88c)
Hence ω = 0. Consequently, no non-zero purely imaginary eigenvalue can exist.
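As a quick numerical illustration of the theorem (our own minimal example, not from the paper): a qubit with H = (ω/2)σᶻ and jump operator L = √γ σˣ has the full-rank steady state I/2 and trivial commutant {σᶻ, σˣ}′ = c1, and indeed its Liouvillian spectrum contains no non-zero purely imaginary eigenvalues:

```python
import numpy as np

# Toy illustration of Theorem 4 (our own example): H = (w/2)*sz, L = sqrt(g)*sx.
# The steady state I/2 is full rank and the commutant of {sz, sx} is trivial,
# so no non-zero purely imaginary Liouvillian eigenvalues can exist.
w, g = 1.0, 0.2
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], complex)
H = 0.5 * w * sz
L = np.sqrt(g) * sx

def liouvillian(H, Ls):
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

S = liouvillian(H, [L])
evals = np.linalg.eigvals(S)
assert np.allclose(S @ (0.5 * np.eye(2)).reshape(-1), 0)   # I/2 is stationary
# every eigenvalue is 0 or has strictly negative real part
assert all(abs(ev) < 1e-10 or ev.real < -1e-10 for ev in evals)
```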
J Conditions For Meta-stable Synchronization
In Sec. 2.2 it was stated that in the long-time limit the dynamics of an observable Ô are described by

    ⟨Ô⟩(t) = Σ_k e^{tλ_k} ⟨Ô|ρ_k⟩ ⟨σ_k|ρ_0⟩.    (89)
This is because the purely imaginary eigenvalues of L are semi-simple and thus have trivial Jordan normal form. Since we consider a system of finite size, there is also some µ > 0 such that all eigenvalues λ with non-zero real part have Re(λ) < −µ. This Liouvillian gap determines the time period during which transient dynamics occur, and after which the system is synchronized, i.e. the system will be synchronized for t ≫ 1/µ provided the relevant criteria of Cor. 2 or 3 apply. When we introduce perturbations, writing
    L(s) = L + sL_1 + s²L_2 + O(s³),    (90)
the eigenvalues vary continuously with the perturbative parameter s, and as a result so too does the gap µ. Provided the perturbation is small enough that the gap does not close, i.e. µ > 0, in the long-time limit we need only consider perturbations to the eigenvalues of L which lie on the imaginary axis. Since these are semi-simple, we may diagonalise L over this subspace. Now using the Baker-Campbell-Hausdorff (BCH) formula we can write

    e^{tL(s)} = e^{tL} e^{tsL_1 + o(st)},    (91)

where e^{tL} is diagonalisable over the subspace of purely imaginary eigenvalues. Thus for t < O(1/s) the dynamics are determined by L, and we can apply the results of Cor. 2 & 3 to L to determine whether meta-stable synchronization occurs.
If the perturbation is of the form
    H(s) = H^{(0)} + sH^{(1)} + O(s²),   L_µ(s) = L^{(0)}_µ + O(s²),    (92)
then by Cor. 5 the eigenvalues of L + sL_1 are semi-simple. In this case we can proceed as above and use the BCH formula to write

    e^{tL(s)} = e^{t(L + sL_1)} e^{ts²L_2 + o(s²t)},    (93)
where now e^{t(L + sL_1)} is diagonalisable. Consequently, for t < O(1/s²) the dynamics are determined by L + sL_1, and we can apply the results of Cor. 2 & 3 to L + sL_1 to determine whether meta-stable synchronization occurs.
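The separation of timescales described above can be seen explicitly in a toy model of our own (the two-qubit strong-dynamical-symmetry example used earlier in these appendices), perturbed by weak extra damping √s σ⁻ on qubit 1. In this particular model the oscillating mode happens to remain an exact eigenmode, with λ(s) = iω − s/2, so oscillations survive for t ≪ 2/s:

```python
import numpy as np

# Metastability sketch (our own toy model): the two-qubit strong-dynamical-
# symmetry example (eigenvalue i*w) perturbed by weak extra damping on
# qubit 1.  Here the oscillating mode stays an exact eigenmode and
# lambda(s) = i*w - s/2, so oscillations persist for times t << 2/s.
w, g, gam = 1.0, 0.7, 0.5
I2 = np.eye(2)
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], complex)
H = 0.5 * w * np.kron(sz, I2) + 0.5 * g * np.kron(I2, sz)
L1 = np.sqrt(gam) * np.kron(I2, sm)          # damping on qubit 2 (unperturbed)

def liouvillian(H, Ls):
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

def lam(s):
    """Eigenvalue continuously connected to i*w under the perturbation."""
    Ls = [L1, np.sqrt(s) * np.kron(sm, I2)]  # weak extra damping on qubit 1
    ev = np.linalg.eigvals(liouvillian(H, Ls))
    return ev[np.argmin(np.abs(ev - 1j * w))]

assert abs(lam(0.0) - 1j * w) < 1e-10            # exactly imaginary at s = 0
s = 1e-3
assert abs(lam(s) - (1j * w - 0.5 * s)) < 1e-8   # decay rate s/2: metastable
```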
K Application of our Theory to Non-Markovian Systems
This appendix indicates how our theory can be applied to systems where the environment is not Markovian. Let the full system-environment Hamiltonian be H tot , and suppose we can partition the full system-environment into (i) The system we are interested in, (ii) a part of the environment that can be modelled as Markovian and (iii) a part of the environment which is non-Markovian. We can write this as
    H_tot = H_S̃ + H_E + H_S̃,E,    (94)
where S̃ contains both the system and the non-Markovian environment and E contains exclusively the Markovian portion of the environment. Labeling the reduced density operator for the system and the non-Markovian environment as ρ̃, we can trace out the Markovian environment to obtain the Lindblad equation for ρ̃,
    dρ̃/dt = −i[H_S̃, ρ̃] + Σ_µ ( 2 L_µ ρ̃ L†_µ − {L†_µ L_µ, ρ̃} ).    (95)
We can now apply the results of the main text to determine that the long-time dynamics of ρ̃ are given by

    ρ̃(t) → Σ_k e^{iω_k t} c_k Ã_k ρ̃_{k,∞},    (96)
where the c_k are constants determined by the initial conditions and ω_k, Ã_k and ρ̃_{k,∞} are as in Thm. 1. In this form we can study synchronization by directly applying our results to the operators Ã_k and ρ̃_{k,∞}. This is illustrated in the example in Sec. 6.1.
We can also trace out the non-Markovian environment to obtain the reduced density matrix of the system as
    ρ(t) → Σ_k e^{iω_k t} c_k Tr_Ẽ[ Ã_k ρ̃_{k,∞} ].    (97)
Unfortunately, it is not possible to draw general conclusions directly about the dynamics of the system without knowing the details of the non-Markovian bath and applying our results to the combined system. If the fine details of the non-Markovian bath are inaccessible or too difficult to study, our theory is not directly applicable in general; this would require a case-by-case analysis. Other, more systematic approaches to non-Markovian generalisations of the Lindblad equation can be found in [208,209].
L Analysis of Additional Examples of Quantum Synchronization
To further demonstrate our theory, we now provide additional analysis of previously studied examples of quantum synchronization. We will focus on the model of two weakly coupled, driven-dissipative spin-1 systems as previously studied in a synchronization setting by [37].
In the absence of external interactions, two coupled spins, labelled A and B, evolve according to the Hamiltonian
    H = ω_A S^z_A + ω_B S^z_B + (iε/2)( S^+_A S^−_B − S^+_B S^−_A ),    (98)

where for convenience we define the detuning ∆ = ω_A − ω_B. We then consider the independent interactions of each spin with some external bath, which in the absence of spin-spin interactions drives each spin towards its own non-equilibrium steady state. These system-bath interactions are modeled by the Lindblad operators

    L_{u,j} = √(γ^u_j) S^+_j S^z_j,   L_{d,j} = √(γ^d_j) S^−_j S^z_j,   j = A, B.    (99)
Using the theory we have developed, we will analyze three examples which were shown by [37] to exhibit quantum synchronization. The first example considers driving the two spins in opposite directions without any detuning, the second example introduces detuning but also takes the quantum Zeno limit of large driving, and the third considers driving two detuned spins in opposite directions.
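For reference, the model of Eqs. (98)-(99) can be set up numerically as follows. This is a sketch under two stated assumptions: the jump-operator prefactors are taken as √γ (so that γ is a rate), and the generator uses the standard GKLS convention (the paper's Eq. (95) carries an extra factor of 2 in the dissipator, which only rescales the rates). The parameter values are placeholders:

```python
import numpy as np

# Spin-1 operators and the Liouvillian of the two-spin model, Eqs. (98)-(99).
# Assumptions: sqrt(gamma) prefactors on the jump operators; standard GKLS
# convention; parameter values below are placeholders.
wA, wB, eps = 1.0, 1.2, 0.05
gu = {"A": 1.0, "B": 0.1}          # gamma_up for each site
gd = {"A": 0.1, "B": 1.0}          # gamma_down for each site

Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1).astype(complex)   # S+
Sm = Sp.conj().T                                               # S-
I3 = np.eye(3)
op = lambda o, site: np.kron(o, I3) if site == "A" else np.kron(I3, o)

H = wA * op(Sz, "A") + wB * op(Sz, "B") \
    + 0.5j * eps * (op(Sp, "A") @ op(Sm, "B") - op(Sp, "B") @ op(Sm, "A"))
Ls = [np.sqrt(gu[s]) * op(Sp @ Sz, s) for s in "AB"] \
   + [np.sqrt(gd[s]) * op(Sm @ Sz, s) for s in "AB"]

def liouvillian(H, Ls):
    """GKLS generator acting on the row-major vectorised density matrix."""
    I = np.eye(H.shape[0])
    S = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        S += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return S

S = liouvillian(H, Ls)                                  # 81 x 81 superoperator
assert np.allclose(H, H.conj().T)                       # H is Hermitian
assert np.allclose(np.eye(9).reshape(-1) @ S, 0)        # trace preservation
assert np.min(np.abs(np.linalg.eigvals(S))) < 1e-8      # a steady state exists
```

The spectrum of this superoperator is what is analysed in the three examples below.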
L.1 Inverted Limit Cycle: Metastable ultra-low frequency anti-synchronization
We first analyse the so called inverted limit cycle. We take ∆ = 0 and invert the driving on the two spins so that
γ u A = γ d B = γ, γ d A = γ u B = µ,(100)
and consider the limiting case µ → 0. As in [37] the system is initialised in the state ρ(0) = ρ^{(0)}_A ⊗ ρ^{(0)}_B, where ρ^{(0)}_j is the NESS of spin j in the absence of spin-spin couplings. Thus we can consider this as a quench of the system where the weak spin-spin interactions are instantaneously turned on. We find that in the absence of spin-spin interactions, i.e. ε = 0, and with µ = 0, the independent spins have degenerate stationary states
    ρ^{(0)}_A = p_A |0⟩_A⟨0|_A + (1 − p_A)|1⟩_A⟨1|_A,   ρ^{(0)}_B = p_B |0⟩_B⟨0|_B + (1 − p_B)|−1⟩_B⟨−1|_B,    (101)
for p_j ∈ [0, 1]. This degeneracy is lifted as soon as µ becomes strictly positive, and we find p_A = p_B = 1 in Eq. (101). Thus, to avoid this degeneracy and simplify discussions, in the following we consider µ infinitesimally small but still strictly positive. Consequently we consider both µ and ε as perturbative parameters, while ω, γ remain O(1).
To understand the evolution of the system it is sufficient to consider those eigenstates of L which are excited by ρ(0), and in particular which of these have eigenvalues with small real part compared to ω and γ. We find that only 4 eigenmodes with small real parts are excited, with corresponding eigenvalues
    λ = 0,   −2µ + O(µ², ε², µε),   −µ ± 2iε + O(µ², ε², µε).    (102)
Of these, the relevant eigenvalues for synchronized oscillations are λ_± = −µ ± 2iε + O(µ², ε², µε), since they are the only ones with a non-zero imaginary part at leading order. We conclude that this is an example of ultra-low frequency synchronization, since both the real and imaginary parts of λ_± are O(ε, µ). The consequences of this are shown in Figure 6a, where we consider the S^z_j observables. We see that the observable appears almost stationary on short time scales, t ∼ O(1). When, in Figure 6b, we consider significantly longer timescales, however, we see the behaviour which we consider metastable synchronization. We further find that the decay rate is proportional to ε² at the next lowest order, as indicated in Sec. 4.1. Consequently, there is only one power of ε between the decay rate and the frequency of the signal, unless ε is sufficiently small that µ > ε², in which case the decay rate is proportional to µ. This example demonstrates why we generally discount ultra-low frequency synchronization, since it is unfeasible to observe experimentally.
L.2 Inverted Limit Cycle: Anti-synchronization in the Quantum Zeno Limit
We again consider the same system as above, but now detuned, ω_A ≠ ω_B, and with strong dissipation, γ^u_A = γ^d_B = γ ≫ ω_j, ε. We find that the dissipation operator
    D[ρ] = 2 S^+_A S^z_A ρ S^z_A S^−_A + 2 S^−_B S^z_B ρ S^z_B S^+_B − {S^z_A S^−_A S^+_A S^z_A, ρ} − {S^z_B S^+_B S^−_B S^z_B, ρ}    (103)
has a stationary subspace with 16-fold degeneracy. Thus, when we lift the degeneracy of this subspace by introducing comparatively weak unitary dynamics, we introduce several eigenvalues with O(1) imaginary parts and O(1/γ) real parts. This leads to metastable dynamics on timescales relevant to the Hamiltonian. In Fig. 7 we see that for the initial state ρ(0) = |0⟩_A⟨0|_A ⊗ |0⟩_B⟨0|_B there are clean, synchronized oscillations in the S^z observable. When initialised in this particular state, this metastable anti-synchronization occurs for any ω, ε ≪ γ.

Figure 6: Evolution of the S^z observable on sites A (blue) and B (red) for the spin-1, inverted limit cycle model with ω = γ = 1, µ = 0.0001, ε = 0.01. In (a) we see that over short time periods the observables gradually move away from 0, while in (b) we see that over much longer time scales decaying oscillations can be measured. As per (102) the frequency of these oscillations is 2ε.
Unlike the previous example, we find that this meta-stable anti-synchronisation is robust to initialising the system in arbitrary states. This can be seen by noting that the eigenstates of the dissipation operator D are coherences between the states |0, 0⟩, |1, −1⟩, |0, −1⟩ and |1, 0⟩.
Under strong dissipation, the system rapidly decays onto the space spanned by these eigenstates before the slower dynamics take over. Since this space is invariant under the transformation S^z_A ←→ −S^z_B, all dynamics within this space will satisfy ⟨S^z_A⟩(t) = −⟨S^z_B⟩(t) regardless of the initial state. However, we find that the system is not completely anti-synchronized: if we instead measure the observable S^x_j, we find that the measurements on the two sites are uncorrelated.

Figure 7: Metastable anti-synchronisation of the inverted limit cycle model in the quantum Zeno limit for γ = 100, µ = 0, ω_A = 0.5, ω_B = 1.5 and ε = 2, when the system is initialised in ρ(0) = |0⟩_A⟨0|_A ⊗ |0⟩_B⟨0|_B. This is a consequence of the unitary dynamics lifting the degeneracy in the stationary subspace of the dissipation. Comparison with Fig. 6 shows that in the Zeno limit the oscillations are now on timescales relevant to the Hamiltonian.
L.3 Pure Gain or Loss: Stable Limit Cycles, but no Robust, Stable Synchronization
We now set γ^d_j = 0 and γ^u_A, γ^u_B ≠ 0. In that case we find three proper (density matrix) stationary states.
Solving the conditions of Th. 1, we find the operators A,

    A_1 = a_1 |−1, −1⟩⟨−1, 0| + i(a_2 − a_3) |−1, −1⟩⟨0, −1|,    (108)
    A_2 = A_1^T,    (109)

where a_1 = 2ε, a_2 = ω_A − ω_B and a_3 = √(4ε² + (ω_A − ω_B)²). The corresponding frequencies are ω_1 = ½(ω_A + ω_B − a_3), ω_2 = ½(ω_A + ω_B + a_3) and ω_3 = a_3. There is no permutation or generalized symmetry between sites A and B, and so although we do have persistent oscillations and a limit cycle, the sites are not robustly synchronized, as seen in Fig. 8.

Figure 8: Evolution of the S^x observable on sites A (blue) and B (red) for the spin-1, pure gain model. We have non-zero detuning and interaction, ∆, ε ≠ 0, and asymmetric driving, γ^u_A ≠ γ^u_B. Initialising the system in a random state, we see that while persistent oscillations occur, the two sites do not synchronize. This can be understood through the absence of a generalised symmetry between sites A and B.
M A General Framework for Constructing Models which Exhibit Quantum Synchronization
In Sec. 6.2 we studied models of many-body systems which exhibit quantum synchronization. In particular, the Fermi Hubbard model with spin-agnostic heating hinted at a general framework for constructing more elaborate models which exhibit this behaviour. To understand this, we make the following observations:
(i) In the absence of a magnetic field and on-site potential, B_j = ε_j = 0, the Hubbard model has the symmetry group G = SU(2) × SU(2)/Z_2, coming from the independent spin and η symmetries [210].
(ii) Further, the representations of these symmetries are permutation invariant, that is [S^α, P_{j,k}] = [η^α, P_{j,k}] = 0, α = x, y, z.

(iii) Introduction of the magnetic field, BS^z, which corresponds to the unique element of the Cartan subalgebra of the spin-SU(2) symmetry, breaks the spin symmetry. Consequently, the remaining elements S^+, S^− of the spin-su(2) algebra become dynamical symmetries.

(iv) Choosing a unital set of Lindblad operators L_µ from the complementary η symmetry guarantees that [L_µ, S^±] = 0, so that the spin operators are strong dynamical symmetries as required for Thm. 2. Also, since the chosen Lindblad operators do not all commute with the η^α operators, there can be no further strong dynamical symmetries. Finally, since these strong dynamical symmetries have complete permutation invariance, the synchronization is robust, as per Cor. 3.
These principles can be applied more widely to models with more elaborate symmetry structures in order to guarantee quantum synchronization, such as the generalized SU(N) models in Sec. 6.3.
Figure 3: Anti-synchronizing two spin-1/2 (sites 2, 3) via a non-Markovian bath. The bath is the site-1 spin and the L = γσ^−_1 loss term (the blue box).
Funding information This work was supported by the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement no. 319286, Q-MAC. The work of DJ was partly supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via Research Unit FOR 2414 under project number 277974659 and via the Cluster of Excellence 'CUI: Advanced Imaging of Matter'-EXC 2056-under project number 390715994.
arXiv:2103.01808v5 [quant-ph] 11 Jan 2022
SciPost Physics Submission

Contents

1 Introduction to Quantum Synchronization
1.1 An Overview of Synchronization
1.1.1 General Synchronization
1.1.2 Quantum Synchronization
1.2 Overview of the Paper, Summary of Key Results and Relation to Previous Work
2 Definitions and Methods
2.1 Definitions of Quantum Synchronization
2.2 Lindblad Formalism
3 Imaginary Eigenvalues of Quantum Liouvillians and Stable Synchronization
3.1 Necessary and Sufficient Conditions for Purely Imaginary Eigenvalues and the Structure of the Oscillating Coherent Eigenmodes
3.2 Stable Quantum Synchronization
3.3 Multiple Frequencies and Commensurability
3.4 Extensions to Weaker Definitions of Synchronization
4 Almost Purely Imaginary Eigenvalues and Metastable Quantum Synchronization
4.1 Ultra-Low Frequency Metastable Synchronization
4.2 Quantum Zeno Metastable Synchronization
4.3 Dynamical Metastable Synchronization
4.4 Analogy With Classical Synchronization
5 Proving absence of synchronization and persistent oscillations
6 Examples
6.1 Perfectly Anti-Synchronizing Two Spin-1/2's
6.2 Lattice Models with Translationally Invariant non-Abelian Symmetries: Stable and Metastable Synchronization
6.2.1 Fermi-Hubbard Model with Spin Agnostic Heating
6.3 Generalised Fermi-Hubbard Model with SU(N) Symmetry and Experimental Applications
7 Conclusions
A Proof of Theorem 1
B Proof of Corollary 1
C Proof of Theorem 2
D Proof of Corollary 2
E Proof of Corollary 3
F Proof of Theorem 3
G Proof of Corollary 4
H Proof of Corollary 5
I Proof of Theorem 4
J Conditions For Meta-stable Synchronization
K Application of our Theory to Non-Markovian Systems
L Analysis of Additional Examples of Quantum Synchronization
L.1 Inverted Limit Cycle: Metastable ultra-low frequency anti-synchronization
L.2 Inverted Limit Cycle: Anti-synchronization in the Quantum Zeno Limit
L.3 Pure Gain or Loss: Stable Limit Cycles, but no Robust, Stable Synchronization
M A General Framework for Constructing Models which Exhibit Quantum Synchronization
Metastable synchronisation of this form has also been previously referred to as transient synchronisation [44, 145].
The commutant of a set of linear operators on a Hilbert space A ⊂ B(H), denoted A′, is the set of O ∈ B(H) which commute with all A ∈ A. Note that all multiples of the identity trivially belong to the commutant of any set.
Acknowledgements

We thank C. Bruder and G. L. Giorgi for useful discussions.

Author contributions BB conceived the research and stated and proved the main theorems. CB stated and proved some of the theorems and wrote the majority of the manuscript. DJ suggested experimentally relevant examples. All authors contributed to discussions and writing of the manuscript.
References

D. Gerlich and J. Ellenberg, 4D imaging to assay complex dynamics in live specimens, Nature Cell Biology Suppl, S14-9 (2003), http://europepmc.org/abstract/MED/14562846.
U. Alon, An introduction to systems biology: design principles of biological circuits, CRC Press (2019), doi:10.1201/9781420011432.
G. Eliasson, Modeling the experimentally organized economy: Complex dynamics in an empirical micro-macro model of endogenous economic growth, Journal of Economic Behavior & Organization 16(1), 153 (1991), doi:10.1016/0167-2681(91)90047-2.
Vinayak, T. Prosen, B. Buča and T. H. Seligman, Spectral analysis of finite-time correlation matrices near equilibrium phase transitions, EPL (Europhysics Letters) 108(2), 20006 (2014), doi:10.1209/0295-5075/108/20006.
J. Fell and N. Axmacher, The role of phase synchronization in memory processes, Nature Reviews Neuroscience 12(2), 105 (2011), doi:10.1038/nrn2979.
D. Eroglu, J. S. W. Lamb and T. Pereira, Synchronisation of chaos and its applications, Contemporary Physics 58(3), 207 (2017), doi:10.1080/00107514.2017.1345844.
S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares and C. S. Zhou, The synchronization of chaotic systems, Physics Reports 366(1), 1-101 (2002), doi:10.1016/S0370-1573(02)00137-0.
L. M. Pecora and T. L. Carroll, Synchronization in chaotic systems, Phys. Rev. Lett. 64, 821 (1990), doi:10.1103/PhysRevLett.64.821.
S. Walter, A. Nunnenkamp and C. Bruder, Quantum synchronization of a driven self-sustained oscillator, Phys. Rev. Lett. 112, 094102 (2014), doi:10.1103/PhysRevLett.112.094102.
M. Xu, D. A. Tieri, E. C. Fine, J. K. Thompson and M. J. Holland, Synchronization of two ensembles of atoms, Phys. Rev. Lett. 113, 154101 (2014), doi:10.1103/PhysRevLett.113.154101.
S. Walter, A. Nunnenkamp and C. Bruder, Quantum synchronization of two van der Pol oscillators, Annalen der Physik 527(1-2), 131 (2015), doi:10.1002/andp.201400144.
T. Weiss, A. Kronwald and F. Marquardt, Noise-induced transitions in optomechanical synchronization, New Journal of Physics 18(1), 013043 (2016), doi:10.1088/1367-2630/18/1/013043.
T. E. Lee, C.-K. Chan and S. Wang, Entanglement tongue and quantum synchronization of disordered oscillators, Phys. Rev. E 89, 022913 (2014), doi:10.1103/PhysRevE.89.022913.
S. E. Nigg, Observing quantum synchronization blockade in circuit quantum electrodynamics, Phys. Rev. A 97, 013811 (2018), doi:10.1103/PhysRevA.97.013811.
W.-K. Mok, L.-C. Kwek and H. Heimonen, Synchronization boost with single-photon dissipation in the deep quantum regime, Phys. Rev. Research 2, 033422 (2020), doi:10.1103/PhysRevResearch.2.033422.
H. Eneriz, D. Rossatto, F. A. Cárdenas-López, E. Solano and M. Sanz, Degree of quantumness in quantum synchronization, Scientific Reports 9(1), 1 (2019), doi:10.1038/s41598-019-56468-x.
S. Siwiak-Jaszek, T. P. Le and A. Olaya-Castro, Synchronization phase as an indicator of persistent quantum correlations between subsystems, Phys. Rev. A 102, 032414 (2020), doi:10.1103/PhysRevA.102.032414.
A. Mari, A. Farace, N. Didier, V. Giovannetti and R. Fazio, Measures of quantum synchronization in continuous variable systems, Phys. Rev. Lett. 111, 103605 (2013), doi:10.1103/PhysRevLett.111.103605.
T. E. Lee and H. R. Sadeghpour, Quantum synchronization of quantum van der Pol oscillators with trapped ions, Phys. Rev. Lett. 111, 234101 (2013), doi:10.1103/PhysRevLett.111.234101.
N. Lörch, S. E. Nigg, A. Nunnenkamp, R. P. Tiwari and C. Bruder, Quantum synchronization blockade: Energy quantization hinders synchronization of identical oscillators, Phys. Rev. Lett. 118, 243602 (2017), doi:10.1103/PhysRevLett.118.243602.
M. P. Kwasigroch and N. R. Cooper, Synchronization transition in dipole-coupled two-level systems with positional disorder, Phys. Rev. A 96, 053610 (2017), doi:10.1103/PhysRevA.96.053610.
C. W. Wächtler, V. M. Bastidas, G. Schaller and W. J. Munro, Dissipative nonequilibrium synchronization of topological edge states via self-oscillation, Phys. Rev. B 102, 014309 (2020), doi:10.1103/PhysRevB.102.014309.
C.-M. Halati, A. Sheikhan and C. Kollath, Breaking strong symmetries in dissipative quantum systems: the (non-)interacting bosonic chain coupled to a cavity (2021), arXiv:2102.02537.
S. Dutta and N. R. Cooper, Critical response of a quantum van der Pol oscillator, Phys. Rev. Lett. 123, 250401 (2019), doi:10.1103/PhysRevLett.123.250401.
J. Dubois, U. Saalmann and J. M. Rost, Semiclassical Lindblad master equation for spin dynamics (2021), arXiv:2101.10700.
A. Bácsi, C. P. Moca, G. Zaránd and B. Dóra, Vaporization dynamics of a dissipative quantum liquid, Phys. Rev. Lett. 125, 266803 (2020), doi:10.1103/PhysRevLett.125.266803.
H. Alaeian, G. Giedke, I. Carusotto, R. Löw and T. Pfau, Limit cycle phase and Goldstone mode in driven dissipative systems, Phys. Rev. A 103, 013712 (2021), doi:10.1103/PhysRevA.103.013712.
F. Piazza and H. Ritsch, Self-ordered limit cycles, chaos, and phase slippage with a superfluid inside an optical resonator, Phys. Rev. Lett. 115, 163601 (2015), doi:10.1103/PhysRevLett.115.163601.
F. Mivehvar, F. Piazza, T. Donner and H. Ritsch, Cavity QED with quantum gases: New paradigms in many-body physics (2021), arXiv:2102.04473.
R. Lin, P. Molignini, A. U. J. Lode and R. Chitra, Pathway to chaos through hierarchical superfluidity in blue-detuned cavity-BEC systems, Phys. Rev. A 101, 061602 (2020), doi:10.1103/PhysRevA.101.061602.
O. Scarlatella, A. A. Clerk, R. Fazio and M. Schiró, Dynamical mean-field theory for Markovian open quantum many-body systems, Phys. Rev. X 11, 031018 (2021), doi:10.1103/PhysRevX.11.031018.
H. Weimer, Variational principle for steady states of dissipative quantum many-body systems, Phys. Rev. Lett. 114, 040402 (2015), doi:10.1103/PhysRevLett.114.040402.
B. Bertini, P. Kos and T. Prosen, Exact spectral form factor in a minimal model of many-body quantum chaos, Phys. Rev. Lett. 121, 264101 (2018), doi:10.1103/PhysRevLett.121.264101.
A. Chan, A. De Luca and J. T. Chalker, Solution of a minimal model for many-body quantum chaos, Phys. Rev. X 8, 041019 (2018), doi:10.1103/PhysRevX.8.041019.
P. Zanardi and N. Anand, Information scrambling and chaos in open quantum systems (2021), arXiv:2012.13172.
J. Tindall, C. Sánchez Muñoz, B. Buča and D. Jaksch, Quantum synchronisation enabled by dynamical symmetries and dissipation, New Journal of Physics 22(1), 013026 (2020), doi:10.1088/1367-2630/ab60f5.
A. Roulet and C. Bruder, Quantum synchronization and entanglement generation, Phys. Rev. Lett. 121, 063601 (2018), doi:10.1103/PhysRevLett.121.063601.
A. Roulet and C. Bruder, Synchronizing the smallest possible system, Phys. Rev. Lett. 121, 053601 (2018), doi:10.1103/PhysRevLett.121.053601.
G. L. Giorgi, F. Plastina, G. Francica and R. Zambrini, Spontaneous synchronization and quantum correlation dynamics of open spin systems, Phys. Rev. A 88, 042115 (2013), doi:10.1103/PhysRevA.88.042115.
A. A. Michailidis, C. J. Turner, Z. Papić, D. A. Abanin and M. Serbyn, Stabilizing two-dimensional quantum scars by deformation and synchronization (2020), arXiv:2003.02825.
G. Karpat, İ. Yalçınkaya and B. Çakmak, Quantum synchronization of few-body systems under collective dissipation, Phys. Rev. A 101, 042121 (2020), doi:10.1103/PhysRevA.101.042121.
A. Cabot, G. L. Giorgi and R. Zambrini, Synchronization and coalescence in a dissipative two-qubit system (2020), arXiv:1912.10984.
A. Cabot, G. L. Giorgi, F. Galve and R. Zambrini, Quantum synchronization in dimer atomic lattices, Phys. Rev. Lett. 123, 023604 (2019), doi:10.1103/PhysRevLett.123.023604.
B. Bellomo, G. L. Giorgi, G. M. Palma and R. Zambrini, Quantum synchronization as a local signature of super- and subradiance, Phys. Rev. A 95, 043807 (2017), doi:10.1103/PhysRevA.95.043807.
M. Cattaneo, G. L. Giorgi, S. Maniscalco, G. S. Paraoanu and R. Zambrini, Synchronization and subradiance as signatures of entangling bath between superconducting qubits (2020), arXiv:2005.06229.
S. Siwiak-Jaszek and A. Olaya-Castro, Transient synchronisation and quantum coherence in a bio-inspired vibronic dimer, Faraday Discussions 216, 38-56 (2019), doi:10.1039/c9fd00006b.
G. Karpat, İ. Yalçınkaya, B. Çakmak, G. L. Giorgi and R. Zambrini, Synchronization and non-Markovianity in open quantum systems (2020), arXiv:2008.03310.
M. Koppenhöfer, C. Bruder and A. Roulet, Quantum synchronization on the IBM Q system, Phys. Rev. Research 2, 023026 (2020), doi:10.1103/PhysRevResearch.2.023026.
M. E. Ladd, P. Bachert, M. Meyerspeer, E. Moser, A. M. Nagel, D. G. Norris, S. Schmitter, O. Speck, S. Straub and M. Zaiss, Pros and cons of ultra-high-field MRI/MRS for human application, Progress in Nuclear Magnetic Resonance Spectroscopy 109, 1 (2018), doi:10.1016/j.pnmrs.2018.06.001.
A. P. Pljonkin, Vulnerability of the synchronization process in the quantum key distribution system, Int. J. Cloud Appl. Comput. 9(1), 50-58 (2019), doi:10.4018/IJCAC.2019010104.
P. Liu and H.-L. Yin, Secure and efficient synchronization scheme for quantum key distribution, OSA Continuum 2(10), 2883 (2019), doi:10.1364/OSAC.2.002883.
A. Pljonkin, K. Rumyantsev and P. K. Singh, Synchronization in quantum key distribution systems, Cryptography 1(3) (2017), doi:10.3390/cryptography1030018.
B. Buča, J. Tindall and D. Jaksch, Non-stationary coherent quantum many-body dynamics through dissipation, Nature Communications 10(1), 1730 (2019), doi:10.1038/s41467-019-09757-y.
M. Medenjak, B. Buča and D. Jaksch, Isolated Heisenberg magnet as a quantum time crystal, Phys. Rev. B 102, 041117 (2020), doi:10.1103/PhysRevB.102.041117.
M. Medenjak, T. Prosen and L. Zadnik, Rigorous bounds on dynamical response functions and time-translation symmetry breaking, SciPost Physics 9(1), 003 (2020), doi:10.21468/SciPostPhys.9.1.003.
C. Booker, B. Buča and D. Jaksch, Non-stationarity and dissipative time crystals: Spectral properties and finite-size effects, New Journal of Physics (2020), doi:10.1088/1367-2630/ababc4.
T.-C. Guo and L. You, Detecting time crystal and classifying quantum phases with time order (2020), arXiv:2008.10188.
K. Chinzei and T. N. Ikeda, Time crystals protected by Floquet dynamical symmetry in Hubbard models, Phys. Rev. Lett. 125, 060601 (2020), doi:10.1103/PhysRevLett.125.060601.
D. V. Else, B. Bauer and C. Nayak, Floquet time crystals, Phys. Rev. Lett. 117, 090402 (2016), doi:10.1103/PhysRevLett.117.090402.
A. Lazarides, A. Das and R. Moessner, Equilibrium states of generic quantum systems subject to periodic driving, Phys. Rev. E 90, 012110 (2014), doi:10.1103/PhysRevE.90.012110.
V. Khemani, A. Lazarides, R. Moessner and S. L. Sondhi, Phase structure of driven quantum systems, Phys. Rev. Lett. 116, 250401 (2016), doi:10.1103/PhysRevLett.116.250401.
K. Sacha and J. Zakrzewski, Time crystals: a review, Reports on Progress in Physics 81(1), 016401 (2017), doi:10.1088/1361-6633/aa8b38.
F. Wilczek, Quantum Time Crystals, Phys. Rev. Lett. 109, 160401 (2012), doi:10.1103/PhysRevLett.109.160401.
N. Dogra, M. Landini, K. Kroeger, L. Hruby, T. Donner and T. Esslinger, Dissipation-induced structural instability and chiral dynamics in a quantum gas, Science 366(6472), 1496 (2019), doi:10.1126/science.aaw4465.
P. Zupancic, D. Dreon, X. Li, A. Baumgärtner, A. Morales, W. Zheng, N. R. Cooper, T. Esslinger and T. Donner, p-band induced self-organization and dynamics with repulsively driven ultracold atoms in an optical cavity, Phys. Rev. Lett. 123, 233601 (2019), doi:10.1103/PhysRevLett.123.233601.
B. Buča and D. Jaksch, Dissipation induced nonstationarity in a quantum gas, Phys. Rev. Lett. 123, 260401 (2019), doi:10.1103/PhysRevLett.123.260401.
E. I. R. Chiacchio and A. Nunnenkamp, Dissipation-induced instabilities of a spinor Bose-Einstein condensate inside an optical cavity, Phys. Rev. Lett. 122, 193605 (2019), doi:10.1103/PhysRevLett.122.193605.
A. Syrwid, J. Zakrzewski and K. Sacha, Time crystal behavior of excited eigenstates, Phys. Rev. Lett. 119, 250602 (2017), doi:10.1103/PhysRevLett.119.250602.
J. G. Cosme, J. Skulte and L. Mathey, Time crystals in a shaken atom-cavity system, Phys. Rev. A 100, 053615 (2019), doi:10.1103/PhysRevA.100.053615.
H. Keßler, J. G. Cosme, C. Georges, L. Mathey and A. Hemmerich, From a continuous to a discrete time crystal in a dissipative atom-cavity system, New Journal of Physics 22(8), 085002 (2020), doi:10.1088/1367-2630/ab9fc0.
K. Seibold, R. Rota and V. Savona, Dissipative time crystal in an asymmetric nonlinear photonic dimer, Phys. Rev. A 101, 033839 (2020), doi:10.1103/PhysRevA.101.033839.
K. Sacha, Spontaneous Breaking of Continuous Time Translation Symmetry, pp. 19-38, Springer International Publishing, Cham, ISBN 978-3-030-52523-1 (2020), doi:10.1007/978-3-030-52523-1_3.
S. P. Kelly, E. Timmermans, J. Marino and S. W. Tsai, Stroboscopic aliasing in long-range interacting quantum systems (2020), arXiv:2011.07072.
K. Seetharam, A. Lerose, R. Fazio and J. Marino, Correlation engineering via non-local dissipation (2021), arXiv:2101.06445.
L. Oberreiter, U. Seifert and A. C. Barato, Stochastic discrete time crystals: Entropy production and subharmonic synchronization, Phys. Rev. Lett. 126, 020603 (2021), doi:10.1103/PhysRevLett.126.020603.
G. Buonaiuto, F. Carollo, B. Olmos and I. Lesanovsky, Dynamical phases and quantum correlations in an emitter-waveguide system with feedback (2021), arXiv:2102.02719.
H. Keßler, P. Kongkhambut, C. Georges, L. Mathey, J. G. Cosme and A. Hemmerich, Observation of a dissipative time crystal (2021), arXiv:2012.08885.
P. Liang, R. Fazio and S. Chesi, Time crystals in the driven transverse field Ising model under quasiperiodic modulation, New Journal of Physics 22(12), 125001 (2020), doi:10.1088/1367-2630/abc9ec.
H. Taheri, A. B. Matsko, L. Maleki and K. Sacha, All-optical dissipative discrete time crystals (2020), arXiv:2012.07927.
B. Buča, Local Hilbert space fragmentation and out-of-time-ordered crystals (2021), arXiv:2108.13411.
S. Sarkar and Y. Dubi, Signatures of discrete time-crystallinity in transport through quantum dot arrays (2021), arXiv:2107.04214.
P. Wang and R. Fazio, Dissipative phase transitions in the fully connected Ising model with p-spin interaction, Phys. Rev. A 103, 013306 (2021), doi:10.1103/PhysRevA.103.013306.
G. Piccitto, M. Wauters, F. Nori and N. Shammah, Symmetries and conserved quantities of boundary time crystals in generalized spin models (2021), arXiv:2101.05710.
R. C. Verstraten, R. F. Ozela and C. M. Smith, Time glass: A fractional calculus approach (2020), arXiv:2006.08786.
F. Carollo, K. Brandner and I. Lesanovsky, Nonequilibrium many-body quantum engine driven by time-translation symmetry breaking, Phys. Rev. Lett. 125, 240602 (2020), doi:10.1103/PhysRevLett.125.240602.
C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn and Z. Papić, Weak ergodicity breaking from quantum many-body scars, Nat. Phys. 14(7), 745 (2018), doi:10.1038/s41567-018-0137-5.
S. Moudgalya, S. Rachel, B. A. Bernevig and N. Regnault, Exact excited states of nonintegrable models, Phys. Rev. B 98, 235155 (2018), doi:10.1103/PhysRevB.98.235155.
S. Choi, C. J. Turner, H. Pichler, W. W. Ho, A. A. Michailidis, Z. Papić, M. Serbyn, M. D. Lukin and D. A. Abanin, Emergent SU(2) Dynamics and Perfect Quantum Many-Body Scars, Phys. Rev. Lett. 122, 220603 (2019), doi:10.1103/PhysRevLett.122.220603.
C.-J. Lin and O. I. Motrunich, Exact quantum many-body scar states in the Rydberg-blockaded atom chain, Phys. Rev. Lett. 122, 173401 (2019), doi:10.1103/PhysRevLett.122.173401.
T. Iadecola and M. Schecter, Quantum many-body scar states with emergent kinetic constraints and finite-entanglement revivals, Phys. Rev. B 101, 024306 (2020), doi:10.1103/PhysRevB.101.024306.
S. Moudgalya, N. Regnault and B. A. Bernevig, Entanglement of exact excited states of Affleck-Kennedy-Lieb-Tasaki models: Exact results, many-body scars, and violation of the strong eigenstate thermalization hypothesis, Phys. Rev. B 98, 235156 (2018), doi:10.1103/PhysRevB.98.235156.
C. J. Turner, J.-Y. Desaules, K. Bull and Z. Papić, Correspondence principle for many-body scars in ultracold Rydberg atoms (2020), arXiv:2006.13207.
M. Schecter and T. Iadecola, Weak ergodicity breaking and quantum many-body scars in spin-1 XY magnets, Phys. Rev. Lett. 123, 147201 (2019), doi:10.1103/PhysRevLett.123.147201.
A. A. Michailidis, C. J. Turner, Z. Papić, D. A. Abanin and M. Serbyn, Stabilizing two-dimensional quantum scars by deformation and synchronization, Phys. Rev. Research 2, 022065 (2020), doi:10.1103/PhysRevResearch.2.022065.
H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić and M. D. Lukin, Probing many-body dynamics on a 51-atom quantum simulator, Nature 551, 579 (2017), doi:10.1038/nature24622.
K. Bull, J.-Y. Desaules and Z. Papić, Quantum scars as embeddings of weakly broken Lie algebra representations, Phys. Rev. B 101, 165139 (2020), doi:10.1103/PhysRevB.101.165139.
S. Moudgalya, N. Regnault and B. A. Bernevig, Eta-pairing in Hubbard models: From spectrum generating algebras to quantum many-body scars (2020), arXiv:2004.13727.
D. K. Mark and O. I. Motrunich, Eta-pairing states as true scars in an extended Hubbard model (2020), arXiv:2004.13800.
D. K. Mark, C.-J. Lin and O. I. Motrunich, Unified structure for exact towers of scar states in the Affleck-Kennedy-Lieb-Tasaki and other models, Phys. Rev. B 101, 195131 (2020), doi:10.1103/PhysRevB.101.195131.
K. Pakrouski, P. N. Pallegar, F. K. Popov and I. R. Klebanov, Many-body scars as a group invariant sector of Hilbert space (2020), arXiv:2007.00845.
N. O'Dea, F. Burnell, A. Chandran and V. Khemani, From tunnels to towers: quantum scars from Lie algebras and q-deformed Lie algebras (2020), arXiv:2007.16207.
N. Shibata, N. Yoshioka and H. Katsura, Onsager's scars in disordered spin chains, Phys. Rev. Lett. 124, 180604 (2020), doi:10.1103/PhysRevLett.124.180604.
Y. Kuno, T. Mizoguchi and Y. Hatsugai, Flat band quantum scar (2020), arXiv:2010.02044.
J.-Y. Desaules, A. Hudomal, C. J. Turner and Z. Papić, A proposal for realising quantum scars in the tilted 1D Fermi-Hubbard model (2021), arXiv:2102.01675.
M. Serbyn, D. A. Abanin and Z. Papić, Quantum many-body scars and weak breaking of ergodicity (2020), arXiv:2011.09486.
R. J. Valencia-Tortora, N. Pancotti and J. Marino, Kinetically constrained quantum dynamics in a circuit-QED transmon wire (2021), arXiv:2112.08387.
Dynamical l-bits in stark many-body localization. T Gunawardana, B Buča, 2110.13135T. Gunawardana and B. Buča, Dynamical l-bits in stark many-body localization (2021), 2110.13135.
B Buca, A Purkayastha, G Guarnieri, M T Mitchison, D Jaksch, J Goold, Quantum many-body attractors. B. Buca, A. Purkayastha, G. Guarnieri, M. T. Mitchison, D. Jaksch and J. Goold, Quantum many-body attractors (2020), 2008.11166.
Christiaan Huygens' the pendulum clock, or, Geometrical demonstrations concerning the motion of pendula as applied to clocks. C Huygens, R J Blackwell, 978-0-8138-0933-5Iowa State University PressC. Huygens and R. J. Blackwell, Christiaan Huygens' the pendulum clock, or, Geomet- rical demonstrations concerning the motion of pendula as applied to clocks, Iowa State University Press, ISBN 978-0-8138-0933-5 (1986).
A theory for synchronization of dynamical systems. A C J Luo, 10.1016/j.cnsns.2008.07.002Communications in Nonlinear Science and Numerical Simulation. 145A. C. J. Luo, A theory for synchronization of dynamical systems, Communi- cations in Nonlinear Science and Numerical Simulation 14(5), 1901-1951 (2009), doi:https://doi.org/10.1016/j.cnsns.2008.07.002.
Dynamical system synchronization. A C Luo, 10.1007/978-1-4614-5097-9SpringerA. C. Luo, Dynamical system synchronization, Springer, doi:10.1007/978-1-4614-5097-9 (2013).
Properties of generalized synchronization of chaos. P , 10.15388/NA.1998.3.0.15261Nonlinear Analysis: Modelling and Control. 31P. K, Properties of generalized synchronization of chaos, Nonlinear Analysis: Modelling and Control 3(1), 101-129 (1998), doi:10.15388/NA.1998.3.0.15261.
A nonlinear theory of laser noise and coherence. i. H Haken, 10.1007/BF01383921Zeitschrift für Physik. 1811H. Haken, A nonlinear theory of laser noise and coherence. i, Zeitschrift für Physik 181(1), 96-124 (1964), doi:10.1007/BF01383921.
Cooperative phenomena in systems far from thermal equilibrium and in nonphysical systems. H Haken, 10.1103/RevModPhys.47.67Reviews of Modern Physics. 471H. Haken, Cooperative phenomena in systems far from thermal equilibrium and in nonphysical systems, Reviews of Modern Physics 47(1), 67-121 (1975), doi:10.1103/RevModPhys.47.67.
H Haken, 10.1016/0375-9601(75)90353-9Analogy between higher instabilities in fluids and lasers. 53H. Haken, Analogy between higher instabilities in fluids and lasers, Physics Letters A 53(1), 77-78 (1975), doi:10.1016/0375-9601(75)90353-9.
H Haken, 10.1088/0034-4885/52/5/001Synergetics: an overview. 52H. Haken, Synergetics: an overview, Reports on Progress in Physics 52(5), 515 (1989), doi:10.1088/0034-4885/52/5/001.
P Schuster, Springer Series in Synergetics. Springer-VerlagP. Schuster, ed., Springer Series in Synergetics, Springer-Verlag.
H Haken, 10.1007/978-3-642-48779-8Synergetics of Cognition: Proceedings of the International Symposium at Schloß Elmau. BavariaSpringer-VerlagSpringer Series in SynergeticsH. Haken, Synergetics of Cognition: Proceedings of the International Symposium at Schloß Elmau, Bavaria, June 4-8, 1989, Springer Series in Synergetics. Springer-Verlag, ISBN 978-3-642-48781-1, doi:10.1007/978-3-642-48779-8 (1990).
Weak and strong synchronization of chaos. K Pyragas, 10.1103/PhysRevE.54.R4508Phys. Rev. E. 54K. Pyragas, Weak and strong synchronization of chaos, Phys. Rev. E 54, R4508 (1996), doi:10.1103/PhysRevE.54.R4508.
Synchronizing hyperchaos with a scalar transmitted signal. J H Peng, E J Ding, M Ding, W Yang, 10.1103/PhysRevLett.76.904Phys. Rev. Lett. 76904J. H. Peng, E. J. Ding, M. Ding and W. Yang, Synchronizing hyperchaos with a scalar transmitted signal, Phys. Rev. Lett. 76, 904 (1996), doi:10.1103/PhysRevLett.76.904.
Synchronization of chaos using continuous control. T Kapitaniak, 10.1103/PhysRevE.50.1642Phys. Rev. E. 501642T. Kapitaniak, Synchronization of chaos using continuous control, Phys. Rev. E 50, 1642 (1994), doi:10.1103/PhysRevE.50.1642.
Complete synchronization and generalized synchronization of one-way coupled time-delay systems. M Zhan, X Wang, X Gong, G W Wei, C.-H Lai, 10.1103/PhysRevE.68.036208Phys. Rev. E. 6836208M. Zhan, X. Wang, X. Gong, G. W. Wei and C.-H. Lai, Complete synchronization and generalized synchronization of one-way coupled time-delay systems, Phys. Rev. E 68, 036208 (2003), doi:10.1103/PhysRevE.68.036208.
N Detal, H Taheri, K Wiesenfeld, 10.1063/1.5097237Synchronization behavior in a ternary phase model. 2963115N. DeTal, H. Taheri and K. Wiesenfeld, Synchronization behavior in a ternary phase model, Chaos: An Interdisciplinary Journal of Nonlinear Science 29(6), 063115 (2019), doi:10.1063/1.5097237.
E Mosekilde, Y Maistrenko, D Postnov, 10.1142/4845Chaotic synchronization: applications to living systems. World Scientific42E. Mosekilde, Y. Maistrenko and D. Postnov, Chaotic synchronization: applications to living systems, vol. 42, World Scientific, doi:https://doi.org/10.1142/4845 (2002).
Synchronization in the human cardiorespiratory system. C Schäfer, M G Rosenblum, H.-H Abel, J Kurths, 10.1103/PhysRevE.60.857Phys. Rev. E. 60857C. Schäfer, M. G. Rosenblum, H.-H. Abel and J. Kurths, Synchronization in the human cardiorespiratory system, Phys. Rev. E 60, 857 (1999), doi:10.1103/PhysRevE.60.857.
Bursting and synchronization transition in the coupled modified ml neurons. H Wang, Q Lu, Q Wang, 10.1016/j.cnsns.2007.03.001Communications in Nonlinear Science and Numerical Simulation. 1381668H. Wang, Q. Lu and Q. Wang, Bursting and synchronization transition in the coupled modified ml neurons, Communications in Nonlinear Science and Numerical Simulation 13(8), 1668 (2008), doi:https://doi.org/10.1016/j.cnsns.2007.03.001.
Synchronization dynamics in a ring of four mutually coupled biological systems. H E Kadji, J C Orou, P Woafo, 10.1016/j.cnsns.2006.11.004Communications in Nonlinear Science and Numerical Simulation. 1371361H. E. Kadji, J. C. Orou and P. Woafo, Synchronization dynamics in a ring of four mu- tually coupled biological systems, Communications in Nonlinear Science and Numerical Simulation 13(7), 1361 (2008), doi:https://doi.org/10.1016/j.cnsns.2006.11.004.
Quantum Correlations and Synchronization Measures. F Galve, G Giorgi, R Zambrini, 10.1007/978-3-319-53412-1_18Springer International PublishingQuantum Science and TechnologyF. Galve, G. Luca Giorgi and R. Zambrini, Quantum Correlations and Synchroniza- tion Measures, p. 393-420, Quantum Science and Technology. Springer International Publishing, ISBN 978-3-319-53412-1, doi:10.1007/978-3-319-53412-1 18 (2017).
S H Strogatz, 10.1002/aic.690410720978-0-7382-0453-6Nonlinear Dynamics And Chaos: With Applications To Physics. CRC PressChemistry, And Engineering. 1st edition ednS. H. Strogatz, Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, And Engineering, CRC Press, 1st edition edn., ISBN 978-0-7382-0453-6, doi:https://doi.org/10.1002/aic.690410720 (2000).
G , Manzano Paule, 10.1007/978-3-319-93964-3_4Transient Synchronization and Quantum Correlations. Springer International PublishingG. Manzano Paule, Transient Synchronization and Quantum Correlations, p. 179-200, Springer Theses. Springer International Publishing, ISBN 978-3-319-93964-3, doi:10.1007/978-3-319-93964-3 4 (2018).
Quantum correlations and mutual synchronization. G L Giorgi, F Galve, G Manzano, P Colet, R Zambrini, 10.1103/PhysRevA.85.052101Phys. Rev. A. 8552101G. L. Giorgi, F. Galve, G. Manzano, P. Colet and R. Zambrini, Quan- tum correlations and mutual synchronization, Phys. Rev. A 85, 052101 (2012), doi:10.1103/PhysRevA.85.052101.
Cryptography using multiple one-dimensional chaotic maps. N K Pareek, V Patidar, K K Sud, 10.1016/j.cnsns.2004.03.006Communications in Nonlinear Science and Numerical Simulation. 107N. K. Pareek, V. Patidar and K. K. Sud, Cryptography using multiple one-dimensional chaotic maps, Communications in Nonlinear Science and Numerical Simulation 10(7), 715-723 (2005), doi:10.1016/j.cnsns.2004.03.006.
A chaotic secure communication scheme using fractional chaotic systems based on an extended fractional kalman filter. A Kiani-B, K Fallahi, N Pariz, H Leung, 10.1016/j.cnsns.2007.11.011Communications in Nonlinear Science and Numerical Simulation. 143A. Kiani-B, K. Fallahi, N. Pariz and H. Leung, A chaotic secure communication scheme using fractional chaotic systems based on an extended fractional kalman filter, Com- munications in Nonlinear Science and Numerical Simulation 14(3), 863-879 (2009), doi:10.1016/j.cnsns.2007.11.011.
An application of chen system for secure chaotic communication based on extended kalman filter and multi-shift cipher algorithm. K Fallahi, R Raoufi, H Khoshbin, 10.1016/j.cnsns.2006.07.006Communications in Nonlinear Science and Numerical Simulation. 134K. Fallahi, R. Raoufi and H. Khoshbin, An application of chen system for secure chaotic communication based on extended kalman filter and multi-shift cipher algorithm, Com- munications in Nonlinear Science and Numerical Simulation 13(4), 763-781 (2008), doi:10.1016/j.cnsns.2006.07.006.
Amplitude envelope synchronization in coupled chaotic oscillators. J M Gonzalez-Miranda, 10.1103/PhysRevE.65.036232Phys. Rev. E. 6536232J. M. Gonzalez-Miranda, Amplitude envelope synchronization in coupled chaotic oscil- lators, Phys. Rev. E 65, 036232 (2002), doi:10.1103/PhysRevE.65.036232.
From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics. L D'alessio, Y Kafri, A Polkovnikov, M Rigol, 10.1080/00018732.2016.1198134Advances in Physics. 653239L. D'Alessio, Y. Kafri, A. Polkovnikov and M. Rigol, From quantum chaos and eigen- state thermalization to statistical mechanics and thermodynamics, Advances in Physics 65(3), 239 (2016), doi:10.1080/00018732.2016.1198134, https://doi.org/10.1080/ 00018732.2016.1198134.
Analysis of quantum semigroups with GKS-lindblad generators: II. general. B Baumgartner, H Narnhofer, 10.1088/1751-8113/41/39/395303Journal of Physics A: Mathematical and Theoretical. 4139395303B. Baumgartner and H. Narnhofer, Analysis of quantum semigroups with GKS-lindblad generators: II. general, Journal of Physics A: Mathematical and Theoretical 41(39), 395303 (2008), doi:10.1088/1751-8113/41/39/395303.
Geometry and response of lindbladians. V V Albert, B Bradlyn, M Fraas, L Jiang, 10.1103/PhysRevX.6.041031Phys. Rev. X. 641031V. V. Albert, B. Bradlyn, M. Fraas and L. Jiang, Geometry and response of lindbladians, Phys. Rev. X 6, 041031 (2016), doi:10.1103/PhysRevX.6.041031.
Information-preserving structures: A general framework for quantum zero-error information. R Blume-Kohout, H K Ng, D Poulin, L Viola, 10.1103/PhysRevA.82.062306Phys. Rev. A. 8262306R. Blume-Kohout, H. K. Ng, D. Poulin and L. Viola, Information-preserving structures: A general framework for quantum zero-error information, Phys. Rev. A 82, 062306 (2010), doi:10.1103/PhysRevA.82.062306.
Theory of quantum error correction for general noise. E Knill, R Laflamme, L Viola, 10.1103/PhysRevLett.84.2525Phys. Rev. Lett. 842525E. Knill, R. Laflamme and L. Viola, Theory of quantum error correction for general noise, Phys. Rev. Lett. 84, 2525 (2000), doi:10.1103/PhysRevLett.84.2525.
Exact solutions of open integrable quantum spin chains. E Ilievski, 1410.1446E. Ilievski, Exact solutions of open integrable quantum spin chains (2014), 1410.1446.
No synchronization for qubits. L.-C Kwek, 10.1103/Physics.11.75Physics. 11L.-C. Kwek, No synchronization for qubits, Physics 11, 75 (2018), doi:10.1103/Physics.11.75.
Coherent multi-flavour spin dynamics in a fermionic quantum gas. J S Krauser, J Heinze, N Fläschner, S Götze, O Jürgensen, D.-S Lühmann, C Becker, K Sengstock, 10.1038/nphys2409Nature Physics. 8J. S. Krauser, J. Heinze, N. Fläschner, S. Götze, O. Jürgensen, D.-S. Lühmann, C. Becker and K. Sengstock, Coherent multi-flavour spin dynamics in a fermionic quantum gas, Nature Physics 8(1111), 813-818 (2012), doi:10.1038/nphys2409.
Transient synchronization in open quantum systems. G L Giorgi, A Cabot, R Zambrini, 978-3-030-31146-9Advances in Open Systems and Fundamental Tests of Quantum Mechanics. B. Vacchini, H.-P. Breuer and A. BassiSpringer International PublishingG. L. Giorgi, A. Cabot and R. Zambrini, Transient synchronization in open quantum systems, In B. Vacchini, H.-P. Breuer and A. Bassi, eds., Advances in Open Systems and Fundamental Tests of Quantum Mechanics, p. 73-89. Springer International Publishing, ISBN 978-3-030-31146-9 (2019).
N Jaseem, M Hajdušek, P Solanki, L.-C Kwek, R Fazio, S Vinjanampathy, 2006.13623Generalized measure of quantum synchronization. N. Jaseem, M. Hajdušek, P. Solanki, L.-C. Kwek, R. Fazio and S. Vinjanampathy, Generalized measure of quantum synchronization (2020), 2006.13623.
On the generators of quantum dynamical semigroups. G Lindblad, 10.1007/BF01608499Communications in Mathematical Physics. 482G. Lindblad, On the generators of quantum dynamical semigroups, Communications in Mathematical Physics 48(2), 119-130 (1976), doi:https://doi.org/10.1007/BF01608499.
Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics. C Gardiner, P Zoller, 978-3-540-22301-6Springer Series in Synergetics. Springer-Verlag3 ednC. Gardiner and P. Zoller, Quantum Noise: A Handbook of Markovian and Non- Markovian Quantum Stochastic Methods with Applications to Quantum Optics, Springer Series in Synergetics. Springer-Verlag, 3 edn., ISBN 978-3-540-22301-6 (2004).
Symmetries and conservation laws in quantum trajectories: Dissipative freezing. C Sánchez Muñoz, B Buča, J Tindall, A González-Tudela, D Jaksch, D Porras, 10.1103/PhysRevA.100.042113Phys. Rev. A. 10042113C. Sánchez Muñoz, B. Buča, J. Tindall, A. González-Tudela, D. Jaksch and D. Porras, Symmetries and conservation laws in quantum trajectories: Dissipative freezing, Phys. Rev. A 100, 042113 (2019), doi:10.1103/PhysRevA.100.042113.
C S Muñoz, B Buča, J Tindall, A González-Tudela, D Jaksch, D Porras, 1903. 05080Nonstationary dynamics and dissipative freezing in squeezed superradiance. C. S. Muñoz, B. Buča, J. Tindall, A. González-Tudela, D. Jaksch and D. Porras, Non- stationary dynamics and dissipative freezing in squeezed superradiance (2019), 1903. 05080.
Shattered time: can a dissipative time crystal survive many-body correlations?. K Tucker, B Zhu, R J Lewis-Swan, J Marino, F Jimenez, J G Restrepo, A M Rey, 10.1088/1367-2630/aaf18bNew Journal of Physics. 2012123003K. Tucker, B. Zhu, R. J. Lewis-Swan, J. Marino, F. Jimenez, J. G. Restrepo and A. M. Rey, Shattered time: can a dissipative time crystal survive many-body correlations?, New Journal of Physics 20(12), 123003 (2018), doi:10.1088/1367-2630/aaf18b.
Dicke time crystals in driven-dissipative quantum many-body systems. B Zhu, J Marino, N Y Yao, M D Lukin, E A Demler, 10.1088/1367-2630/ab2afeNew Journal of Physics. 21773028B. Zhu, J. Marino, N. Y. Yao, M. D. Lukin and E. A. Demler, Dicke time crystals in driven-dissipative quantum many-body systems, New Journal of Physics 21(7), 073028 (2019), doi:10.1088/1367-2630/ab2afe.
Boundary time crystals. F Iemini, A Russomanno, J Keeling, M Schirò, M Dalmonte, R Fazio, 10.1103/PhysRevLett.121.035301Phys. Rev. Lett. 12135301F. Iemini, A. Russomanno, J. Keeling, M. Schirò, M. Dalmonte and R. Fazio, Boundary time crystals, Phys. Rev. Lett. 121, 035301 (2018), doi:10.1103/PhysRevLett.121.035301.
Emergent finite frequency criticality of driven-dissipative correlated lattice bosons. O Scarlatella, R Fazio, M Schiró, 10.1103/PhysRevB.99.064511Phys. Rev. B. 9964511O. Scarlatella, R. Fazio and M. Schiró, Emergent finite frequency criticality of driven-dissipative correlated lattice bosons, Phys. Rev. B 99, 064511 (2019), doi:10.1103/PhysRevB.99.064511.
F Minganti, I I Arkhipov, A Miranowicz, F Nori, Correspondence between dissipative phase transitions of light and time crystals. 8075F. Minganti, I. I. Arkhipov, A. Miranowicz and F. Nori, Correspondence between dissi- pative phase transitions of light and time crystals (2020), 2008.08075.
Ergodic and mixing quantum channels in finite dimensions. D Burgarth, G Chiribella, V Giovannetti, P Perinotti, K Yuasa, 10.1088/1367-2630/15/7/073045New Journal of Physics. 15773045D. Burgarth, G. Chiribella, V. Giovannetti, P. Perinotti and K. Yuasa, Ergodic and mixing quantum channels in finite dimensions, New Journal of Physics 15(7), 073045 (2013), doi:10.1088/1367-2630/15/7/073045.
V V Albert, 1802.00010Lindbladians with multiple steady states: theory and applications. V. V. Albert, Lindbladians with multiple steady states: theory and applications (2018), 1802.00010.
J Czartowski, R Müller, K Zyczkowski, D Braun, 2103.02031Completely synchronizing quantum channels. J. Czartowski, R. Müller, K. Zyczkowski and D. Braun, Completely synchronizing quantum channels (2021), 2103.02031.
A note on symmetry reductions of the lindblad equation: transport in constrained open spin chains. B Buča, T Prosen, 10.1088/1367-2630/14/7/073007New Journal of Physics. 14773007B. Buča and T. Prosen, A note on symmetry reductions of the lindblad equation: trans- port in constrained open spin chains, New Journal of Physics 14(7), 073007 (2012), doi:10.1088/1367-2630/14/7/073007.
Invariant neural network ansatz for weakly symmetric open quantum lattices. D Nigro, 2101.03511D. Nigro, Invariant neural network ansatz for weakly symmetric open quantum lattices (2021), 2101.03511.
Integrability of one-dimensional lindbladians from operator-space fragmentation. F H L Essler, L Piroli, 10.1103/PhysRevE.102.062210Phys. Rev. E. 10262210F. H. L. Essler and L. Piroli, Integrability of one-dimensional lindbladi- ans from operator-space fragmentation, Phys. Rev. E 102, 062210 (2020), doi:10.1103/PhysRevE.102.062210.
Quantum jump monte carlo approach simplified: Abelian symmetries. K Macieszczak, D C Rose, 10.1103/physreva.103.042204Physical Review A. 1034K. Macieszczak and D. C. Rose, Quantum jump monte carlo ap- proach simplified: Abelian symmetries, Physical Review A 103(4) (2021), doi:10.1103/physreva.103.042204.
Exact solutions of interacting dissipative systems via weak symmetries. A Mcdonald, A A Clerk, 2109.13221A. McDonald and A. A. Clerk, Exact solutions of interacting dissipative systems via weak symmetries (2021), 2109.13221.
Universality class of ising critical states with long-range losses. J Marino, 2108. 12422J. Marino, Universality class of ising critical states with long-range losses (2021), 2108. 12422.
Dephasing and the steady state in quantum many-particle systems. T Barthel, U Schollwöck, 10.1103/PhysRevLett.100.100601Phys. Rev. Lett. 100100601T. Barthel and U. Schollwöck, Dephasing and the steady state in quantum many-particle systems, Phys. Rev. Lett. 100, 100601 (2008), doi:10.1103/PhysRevLett.100.100601.
Towards a theory of metastability in open quantum dynamics. K Macieszczak, M Guţȃ, I Lesanovsky, J P Garrahan, 10.1103/PhysRevLett.116.240404Physical Review Letters. 11624240404K. Macieszczak, M. Guţȃ, I. Lesanovsky and J. P. Garrahan, Towards a theory of metastability in open quantum dynamics, Physical Review Letters 116(24), 240404 (2016), doi:10.1103/PhysRevLett.116.240404.
Quantum zeno dynamics: mathematical and physical aspects. P Facchi, S Pascazio, 10.1088/1751-8113/41/49/493001Journal of Physics A: Mathematical and Theoretical. 4149P. Facchi and S. Pascazio, Quantum zeno dynamics: mathematical and physical as- pects, Journal of Physics A: Mathematical and Theoretical 41(49), 493001 (2008), doi:10.1088/1751-8113/41/49/493001.
Quantum zeno subspaces. P Facchi, S Pascazio, 10.1103/PhysRevLett.89.080401Phys. Rev. Lett. 8980401P. Facchi and S. Pascazio, Quantum zeno subspaces, Phys. Rev. Lett. 89, 080401 (2002), doi:10.1103/PhysRevLett.89.080401.
Coherent quantum dynamics in steady-state manifolds of strongly dissipative systems. P Zanardi, L. Campos Venuti, 10.1103/PhysRevLett.113.240406Phys. Rev. Lett. 113240406P. Zanardi and L. Campos Venuti, Coherent quantum dynamics in steady-state manifolds of strongly dissipative systems, Phys. Rev. Lett. 113, 240406 (2014), doi:10.1103/PhysRevLett.113.240406.
Effective quantum zeno dynamics in dissipative quantum systems. V Popkov, S Essink, C Presilla, G Schütz, 10.1103/PhysRevA.98.052110Phys. Rev. A. 9852110V. Popkov, S. Essink, C. Presilla and G. Schütz, Effective quantum zeno dynamics in dissipative quantum systems, Phys. Rev. A 98, 052110 (2018), doi:10.1103/PhysRevA.98.052110.
Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge Nonlinear Science Series. A Pikovsky, M Rosenblum, J Kurths, 10.1017/CBO9780511755743Cambridge University PressA. Pikovsky, M. Rosenblum and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge Nonlinear Science Series. Cambridge University Press, doi:10.1017/CBO9780511755743 (2001).
Irreducible quantum dynamical semigroups. D E Evans, 10.1007/BF01614091Communications in Mathematical Physics. 543293D. E. Evans, Irreducible quantum dynamical semigroups, Communications in Mathe- matical Physics 54(3), 293 (1977), doi:10.1007/BF01614091.
Stationary states of quantum dynamical semigroups. A Frigerio, 10.1007/BF01196936Communications in Mathematical Physics. 633269A. Frigerio, Stationary states of quantum dynamical semigroups, Communications in Mathematical Physics 63(3), 269 (1978), doi:https://doi.org/10.1007/BF01196936.
Quantum dynamical semigroups and approach to equilibrium. A Frigerio, 10.1007/BF00398571Letters in Mathematical Physics. 2279A. Frigerio, Quantum dynamical semigroups and approach to equilibrium, Letters in Mathematical Physics 2(2), 79 (1977), doi:https://doi.org/10.1007/BF00398571.
Kinetic equations from hamiltonian dynamics: Markovian limits. H Spohn, 10.1103/RevModPhys.52.569Rev. Mod. Phys. 52H. Spohn, Kinetic equations from hamiltonian dynamics: Markovian limits, Rev. Mod. Phys. 52, 569 (1980), doi:10.1103/RevModPhys.52.569.
Matrix product solutions of boundary driven quantum chains. T Prosen, 10.1088/1751-8113/48/37/373001Journal of Physics A: Mathematical and Theoretical. 4837T. Prosen, Matrix product solutions of boundary driven quantum chains, Journal of Physics A: Mathematical and Theoretical 48(37), 373001 (2015), doi:10.1088/1751- 8113/48/37/373001.
Decoherence-free subspaces for quantum computation. D A Lidar, I L Chuang, K B Whaley, 10.1103/PhysRevLett.81.2594Phys. Rev. Lett. 812594D. A. Lidar, I. L. Chuang and K. B. Whaley, Decoherence-free subspaces for quantum computation, Phys. Rev. Lett. 81, 2594 (1998), doi:10.1103/PhysRevLett.81.2594.
Light scattering and dissipative dynamics of many fermionic atoms in an optical lattice. S Sarkar, S Langer, J Schachenmayer, A J Daley, 10.1103/PhysRevA.90.023618Phys. Rev. A. 9023618S. Sarkar, S. Langer, J. Schachenmayer and A. J. Daley, Light scattering and dissipative dynamics of many fermionic atoms in an optical lattice, Phys. Rev. A 90, 023618 (2014), doi:10.1103/PhysRevA.90.023618.
K Sponselee, L Freystatzky, B Abeln, M Diem, B Hundt, A Kochanke, T Ponath, B Santra, L Mathey, K Sengstock, C Becker, 10.1088/2058-9565/aadccdDynamics of ultracold quantum gases in the dissipative fermi-hubbard model. 414002K. Sponselee, L. Freystatzky, B. Abeln, M. Diem, B. Hundt, A. Kochanke, T. Ponath, B. Santra, L. Mathey, K. Sengstock and C. Becker, Dynamics of ultracold quantum gases in the dissipative fermi-hubbard model, Quantum Science and Technology 4(1), 014002 (2018), doi:10.1088/2058-9565/aadccd.
Strong dissipation inhibits losses and induces correlations in cold molecular gases. N Syassen, D M Bauer, M Lettner, T Volz, D Dietze, J J Garcia-Ripoll, J I Cirac, G Rempe, S Dürr, 10.1126/science.1155309Science. 32058811329N. Syassen, D. M. Bauer, M. Lettner, T. Volz, D. Dietze, J. J. Garcia-Ripoll, J. I. Cirac, G. Rempe and S. Dürr, Strong dissipation inhibits losses and induces correlations in cold molecular gases, Science 320(5881), 1329 (2008), doi:10.1126/science.1155309.
Many-body physics with ultracold gases. I Bloch, J Dalibard, W Zwerger, 10.1103/RevModPhys.80.885Rev. Mod. Phys. 80I. Bloch, J. Dalibard and W. Zwerger, Many-body physics with ultracold gases, Rev. Mod. Phys. 80, 885 (2008), doi:10.1103/RevModPhys.80.885.
Observation of many-body localization of interacting fermions in a quasirandom optical lattice. M Schreiber, S S Hodgman, P Bordia, H P Lüschen, M H Fischer, R Vosk, E Altman, U Schneider, I Bloch, Science. 3496250842M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Alt- man, U. Schneider and I. Bloch, Observation of many-body localization of interacting fermions in a quasirandom optical lattice, Science 349(6250), 842 (2015).
Fermionic atoms in a three dimensional optical lattice: Observing fermi surfaces, dynamics, and interactions. M Köhl, H Moritz, T Stöferle, K Günter, T Esslinger, 10.1103/PhysRevLett.94.080403Phys. Rev. Lett. 9480403M. Köhl, H. Moritz, T. Stöferle, K. Günter and T. Esslinger, Fermionic atoms in a three dimensional optical lattice: Observing fermi surfaces, dynamics, and interactions, Phys. Rev. Lett. 94, 080403 (2005), doi:10.1103/PhysRevLett.94.080403.
Spectrum of majorana quantum mechanics with O(4) 3 symmetry. K Pakrouski, I R Klebanov, F Popov, G Tarnopolsky, 10.1103/PhysRevLett.122.011601Phys. Rev. Lett. 12211601K. Pakrouski, I. R. Klebanov, F. Popov and G. Tarnopolsky, Spectrum of majo- rana quantum mechanics with O(4) 3 symmetry, Phys. Rev. Lett. 122, 011601 (2019), doi:10.1103/PhysRevLett.122.011601.
Thermodynamics of a deeply degenerate su(n)-symmetric fermi gas. L Sonderhouse, C Sanner, R B Hutson, A Goban, T Bilitewski, L Yan, W R Milner, A M Rey, J Ye, 10.1038/s41567-020-0986-6Nature Physics. L. Sonderhouse, C. Sanner, R. B. Hutson, A. Goban, T. Bilitewski, L. Yan, W. R. Milner, A. M. Rey and J. Ye, Thermodynamics of a deeply degenerate su(n)-symmetric fermi gas, Nature Physics (2020), doi:10.1038/s41567-020-0986-6.
Magnetism and domain formation in SU(3)-symmetric multi-species fermi mixtures. I Titvinidze, A Privitera, S.-Y Chang, S Diehl, M A Baranov, A Daley, W Hofstetter, 10.1088/1367-2630/13/3/035013New Journal of Physics. 13335013I. Titvinidze, A. Privitera, S.-Y. Chang, S. Diehl, M. A. Baranov, A. Daley and W. Hofstetter, Magnetism and domain formation in SU(3)-symmetric multi-species fermi mixtures, New Journal of Physics 13(3), 035013 (2011), doi:10.1088/1367- 2630/13/3/035013.
Ultracold fermions and the SU(n) hubbard model. C Honerkamp, W Hofstetter, 10.1103/PhysRevLett.92.170403Phys. Rev. Lett. 92170403C. Honerkamp and W. Hofstetter, Ultracold fermions and the SU(n) hubbard model, Phys. Rev. Lett. 92, 170403 (2004), doi:10.1103/PhysRevLett.92.170403.
Two-orbital s u ( n ) magnetism with ultracold alkaline-earth atoms. A V Gorshkov, M Hermele, V Gurarie, C Xu, P S Julienne, J Ye, P Zoller, E Demler, M D Lukin, A M Rey, 10.1038/nphys1535Nature Physics. 644A. V. Gorshkov, M. Hermele, V. Gurarie, C. Xu, P. S. Julienne, J. Ye, P. Zoller, E. Demler, M. D. Lukin and A. M. Rey, Two-orbital s u ( n ) magnetism with ultracold alkaline-earth atoms, Nature Physics 6(44), 289-295 (2010), doi:10.1038/nphys1535.
Nuclear spin effects in optical lattice clocks. M M Boyd, T Zelevinsky, A D Ludlow, S Blatt, T Zanon-Willette, S M Foreman, J Ye, 10.1103/PhysRevA.76.022510Phys. Rev. A. 7622510M. M. Boyd, T. Zelevinsky, A. D. Ludlow, S. Blatt, T. Zanon-Willette, S. M. Foreman and J. Ye, Nuclear spin effects in optical lattice clocks, Phys. Rev. A 76, 022510 (2007), doi:10.1103/PhysRevA.76.022510.
Nonequilibrium dynamics of bosonic atoms in optical lattices: Decoherence of many-body states due to spontaneous emission. H Pichler, A J Daley, P Zoller, 10.1103/physreva.82.063605Physical Review A. 826H. Pichler, A. J. Daley and P. Zoller, Nonequilibrium dynamics of bosonic atoms in optical lattices: Decoherence of many-body states due to spontaneous emission, Physical Review A 82(6) (2010), doi:10.1103/physreva.82.063605.
arXiv:hep-ph/0007327v1  28 Jul 2000

No-Scale Scenario with Non-Universal Gaugino Masses

Shinji Komine* and Masahiro Yamaguchi†
Department of Physics, Tohoku University, 980-8578 Sendai, Japan
*[email protected]  †[email protected]

(July, 2000)

DOI: 10.1103/PhysRevD.63.035005

Phenomenological issues of the no-scale structure of the Kähler potential, which arises in various approaches to supersymmetry breaking, are reexamined. When no-scale boundary conditions are given at the Grand Unified scale and universal gaugino masses are postulated, the bino mass is quite degenerate with the right-handed slepton masses, and the requirement that the lightest superparticle (LSP) be neutral, supplemented with the slepton searches at LEP200, severely constrains the allowed mass regions of superparticles. The situation drastically changes if one moderately relaxes the assumption of universal gaugino masses. After reviewing some interesting scenarios where non-universal gaugino masses arise, we show that the non-universality diminishes the otherwise severe constraint on the superparticle masses and leads to a variety of superparticle mass spectra: in particular, the LSP can be a wino-like neutralino, a higgsino-like neutralino, or even a sneutrino, and left-handed sleptons can be lighter than right-handed ones.
I. INTRODUCTION
One of the most important phenomenological issues in supersymmetric (SUSY) Standard Models (SSMs) is to identify the mechanisms of supersymmetry breaking in the hidden sector and its mediation to the SSM sector (observable sector). Soft supersymmetry breaking masses which arise in effective theories after integrating over the hidden sector are in fact constrained from various requirements. For instance, they should lie in the range of 10^2-10^3 GeV to solve the naturalness problem in the Higgs sector which is responsible for the electroweak symmetry breaking, and satisfy mass bounds given by collider experiments.
They should also satisfy flavor-changing-neutral-current (FCNC) constraints. Furthermore, if the lightest superparticle (LSP) is stable, which is often the case, cosmological arguments require that it be electrically neutral and an SU(3)_c singlet.
The structure of the soft scalar masses is characterized by the Kähler potential. In this paper, we shall focus on a special class of the Kähler structure in which the hidden sector and the observable sector are separated from each other in the Kähler potential K as follows:
e^{-K/3} = f_{hid}(z, z^*) + f_{obs}(\phi, \phi^*) , \qquad (1)
where z and φ symbolically represent fields in the hidden and observable sector, respectively.
The first example which exhibits this form of the Kähler potential is the so-called no-scale model [1], and thus we call it the no-scale structure. The characteristic of the no-scale form of the Kähler potential is that the soft SUSY-breaking scalar masses vanish (as does the vacuum energy) and the gaugino masses are the dominant source of SUSY breaking.
Of course, this mass pattern is given at the energy scale where the soft masses are given, and the renormalization group effects due to the non-vanishing gaugino masses raise the masses of the scalar superparticles at the weak scale.
The no-scale structure of the Kähler potential is obtained in many types of models.
As we will see in the next section, such models include the (tree-level) Kähler potential of simple Calabi-Yau compactification of the heterotic string theories [2] both in the weak-and strong-coupling regimes, the splitting Ansatz of the hidden and observable sectors in the superspace density in a supergravity formalism [3], and the geometrical splitting of the two sectors in a brane scenario [4,5].
In this paper we revisit some phenomenological issues of the models with the no-scale boundary conditions. This class of models has been closely investigated in the literature.
Particular attention has been paid to the minimal case where the boundary conditions are given at the Grand Unified Theory (GUT) scale of 2 × 10^16 GeV and the gaugino masses are assumed to be universal at this energy scale. In this case the mass spectrum of superparticles is very constrained, and the bino mass is almost degenerate with those of the right-handed sleptons. In fact it was shown that the neutralino can be the LSP only when its mass is less than about 120 GeV [3,6,7]: otherwise the stau, which is charged and thus not allowed if stable, would be the LSP. We will revise this result, emphasizing that the present experimental bounds already exclude the large tan β case, leaving only tan β ≲ 8.
One of the main points in this paper is that slight modifications of the minimal scenario drastically change the mass spectrum of the superparticles. In particular, we shall devote ourselves to the case where the gaugino masses are non-universal at the GUT scale. We will first review several cases in which non-universal gaugino masses result. Then we will discuss the phenomenological implications. Most remarkably, relaxing the universality condition within a factor of two or so results in a variety of mass spectra. In particular the LSP can be not only the bino-dominant neutralino, but also a wino- or higgsino-dominant neutralino, an admixture of the gaugino and the higgsino, or even a sneutrino. Furthermore the severe upper bound on the masses of the superparticles no longer exists. Thus we expect superparticle phenomenology in this case to be much richer than in the minimal case.
The paper is organized as follows. In the subsequent section, we review some examples which possess the no-scale Kähler potential. In section 3, we re-examine the case where the no-scale boundary conditions are given at the GUT scale and gaugino masses are universal at the scale, and show that the superparticle mass spectrum is very restrictive and tight constraints already exclude much of the parameter space. In section 4, we argue that the very constrained mass spectrum can be relaxed by several ways, and then we focus on one of them, namely the case with non-universal gaugino masses. After recalling some mechanisms to realize the non-universality of the gaugino masses, we consider its phenomenological implications. The final section is devoted to conclusions.
II. NO-SCALE BOUNDARY CONDITIONS
In this section, we would like to review some models which have the no-scale Kähler potential. The first model is the no-scale model [1] with the Kähler potential
K = -3\ln(T + T^* - \phi^*\phi) \qquad (2)
and the superpotential
W = W(\phi) , \qquad (3)
where T is a hidden-sector field responsible for the SUSY breaking and φ is a generic matter field. Here and in the following we use units in which the reduced Planck scale M_{pl} = 2.4 × 10^{18} GeV is set to unity. With the above Kähler potential and superpotential, one can compute the scalar potential in supergravity and find

V = \frac{1}{3\,(T + T^* - \phi^*\phi)^2}\left|\frac{\partial W}{\partial \phi}\right|^2 , \qquad (4)

so that no supersymmetry-breaking masses arise in the scalar sector. Furthermore, the gravitino mass is not fixed at this level and can be arbitrarily heavy or light; the no-scale model is named after this property. Non-trivial dependence of the gauge kinetic functions on the field T then yields non-vanishing gaugino masses.
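For the reader's convenience, the cancellation behind Eq. (4) can be sketched from the standard N = 1 supergravity potential (this short derivation is our addition, not part of the original text):

```latex
V = e^{K}\left( K^{I\bar J}\, D_I W\, \overline{D_J W} \;-\; 3\,|W|^2 \right),
\qquad D_I W = \partial_I W + K_I W .
%% For K = -3\ln Y with Y = T + T^* - \phi^*\phi and W = W(\phi), the
%% no-scale identity K^{I\bar J} K_I K_{\bar J} = 3 makes every term
%% involving W itself cancel.  With K^{\bar\phi\phi} = Y/3 and e^{K} = Y^{-3},
%% only the holomorphic piece survives:
V = e^{K}\,\frac{Y}{3}\,\bigl|\partial_\phi W\bigr|^{2}
  = \frac{1}{3\,(T + T^* - \phi^*\phi)^{2}}
    \left|\frac{\partial W}{\partial \phi}\right|^{2}.
```

Since V carries no |W|^2 piece, the vacuum energy vanishes for any ⟨W⟩, which is why the gravitino mass is left undetermined.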
The no-scale structure also appears when one considers a Calabi-Yau compactification of the weakly coupled E_8 × E_8 heterotic string theory. If one focuses on the overall modulus field, whose scalar component represents the overall size of the compactified space, then one finds [2]

K = -\ln(S + S^*) - 3\ln(T + T^* - \phi^*\phi) , \qquad (5)

where S is the dilaton field and T is the overall modulus field. The superpotential in this case generally depends on the fields S and T. Now if T dominates the SUSY breaking, then one finds that the soft SUSY-breaking scalar masses as well as the trilinear scalar couplings (A terms) vanish, as the vacuum energy, i.e. the vacuum expectation value of the scalar potential, vanishes.
The same structure was also obtained for the heterotic M-theory [8] which corresponds to the strong coupling regime of the heterotic string theory, but this time the fields S and T have physically different meanings. In both the weak-coupling and strong-coupling cases, one has to keep in mind that quantum corrections may alter the form of the Kähler potential (5).
Severe FCNC constraints on superparticle masses may suggest that the hidden sector and the observable sector are in some way separated from each other in the Kähler potential.
An assumption often taken along this line of reasoning is the separation of the two sectors in the Kähler potential itself, namely that the Kähler potential is a sum of the contributions from the two sectors. This Ansatz generates the superparticle mass spectrum of the well-known minimal supergravity model, and non-zero scalar masses arise. It may be, however, more natural to consider the same separation in the superspace density in the supergravity Lagrangian [3], before making Weyl transformations to obtain the Einstein-Hilbert action for the gravity part.
Recently it has been pointed out that the form (1) is naturally realized in a five-dimensional setting with two separated 3-branes [4,5]. Consider five-dimensional supergravity on R^4 × S^1/Z_2. The geometry has two four-dimensional boundaries, i.e. 3-branes.
Suppose that the hidden sector is on one of the 3-branes and the observable sector is on the other. Now a dimensional reduction of the theory yields, in four dimensions, the following form of the Kähler potential
K = -3\ln\bigl(T + T^* + f_{hid}(z, z^*) + f_{obs}(\phi, \phi^*)\bigr) , \qquad (6)
where this time the real part of T stands for the length of the compactified fifth dimension.
In the brane separation scenario, the two sectors are really split geometrically, and thus not only the scalar masses but also the gaugino masses vanish. Therefore one needs to seek another mechanism to mediate the SUSY breaking that occurred in the hidden sector.
One way is to invoke superconformal anomaly to obtain loop-suppressed soft masses [4,9].
This anomaly mediation is very appealing, albeit its minimal version has negative masses squared for sleptons. Many attempts to build realistic models have been made [10], and the superparticle masses obtained are in general different from those from the no-scale boundary conditions. In ref. [11], a new U(1) gauge interaction is assumed to play the role of the mediator of the SUSY breaking. The resulting mass pattern is similar to that of gauge-mediated SUSY breaking. On the other hand, if the SM gauge sector lives in the bulk, then the gauginos can play the role of the SUSY-breaking messenger [12], and the resulting mass spectrum of the superparticles exhibits the no-scale structure with non-vanishing gaugino masses, which is given at the scale of (the inverse of) the length of the fifth dimension.
III. MINIMAL SCENARIO
In this section, we would like to discuss phenomenological consequences of the minimal no-scale scenario which has been mainly studied in the literature. The soft SUSY breaking masses in the minimal case are parameterized by:
• vanishing scalar masses: m_0 = 0
• vanishing trilinear scalar couplings: A = 0
• non-zero Higgs mixing mass: B
• non-zero universal gaugino masses: M_{1/2}
Note that these values are given at the GUT scale M_{GUT} ≃ 2 × 10^{16} GeV. In addition to these soft masses, we assume a non-zero supersymmetric higgsino mass µ. The masses at the weak scale are obtained by solving the renormalization group equations. Given M_{1/2}, requiring correct electroweak symmetry breaking relates B and µ to the Z boson mass m_Z and the ratio tan β of the two Higgs vacuum expectation values, in the usual manner.
First we roughly estimate the mass spectrum of the superparticles, neglecting the Yukawa and left-right mixing effects. The bino, wino and gluino masses at the weak scale are given by the single parameter M_{1/2} (in the following we set the renormalization point to 500 GeV):

M_1^2 \simeq 0.18\,M_{1/2}^2 , \qquad M_2^2 \simeq 0.69\,M_{1/2}^2 , \qquad M_3^2 \simeq 7.0\,M_{1/2}^2 . \qquad (7)
The soft SUSY-breaking masses of the scalars in the first two generations are also determined by the single parameter M_{1/2}:

\tilde m^2_{u_L} \simeq 5.8\,M_{1/2}^2 + 0.35\,m_Z^2 \cos 2\beta \qquad (8)
\tilde m^2_{d_L} \simeq 5.8\,M_{1/2}^2 - 0.42\,m_Z^2 \cos 2\beta \qquad (9)
\tilde m^2_{u_R} \simeq 5.4\,M_{1/2}^2 + 0.15\,m_Z^2 \cos 2\beta \qquad (10)
\tilde m^2_{d_R} \simeq 5.4\,M_{1/2}^2 - 0.077\,m_Z^2 \cos 2\beta \qquad (11)
\tilde m^2_{\ell_L} \simeq 0.51\,M_{1/2}^2 - 0.27\,m_Z^2 \cos 2\beta \qquad (12)
\tilde m^2_{\ell_R} \simeq 0.15\,M_{1/2}^2 - 0.23\,m_Z^2 \cos 2\beta \qquad (13)
\tilde m^2_{\nu} \simeq 0.51\,M_{1/2}^2 + 0.5\,m_Z^2 \cos 2\beta \qquad (14)
The terms proportional to m_Z^2 cos 2β are the U(1)_Y D-term contributions. From these equations we find that the bino and the right-handed sleptons are light. When M_{1/2} ≳ 2.8 m_Z ∼ 260 GeV, the U(1)_Y D-term contribution becomes relatively small, the charged right-handed slepton becomes the LSP, and the scenario contradicts cosmological observations.
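As an illustration (our own numerical check, not part of the original text), Eqs. (7) and (13) can be evaluated directly to locate the M_{1/2} value beyond which the right-handed slepton drops below the bino; the cos 2β → −1 limit corresponds to large tan β.

```python
import math

M_Z = 91.19  # GeV

def bino_mass(M_half):
    # Eq. (7): M_1^2 ≈ 0.18 M_{1/2}^2
    return math.sqrt(0.18) * M_half

def slepton_R_mass(M_half, cos2beta=-1.0):
    # Eq. (13): m~^2_{lR} ≈ 0.15 M_{1/2}^2 - 0.23 m_Z^2 cos(2β); cos(2β) → -1 at large tanβ
    return math.sqrt(0.15 * M_half**2 - 0.23 * M_Z**2 * cos2beta)

# first M_{1/2} (GeV) at which the right-handed slepton is lighter than the bino
crossover = next(M for M in range(100, 500)
                 if slepton_R_mass(M) < bino_mass(M))
print(crossover)  # 253, consistent with the quoted 2.8 m_Z ~ 260 GeV
```

For smaller tan β the magnitude of cos 2β decreases and the crossover moves to lower M_{1/2}, so the bound above is the most generous one.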
In Fig. 1 we show the numerical result. The region above the solid line is excluded cosmologically, since there the charged stau is the LSP. For tan β ≲ 10, where the left-right mixing effect is negligible, the region M_{1/2} ≳ 260 GeV is excluded, as estimated above. For tan β ≳ 10 the left-right mixing makes the stau lighter, and the constraint becomes stronger. In Fig. 1 we also show the value of the right-handed smuon mass; from the cosmological constraint we find that the right-handed smuon must be lighter than about 120 GeV.
On the other hand, the LEP experiments at √s = 202 GeV provide a rather strong lower bound on slepton masses [13]. For smuons, except near the threshold, the cross section for smuon pair production σ(e^+ e^- → \tilde\mu_R^+ \tilde\mu_R^-) must be smaller than 0.05 pb to survive the smuon searches at LEP. Here we impose σ(e^+ e^- → \tilde\mu_R^+ \tilde\mu_R^-) ≤ 0.05 pb for m_{\tilde\mu_R} ≤ 98 GeV and m_{\chi^0_1} ≤ 0.98 m_{\tilde\mu_R} − 4.1 GeV. This constraint excludes the region to the left of the dashed line in Fig. 1. Combining these two constraints, we conclude that the no-scale scenario with universal gaugino masses is allowed only for tan β ≲ 8 and 210 GeV ≲ M_{1/2} ≲ 270 GeV.
IV. CASE OF NON-UNIVERSAL GAUGINO MASSES
In this section, we consider modifications of the minimal boundary conditions discussed in the previous section, and argue that slight modifications will drastically change phenomenological consequences.
The reason of the very constrained superparticle mass spectrum in the minimal case is the degeneracy of the bino mass and those of the right-handed sleptons. The degeneracy is resolved if one considers the renormalization group effects above the GUT scale [14][15][16].
The point is that the right-handed slepton multiplets belong to 10-plets in the minimal choice of the matter representations in the SU(5) GUT, and the large group factor in the gauge loop contributions yields large positive corrections to the slepton masses. We should note, however, that in some realistic models to attempt to explain the masses of quarks, leptons and neutrinos, matter multiplets in different generations are often taken to be in different representations of the GUT groups [17], and then the renormalization group effects would violate the mass degeneracy among the different generations, which might cause unacceptably large FCNCs.
Secondly the stau can be the lightest superparticle in the SSM sector if it is not stable.
This is indeed the case when R-parity is violated or there exists another superparticle such as a gravitino out of the SSM sector which is lighter than the stau [18].
Another possibility is to relax the universality of the gaugino masses. In the rest of this section, we will discuss this case in detail. In the next subsection, we shall review various possibilities to realize non-universal gaugino masses. In particular we will emphasize that the non-universality of the gaugino masses does not conflict with the universality of the gauge couplings. Then we will look into phenomenological implications of the non-universality.
A. Examples of Non-Universal Gaugino Masses
Once the gaugino masses are universal at some high-energy scale where the gauge groups are unified, it is known that the gaugino mass relation M_1 : M_2 : M_3 ≃ 1 : 2 : 6 holds at low energy, irrespective of the breaking pattern of the GUT group [19,14]. Here we review some mechanisms in which the gaugino masses are non-universal from the beginning.
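The quoted low-energy relation can be reproduced with one-loop running: M_i/α_i is a one-loop RG invariant, so M_i(µ) = [α_i(µ)/α_GUT] M_{1/2}. A minimal sketch of our own follows; the MSSM one-loop beta coefficients b_i = (33/5, 1, −3), α_GUT ≈ 1/24 and M_GUT ≈ 2 × 10^16 GeV are standard inputs, not values taken from this paper.

```python
import math

ALPHA_GUT = 1.0 / 24.0
M_GUT = 2.0e16                      # GeV
b = (33.0 / 5.0, 1.0, -3.0)         # MSSM one-loop coefficients (GUT-normalized U(1)_Y)

def alpha(i, mu):
    # one-loop running: 1/alpha_i(mu) = 1/alpha_GUT + (b_i / 2π) ln(M_GUT / mu)
    return 1.0 / (1.0 / ALPHA_GUT + b[i] / (2.0 * math.pi) * math.log(M_GUT / mu))

def gaugino_masses(M_half, mu=500.0):
    # M_i / alpha_i is RG-invariant at one loop
    return [M_half * alpha(i, mu) / ALPHA_GUT for i in range(3)]

M1, M2, M3 = gaugino_masses(1.0)
print(round(M2 / M1, 2), round(M3 / M1, 2))  # ≈ 1.96 and ≈ 6.29, close to 1 : 2 : 6
```

The ratios at a renormalization point of 500 GeV come out as roughly 1 : 2 : 6.3, matching the relation above and the coefficients of Eq. (7).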
In string models with simple Calabi-Yau compactification, the gauge kinetic functions for the Standard Model gauge multiplets can be written as [20]

f_i = S + \epsilon_i T , \qquad (15)
where i = 1, 2, 3 represent the three Standard Model gauge groups and the \epsilon_i are coefficients of one-loop order determined by the details of the compactification. If \epsilon_i depends on the gauge group and the modulus field T is dominantly responsible for the SUSY breaking, we will have non-universal gaugino masses:

M_1 : M_2 : M_3 = \epsilon_1 : \epsilon_2 : \epsilon_3 . \qquad (16)
Here we would like to emphasize that large threshold corrections are necessary for the string unification scenario in the weak coupling regime, where the string scale is more than one order of magnitude larger than the naive GUT scale, and thus the appearance of the non-universal \epsilon_i terms seems to be requisite. Note again that the Kähler potential may receive quantum corrections at the same order, and the no-scale structure may be distorted.
The non-universality of the gaugino masses can be achieved in the conventional GUT approaches. Suppose that the gauge kinetic functions are written in the following form [21]:
f = c + \Sigma Z \qquad (17)
where c is a universal constant, Σ is a field which breaks the GUT group to the SM group, and Z is assumed to break the SUSY. The first term respects the GUT symmetry and thus universal for all SM gauge groups, while the second term is a symmetry breaking part which depends on each SM group. As for the gauge couplings, the first term gives a dominant contribution and hence the gauge couplings are unified up to small non-universal effects from the second term. On the other hand, the gaugino masses are assumed to come from the second term in Eq. (17). They are proportional to the vacuum expectation value of Σ and thus non-universal. The form of Eq. (17) can also be obtained through GUT threshold corrections to the gauge kinetic functions [22].
Non-universal gaugino masses can also be realized in scenarios of product GUTs [23], where the gauge group has the structure G_{GUT} × G_H [24] and the Standard Model gauge groups are obtained as diagonal subgroups of the two product groups. The idea of the product GUTs provides an elegant solution to the triplet-doublet splitting problem in the Higgs sector based on the missing doublet mechanism. The gauge coupling unification is achieved if the gauge couplings of the G_H group are sufficiently large, while the contributions to the gaugino masses from the G_H sector are generally sizable and destroy their universality.
The flipped SU(5) is another example where non-universality of the gaugino masses naturally arises [25]. The gauge group is SU(5) × U(1), and thus even if the SU(5) part gives a universal contribution, the gaugino mass from the U(1) is in general different, violating the universality of the U(1)_Y gaugino mass relative to the other two.
In summary, the non-universality of the gaugino masses is not a peculiar phenomenon even in the light of the gauge coupling unification. Motivated by this observation, we will discuss its phenomenological consequences.
B. Phenomenological Implications
In this subsection we discuss some phenomenological implications of non-universal gaugino masses. We denote by M_{1,0}, M_{2,0} and M_{3,0} the U(1)_Y, SU(2)_L and SU(3)_c gaugino masses at the GUT scale.
Neglecting the effects of the Yukawa interactions, the masses squared of the sfermions at the weak scale are evaluated to be

\tilde m^2_{u_L} \simeq 5.4\,M_{3,0}^2 + 0.47\,M_{2,0}^2 + 4.2 \times 10^{-3}\,M_{1,0}^2 + 0.35\,m_Z^2 \cos 2\beta \qquad (19)
\tilde m^2_{d_L} \simeq 5.4\,M_{3,0}^2 + 0.47\,M_{2,0}^2 + 4.2 \times 10^{-3}\,M_{1,0}^2 - 0.42\,m_Z^2 \cos 2\beta \qquad (20)
\tilde m^2_{u_R} \simeq 5.4\,M_{3,0}^2 + 0.066\,M_{1,0}^2 + 0.15\,m_Z^2 \cos 2\beta \qquad (21)
\tilde m^2_{d_R} \simeq 5.4\,M_{3,0}^2 + 0.017\,M_{1,0}^2 - 0.077\,m_Z^2 \cos 2\beta \qquad (22)
\tilde m^2_{\ell_L} \simeq 0.47\,M_{2,0}^2 + 0.037\,M_{1,0}^2 - 0.27\,m_Z^2 \cos 2\beta \qquad (23)
\tilde m^2_{\ell_R} \simeq 0.15\,M_{1,0}^2 - 0.23\,m_Z^2 \cos 2\beta \qquad (24)
\tilde m^2_{\nu} \simeq 0.47\,M_{2,0}^2 + 0.037\,M_{1,0}^2 + 0.5\,m_Z^2 \cos 2\beta . \qquad (25)
From the above equations we find that if M_{1,0} ≳ 2.0 M_{2,0}, then \tilde m^2_{\ell_R} is larger than M_1^2, M_2^2, \tilde m^2_{\ell_L} and \tilde m^2_{\nu}. Notice that the charged left-handed slepton is heavier than the neutral sneutrino, because cos 2β ≤ 0 for tan β ≥ 1. On the other hand, for M_{1,0}/M_{2,0} ≳ 2.5 the wino mass tends to be lighter than the sneutrino mass. Hence we expect that the sneutrino can be the LSP when 2 ≲ M_{1,0}/M_{2,0} ≲ 2.5, and the wino-like neutralino can be the LSP when M_{1,0}/M_{2,0} ≳ 2.5.
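These windows can be illustrated numerically. The following is a rough sketch of our own, using Eqs. (23)-(25) for the slepton masses and, assuming each gaugino mass runs with its own gauge coupling, M_1^2 ≈ 0.18 M_{1,0}^2 and M_2^2 ≈ 0.69 M_{2,0}^2 as in Eq. (7); higgsinos are ignored, i.e. |µ| is taken large, and the inputs M_{2,0} = 200 GeV and tan β = 10 are illustrative.

```python
import math

M_Z = 91.19  # GeV

def lightest(r, M20=200.0, tan_beta=10.0):
    """Lightest of {bino, wino, sneutrino, slepton_L, slepton_R} for M10 = r * M20."""
    M10 = r * M20
    c2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)   # cos(2β) ≤ 0 for tanβ ≥ 1
    masses = {
        "bino":      math.sqrt(0.18 * M10**2),
        "wino":      math.sqrt(0.69 * M20**2),
        "slepton_L": math.sqrt(0.47*M20**2 + 0.037*M10**2 - 0.27*M_Z**2*c2b),  # Eq. (23)
        "slepton_R": math.sqrt(0.15*M10**2 - 0.23*M_Z**2*c2b),                 # Eq. (24)
        "sneutrino": math.sqrt(0.47*M20**2 + 0.037*M10**2 + 0.5*M_Z**2*c2b),   # Eq. (25)
    }
    return min(masses, key=masses.get)

print(lightest(1.0), lightest(2.2), lightest(3.5))
# bino region at low M10/M20, then a sneutrino window, then the wino takes over
```

The exact boundaries of the sneutrino window depend on tan β and on the neglected µ-dependent terms, so this only reproduces the qualitative bino → sneutrino → wino sequence, not the precise 2-2.5 range quoted above.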
Next, we consider how µ affects the mass spectrum of the superparticles. The value of µ is determined by minimizing the Higgs potential. At tree level, µ is expressed in terms of the soft SUSY-breaking masses of the Higgses and tan β:

\mu^2 = \frac{\tilde m^2_{H_d} - \tilde m^2_{H_u}\tan^2\beta}{\tan^2\beta - 1} - \frac{1}{2}\,m_Z^2 . \qquad (26)

In order to obtain the values of \tilde m^2_{H_d} and \tilde m^2_{H_u}, we have to include the Yukawa interactions. For the moment we consider the low tan β region, i.e. we take only the top Yukawa coupling into account and neglect the bottom and tau Yukawa couplings for simplicity. In this case \tilde m^2_{H_d} = \tilde m^2_{\ell_L}, and we can obtain an analytic solution of the RGE for \tilde m^2_{H_u}. For tan β = 10, µ is approximately

\mu^2 = 2.1\,M_{3,0}^2 - 0.22\,M_{2,0}^2 - 0.0064\,M_{1,0}^2 + 0.0063\,M_{1,0} M_{2,0} + 0.19\,M_{2,0} M_{3,0} + 0.029\,M_{3,0} M_{1,0} - \frac{1}{2}\,m_Z^2 . \qquad (27)
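A direct evaluation of Eq. (27) (our own check, with illustrative boundary values) makes the dominance of the M_{3,0}^2 term explicit:

```python
import math

M_Z = 91.19  # GeV

def mu_from_eq27(M10, M20, M30):
    # Eq. (27), quoted for tanβ = 10 with only the top Yukawa coupling retained
    mu2 = (2.1 * M30**2 - 0.22 * M20**2 - 0.0064 * M10**2
           + 0.0063 * M10 * M20 + 0.19 * M20 * M30 + 0.029 * M30 * M10
           - 0.5 * M_Z**2)
    return math.sqrt(mu2)

print(round(mu_from_eq27(300.0, 300.0, 300.0)))  # 430 GeV
print(round(mu_from_eq27(300.0, 300.0, 150.0)))  # 182 GeV: halving M_{3,0} cuts µ by ~60%
```

With M_{1,0} and M_{2,0} held fixed, µ tracks M_{3,0} nearly linearly, since the 2.1 M_{3,0}^2 term outweighs all the others.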
From Eq. (27) we find that the size of µ is strongly correlated with the size of the gluino mass M_{3,0}.

In the non-universal case, not only the mass spectrum but also the mixing properties of the neutralinos are very different from those in the minimal case. To see this we classify the lightest neutralino χ^0_1 into five cases as follows. χ^0_1 is a linear combination of the bino, wino and higgsinos, written as

\chi^0_1 = (O_N)_{1B}\,\tilde B + (O_N)_{1W}\,\tilde W + (O_N)_{1H_d}\,\tilde H_d + (O_N)_{1H_u}\,\tilde H_u , \qquad (28)

where O_N is the orthogonal matrix diagonalizing the neutralino mass matrix. When |(O_N)_{1B}|^2 > 0.8, |(O_N)_{1W}|^2 > 0.8 or |(O_N)_{1H_d}|^2 + |(O_N)_{1H_u}|^2 > 0.8, we call the lightest neutralino bino-like, wino-like or higgsino-like, respectively. We also find that when M_{3,0}/M_{2,0} is larger than 2, i.e. |µ| is large and so is the left-right stau mixing, the sneutrino cannot be the LSP, and the stau is the LSP even when M_{1,0}/M_{2,0} is bigger than 2.5-3.
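The classification can be reproduced by diagonalizing the tree-level MSSM neutralino mass matrix in the (B̃, W̃, H̃_d, H̃_u) basis. This is a standard construction, not specific to this paper; the input values below are illustrative, and sin²θ_W ≈ 0.23 is assumed.

```python
import numpy as np

def neutralino_composition(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.23):
    """Mass and (bino, wino, Hd, Hu) fractions |(O_N)_{1i}|^2 of the lightest neutralino."""
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    # tree-level neutralino mass matrix (real and symmetric in this convention)
    M = np.array([
        [M1,         0.0,        -mZ*sw*cb,  mZ*sw*sb],
        [0.0,        M2,          mZ*cw*cb, -mZ*cw*sb],
        [-mZ*sw*cb,  mZ*cw*cb,    0.0,      -mu      ],
        [ mZ*sw*sb, -mZ*cw*sb,   -mu,        0.0     ]])
    vals, vecs = np.linalg.eigh(M)
    i = np.argmin(np.abs(vals))       # lightest mass eigenstate
    fractions = vecs[:, i]**2         # components squared; they sum to 1
    return abs(vals[i]), dict(zip(["bino", "wino", "Hd", "Hu"], fractions))

mass, frac = neutralino_composition(M1=100.0, M2=300.0, mu=500.0, tan_beta=10.0)
print(max(frac, key=frac.get))  # bino-like LSP for this hierarchy
```

Feeding in hierarchies with M_2 or |µ| at the bottom instead yields a wino-like or higgsino-like lightest state by the same 0.8 criterion.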
In the non-universal case the sfermions, as well as the neutralinos and charginos, show a variety of mass spectra. From Eqs. (23) and (24) we find that for M_{1,0}/M_{2,0} ≳ 2 the left-handed sleptons are lighter than the right-handed ones, in contrast to the universal case. For the stau, the mixing angle between \tilde\tau_L and \tilde\tau_R also depends on this ratio. In Fig. 3 we show the behavior of this mixing angle θ_τ in the M_{1,0}/M_{2,0} - M_{3,0}/M_{2,0} plane, where θ_τ is defined such that the lighter stau \tilde\tau_1 is written as \tilde\tau_1 = cos θ_τ \tilde\tau_L + sin θ_τ \tilde\tau_R. Around M_{1,0}/M_{2,0} ≃ 2 the right-handed stau is as heavy as the left-handed stau, and they mix maximally (θ_τ = 40°-50°), as expected. The squark masses depend strongly on M_3, so the mass relations between squarks and sleptons change drastically; as we shall see later, some of the squarks can be lighter than the sleptons.
In Fig. 4 we show the same type of graph as in Fig. 3. We shall next investigate the mass spectrum of superparticles in detail by choosing some representative parameter sets, and discuss the phenomenology for each parameter set.
The points we choose are listed in Table I. In Table II we show the composition of χ^0_1 at each point. At points A and E the LSP is the wino-like neutralino. At points B, C and F the LSP is the higgsino-like neutralino. At point D the tau sneutrino is the LSP.
In Table III we list the mass spectrum of the superparticles.
The wino-like neutralino is the LSP when M_{1,0}/M_{2,0} ≳ 2 and M_{3,0}/M_{2,0} ≳ 1. In the wino-like neutralino LSP case, the lighter chargino and the lightest neutralino are in general highly degenerate. This feature and the resulting phenomenology have been studied in [25-30]. On top of this, our scenario also predicts that the right-handed sfermions are heavier than the left-handed ones because of the inequality M_{1,0}/M_{2,0} ≳ 2, and that the colored superparticles are heavier than the other superparticles because of the inequality M_{3,0}/M_{2,0} ≳ 1 (see points A and E in Table III). The former may be an interesting feature. The anomaly-mediated SUSY breaking (AMSB) scenario also predicts a wino-like LSP. However, in the minimal AMSB model, where a universal mass is added to all scalars to avoid negative slepton masses squared, the left-handed and right-handed sleptons of the first two generations tend to be degenerate [29]. Thus we can distinguish the two scenarios with a wino LSP, the no-scale scenario with non-universal gaugino masses and the minimal AMSB, by measuring these slepton masses.
The higgsino-like neutralino is the LSP when M_{3,0}/M_{2,0} ≲ 0.5, regardless of M_{1,0}/M_{2,0}.
In the higgsino-like neutralino LSP case, the mass difference between the higgsino-like neutralino LSP and the chargino NLSP is generally small. The resulting phenomenology has been studied in [31-33]. Furthermore, in our case the sleptons are as heavy as the squarks, owing to the inequality M_{3,0}/M_{2,0} ≲ 0.5. In particular, the lighter stop and sbottom can be lighter than some of the sleptons. Indeed, at points B, C and F the lighter stop mass is comparable to the slepton masses, and all superparticle masses are below 400-450 GeV.
In the non-universal scenario the tau sneutrino can also be the LSP, when 2 ≲ M_{1,0}/M_{2,0} ≲ 2.5, 1 ≲ M_{3,0}/M_{2,0} ≲ 5 and tan β ≲ 15. From the first inequality we find that the mass difference between the left-handed and right-handed squarks of the first two generations is small, and the left-right mixing angle of the stau is large, as shown in Fig. 3.
V. CONCLUSIONS
In this paper we have revisited the no-scale scenario, where vanishing SUSY-breaking scalar masses and trilinear scalar couplings are given at the GUT scale. When the gaugino masses are universal, the renormalization group analysis implies that the bino mass and the right-handed slepton masses are close to each other. This degeneracy leads to an upper bound on the LSP mass of about 120 GeV: above it the LSP would be the charged stau, which is excluded cosmologically. Furthermore, the negative results of the slepton searches at LEP200 have already excluded a large portion of the parameter space, including the large tan β region, leaving tan β ≲ 8.
We next considered various ways out to avoid the aforementioned severe constraints.
Among them, we concentrated on the case of non-universal gaugino masses. In fact, non-universality of the gaugino masses is by no means a peculiar phenomenon; rather, it is realized in various scenarios, including some approaches to grand unification. We investigated some phenomenological implications of the no-scale model with non-universal gaugino masses. We found that there is no longer a severe constraint on the superparticle masses and that the superparticle mass spectrum has a much richer structure. In particular, the LSP can be the wino-like neutralino, the higgsino-like neutralino, or even the sneutrino.
We also found that, unlike in the conventional universal gaugino mass case, the left-handed slepton masses can be lighter than the right-handed slepton masses. We expect that the resulting collider signatures will be quite different from those of the usual scenario with universal gaugino masses. Further studies along this direction should be encouraged.
…σ(e+e− → μ̃_R+ μ̃_R−) must be smaller than 0.05 pb to survive the smuon searches at LEP. Here we impose σ(e+e− → μ̃_R+ μ̃_R−) ≤ 0.05 pb for m_μ̃R ≤ 98 GeV and m_χ0_1 ≤ 0.98 m_μ̃R − 4.1 GeV. This constraint excludes the left side of the dashed line in Fig. 1. Combining these two constraints, we conclude that the no-scale scenario with universal gaugino masses is allowed only for tan β ≲ 8 and 210 GeV ≲ M_1/2 ≲ 270 GeV.

…T × G_H, and the Standard Model gauge groups are obtained as diagonal subgroups of the two product groups. The idea of the product GUTs provides an elegant solution to the triplet-doublet splitting problem in the Higgs sector, based on the missing-doublet mechanism. Gauge coupling unification is achieved if the gauge couplings of the G_H group are sufficiently large, while the contributions to the gaugino masses from the G_H sector are generally sizable and destroy their universality.

…gaugino masses. At the cutoff scale, all scalar masses vanish as in the minimal case, while the bino, winos and gluinos possess nonzero masses M_1,0, M_2,0 and M_3,0, respectively, and they are in general no longer degenerate. The soft SUSY-breaking mass parameters at the weak scale are obtained by solving the RGEs; in this paper we use the one-loop RGEs. With the soft SUSY-breaking masses, we evaluate the physical masses using the tree-level potential. We also obtain the value of µ from the electroweak symmetry breaking condition with the tree-level Higgs potential. Before showing numerical results, we discuss the mass spectrum of the superparticles when the Yukawa effects on the RG evolution and the left-right mixings are neglected. The relations between the gaugino masses at the GUT scale M_GUT and at the electroweak scale M_EW are…

…gluino mass M_3,0, and |µ| becomes large as M_3,0 increases. Hence, when M_3,0 is large enough, the left-right mixing in the slepton sector is important, which makes one of the staus, τ_1, lighter than the sneutrino. On the other hand, if M_3,0 is small enough, |µ| becomes smaller than the bino, wino, slepton and sneutrino masses, and then a higgsino-like neutralino can be the LSP. Actually, for tan β = 10, from eq. (27) we find that |µ| is smaller than M_2 if M_3,0/M_2,0 ≲ 0.5 is satisfied.

…we call these parameter regions the 'bino region', 'wino region' or 'higgsino region', respectively. When |(O_N)_1B|² < 0.8, |(O_N)_1W|² < 0.8 and |(O_N)_1B|² + |(O_N)_1W|² > 0.8, we call the region the 'bino-wino mixed region'. The remaining parameter region is called the 'mixed region'. In Fig. 2 we show the composition of the LSP when we relax the gaugino mass universality. Here we take M_2,0 = 200 GeV, tan β = 10 and sgn(µ) = +1. Recall that for universal gaugino masses at the GUT scale the LSP is the lighter stau, so that this parameter set is excluded. Once we relax the universality, however, the situation changes drastically, and the composition of the LSP behaves as discussed with the approximate expressions eq. (18)-eq. (25). The lightest neutralino can be the LSP in a large parameter region and, unlike in the universal case, it can be wino-like, higgsino-like or an admixture of them, as well as bino-like. When M_1,0/M_2,0 ≳ 2.5 and M_3,0/M_2,0 ≳ 1, the wino is the LSP. As the ratio M_3,0/M_2,0 decreases, |µ| becomes comparable to M_1 and M_2 and the lightest neutralino is an admixture of bino, wino and higgsinos. When M_3,0/M_2,0 becomes smaller than about 0.5, the dominant component of the lightest neutralino is the higgsino. We also find that in the region 2 ≲ M_1,0/M_2,0 ≲ 2.5 the tau sneutrino is indeed the LSP.

Fig. 4 shows the same as Fig. 2, except for tan β = 35. In this case the Yukawa interaction and the left-right mixing make the stau lighter. Although the wino-like, higgsino-like or mixed neutralino is the LSP in a large parameter region, the sneutrino cannot be the LSP here. To see the relation among the stau mass, the tau sneutrino mass and tan β, we plot in Fig. 5 the composition of the LSP in the M_3,0/M_2,0 − tan β plane, fixing M_1,0/M_2,0 = 2.5. This figure shows that the tau sneutrino can be the LSP for tan β ≲ 15, where the left-right mixing is not so sizable. We have checked that these features are insensitive to the signs of µ and the gaugino masses.

FIG. 1. Allowed region of the minimal no-scale scenario. The horizontal axis is the universal gaugino mass at the GUT scale M_1/2 and the vertical axis is tan β. In the region above the solid line the stau is the LSP, which should be cosmologically excluded. The left side of the dashed line is excluded by the smuon searches of the LEP experiments at √s = 202 GeV. We also show contours of the right-handed smuon mass.

FIG. 2. The composition of the LSP in the M_1,0/M_2,0 − M_3,0/M_2,0 plane, for M_2,0 = 200 GeV and tan β = 10. The classification of the neutralino LSP is given in the text.

FIG. 3. Mixing angle θ_τ in the M_1,0/M_2,0 − M_3,0/M_2,0 plane for M_2,0 = 200 GeV and tan β = 10. We also show the regions where the stau and the tau sneutrino are the LSP. The definition of the mixing angle is given in the text.

FIG. 4. The same as Fig. 2, but for M_2,0 = 200 GeV and tan β = 35.

FIG. 5. The composition of the LSP in the M_3,0/M_2,0 − tan β plane, for M_2,0 = 200 GeV and M_1,0/M_2,0 = 2.5.

ACKNOWLEDGMENT

We would like to thank Y. Nomura, T. Moroi and Y. Yamada for useful discussions. This work was supported in part by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture of Japan, on Priority Area 707 "Supersymmetry and Unified Theory of Elementary Particles", and by the Grants-in-Aid No. 11640246 and No. 12047201.

TABLES

TABLE I. Gaugino masses at the GUT scale for each point. All dimensionful parameters are given in GeV.

          Point A   Point B   Point C   Point D   Point E   Point F
M_1,0        800      1000       400       500       800       600
M_2,0        200       250       200       200       200       200
M_3,0        400       125       100       300       300       100
tan β         10        10        10        10        35        35

TABLE II. Components of the lightest neutralino χ0_1, which is a linear combination of bino, wino and higgsinos: χ0_1 = (O_N)_1B B + (O_N)_1W W + (O_N)_1Hd H_d + (O_N)_1Hu H_u.

            Point A   Point B   Point C   Point D   Point E   Point F
(O_N)_1B    -0.017     0.0835    0.241     0.092    -0.022     0.126
(O_N)_1W     0.987    -0.478    -0.457    -0.967     0.973    -0.445
(O_N)_1Hd   -0.149     0.689     0.710     0.219    -0.213     0.729
(O_N)_1Hu    0.054    -0.539    -0.479    -0.096     0.084    -0.504

TABLE III. Mass spectrum for each point. All values are given in GeV.

particle   Point A   Point B   Point C   Point D   Point E   Point F
χ0_1          160       106        70       156       159        72
χ0_2          336       152       126       209       332       120
χ0_3          594       248       169       444       438       202
χ0_4          603       430       222       457       453       267
χ+_1          160       113        81       157       159        81
χ+_2          602       253       216       457       449       212
u_L           929       336       263       701       702       265
d_L           932       345       275       706       707       277
u_R           941       384       249       700       718       274
d_R           925       316       237       692       697       244
ν             196       250       144       155       196       167
e_L           212       262       164       174       212       185
e_R           312       389       161       198       312       236
t_1           742       233       164       538       544       176
t_2           925       409       339       721       709       338
b_1           855       293       230       646       602       200
b_2           922       317       252       691       676       252
ν_τ           195       249       143       154       183       156
τ_1           205       261       154       159       166       169
τ_2           314       387       169       209       316       225
g            1053       329       263       790       790       263
REFERENCES

[1] J. Ellis, C. Kounnas and D.V. Nanopoulos, Nucl. Phys. B247, 373 (1984).
[2] E. Witten, Phys. Lett. B155, 151 (1985).
[3] K. Inoue, M. Kawasaki, M. Yamaguchi and T. Yanagida, Phys. Rev. D45, 328 (1992).
[4] L. Randall and R. Sundrum, Nucl. Phys. B557, 79 (1999).
[5] M. Luty and R. Sundrum, hep-th/9910202.
[6] S. Kelly, J.L. Lopez, D.V. Nanopoulos, H. Pois and K.-J. Yuan, Phys. Lett. B273, 423 (1991).
[7] M. Drees and M.M. Nojiri, Phys. Rev. D45, 2482 (1992).
[8] T. Banks and M. Dine, Nucl. Phys. B505, 445 (1997); H.P. Nilles, M. Olechowski and M. Yamaguchi, Phys. Lett. B415, 24 (1997); Nucl. Phys. B530, 43 (1998); Z. Lalak and S. Thomas, Nucl. Phys. B515, 55 (1998); A. Lukas, B.A. Ovrut and D. Waldram, Nucl. Phys. B532, 43 (1998); Phys. Rev. D57, 7529 (1998); K. Choi, H.B. Kim and C. Muñoz, Phys. Rev. D57, 7521 (1998).
[9] G.F. Giudice, M.A. Luty, H. Murayama and R. Rattazzi, JHEP 9812, 027 (1998).
[10] See for example A. Pomarol and R. Rattazzi, JHEP 9905, 013 (1999); Z. Chacko, M.A. Luty, I. Maksymyk and E. Pontón, JHEP 0004, 001 (2000); E. Katz, Y. Shadmi and Y. Shirman, JHEP 9908, 015 (1999); K.-I. Izawa, Y. Nomura and T. Yanagida, Prog. Theor. Phys. 102, 1181 (1999); I. Jack and D.R.T. Jones, Phys. Lett. B482, 167 (2000).
[11] Y. Nomura and T. Yanagida, hep-ph/0005211.
[12] D.E. Kaplan, G.D. Kribs and M. Schmaltz, hep-ph/9911293; Z. Chacko, M. Luty, A.E. Nelson and E. Pontón, JHEP 0001, 003 (2000); M. Schmaltz and W. Skiba, hep-ph/0001172.
[13] G. Ganis, "Standard SUSY at LEP", talk presented at SUSY2K, 8th Int. Conf. on Supersymmetry in Physics, CERN, Geneva, Switzerland, June 26 - July 1, 2000.
[14] Y. Kawamura, H. Murayama and M. Yamaguchi, Phys. Rev. D51, 1337 (1995).
[15] N. Polonsky and A. Pomarol, Phys. Rev. D51, 6532 (1995).
[16] M. Schmaltz and W. Skiba, hep-ph/0004210.
[17] See for example J. Sato and T. Yanagida, Phys. Lett. B430, 127 (1998); Y. Nomura and T. Yanagida, Phys. Rev. D59, 017303 (1999); M. Bando and T. Kugo, Prog. Theor. Phys. 101, 1313 (1999).
[18] T. Moroi, H. Murayama and M. Yamaguchi, Phys. Lett. B303, 289 (1993); T. Gherghetta, G.F. Giudice and A. Riotto, Phys. Lett. B446, 28 (1999); T. Asaka, K. Hamaguchi and K. Suzuki, hep-ph/0005136.
[19] Y. Kawamura, H. Murayama and M. Yamaguchi, Phys. Lett. B324, 52 (1994).
[20] A. Brignole, L.E. Ibáñez and C. Muñoz, Nucl. Phys. B422, 125 (1994); Erratum ibid. B437, 747 (1995).
[21] J. Ellis, K. Enqvist, D. Nanopoulos and K. Tamvakis, Phys. Lett. 155B, 381 (1985); G. Anderson, C.-H. Chen, J.F. Gunion, J. Lykken, T. Moroi and Y. Yamada, in New Directions for High Energy Physics, Snowmass 96, edited by D.G. Cassel, L. Trindle Gennari and R.H. Siemann (Stanford Linear Accelerator Center, Menlo Park, CA, 1997); K. Huitu, Y. Kawamura, T. Kobayashi and K. Puolamäki, Phys. Rev. D61, 035001 (1999); G. Anderson, H. Baer, C.-H. Chen, P. Quintana and X. Tata, Phys. Rev. D61, 095005 (2000).
[22] J. Hisano, T. Goto and H. Murayama, Phys. Rev. D49, 1446 (1994).
[23] T. Hotta, K.-I. Izawa and T. Yanagida, Phys. Rev. D53, 3913 (1996); Prog. Theor. Phys. 95, 949 (1996); Phys. Rev. D54, 6970 (1996).
[24] N. Arkani-Hamed, H.-C. Cheng and T. Moroi, Phys. Lett. B387, 529 (1996); K. Kurosawa, Y. Nomura and K. Suzuki, Phys. Rev. D60, 117701 (1999).
[25] S. Mizuta, D. Ng and M. Yamaguchi, Phys. Lett. B300, 96 (1993).
[26] D. Pierce and A. Papadopoulos, Nucl. Phys. B430, 278 (1994); Phys. Rev. D50, 565 (1994).
[27] C.-H. Chen, M. Drees and J.F. Gunion, Phys. Rev. Lett. 76, 2002 (1996); Phys. Rev. D55, 330 (1997).
[28] J.L. Feng, T. Moroi, L. Randall, M. Strassler and S. Su, Phys. Rev. Lett. 83, 1731 (1999).
[29] J.L. Feng and T. Moroi, Phys. Rev. D61, 095004 (2000).
[30] J.F. Gunion and S. Mrenna, Phys. Rev. D62, 015002 (2000).
[31] S. Mizuta and M. Yamaguchi, Phys. Lett. B298, 120 (1993).
[32] G.F. Giudice and A. Pomarol, Phys. Lett. B372, 253 (1996).
[33] M. Drees, M.M. Nojiri, D.P. Roy and Y. Yamada, Phys. Rev. D56, 276 (1997).
Physical Action Categorization using Signal Analysis and Machine Learning

Asad Mansoor Khan, Ayesha Sadiq, Sajid Gul Khawaja, Muhammad Usman Akram
College of E&ME, National University of Sciences and Technology, Islamabad, Pakistan

Norah Saleh Alghamdi
Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia

Ali Saeed
National University of Modern Sciences, Pakistan

arXiv: 2008.06971 (https://arxiv.org/pdf/2008.06971v2.pdf)

Keywords: sEMG, Signal Processing, Machine Learning, Physical Action Classification
Abstract: The daily life of thousands of individuals around the globe suffers due to physical or mental disability related to limb movement. The quality of life for such individuals can be improved by the use of assistive applications and systems. In such a scenario, mapping physical actions from movement to a computer-aided application can lead the way to a solution. Surface electromyography (sEMG) presents a non-invasive mechanism through which we can translate physical movement into signals for classification and use in applications. In this paper, we propose a machine learning based framework for the classification of 4 physical actions. The framework examines features from different modalities, with contributions from the time domain, frequency domain, higher order statistics and inter-channel statistics. Next, we conducted a comparative analysis of k-NN, SVM and ELM classifiers using the feature set. The effect of different combinations of features has also been recorded. Finally, the SVM and 1-NN based classifiers on a subset of features give accuracies of 95.21% and 95.83%, respectively. Additionally, we show that dimensionality reduction by use of PCA leads to only a minor drop of less than 5.55% in accuracy while using only 9.22% of the original feature set. These findings are useful for algorithm designers to choose the best approach, keeping in mind the resources available for execution of the algorithm.
Introduction
In the modern day, physical disabilities present a major problem for daily life, chiefly because many different factors contribute to them. These factors include gait disorders or limb impairment due to the aging process [1], and occupational injuries or trauma, such as sports accidents, which hinder the quality of life. Stroke is another major cause of limb disability in adults [2]. Most of these sufferers may require partial limb support or a prosthetic limb to relieve their daily suffering. Apart from these factors, neurological disorders are also a leading cause of accidents [3]. Epilepsy, a major neurological disorder, is caused by unusual nerve cell activity in the brain [4] and affects almost 50 million people around the globe [5]. This brings to light the immense need for a system that can categorize physical signals, either for prosthetic limb design or for timely notification of epileptic attacks for injury prevention.
In this regard, a possible solution is to sense the intended motion and take decisions accordingly. Surface electromyography (sEMG) has been characterized as the best non-invasive modality for activity analysis [6]. Electromyography (EMG) refers to recordings of the electrical activity produced by skeletal muscles. Fig 1. shows a side-by-side comparison of sEMG signals collected for normal and abnormal activities, such as clapping and elbowing respectively. EMG recordings can be used to analyze the biomechanics of human or animal movement. For this reason, sEMG is used both in the identification of ailments of the muscular system and other clinical or biomedical applications, and in the development of modern human-computer interaction [3]. These signals can be analyzed for the detection of medical abnormalities [7,8], emotion detection [9], and the control of prosthetic arms, hands and lower limbs [10]. Various approaches have been suggested for analyzing EMG signals. A new classification algorithm for multiple physical actions using sEMG is proposed in this paper. In the proposed approach, a window of the raw sEMG signal is first pre-processed to enhance the variability between signals of normal and abnormal activities. The processed window is then forwarded to the feature extractor where, among the well-known features for EMG, inter-channel correlation and frequency based signatures for the classes are also calculated. The feature vector is then normalized and finally fed to a classifier, which provides the output label of the signal. The proposed methodology results in high classification accuracy even with a simple classifier and a lower number of features compared to previous methods. The rest of the paper is structured in the following manner: Section 2 discusses state-of-the-art algorithms proposed by researchers for the detection of physical activity using EMG signals.
The proposed methodology is presented in Section 3, followed by the dataset description, experimentation and results in Section 4. Finally, the discussion of these results and the conclusion are given in Sections 5 and 6, respectively.
Literature Review
Surface electromyography is a discipline that studies the electrical activity of muscles in order to recognize morphological variations of the neuromuscular system. The sEMG signal is the electrical activity produced by nervous and muscular events, recorded from the skin surface using electrodes, and it can reflect the functionality of nerves and muscles in real time. Nevertheless, how to efficiently extract features from electromyography signals in order to achieve accurate identification of events is the main issue in attaining precise rehabilitation treatment and in preparing electromyography-controlled prostheses.
An extreme learning machine algorithm based on EMG signals, using the bispectrum and quadratic phase couplings (QPCs) over the whole dataset to classify aggressive and normal activity, is proposed in [11]. An improved EMD method based on median filtering and feature extraction for the classification of ALS (amyotrophic lateral sclerosis) and normal EMG signals is demonstrated in [12]. A novel approach using the time-frequency representation of an EMG signal and a convolutional neural network to distinguish between normal and aggressive actions is suggested in [13]. Discrimination of aggressive actions from normal actions based on an adaptive neuro-fuzzy inference system (ANFIS) and feature extraction from EMG signals is described in [14]. In that study, eight channels recorded the EMG signals of 10 aggressive and 10 normal actions of three males and one female, which were then analyzed to classify normal and aggressive actions. An improved classification framework that depends upon modified spectral-moment based features and an inter-channel correlation feature is proposed in [15]. Identification of physical activities from sEMG signals based on variational mode decomposition (VMD), statistical feature extraction and a multi-class least squares support vector machine is discussed in [16]. Classification of physical actions as normal and aggressive using quadratic phase coupling and an artificial neural network is described in [17]. An EMG pattern recognition system in which multi-scale principal component analysis is used for de-noising, with discrete wavelet transform based feature extraction, is proposed in [18]. Another approach uses time-frequency images as input to pre-trained convolutional neural networks: deep feature extraction using AlexNet and VGG-16 together with an SVM is used for the classification of EMG based physical activities in [19].
An extreme learning machine (ELM) classifier based on flexible analytic wavelet transform features for the identification of physical actions is proposed in [20]. A non-invasive technique to provide reference control signals for prosthetic hands based on sEMG signals, which carries out feature extraction using wavelet analysis to obtain information in the time-frequency domain and performs classification using artificial neural networks (ANNs), is provided in [21]. An evaluation of sEMG quality by comparing five different classifiers, two supervised and three unsupervised artificial neural networks, is described in [22]; the unsupervised networks achieve better classification accuracy, greater than 98%, compared to the supervised ones. The use of convolutional neural networks (CNNs) for feature extraction from sEMG signals and classification of actions is demonstrated in [23]: because of the local connectivity and weight sharing properties of CNNs, they exhibit good translation invariance, and consequently the spectrogram obtained by evaluating the sEMG signal is used as an image input to the CNN.
Methodology
In our paper, we address the problem of M-class classification of physical actions based on C-channel sEMG data. The proposed methodology is divided into pre-processing of raw sEMG signals, feature extraction and selection, feature normalization, and classification into M classes, as illustrated by the system-level flow diagram of the proposed methodology.
Pre-Processing and Segmentation
The first step in our proposed methodology is to pre-process and segment each channel of the sEMG signal. In order to enhance the inter-class variation between aggressive and normal actions, the upper envelope of the sEMG signal is calculated using the Hilbert transform. Qualitatively, the envelope of a signal represents the upper and lower boundary within which the signal is contained.
The resultant signal after envelope calculation is subjected to segmentation, where windows of length W with an overlap factor of 25% are generated. The length W of the window is controlled by the sampling frequency at which the signal is acquired. Note that after segmentation each C-channel sEMG signal is converted into Ns samples of length W having C channels. Feature extraction is then performed on each of the sub-windows of every pattern.
Feature Vector Generation
In order to generate a valid feature vector, our proposed methodology makes use of diverse features from the time and frequency domains, which motivates us to employ feature selection to trim the number of features. Moreover, feature normalization is conducted on the selected features so that each feature contributes approximately proportionately to the final distance during classification.
Feature Extraction
The feature vector for our proposed methodology contains signatures from various modalities including Time Domain, Frequency Domain and Inter-Channel Statistics. These features are elaborated in the relevant subsections.
Inter Channel Statistics
The first subgroup of our feature vector consists of statistics that are calculated between the corresponding w-th segments of all channels. The channel-wise pairing of the channels is shown in Tab 1. In all, a total of 56 features belong to this subgroup; their division is discussed below.

I. Maximum Similarity Index
These statistics are based on the maximum cross-correlation [24] among the corresponding segments of two channels a and b of an EMG signal, which is defined as

    ρ_a,b = max_τ ( R_ab(τ) )

where R_ab(τ) is the cross-correlation between the segments s_a and s_b at lag τ. The above expression represents the maximum correlation between the segments of the two channels. Since we have 8 channels, performing the maximum correlation for every channel pair yields 28 values.
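A minimal sketch of this feature, assuming a zero-mean, std-normalized cross-correlation (the exact normalization used by the authors is not spelled out, so this is an assumption):

```python
import numpy as np
from itertools import combinations

def max_similarity_index(seg_a, seg_b):
    """Maximum of the normalized cross-correlation of two segments."""
    a = (seg_a - seg_a.mean()) / (seg_a.std() * len(seg_a))
    b = (seg_b - seg_b.mean()) / seg_b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

rng = np.random.default_rng(1)
segment = rng.standard_normal((8, 1000))   # one 8-channel segment
features = [max_similarity_index(segment[i], segment[j])
            for i, j in combinations(range(8), 2)]
print(len(features))  # 28 channel pairs
```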
II. Covariance Index

These statistics belong to the class of higher order statistics and represent the cross-correlation between the respective segments of all channel pairs.
Power Spectral Density
The spectral features were previously proposed for the identification of EMG patterns in [25]. In the proposed technique, the spectral band-power features are extracted using Burg's method. For each channel of an EMG pattern, assuming a model order ν, the power spectral density is estimated as

    P_k(w) = σ²_burg |H_k(w)|²

Here σ²_burg is the error variance computed in Burg's method and H_k(w) is the frequency response of the fitted autoregressive model for channel k. Finally, the PSD features are evaluated by dividing the spectrum into bands and computing the respective powers in those bands as

    m_b = Σ_{w ∈ band b} P_k(w)

Thus one band-power feature is obtained per band for each segment of every channel, leading to a block of 8 × (number of bands) spectral features.
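The band-power computation can be sketched as follows; note that a plain FFT periodogram is substituted here for Burg's AR estimator purely for brevity, and the number of bands is an assumed parameter:

```python
import numpy as np

def band_powers(segment, n_bands=8):
    """Split the one-sided power spectrum into n_bands equal chunks and
    sum the power in each; a plain periodogram stands in for the Burg
    AR-based PSD estimate used in the paper."""
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    return np.array([chunk.sum() for chunk in np.array_split(psd, n_bands)])

seg = np.random.default_rng(2).standard_normal(1000)
m = band_powers(seg)
print(m.shape)  # (8,)
```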
3.2.1.3. Log Moments of Fourier Spectra
The logarithms of moments and their ratios from the frequency domain are computed for the EMG segments based on [27]. The m-th frequency-domain moment of the spectrum is defined as [28]

    M_m = Σ_{j=1}^{L} (k_j)^m |X_i(k_j)|

where |X_i(k_j)| represents the magnitude response of the w-th segment of the i-th channel at frequency bin k_j. Based on this expression, a total of 17 features are calculated using the given expressions.

In addition, the following time-domain features are computed for every segment:
• Maximum Fractal Length (MFL): when the smallest scale is set to one, the definition of MFL resembles a modified version of the waveform length (WL), obtained by applying the RMS and logarithm functions to the sample-to-sample differences.
• Average Amplitude Change (AAC): nearly equivalent to the WL feature, except that the wavelength is averaged over the segment.
• Kurtosis: a statistical measure that describes the distribution of the segment samples and identifies the tendency of the data to peak; it is determined from the peakedness of the data distribution.
• Skewness: a measure of the asymmetry (inclination) of the distribution of the segment samples.
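The listed time-domain features can be sketched with NumPy and SciPy using standard textbook definitions (the paper does not print its exact formulas, so these are assumptions):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x):
    """WL, MFL, AAC, kurtosis and skewness of one segment."""
    dx = np.diff(x)
    wl  = np.sum(np.abs(dx))                  # waveform length
    mfl = np.log10(np.sqrt(np.sum(dx ** 2)))  # maximum fractal length
    aac = np.mean(np.abs(dx))                 # average amplitude change
    return np.array([wl, mfl, aac, kurtosis(x), skew(x)])

feats = time_domain_features(np.random.default_rng(3).standard_normal(1000))
print(feats.shape)  # (5,)
```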
Higher Order Statistics
This subgroup comprises two higher-order-statistics based features, which are calculated for every w-th segment of all M channels. First, the second-order cumulant, representing an alternative description of the distribution, is calculated for each channel, leading to eight features. Second, the fourth-order cross-cumulant is calculated between channels, leading to two features for the upper limb and the lower limb, respectively.
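The two cumulant-based features can be sketched with the standard zero-mean cumulant formulas (the authors do not print their expressions, so these textbook definitions are assumptions):

```python
import numpy as np

def second_order_cumulant(x):
    """C2 of a zero-mean segment reduces to its variance."""
    x = x - x.mean()
    return float(np.mean(x ** 2))

def fourth_order_cross_cumulant(x, y):
    """cum4(x, x, y, y) for zero-mean data:
    E[x^2 y^2] - E[x^2] E[y^2] - 2 E[xy]^2."""
    x, y = x - x.mean(), y - y.mean()
    return float(np.mean(x**2 * y**2)
                 - np.mean(x**2) * np.mean(y**2)
                 - 2.0 * np.mean(x * y) ** 2)

g = np.random.default_rng(6).standard_normal(200_000)
c2 = second_order_cumulant(g)           # close to 1 for unit-variance data
c4 = fourth_order_cross_cumulant(g, g)  # near zero for Gaussian data
```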
The calculation of all the above features leads to a feature vector of length 303, which is summarized in Tab 2.
Feature Normalization
After calculation and selection of features, the resultant feature vector is normalized before proceeding to classification. Z-score normalization has been employed on the feature vector, using the equation below, so that every feature is standardized to zero mean and unit variance.
f′ = (f − μ) / σ
Here f represents the feature value, and σ and μ represent the standard deviation and mean values for feature f.
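A minimal sketch of the column-wise z-score step; the small eps guard is an addition (not in the paper) to avoid division by zero for constant features:

```python
import numpy as np

def zscore_normalize(F, eps=1e-12):
    """Standardize each feature column of F to zero mean and unit variance."""
    F = np.asarray(F, dtype=float)
    mu = F.mean(axis=0)
    sigma = F.std(axis=0)
    return (F - mu) / (sigma + eps)
```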
Classifier
The extracted feature vector described in section 3.2 is passed to a classifier to predict the action class out of 4 possible actions. In our work, we have implemented various classification schemes, with a focus on using a simple classifier. The implemented and tested classifiers include K-Nearest Neighbor (K-NN), Support Vector Machine (SVM) and Extreme Learning Machines (ELM). The proposed methodology explained in this section is applied to the locally collected sEMG data, and the obtained results are discussed extensively for the performance evaluation of the methodology.
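In practice, library implementations (e.g. scikit-learn) would typically be used for these classifiers; as a dependency-free illustration, the 1-NN decision rule behind the headline accuracy figures can be sketched as:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=1):
    """Predict labels by majority vote among the k nearest training samples
    (Euclidean distance); k=1 reduces to nearest-neighbor lookup."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nearest])
```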
Experimentation and Results
Dataset
The physical action dataset for the initial analysis of our proposed methodology is self-generated using the Thalmic Myoware Armband. The dataset contains sEMG data of one male subject performing four actions; the details of these actions are given in Tab 2 below. The sEMG collected from these recordings has 8 channels, where each channel corresponds to the time series from one electrode and consists of approximately 10000 values.
Tab 2. Summary of physical actions

Actions: Typing, Rest, Lifting, Pushups
Experimentation
The physical activity dataset mentioned previously contains approximately 10000 samples per action per subject. In the first step these samples are subdivided into multiple overlapping segments, where the length of each segment is set to 1000 with an overlap of 25%. This division of each sample against each subject and action leads to a sample space of 974 samples, with between 45 and 52 samples per action depending upon the length of the original recording. This sample space is used to extract a feature vector for each segment using the different modalities discussed in section 3. The experimental setup for validation consists of measuring classification performance for this 4-class problem. SVM, K-NN and ELM have been used to classify using the complete set of features and its subsets; the feature set has been subdivided into subsets containing each individual feature type and its possible combinations to obtain the best classification rate. To measure the performance of our proposed algorithm, different parameters including accuracy, sensitivity, specificity, F-measure and precision have been calculated. 10-fold cross-validation for SVM and K-NN is also used to classify the related data into 20 classes (all actions) and 10 classes each (normal and aggressive actions separately). Initially, the complete feature vector of length 303 is passed to the classifiers. For the ELM classifier, 8 different activation functions are used in the hidden neurons; the number of hidden neurons for our problem has been set to 200, selected after experimentation. A data division of 80-20 has been used with 20 tries with shuffled data and its division to avoid any learning bias in the classification. Tab 4 shows the training accuracies against the different activation functions.
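The segmentation step above (window length 1000, 25% overlap, hence a step of 750 samples) can be sketched as:

```python
import numpy as np

def segment_signal(x, win_len=1000, overlap=0.25):
    """Split a 1-D recording into overlapping windows.
    With win_len=1000 and overlap=0.25 the window start advances by 750 samples."""
    step = int(win_len * (1.0 - overlap))
    starts = range(0, len(x) - win_len + 1, step)
    return np.stack([x[s:s + win_len] for s in starts])
```

A 10000-sample recording thus yields 13 segments of 1000 samples each, consistent with the sample counts quoted in the text.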
Moreover, a thorough analysis of SVM for the classification of physical actions using the complete feature set and subsets of features is shown in Tab 7, where performance measures such as sensitivity, specificity and precision are compared. Specificity and sensitivity represent the true negative rate and true positive rate of our classifier against the proposed feature set. Precision quantifies the proportion of positive predictions that actually belong to that class. The table indicates only a marginal change in these values even if a subset of features, or an even smaller subset obtained after dimensionality reduction, is used for classification.
Discussion
Non-invasive signal acquisition sensors such as sEMG can play an important role in improving the quality of life of people suffering from numerous physical and neurological disabilities and diseases. Correct cataloguing of physical actions is the first step in providing a viable solution to such patients. In this article, we have proposed a framework which can help in designing assistive technologies by providing classification of physical actions.
In our approach, the use of pre- and post-processing greatly affects the classification accuracy, whether for the complete feature set or an optimal subset. The results clearly show that SVM and 1-NN using the feature combination of ICS and frequency-domain analysis provide the best classification accuracy in comparison to the complete feature set and the other classifiers. The respective classification accuracies for 1-NN and SVM are 95.83% and 95.21%. Figs 5 and 6 clearly indicate that class 4 (pushups) is the one with the most inconsistent results among the physical actions.
The significance of these classes is also clear if the misclassification rate against each class is calculated using SVM. Fig 8 clearly shows that the maximum misclassification is caused by pushups. The results and the accompanying discussion show that we can use any of the three feature-selection options with little loss in accuracy. Decreasing the number of features can have a positive impact on real-time processing of signals if an embedded solution is sought: as the number of features passed to the classifier decreases, so does the number of tunable parameters, leading to fewer computations and incidentally saving computational resources, time and power.
Conclusion
In this paper, we have proposed a classification framework for categorizing a physical action dataset based on SVM, using a feature set calculated from sEMG data. Initially a set of 303 features was calculated from various domains, of which 282 were categorized as meaningful features. 10-fold cross-validation using SVM and 1-NN classification led to accuracies of 95.21% and 95.83% respectively for the 4 physical actions. The feature vector was further reduced using PCA, showing that accuracy falls by less than 5.6% while using only 9.22% of the original feature vector. These findings are particularly useful for the application designer, keeping in view the platform on which the algorithm is eventually going to run. Reducing the feature set can significantly reduce power consumption and execution time, facilitating real-time processing on embedded devices if need be.
Fig 1. Raw sEMG signal for normal and abnormal activities, signifying the difference

Fig 2. System level flow diagram of our proposed methodology
The pairwise features based on moment products are defined for feature indices 8, …, 17, where the values of i and j are chosen from the set C2 of 10 ordered pairs defined below.

3.2.1.4. Time Domain/Miscellaneous Features

This subset of features corresponds to the most common type of time-domain features, which are frequently used for sEMG signal analysis [29, 30, 31, 32]. These features are listed below:
• Amplitude: the maximum amplitude of the signal.
• Root Mean Square: the RMS represents the square root of the average power of the EMG signal for a given period of time.
• Variance: the variance of the EMG signal (VAR) is good at measuring the signal power.
• Waveform Length: WL is the cumulative variation of the EMG signal, indicating its degree of variation.
• Mean Absolute Value: the mean absolute value (MAV) is one of the most popular EMG features, defined as the average of the summation of the absolute values of the signal.
• Simple Square Integral: simple square integral (SSI) is defined as the summation of the squared values of the EMG signal amplitude.
• Zero Crossing: ZC counts the number of times that the signal changes sign.
• Slope Sign Change: SSC counts the number of times the slope of the signal changes sign.
• Willison Amplitude: WAMP is the number of times the change of the EMG signal amplitude between two adjacent samples exceeds a defined threshold.
• Integrated EMG: integrated EMG (IEMG) is generally used as a pre-activation index for muscle activity. It is the area under the curve of the rectified EMG signal, and can be expressed as the summation of the absolute values of the EMG amplitude.
• Log Detector: a feature that is good at estimating the exerted force.
• Myopulse Percentage Rate: MYOP is defined as the mean of the Myopulse output, in which the absolute value of the EMG signal exceeds a pre-defined threshold value.
• Difference Absolute Standard Deviation Value: DASDV is another frequently used EMG feature.
• Enhanced Mean Absolute Value: a feature that is good at estimating the exerted force.
• Enhanced Wavelength.
• Modified Mean Absolute Value: modified mean absolute value (MMAV) is an extension of the MAV feature obtained by assigning a weight window function.
• Modified Mean Absolute Value 2: modified mean absolute value 2 (MMAV2) is another extension of the MAV feature, assigning a continuous weight window function.
• Maximum Fractal Length: MFL is a recently established method for measuring low-level muscle activation.
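A few of the time-domain descriptors listed above can be computed directly; a minimal NumPy sketch follows (the zero thresholds for ZC and WAMP are assumptions, since the paper's thresholds are not specified):

```python
import numpy as np

def td_features(x, thr=0.0):
    """A subset of the standard sEMG time-domain features for one segment."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    return {
        "MAV": np.mean(np.abs(x)),                    # mean absolute value
        "RMS": np.sqrt(np.mean(x ** 2)),              # root mean square
        "VAR": np.var(x, ddof=1),                     # variance
        "WL": np.sum(np.abs(dx)),                     # waveform length
        "SSI": np.sum(x ** 2),                        # simple square integral
        "ZC": int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) >= thr))),
        "SSC": int(np.sum(dx[:-1] * dx[1:] < 0)),     # slope sign changes
        "WAMP": int(np.sum(np.abs(dx) >= thr)),       # Willison amplitude
        "IEMG": np.sum(np.abs(x)),                    # integrated EMG
        "DASDV": np.sqrt(np.mean(dx ** 2)),
    }
```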
Fig 3 represents the classification accuracies using various feature subsets with SVM and 1-NN.

Fig 3. Feature-wise accuracies using SVM and 1-NN classification

Tab 3 shows the classification accuracies using the K-NN classifier for K = {1, 2, 3, …, 10}, with 10 tries and 10-fold cross-validation to avoid bias in the results. The best results are achieved for K = 1 and the feature subset of inter-channel statistics and frequency-based features.
Fig 4. Relationship between accuracy loss and dimensionality reduction using PCA

The overall effect of feature reduction on the classification of physical actions is not significant; this can be visualized in Fig 5, where the comparative balanced accuracy is shown for all features, the selected features, and the top 26 PCA-based features which provide the best accuracy as shown in Fig 4.
Fig 5. Balanced accuracy using 1-NN for a) all features, b) the selected feature set (ICS and frequency) and c) the top 26 features using PCA

Fig 6. Confusion matrix using SVM for a) all features, b) the selected feature set (ICS and frequency)

Fig 7. Confusion matrix using 1-NN for a) all features, b) the selected feature set (ICS and frequency)
Fig 8. Misclassification rate using SVM for the selected feature set (ICS and frequency) and the top 26 features using PCA
Tab 1. Channel wise pairing for inter-channel statistic calculation

Feature  Channel Pair   Feature  Channel Pair   Feature  Channel Pair   Feature  Channel Pair
1        (1, 2)         8        (2, 3)         15       (3, 5)         22       (4, 8)
2        (1, 3)         9        (2, 4)         16       (3, 6)         23       (5, 6)
3        (1, 4)         10       (2, 5)         17       (3, 7)         24       (5, 7)
4        (1, 5)         11       (2, 6)         18       (3, 8)         25       (5, 8)
5        (1, 6)         12       (2, 7)         19       (4, 5)         26       (6, 7)
6        (1, 7)         13       (2, 8)         20       (4, 6)         27       (6, 8)
7        (1, 8)         14       (3, 4)         21       (4, 7)         28       (7, 8)
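The 28 pairs in Tab 1 are exactly the unordered pairs of the 8 channels, and can be generated directly; the lexicographic order below matches the table's numbering:

```python
from itertools import combinations

# All 28 unordered pairs of the 8 sEMG channels, in the same order as Tab 1.
channel_pairs = list(combinations(range(1, 9), 2))
```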
Tab 2. Summary of calculated and selected features

Feature Type   Feature Count
ICS            56
PSD            80
LMFS           136
TDS            21
HOSA           10
Total          303
Tab 3. Classification summary using 10-fold SVM and selected features

Features                All Actions   Normal Actions   Aggressive Actions
'All'                   90.25         82.36            98.22
'Time Based'            52.81         27.86            77.31
'ICS'                   84.99         78.66            92.59
'PSD'                   80.41         69.21            92.71
'LMF'                   81.56         68.88            95.65
'HOSA'                  61.97         51.89            79.34
'Freq Based'            87.70         76.21            98.18
'ICS + Freq'            90.49         81.63            98.26
'ICS + Freq + HOSA'     90.55         82.19            98.30
'Freq + HOSA'           88.36         77.80            98.30
'ICS + HOSA'            86.84         79.63            92.59
'Time + ICS + HOSA'     87.93         79.31            96.27
'ICS + PSD'             87.37         79.55            94.82
'ICS + LMF'             88.11         77.35            97.64
'ICS + PSD + HOSA'      88.44         80.53            96.19
'ICS + LMF + HOSA'      86.10         77.52            95.28
Tab 4. Average training accuracy using ELM classifier with 200 neurons in the hidden layer

                    'sig'   'sin'   'hardlim'   'tribas'   'radbas'   'relu'   'lrelu'   'smax'
Training Accuracy   1.00    1.00    1.00        1.00       1.00       1.00     1.00      0.997
ELM shows a substantial increase in the training accuracies for all three problems; unfortunately this is the effect of overfitting and cramming with regard to the training data. Once the testing accuracies are calculated using ELM, a major drop in accuracy is seen for all activation functions. Tab 5 presents the testing accuracies for the 20- and 10-class problems. The best results are achieved using the sigmoid, ReLU and leaky-ReLU activation functions.

Tab 5. Average testing accuracy using ELM classifier with 200 neurons in the hidden layer

                   'sig'   'sin'   'hardlim'   'tribas'   'radbas'   'relu'   'lrelu'   'smax'
Testing Accuracy   0.80    0.70    0.69        0.71       0.71       0.77     0.76      0.56
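A minimal ELM sketch, a random sigmoid hidden layer plus a closed-form least-squares readout, illustrates the architecture whose training accuracy saturates while test accuracy can drop; the hidden-layer size and initialization here are assumptions, not the authors' settings:

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, seed=0):
    """Extreme Learning Machine: fixed random sigmoid hidden layer,
    output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden activations
    T = np.eye(int(y.max()) + 1)[y]           # one-hot targets
    beta = np.linalg.pinv(H) @ T              # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)
```

Because only beta is trained, a wide hidden layer can interpolate the training set almost perfectly (the near-1.00 entries in Tab 4) while generalizing worse (Tab 5).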
The results show that 1-NN outperforms the other classifiers by a fraction. The initial accuracy results for different subsets of features show that the inter-channel statistics and frequency-domain features provide the best accuracy. After the initial experimentation, the selected features were further reduced by the use of Principal Component Analysis (PCA) to reduce the workload of our classifier. Fig 4 shows the relationship of accuracy loss with reduction in dimensionality using SVM for the 4-class problem. The results indicate that using a mere 9.22% of the feature vector, i.e. 26 features, the accuracy comes out to be 89.93%, a loss of less than 5.6%. Finally, a comparison of Cohen's Kappa values for 1-NN and SVM based classification against all features, the selected features and the top 26 selected features after PCA returns 0.944, 0.921, 0.935, 0.921 and 0.866 respectively. This shows that using any of these options only marginally affects the decision power of the classifier for our problem.
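A PCA reduction step equivalent to the one described can be sketched via the SVD of the centered feature matrix; the component count of 26 follows the text, everything else is generic:

```python
import numpy as np

def pca_reduce(F, n_components):
    """Project the feature matrix onto its top principal components
    (via SVD of the centered data); also return explained-variance ratios."""
    Fc = F - F.mean(axis=0)
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    scores = Fc @ Vt[:n_components].T
    explained = (S ** 2) / (S ** 2).sum()
    return scores, explained[:n_components]
```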
Tab 7. Comparison of features based on sensitivity, specificity and precision

Sensitivity
Actions           1-NN Best   1-NN All   SVM Best   SVM All   Top 26 (PCA)
Typing            0.973       0.973      0.959      0.959     0.945
Rest              1.000       0.986      1.000      1.000     1.000
Lifting Objects   0.944       0.931      0.958      0.931     0.917
Pushups           0.917       0.875      0.889      0.875     0.736

Specificity
Actions           1-NN Best   1-NN All   SVM Best   SVM All   Top 26 (PCA)
Typing            0.981       0.966      0.986      0.990     0.979
Rest              0.990       0.985      0.985      0.985     0.926
Lifting Objects   0.986       0.990      0.986      0.981     0.995
Pushups           0.986       0.977      0.977      0.963     0.958

Precision
Actions           1-NN Best   1-NN All   SVM Best   SVM All   Top 26 (PCA)
Typing            0.947       0.910      0.959      0.972     0.945
Rest              0.973       0.959      0.959      0.959     0.826
Lifting Objects   0.958       0.971      0.958      0.944     0.985
Pushups           0.957       0.926      0.928      0.887     0.855
Acknowledgment

We thank the Researchers Supporting Project (number PNURSP2022R40), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, which provided the funding for this work.
References

[1] H. Stolze, S. Klebe, C. Baecker, C. Zechlin, L. Friege, S. Pohle, and G. Deuschl, "Prevalence of gait disorders in hospitalized neurological patients," Movement Disorders, vol. 20, no. 1, pp. 89-94, 2005.
[2] T. Proietti, V. Crocher, A. Roby-Brami, and N. Jarrasse, "Upper limb robotic exoskeletons for neuro-rehabilitation: A review on control strategies," IEEE Reviews in Biomedical Engineering, vol. 9, pp. 4-14, 2016.
[3] M. Van Den Broek, E. Beghi, and the RESt-1 Group, "Accidents in patients with epilepsy: types, circumstances, and complications: a European cohort study," Epilepsia, vol. 45, no. 6, pp. 667-672, 2004.
[4] R. S. Fisher et al., "ILAE official report: a practical clinical definition of epilepsy," Epilepsia, vol. 55, no. 4, pp. 475-482, 2014.
[5] C. Long and M. E. Brown, "Electromyographic kinesiology of the hand: muscles moving the long finger," JBJS, vol. 46, no. 8, pp. 1683-1706, 1964.
[6] V. Bajaj and A. Kumar, "Features based on intrinsic mode functions for classification of EMG signals," International Journal of Biomedical Engineering and Technology, vol. 18, no. 2, pp. 156-167, 2015.
[7] R. E. Singh et al., "A review of EMG techniques for detection of gait disorders," in Machine Learning in Medicine and Biology, IntechOpen, 2019.
[8] D. Girardi, F. Lanubile, and N. Novielli, "Emotion detection using noninvasive low cost sensors," in 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), IEEE, 2017.
[9] M. Tavakoli, C. Benussi, and J. L. Lourenco, "Single channel surface EMG control of advanced prosthetic hands: A simple, low cost and efficient approach," Expert Systems with Applications, vol. 79, pp. 322-332, 2017.
[10] N. Sezgin, "Analysis of EMG signals in aggressive and normal activities by using higher-order spectra," The Scientific World Journal, 2012.
[11] V. K. Mishra et al., "An efficient method for analysis of EMG signals using improved empirical mode decomposition," AEU - International Journal of Electronics and Communications, vol. 72, pp. 200-209, 2017.
[12] H. M. A. Alaskar, "Deep learning of EMG time-frequency representations for identifying normal and aggressive actions," 2018.
[13] G. C. Jana, A. Swetapadma, and P. Pattnaik, "An intelligent method for classification of normal and aggressive actions from electromyography signals," in 2017 1st International Conference on Electronics, Materials Engineering and Nano-Technology (IEMENTech), IEEE, 2017.
[14] A. C. Turlapaty and B. Gokaraju, "Feature analysis for classification of physical actions using surface EMG data," IEEE Sensors Journal, vol. 19, no. 24, pp. 12196-12204, 2019.
[15] N. Sukumar, S. Taran, and V. Bajaj, "Physical actions classification of surface EMG signals using VMD," in 2018 International Conference on Communication and Signal Processing (ICCSP), IEEE, 2018.
[16] H. Šahinbegović, L. Mušić, and B. Alić, "Distinguishing physical actions using an artificial neural network," in 2017 XXVI International Conference on Information, Communication and Automation Technologies (ICAT), IEEE, 2017.
[17] E. Podrug and A. Subasi, "Surface EMG pattern recognition by using DWT feature extraction and SVM classifier," in The 1st Conference of Medical and Biological Engineering in Bosnia and Herzegovina (CMBEBIH 2015), 2015.
[18] F. Demir et al., "Surface EMG signals and deep transfer learning-based physical action classification," Neural Computing and Applications, vol. 31, no. 12, pp. 8455-8462, 2019.
[19] C. Sravani et al., "Flexible analytic wavelet transform based features for physical action identification using sEMG signals," IRBM, vol. 41, no. 1, pp. 18-22, 2020.
[20] S. M. Hatem et al., "Rehabilitation of motor function after stroke: a multiple systematic review focused on techniques to stimulate upper extremity recovery," Frontiers in Human Neuroscience, vol. 10, p. 442, 2016.
[21] R. Akhundov et al., "Development of a deep neural network for automated electromyographic pattern classification," Journal of Experimental Biology, vol. 222, no. 5, jeb198101, 2019.
[22] N. Duan et al., "Classification of multichannel surface-electromyography signals based on convolutional neural networks," Journal of Industrial Information Integration, vol. 15, pp. 201-206, 2019.
[23] L. H. Smith and L. J. Hargrove, "Comparison of surface and intramuscular EMG pattern recognition for simultaneous wrist/hand motion classification," in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2013.
[24] R. N. Khushaba et al., "Towards limb position invariant myoelectric pattern recognition using time-dependent spectral features," Neural Networks, vol. 55, pp. 42-58, 2014.
[25] P. Stoica and R. L. Moses, Spectral Analysis of Signals, 2005.
[26] A. H. Al-Timemy et al., "Classification of finger movements for the dexterous hand prosthesis control with surface electromyography," IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 3, pp. 608-618, 2013.
[27] A. H. Al-Timemy et al., "Improving the performance against force variation of EMG controlled multifunctional upper-limb prostheses for transradial amputees," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 24, no. 6, pp. 650-661, 2015.
[28] J. Too, A. R. Abdullah, and N. Mohd Saad, "Classification of hand movements based on discrete wavelet transform and enhanced feature extraction," 2019.
[29] A. Phinyomark, P. Phukpattaranont, and C. Limsakul, "Fractal analysis features for weak and single-channel upper-limb EMG signals," Expert Systems with Applications, vol. 39, no. 12, pp. 11156-11163, 2012.
[30] P. Kuegler, C. Jaremenko, J. Schalachelzki, et al., "Automatic recognition of Parkinson disease using surface electromyography during standardized gait test," in IEEE International Conference on Engineering in Medicine and Biology Society (EMBC), 2013, pp. 5781-5784.
[31] A. Phinyomark, P. Phukpattaranont, and C. Limsakul, "Feature reduction and selection for EMG signal classification," Expert Systems with Applications, vol. 39, no. 8, pp. 7420-7431, 2012.
Generalized transition waves and their properties

Henri Berestycki (EHESS, CAMS, 54 Boulevard Raspail, F-75006 Paris, France)
François Hamel (Aix-Marseille Université and Institut Universitaire de France, LATP, Avenue Escadrille Normandie-Niemen, F-13397 Marseille Cedex 20, France)
Department of Mathematics, University of Chicago, 5734 S. University Avenue, Chicago, IL 60637, USA

Abstract. In this paper, we generalize the usual notions of waves, fronts and propagation speeds in a very general setting. These new notions, which cover all usual situations, involve uniform limits, with respect to the geodesic distance, to a family of hypersurfaces which are parametrized by time. We prove the existence of new such waves for some time-dependent reaction-diffusion equations, as well as general intrinsic properties, some monotonicity properties and some uniqueness results for almost planar fronts. The classification results, which are obtained under some appropriate assumptions, show the robustness of our general definitions.

DOI: 10.1002/cpa.21389. arXiv: 1012.0794.
Introduction and main results
We first introduce the definition of generalized transition waves and state some intrinsic general properties. We then give some specifications, including the notion of global mean speed, as well as some standard and new examples of such waves. Lastly, we state some of their important qualitative properties. We complete this section with some further possible extensions.
Definition of generalized transition waves
Traveling fronts describing the transition between two different states are a special important class of time-global solutions of evolution partial differential equations. One of the simplest examples is concerned with the homogeneous scalar semilinear parabolic equation
u t = ∆u + f (u) in R N ,(1.1)
where u = u(t, x) and ∆ is the Laplace operator with respect to the spatial variables in R N . In this case, assuming f (0) = f (1) = 0, a planar traveling front connecting the uniform steady states 0 and 1 is a solution of the type
u(t, x) = φ(x · e − ct)
such that φ : R → [0, 1] satisfies φ(−∞) = 1 and φ(+∞) = 0. Such a solution propagates in a given unit direction e with the speed c. Existence and possible uniqueness of such fronts, and formulae for the speed(s) of propagation, are well known [1, 12, 25] and depend upon the profile of the function f on [0, 1]. In this paper, we generalize the standard notion of traveling fronts. That will allow us to consider new situations, that is, new geometries or more complex equations. We provide explicit examples of new types of waves and we prove some qualitative properties. Although the definitions given below hold for general evolution equations (see Section 1.5), we mainly focus on parabolic problems, that is, we consider reaction-diffusion-advection equations, or systems of equations, of the type

u_t = ∇_x · (A(t, x)∇_x u) + q(t, x) · ∇_x u + f(t, x, u) in Ω,
g[t, x, u] = 0 on ∂Ω,     (1.2)

where the unknown function u, defined in R × Ω, is in general a vector field u = (u_1, · · · , u_m) ∈ R^m
and Ω is a globally smooth non-empty open connected subset of R^N with outward unit normal vector field ν. By globally smooth, we mean that there exists β > 0 such that Ω is globally of class C^{2,β}, that is, there exist r_0 > 0 and M > 0 such that, for all y ∈ ∂Ω, there is a rotation R_y of R^N and there is a C^{2,β} map φ_y : B^{N−1}_{2r_0} → R such that φ_y(0) = 0 and ‖φ_y‖_{C^{2,β}(B^{N−1}_{2r_0})} ≤ M, where B^{N−1}_s is the closed Euclidean ball of R^{N−1} with center 0 and radius s (notice in particular that R^N is globally smooth).
Let us now list the general assumptions on the coefficients of (1.2). The diffusion matrix field (t, x) → A(t, x) = (a ij (t, x)) 1≤i,j≤N is assumed to be of class C 1,β (R × Ω) and there exist 0 < α 1 ≤ α 2 such that α 1 |ξ| 2 ≤ a ij (t, x)ξ i ξ j ≤ α 2 |ξ| 2 for all (t, x) ∈ R × Ω and ξ ∈ R N , under the usual summation convention of repeated indices. The vector field (t, x) → q(t, x) ranges in R N and is of class C 0,β (R × Ω). The function f :
R × Ω × R m → R m (t, x, u) → f (t, x, u)
is assumed to be of class C 0,β in (t, x) locally in u ∈ R m , and locally Lipschitz-continuous in u, uniformly with respect to (t, x) ∈ R × Ω. Lastly, the boundary conditions
g[t, x, u] = 0 on ∂Ω may for instance be of the Dirichlet, Neumann, Robin or tangential types, or may be nonlinear or heterogeneous as well. The notation g[t, x, u] = 0 means that this condition may involve not only u(t, x) itself but also other quantities depending on u, like its derivatives for instance. Throughout the paper, d_Ω denotes the geodesic distance in Ω, that is, for every pair (x, y) ∈ Ω × Ω, d_Ω(x, y) is the infimum of the arc lengths of all C^1 curves joining x to y in Ω. We assume that Ω has an infinite diameter with respect to the geodesic distance d_Ω, that is diam_Ω(Ω) = +∞, where diam_Ω(E) = sup{d_Ω(x, y); (x, y) ∈ E × E} for any E ⊂ Ω. For any two subsets A and B of Ω, we set d_Ω(A, B) = inf{d_Ω(x, y); (x, y) ∈ A × B}.
For x ∈ Ω and r > 0, we set B Ω (x, r) = y ∈ Ω; d Ω (x, y) < r and S Ω (x, r) = y ∈ Ω; d Ω (x, y) = r .
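Since the geodesic distance d_Ω drives all the limits in this paper, it is worth seeing how different it can be from the Euclidean distance in a non-convex domain. The following sketch (ours, not from the paper; all names and the particular domain are illustrative) approximates d_Ω on a grid discretization of a slit square by a breadth-first search:

```python
from collections import deque

# Grid sketch of the geodesic distance d_Omega (illustrative, not from the
# paper): Omega = [0,10]^2 with the vertical slit {x = 5, y > 2} removed.
h = 0.5                      # grid step
n = int(10 / h) + 1          # 21 nodes per side

def inside(i, j):
    x, y = i * h, j * h
    return not (abs(x - 5.0) < 1e-9 and y > 2)   # slit is excluded from Omega

def geodesic(src, dst):
    """4-neighbour BFS approximation of d_Omega between two grid nodes."""
    dist = {src: 0.0}
    queue = deque([src])
    while queue:
        p = queue.popleft()
        if p == dst:
            return dist[p]
        i, j = p
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and inside(a, b) and (a, b) not in dist:
                dist[(a, b)] = dist[p] + h
                queue.append((a, b))
    return float("inf")

# (3,8) and (7,8) are at Euclidean distance 4, but any path in Omega must
# pass below the slit (y <= 2), so d_Omega is much larger.
src, dst = (int(3 / h), int(8 / h)), (int(7 / h), int(8 / h))
euclid = 4.0
print(geodesic(src, dst), euclid)   # 16.0 vs 4.0
```

The discrepancy between the two distances on such domains is exactly the kind of effect the geodesic normalization in this paper is designed to handle.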
The following definition of a generalized transition wave, which has a geometric essence, involves two families (Ω − t ) t∈R and (Ω + t ) t∈R of open nonempty and unbounded subsets of Ω such that
Ω − t ∩ Ω + t = ∅, ∂Ω − t ∩ Ω = ∂Ω + t ∩ Ω =: Γ t , Ω − t ∪ Γ t ∪ Ω + t = Ω, sup d Ω (x, Γ t ); x ∈ Ω + t = +∞, sup d Ω (x, Γ t ); x ∈ Ω − t = +∞, (1.3)
for all t ∈ R. In other words, Γ t splits Ω into two parts, namely Ω − t and Ω + t (see Figure 1.1 below). The unboundedness of the sets Ω ± t means that these sets have infinite diameters with respect to geodesic distance d Ω . Moreover, for each t ∈ R, these sets are assumed to contain points which are as far as wanted from the interface Γ t . We further impose that sup d Ω (y, Γ t ); y ∈ Ω ± t ∩ S Ω (x, r) → +∞ as r → +∞ uniformly in t ∈ R, x ∈ Γ t (1.4) and that the interfaces Γ t are made of a finite number of graphs. By the latter we mean that, when N ≥ 2, there is an integer n ≥ 1 such that, for each t ∈ R, there are n open subsets ω i,t ⊂ R N −1 , n continuous maps ψ i,t : ω i,t → R and n rotations R i,t of R N (for all 1 ≤ i ≤ n), such that Γ t ⊂ 1≤i≤n R i,t x ∈ R N ; (x 1 , . . . , x N −1 ) ∈ ω i,t , x N = ψ i,t (x 1 , . . . , x N −1 ) .
(1.5)

In dimension N = 1, the above condition reduces to the existence of an integer n ≥ 1 such that Γ_t is made of at most n points, that is Γ_t = {x¹_t, . . . , xⁿ_t} for each t ∈ R (where the real numbers xⁱ_t may not be all pairwise distinct). As far as the condition (1.4) is concerned, its exact definition is
inf sup d Ω (y, Γ t ); y ∈ Ω + t ∩ S Ω (x, r) ; t ∈ R, x ∈ Γ t → +∞ inf sup d Ω (y, Γ t ); y ∈ Ω − t ∩ S Ω (x, r) ; t ∈ R, x ∈ Γ t → +∞ as r → +∞.
It means that, for every point x ∈ Γ_t, there are some points in both Ω+_t and Ω−_t which are far from Γ_t and are at the same distance r from x, when r is large. The reason why this condition is used will become clearer in the following definition of transition waves.

Definition 1.1 (Generalized transition wave) Let p± : R × Ω → R^m be two classical solutions of (1.2). A (generalized) transition wave connecting p− and p+ is a time-global classical¹ solution u of (1.2) such that u ≢ p± and there exist some sets (Ω±_t)_{t∈R} and (Γ_t)_{t∈R} satisfying (1.3), (1.4) and (1.5) with

u(t, x) − p±(t, x) → 0 uniformly in t ∈ R as d_Ω(x, Γ_t) → +∞ and x ∈ Ω±_t, (1.6)

that is, for all ε > 0, there exists M such that
∀ t ∈ R, ∀ x ∈ Ω ± t , d Ω (x, Γ t ) ≥ M =⇒ |u(t, x) − p ± (t, x)| ≤ ε .
Let us now comment on the key point in the above Definition 1.1.² Namely, a central role is played by the uniformity of the limits u(t, x) − p±(t, x) → 0 as d_Ω(x, Γ_t) → +∞ and x ∈ Ω±_t. These limits hold far away from the hypersurfaces Γ_t inside Ω. To make the definition meaningful, the distance which is used is the geodesic distance d_Ω; it is the right notion to fit the geometry of the underlying domain, and it is necessary in order to describe the propagation of transition waves in domains such as curved cylinders (as in Figure 1), spiral-shaped domains, exterior domains, etc. Roughly speaking, these limiting conditions (1.6), together with (1.4) and (1.5), mean that the transition between the limiting states p− and p+ is made of a finite number of neighborhoods of graphical interfaces, the width of these neighborhoods being bounded uniformly in time. Therefore, the region where a transition wave u connecting p− and p+ is not close to p± has a uniformly bounded width. This is the reason why the word "transition", referring to the intuitive notion of spatial transition, is used to name the objects introduced in Definition 1.1.
We point out that, in Definition 1.1, the limiting states p ± of a transition wave u are imposed to solve (1.2). In other words, a transition wave is by definition a spatial connection between two other solutions. Thus, if ε ± are any two functions defined in R × Ω such that ε ± (t, x) → 0 as d Ω (x, Γ t ) → +∞ and x ∈ Ω ± t uniformly in t, and if u is any time-global solution of (1.2), then u is in general not a transition wave between u+ε − and u+ε + , because the limiting states u + ε ± do not solve (1.2) in general. The requirement that the limiting states p ± of a transition wave u solve (1.2) is then made in order to avoid the introduction of artificial and useless objects.
In Definition 1.1, the sets (Ω ± t ) t∈R and (Γ t ) t∈R are not uniquely determined, given a generalized transition wave. Nevertheless, in the scalar case, under some assumptions on p ± and Ω ± t and oblique Neumann boundary conditions on ∂Ω, the sets Γ t somehow reflect the location of the level sets of u. Namely, the following result holds:
1 Actually, from standard parabolic interior estimates, any classical solution of (1.2) is such that u, u t , u xi and u xixj , for all 1 ≤ i, j ≤ N , are locally Hölder continuous in R × Ω.
² Definition 1.1 of generalized transition waves is slightly more precise than the one used in our companion paper [3]. In the present paper, we impose in the definition itself additional geometric conditions on the sets (Ω±_t)_{t∈R} and (Γ_t)_{t∈R}, the meaning of which is explained in this paragraph.

Figure 1: A schematic picture of the sets Ω±_t and Γ_t

Theorem 1.2 Assume that m = 1 (scalar case), that p± are constant solutions of (1.2) such that p− < p+ and let u be a time-global classical solution of (1.2) such that
{ u(t, x); (t, x) ∈ R × Ω } = (p−, p+) and g[t, x, u] = µ(t, x) · ∇_x u(t, x) = 0 on R × ∂Ω,

for some unit vector field µ ∈ C^{0,β}(R × ∂Ω) such that inf{ µ(t, x) · ν(x); (t, x) ∈ R × ∂Ω } > 0.³

1.
Assume that u is a generalized transition wave connecting p − and p + , or p + and p − , in the sense of Definition 1.1 and that there exists τ > 0 such that
sup d Ω (x, Γ t−τ ); t ∈ R, x ∈ Γ t < +∞.
(1.7)
Then

∀ λ ∈ (p−, p+), sup{ d_Ω(x, Γ_t); u(t, x) = λ } < +∞ (1.8)

and

∀ C ≥ 0, p− < inf{ u(t, x); d_Ω(x, Γ_t) ≤ C } ≤ sup{ u(t, x); d_Ω(x, Γ_t) ≤ C } < p+. (1.9)

2. Conversely, if (1.8) and (1.9) hold and if there is d_0 ≥ 0 such that the sets

{(t, x) ∈ R × Ω; x ∈ Ω+_t, d_Ω(x, Γ_t) ≥ d} and {(t, x) ∈ R × Ω; x ∈ Ω−_t, d_Ω(x, Γ_t) ≥ d}

are connected for all d ≥ d_0,
then u is a generalized transition wave connecting p − and p + , or p + and p − .
The assumption (1.7) means that the interfaces Γ t and Γ t−τ are in some sense not too far from each other. For instance, if all Γ t are parallel hyperplanes in Ω = R N , then the assumption (1.7) means that the distance between Γ t and Γ t−τ is bounded independently of t, for some τ > 0. As far as the connectedness assumptions made in part 2 of Theorem 1.2 are concerned, they are a topological ingredient in the proof, to guarantee the uniform convergence of u to p ± or p ∓ far away from Γ t in Ω ± t .
Some specifications and the notion of global mean speed
In this section, we define the more specific notions of fronts, pulses, invasions (or traveling waves) and almost planar waves, as well as the concept of global mean speed, when it exists. These notions are related to some analytical or geometric properties of the limiting states p ± or of the sets (Ω ± t ) t∈R and (Γ t ) t∈R , and are listed in the following definitions, where u denotes a transition wave connecting p − and p + , associated to two families (Ω ± t ) t∈R and (Γ t ) t∈R , in the sense of Definition 1.1.
Definition 1.3 (Fronts and spatially extended pulses)
Let p ± = (p ± 1 , · · · , p ± m ). We say that the transition wave u is a front if, for each 1 ≤ k ≤ m, either inf p + k (t, x) − p − k (t, x); x ∈ Ω > 0 for all t ∈ R or inf p − k (t, x) − p + k (t, x); x ∈ Ω > 0 for all t ∈ R. The transition wave u is a spatially extended pulse if p ± depend only on t and p − (t) = p + (t) for all t ∈ R.
In the scalar case (m = 1), our definition of a front corresponds to the natural extension of the usual notion of a front connecting two different constants. In the pure vector case (m ≥ 2), if a bounded C 0,β (R × Ω) transition wave u = (u 1 , . . . , u m ) is a front for problem
u t = ∇ x · (A(t, x)∇ x u) + q(t, x) · ∇ x u + f (t, x, u)
in the sense of Definitions 1.1 and 1.3, if u k ≡ p ± k for some 1 ≤ k ≤ m, then the function u k is a front connecting p − k and p + k for the problem
(u k ) t = ∇ x · (A(t, x)∇ x u k ) + q(t, x) · ∇ x u k + f k (t, x, u k ) associated with the same sets (Ω ± t ) t∈R and (Γ t ) t∈R as u, where f k (t, x, s) = f (t, x, u 1 (t, x), . . . , u k−1 (t, x), s, u k+1 (t, x), . . . , u m (t, x))
and f = (f 1 , . . . , f m ). The same observation is valid for spatially extended pulses as well.
Definition 1.4 (Invasions)
We say that p + invades p − , or that u is an invasion of p − by p + (resp. p − invades p + , or u is an invasion of p + by p − ) if
Ω + t ⊃ Ω + s (resp. Ω − t ⊃ Ω − s ) for all t ≥ s and d Ω (Γ t , Γ s ) → +∞ as |t − s| → +∞.
Therefore, if p + invades p − (resp. p − invades p + ), then u(t, x) − p ± (t, x) → 0 as t → ±∞ (resp. as t → ∓∞) locally uniformly in Ω with respect to the distance d Ω . One can then say that, roughly speaking, invasions correspond to the usual idea of traveling waves. Notice that a generalized transition wave can always be viewed as a spatial connection between p − and p + , while an invasion wave can also be viewed as a temporal connection between the limiting states p − and p + . Definition 1.5 (Almost planar waves in the direction e) We say that the generalized transition wave u is almost planar (in the direction e ∈ S N −1 ) if, for all t ∈ R, the sets Ω ± t can be chosen so that
Γ t = x ∈ Ω; x · e = ξ t for some ξ t ∈ R.
By extension, we say that the generalized transition wave u is almost planar in a moving direction e(t) ∈ S N −1 if, for all t ∈ R, Ω ± t can be chosen so that
Γ t = x ∈ Ω; x · e(t) = ξ t for some ξ t ∈ R.
As in the usual cases (see Section 1.3), an important notion which is attached to a generalized transition wave is that of its global mean speed of propagation, if any. Definition 1.6 (Global mean speed of propagation) We say that a generalized transition wave u associated to the families (Ω ± t ) t∈R and (Γ t ) t∈R has global mean speed c (≥ 0) if
d Ω (Γ t , Γ s ) |t − s| → c as |t − s| → +∞.
We say that the transition wave u is almost-stationary if it has global mean speed c = 0. We say that u is quasi-stationary if
sup {d Ω (Γ t , Γ s ); (t, s) ∈ R 2 } < +∞,
and we say that u is stationary if it does not depend on t.
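As a toy illustration of Definition 1.6 (our own, with hypothetical names): in Ω = R, take interfaces Γ_t = {ξ_t} with ξ_t = ct + b(t) for a bounded function b. Then d_Ω(Γ_t, Γ_s)/|t − s| = |ξ_t − ξ_s|/|t − s| → c as |t − s| → +∞, so the global mean speed is c no matter how the interface oscillates at bounded amplitude:

```python
import math

c = 2.0

def xi(t):
    # interface position: linear drift plus a bounded oscillation b(t) = sin t
    return c * t + math.sin(t)

def ratio(t, s):
    # d_Omega(Gamma_t, Gamma_s)/|t - s| in Omega = R with Gamma_t = {xi_t}
    return abs(xi(t) - xi(s)) / abs(t - s)

# the deviation from c is at most 2*sup|b| / |t - s| = 2/|t - s|
pairs = [(t, t + gap) for t in range(-50, 51, 7) for gap in (100, 1000)]
worst = max(abs(ratio(t, s) - c) for t, s in pairs)
print(worst)   # bounded by 2/100 = 0.02
```

The same computation with b unbounded but sublinear (say b(t) = √|t|) still gives global mean speed c, showing that the notion only constrains the long-range behaviour of the interfaces.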
The global mean speed c, if it exists, is unique. Moreover, under some reasonable assumptions, the global mean speed is an intrinsic notion, in the sense that it does not depend on the families (Ω ± t ) t∈R and (Γ t ) t∈R . This is indeed seen in the following result:
Theorem 1.7
In the general vectorial case m ≥ 1, let p ± be two solutions of (1.2) satisfying
inf |p − (t, x) − p + (t, x)|; (t, x) ∈ R × Ω > 0.
Let u be a transition wave connecting p− and p+ with a choice of sets (Ω±_t)_{t∈R} and (Γ_t)_{t∈R} satisfying (1.3), (1.4) and (1.5). If u has global mean speed c, then, for any other choice of sets (Ω̃±_t)_{t∈R} and (Γ̃_t)_{t∈R} satisfying (1.3), (1.4) and (1.5), u has a global mean speed and this global mean speed is equal to c.
Usual cases and new examples
In this subsection, we list some basic examples of transition waves, which correspond to the classical notions in the standard situations. We also state the existence of new examples of transition fronts in a time-dependent medium.
For the homogeneous equation (1.1)
in R N , a solution u(t, x) = φ(x · e − ct),
with φ(−∞) = 1 and φ(+∞) = 0 (assuming f(0) = f(1) = 0) is an (almost) planar invasion front connecting p− = 1 and p+ = 0, with (global mean) speed |c|. The uniform stationary state p− = 1 (resp. p+ = 0) invades the uniform stationary state p+ = 0 (resp. p− = 1) if c > 0 (resp. c < 0). The sets Ω±_t can for instance be defined as
Ω ± t = x ∈ R N ; ±(x · e − ct) > 0 .
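A classical explicit instance of such a planar front (standard material, not specific to this paper) is the bistable nonlinearity f(u) = u(1 − u)(u − a) with 0 < a < 1/2, for which the profile φ(ξ) = (1 + e^{ξ/√2})^{−1} and the speed c = (1 − 2a)/√2 are known in closed form. The traveling-wave equation φ″ + cφ′ + f(φ) = 0 can be checked numerically:

```python
import numpy as np

a = 0.25                      # bistable parameter, 0 < a < 1/2
c = (1 - 2 * a) / np.sqrt(2)  # classical front speed for f(u) = u(1-u)(u-a)

xi = np.linspace(-30, 30, 2001)
phi = 1.0 / (1.0 + np.exp(xi / np.sqrt(2)))

# Exact derivatives of the profile:
#   phi' = -phi(1-phi)/sqrt(2),   phi'' = phi(1-phi)(1-2 phi)/2
dphi = -phi * (1 - phi) / np.sqrt(2)
d2phi = phi * (1 - phi) * (1 - 2 * phi) / 2

f = phi * (1 - phi) * (phi - a)
residual = d2phi + c * dphi + f   # vanishes identically for the exact front

print(np.max(np.abs(residual)))  # round-off level only
```

Note that φ(−∞) = 1 and φ(+∞) = 0, matching the limiting states above, and that the residual vanishes algebraically since the bracket (1 − 2φ)/2 − c/√2 + φ − a is identically zero for this choice of c.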
The general definitions that we just gave also generalize the classical notions of pulsating traveling fronts in spatially periodic media (see [2,5,6,8,16,20,23,39,43,44]) with possible periodicity or almost-periodicity in time (see [14,30,32,34,35,36,37]) or in spatially recurrent media (see [27]). We point out that the limiting states p±(t, x) are not assumed to be constant in general. It is indeed important to leave open the possibility of transition waves connecting time- or space-dependent limiting states. In the aforementioned references in the periodic case, the limiting states are typically periodic as well. Let us mention here another situation, corresponding to a one-dimensional medium which is asymptotically homogeneous but not uniformly homogeneous, and let us explain what a transition wave can be in this case. Namely, consider an equation of the type
u t = u xx + f (x, u), where u : R × R → R m , f (x, a 1 ) = 0 for all x ∈ R, f (x, a 2 ) → 0 as x → −∞, f (x, a 3 )
→ 0 as x → +∞, and a 1 , a 2 and a 3 are three distinct vectors in R m . The homogeneous states a 2 and a 3 are solutions of the limiting equations obtained as x → −∞ and x → +∞ respectively, but these states do not solve the original equation in general since f (x, a 2 ) and f (x, a 3 ) are not identically equal to 0 in general. One can then wonder what could be a generalized transition wave u(t, x) connecting p − = a 1 to another limiting state p + , with a single interface
Γ t = {x t } such that Ω ± t = {x ∈ R; ±(x − x t ) < 0} and x t → ±∞ as t → ±∞. The limiting state p + (t, x) such that u(t, x) − p + (t, x) → 0 as x − x t → −∞ (uniformly in t ∈ R) cannot
be a 2 or a 3 in general. A natural candidate could be a solution p + (x) of the stationary equation
(p+)″(x) + f(x, p+(x)) = 0, x ∈ R,
such that p + (−∞) = a 2 and p + (+∞) = a 3 . If such a solution p + exists, a transition wave connecting p − = a 1 and p + and satisfying lim t→±∞ x t = ±∞ would then be such that u(t, x) → a 2 (resp. u(t, x) → a 1 ) as x → −∞ (resp. x → +∞) locally uniformly in t ∈ R, but u(t, x) → p + (x) ≡ a 2 as t → +∞ locally uniformly in x ∈ R. Without going into further details here, this simple example already illustrates the wideness of Definition 1.1 and the possibility of new objects connecting general non-constant limiting states.
In the particular one-dimensional case, when equation (1.2) is scalar and when the limiting states p± are ordered, say p+ > p−, Definition 1.1 corresponds to that of "wave-like" solutions given in [38]. However, Definition 1.1 also includes more general situations involving complex heterogeneous geometries or media. Existence, uniqueness and stability results of generalized almost planar transition fronts in one-dimensional media or straight higher-dimensional cylinders with combustion-type nonlinearities and arbitrary spatial dependence have just been proved in [28,29,33,45]. In general higher-dimensional domains, generalized transition waves which are not almost planar can also be covered by Definition 1.1: such transition waves are known to exist for the homogeneous equation (1.1) in R^N for usual types of nonlinearities f (combustion, bistable, Kolmogorov-Petrovsky-Piskunov type), see [3,10,17,18,21,22,31,40,41] for details. Further on, other situations can also be investigated, such as the case when some coefficients of (1.2) are locally perturbed, and more complex geometries, like exterior domains (the existence of almost planar fronts in exterior domains with bistable nonlinearity f has just been proved in [4]), curved cylinders, spirals, etc. can be considered.
It is worth mentioning that, even in dimension 1, Definition 1.1 also includes a very interesting class of transition wave solutions which are known to exist and which do not fall within the usual notions, that is invasion fronts which have no specified global mean speed. For instance, for (1.1)
in dimension N = 1, if f = f(u) satisfies

f is of class C² and concave in [0, 1], positive in (0, 1), with f(0) = f(1) = 0, (1.10)
then there are invasion fronts connecting 0 and 1 for which
Ω−_t = (x_t, +∞), Ω+_t = (−∞, x_t) and

x_t/t → c_1 as t → −∞ and x_t/t → c_2 as t → +∞, with 2√(f′(0)) ≤ c_1 < c_2

(see [18]). There are also some fronts for which x_t/t → c_1 ≥ 2√(f′(0)) as t → −∞ and x_t/t → +∞ as t → +∞. For further details, we refer to [3,18].

In the companion survey paper [3], we made a detailed presentation of the usual particular cases of transition waves covered by Definition 1.1. We explained and compared the notions of fronts which had been introduced earlier, starting from the simplest situations and going to the most general ones. In the present paper, in addition to the intrinsic properties of the generalized transition waves stated in Theorems 1.2 and 1.7 above, we mainly focus on the proof of some important qualitative properties, including some monotonicity and uniqueness results, and on the application of these qualitative properties in order to get Liouville-type results in some particular situations. In doing so, we prove that, under some assumptions, the generalized transition waves reduce to the standard traveling or pulsating fronts in homogeneous or periodic media. These qualitative properties are stated in the next subsection 1.4. In a forthcoming paper, we deal with a general method to prove the existence of transition waves in a broad framework. However, in the present paper, in order to illustrate the interest of the above definitions, we also analyze a specific example which had not been considered in the literature. We prove the existence of new generalized transition waves, which in general do not have any global mean speed, for time-dependent equations. Namely, we consider one-dimensional reaction-diffusion equations of the type
u t = u xx + f (t, u) (1.11)
where the function f : R × [0, 1] → R is of class C 1 and satisfies:
∀ t ∈ R, f(t, 0) = f(t, 1) = 0,
∀ (t, s) ∈ R × [0, 1], f(t, s) ≥ 0,
∃ t_1 < t_2 ∈ R, ∃ f_1, f_2 ∈ C¹([0, 1]; R) such that
∀ (t, s) ∈ (−∞, t_1] × [0, 1], f(t, s) = f_1(s),
∀ (t, s) ∈ [t_2, +∞) × [0, 1], f(t, s) = f_2(s),
f_1′(0) > 0, f_2′(0) > 0, ∀ s ∈ (0, 1), f_1(s) > 0, f_2(s) > 0.
(1.12)
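Concretely, one admissible family satisfying (1.12) (our illustration; the names a1, a2 are not from the paper) is f(t, s) = a(t)·s(1 − s), where a ≥ 0 equals a1 for t ≤ t_1, equals a2 for t ≥ t_2, and interpolates smoothly in between, so that f_1(s) = a1·s(1 − s) and f_2(s) = a2·s(1 − s):

```python
# A nonlinearity satisfying (1.12): f(t,s) = a(t) s (1 - s) with
# f_1(s) = a1 s(1-s) for t <= t1 and f_2(s) = a2 s(1-s) for t >= t2.
a1, a2, t1, t2 = 0.5, 2.0, 0.0, 1.0

def a(t):
    if t <= t1:
        return a1
    if t >= t2:
        return a2
    w = (t - t1) / (t2 - t1)
    w = w * w * (3 - 2 * w)          # C^1 "smoothstep" interpolation
    return (1 - w) * a1 + w * a2

def f(t, s):
    return a(t) * s * (1 - s)

# sanity checks against (1.12)
for t in (-3.0, 0.5, 4.0):
    assert f(t, 0.0) == 0.0 and f(t, 1.0) == 0.0      # zeros at s = 0 and s = 1
    assert all(f(t, k / 10) >= 0 for k in range(11))  # nonnegativity
assert f(-5.0, 0.3) == f(0.0, 0.3)    # f = f_1 for t <= t_1
assert f(1.0, 0.3) == f(7.0, 0.3)     # f = f_2 for t >= t_2
print("ok")
```

Here f_1′(0) = a1 > 0 and f_2′(0) = a2 > 0, and both f_1 and f_2 are positive on (0, 1), as required.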
In other words, the function f is time-independent and non-degenerate at 0 for times less than t 1 and larger than t 2 , and for the times t ∈ (t 1 , t 2 ), the functions f (t, ·) are just assumed to be nonnegative, but they may a priori vanish. If f 1 and f 2 are equal, then the nonlinearity f (t, s) can be viewed as a time-local perturbation of a time-independent equation. But, it is worth noticing that the functions f 1 and f 2 are not assumed to be equal nor even compared in general. When t ≤ t 1 , classical traveling fronts
(t, x) ∈ R 2 → ϕ 1,c (x − ct) ∈ [0, 1]
such that ϕ 1,c (−∞) = 1 and ϕ 1,c (+∞) = 0 are known to exist, for all and only all speeds
c ≥ c*_1, where c*_1 ≥ 2√(f_1′(0)) > 0
only depends on f 1 (see e.g. [1]). The open questions are to know how these traveling fronts behave during the time interval [t 1 , t 2 ] and whether they can subsist and at which speed, if any, they travel after the time t 2 . Indeed, it is also known that, when t ≥ t 2 , there exist classical traveling fronts
(t, x) ∈ R 2 → ϕ 2,c (x − ct) ∈ [0, 1]
such that ϕ_{2,c}(−∞) = 1 and ϕ_{2,c}(+∞) = 0 for all and only all speeds c ≥ c*_2, where c*_2 ≥ 2√(f_2′(0)) > 0 only depends on f_2. The following result provides an answer to these questions and shows the existence of generalized transition waves connecting 0 and 1 for equation (1.11), which fall within our general definitions and do not have any global mean speed in general. To state the result, we need some notation. For each c ≥ c*_1, we set
λ_{1,c} = (c − √(c² − 4f_1′(0)))/2 if c > c*_1, and λ_{1,c} = (c*_1 + √((c*_1)² − 4f_1′(0)))/2 if c = c*_1.
(1.13)
We also denote
λ*,−_2 = (c*_2 − √((c*_2)² − 4f_2′(0)))/2.

Theorem 1.8 Assume (1.12). Then equation (1.11) admits generalized transition waves u connecting p− = 1 and p+ = 0, associated with sets

Ω±_t = {x ∈ R; ±(x − x_t) < 0}, Γ_t = {x_t} for all t ∈ R, x_t = c_1 t for t ≤ t_1 and x_t/t → c_2 as t → +∞,
where c 1 is any given speed in [c * 1 , +∞) and
c_2 = λ_{1,c_1} + f_2′(0)/λ_{1,c_1} if λ_{1,c_1} < λ*,−_2, and c_2 = c*_2 if λ_{1,c_1} ≥ λ*,−_2.
(1.14)
When f 1 = f 2 , then c * 1 = c * 2 and the transition fronts constructed in Theorem 1.8 are such that c 1 = c 2 , whence they have a global mean speed c = c 1 = c 2 in the sense of Definition 1.6.
When f_1 ≤ f_2 (resp. f_1 ≥ f_2), then c*_1 ≤ c*_2 (resp. c*_1 ≥ c*_2), the inequalities c_1 ≤ c_2 (resp. c_1 ≥ c_2) always hold and, for c_1 large enough so that λ_{1,c_1} < λ*,−_2, the inequalities c_1 < c_2 (resp. c_1 > c_2) are strict if f_1′(0) ≠ f_2′(0) (hence, these transition fronts do not have any global mean speed).
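The strictness claim can be verified directly (our short computation, using only (1.13) and (1.14)): when λ_{1,c_1} < λ*,−_2, the decay rate λ = λ_{1,c_1} is a root of λ² − c_1 λ + f_1′(0) = 0, so that

```latex
c_1 \;=\; \lambda + \frac{f_1'(0)}{\lambda},
\qquad
c_2 \;=\; \lambda + \frac{f_2'(0)}{\lambda},
\qquad\text{hence}\qquad
c_2 - c_1 \;=\; \frac{f_2'(0) - f_1'(0)}{\lambda}.
```

Thus c_2 differs from c_1 exactly when f_1′(0) ≠ f_2′(0), with acceleration (c_2 > c_1) when the medium ahead of the front has the larger linear growth rate at 0.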
In the general case, acceleration and slowdown may occur simultaneously for the same equation, in the sense that one can have
c 2 > c 1 for all c 1 > c * 1 , and c 2 < c 1 for c 1 = c * 1 .
To do so, it is sufficient to choose f 2 of the Kolmogorov-Petrovsky-Piskunov type, that is
f_2(s) ≤ f_2′(0)s in (0, 1), whence c*_2 = 2√(f_2′(0)) = 2λ*,−_2, and to choose f_1 in such a way that f_1′(0) < f_2′(0) and c*_1 > c*_2 (for instance, if f_2 is chosen as above, if M > 0 is such that √(2M) > c*_2 and if f_1(s) ≥ (M/ε)(1 − |s − 1 + ε|/ε) on [1 − 2ε, 1]
for ε > 0 small enough, then c * 1 > c * 2 for ε small enough, see [9]). Lastly, it is worth noticing that, in Theorem 1.8, the speed c 2 of the position x t at large time is determined only from c 1 , f 1 and f 2 , whatever the profile of f between times t 1 and t 2 may be.
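Formula (1.14) is easy to evaluate numerically. The sketch below is our own; it assumes the KPP case, where c*_i = 2√(f_i′(0)) and hence λ*,−_2 = √(f_2′(0)), and it reproduces both regimes: c_2 = c_1 when f_1 = f_2, and acceleration when f_2′(0) > f_1′(0).

```python
import math

def terminal_speed(c1, df1, df2):
    """Speed c2 from (1.14), given c1 >= c1* and df_i = f_i'(0).

    KPP case assumed: c_i* = 2*sqrt(df_i), so lambda_2^{*,-} = sqrt(df2).
    """
    c1_star = 2 * math.sqrt(df1)
    c2_star = 2 * math.sqrt(df2)
    assert c1 >= c1_star
    # decay rate (1.13) of the incoming front (smaller root when c1 > c1*)
    lam = (c1 - math.sqrt(c1 * c1 - 4 * df1)) / 2 if c1 > c1_star else c1_star / 2
    lam_star = (c2_star - math.sqrt(c2_star ** 2 - 4 * df2)) / 2   # = sqrt(df2)
    return lam + df2 / lam if lam < lam_star else c2_star

print(terminal_speed(3.0, 1.0, 1.0))   # f1 = f2: c2 equals c1 (= 3)
print(terminal_speed(3.0, 0.25, 1.0))  # faster medium ahead: c2 > c1
```

The first call illustrates the remark after Theorem 1.8 (equal nonlinearities give a genuine global mean speed); the second shows the acceleration regime λ_{1,c_1} < λ*,−_2.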
Remark 1.9
The solutions u constructed in Theorem 1.8 are by definition spatial transition fronts connecting 1 and 0. Furthermore, it follows from the proof given in Section 3 that these transition fronts can also be viewed as temporal connections between a classical traveling front with speed c 1 for the nonlinearity f 1 and another classical traveling front, with speed c 2 , for the nonlinearity f 2 .
Qualitative properties
We now proceed to some further qualitative properties of generalized transition waves. Throughout this subsection, m = 1, i.e. we work in the scalar case, and u denotes a transition wave connecting p− and p+, for equation (1.2), associated with families (Ω±_t)_{t∈R} and (Γ_t)_{t∈R} satisfying properties (1.3), (1.4), (1.5) and (1.7). We also assume that u and p± are globally bounded in R × Ω and that
µ(x) · ∇ x u(t, x) = µ(x) · ∇ x p ± (t, x) = 0 on R × ∂Ω, (1.15) where µ is a C 0,β (∂Ω) unit vector field such that inf µ(x) · ν(x); x ∈ ∂Ω > 0.
First, we establish a general property of monotonicity with respect to time.
Theorem 1.10 Assume that A and q do not depend on t, that f and p ± are nondecreasing in t and that there is δ > 0 such that
s ↦ f(t, x, s) is nonincreasing in (−∞, p−(t, x) + δ] and [p+(t, x) − δ, +∞) (1.16)

for all (t, x) ∈ R × Ω. If u is an invasion of p− by p+ with

κ := inf{ p+(t, x) − p−(t, x); (t, x) ∈ R × Ω } > 0, (1.17)

then u satisfies

∀ (t, x) ∈ R × Ω, p−(t, x) < u(t, x) < p+(t, x) (1.18)
and u is increasing in time t.
Notice that if (1.18) holds a priori and if f is assumed to be nonincreasing in s for s in [p−(t, x), p−(t, x) + δ] and [p+(t, x) − δ, p+(t, x)] only, instead of (−∞, p−(t, x) + δ] and [p+(t, x) − δ, +∞), then the conclusion of Theorem 1.10 (strict monotonicity of u in t) still holds. The simplest case is when f = f(u) only depends on u and p± are constants and both stable, that is, f′(p±) < 0.
The monotonicity result stated in Theorem 1.10 plays an important role in the following uniqueness and comparison properties for almost planar fronts: Theorem 1.11 Under the same conditions as in Theorem 1.10, assume furthermore that f and p ± are independent of t, that u is almost planar in some direction e ∈ S N −1 and has global mean speed c ≥ 0, with the stronger property that
sup{ d_Ω(Γ_t, Γ_s) − c|t − s|; (t, s) ∈ R² } < +∞, (1.19)

where Γ_t = {x ∈ Ω; x · e − ξ_t = 0} and Ω±_t = {x ∈ Ω; ±(x · e − ξ_t) < 0}. Let ũ be another globally bounded invasion front of p− by p+ for equation (1.2) and (1.15), associated with Γ̃_t = {x ∈ Ω; x · e − ξ̃_t = 0} and Ω̃±_t = {x ∈ Ω; ±(x · e − ξ̃_t) < 0}, and having global mean speed c̃ ≥ 0 such that sup{ d_Ω(Γ̃_t, Γ̃_s) − c̃|t − s|; (t, s) ∈ R² } < +∞.

Then c̃ = c and there is (the smallest) T ∈ R such that

ũ(t + T, x) ≥ u(t, x) for all (t, x) ∈ R × Ω.

Furthermore, there exists a sequence (t_n, x_n)_{n∈N} in R × Ω such that (d_Ω(x_n, Γ_{t_n}))_{n∈N} is bounded and ũ(t_n + T, x_n) − u(t_n, x_n) → 0 as n → +∞. Lastly, either ũ(t + T, x) > u(t, x) for all (t, x) ∈ R × Ω or ũ(t + T, x) = u(t, x) for all (t, x) ∈ R × Ω.
This result shows the uniqueness of the global mean speed among a certain class of almost planar invasion fronts. It also says that any two such fronts can be compared up to shifts. In particular cases listed below, uniqueness holds up to shifts. However, this uniqueness property may not hold in general.
Remark 1.12 Notice that property (1.19) and the fact that u is an invasion imply that the speed c is necessarily (strictly) positive.
As a corollary of Theorem 1.11, we now state a result which is important in that it shows that, at least under appropriate conditions on f , our definition does not introduce new objects in some classical situations: it reduces to pulsating traveling fronts in periodic media and to usual traveling fronts when there is translation invariance in the direction of propagation.
Theorem 1.13 Under the conditions of Theorem 1.11, assume that Ω, A, q, f , µ and p ± are periodic in x, in that there are positive real numbers L 1 , . . . , L N > 0 such that, for every vector
k = (k 1 , . . . , k N ) ∈ L 1 Z × · · · × L N Z, Ω + k = Ω, A(x + k) = A(x), q(x + k) = q(x), f (x + k, ·) = f (x, ·), p ± (x + k) = p ± (x) for all x ∈ Ω, µ(x + k) = µ(x) for all x ∈ ∂Ω.
(i) Then u is a pulsating front, namely
u(t + γ(k · e)/c, x) = u(t, x − k) for all (t, x) ∈ R × Ω and k ∈ L_1Z × · · · × L_NZ, (1.20)
where γ = γ(e) ≥ 1 is given by
γ(e) = lim_{(x,y)∈Ω×Ω, (x−y)∥e, |x−y|→+∞} d_Ω(x, y)/|x − y|. (1.21)
Furthermore, u is unique up to shifts in t.
(ii) Under the additional assumptions that e is one of the axes of the frame, that Ω is invariant in the direction e and that A, q, f , µ and p ± are independent of x · e, then u actually is a classical traveling front, that is:
u(t, x) = φ(x · e − ct, x′)

for some function φ, where x′ denotes the variables of R^N which are orthogonal to e. Moreover, φ is decreasing in its first variable.
(iii) If Ω = R N and A, q, f (·, s) (for each s ∈ R), p ± are constant, then u is a planar (i.e. one-dimensional) traveling front, in the sense that
u(t, x) = φ(x · e − ct), where φ : R → (p − , p + ) is decreasing and φ(∓∞) = p ± .
Notice that properties (1.4) and (1.7) are automatically satisfied here -and property (1.7) is actually satisfied for all τ > 0-due to the periodicity of Ω, the definition of Ω ± t and assumption (1.19).
The constant γ(e) in (1.21) is by definition larger than or equal to 1. It measures the asymptotic ratio of the geodesic and Euclidean distances along the direction e. If the domain Ω is invariant in the direction e, that is Ω = Ω + se for all s ∈ R, then γ(e) = 1. For a pulsating traveling front satisfying (1.20), the "Euclidean speed" c/γ(e) in the direction of propagation e is then less than or equal to the global mean speed c (the latter being indeed defined through the geodesic distance in Ω).
Part (ii) of Theorem 1.13 still holds if e is any direction of R^N and if Ω, A, q, f, µ and p± are invariant in the direction e and periodic in the variables x′. This result can actually be extended to the case when the medium may not be periodic and u may not be an invasion front:

Theorem 1.14 Assume that Ω is invariant in a direction e ∈ S^{N−1}, that A, q, µ and p± depend only on the variables x′ which are orthogonal to e, that f = f(x′, u), and that (1.16) and (1.17) hold.
If u is almost planar in the direction e, i.e. the sets Ω ± t can be chosen as
Ω ± t = x ∈ Ω; ±(x · e − ξ t ) < 0 ,
and if u has global mean speed c ≥ 0 with the stronger property that
sup{ |ξ_t − ξ_s| − c|t − s|; (t, s) ∈ R² } < +∞, then there exists ε ∈ {−1, 1} such that u(t, x) = φ(x · e − εct, x′)
for some function φ. Moreover, φ is decreasing in its first variable.
If one further assumes that c = 0, then the conclusion holds even if f and p ± also depend on x · e, provided that they are nonincreasing in x · e. In particular, if u is quasi-stationary in the sense of Definition 1.6, then u is stationary.
In Theorems 1.13 and 1.14, we gave some conditions under which the fronts reduce to usual pulsating or traveling fronts. The fronts were assumed to have a global mean speed. Now, the following result generalizes part (iii) of Theorem 1.13 to the case of almost planar fronts which may not have any global mean speed and which may not be invasion fronts. It gives some conditions under which almost planar fronts actually reduce to one-dimensional fronts.
Theorem 1.15 Assume that Ω = R N , that A and q depend only on t, that the functions p ± depend only on t and x · e and are nonincreasing in x · e for some direction e ∈ S N −1 , that f = f (t, x · e, u) is nonincreasing in x · e, and that (1.16) and (1.17) hold. If u is almost planar in the direction e with
Ω ± t = x ∈ R N ; ±(x · e − ξ t ) < 0 such that ∃ σ > 0, sup |ξ t − ξ s |; (t, s) ∈ R 2 , |t − s| ≤ σ < +∞, (1.22)
then u is planar, i.e. u only depends on t and x · e :
u(t, x) = φ(t, x · e)
for some function φ :
R 2 → R. Furthermore, ∀ (t, x) ∈ R × R N , p − (t, x · e) < u(t, x) < p + (t, x · e) (1.23)
and u is decreasing with respect to x · e.
Notice that the assumption sup {|ξ t+σ − ξ t |; t ∈ R} < +∞ for every σ ∈ R is clearly stronger than property (1.7). But one does not need ξ t to be monotone or |ξ t − ξ s | → +∞ as |t − s| → +∞, namely u may not be an invasion front.
As for Theorem 1.10, if the inequalities (1.23) are assumed to hold a priori and if f is
assumed to be nonincreasing in s for s in [p − (t, x·e), p − (t, x·e)+δ] and [p + (t, x·e)−δ, p + (t, x·e)] only, instead of (−∞, p − (t, x · e) + δ] and [p + (t,
x · e) − δ, +∞), then the strict monotonicity of u in x · e still holds.
As a particular case of the result stated in Theorem 1.14 (with c = 0), the following property holds, which states that, under some assumptions, any quasi-stationary front is actually stationary. Corollary 1.16 Under the conditions of Theorem 1.15, if one further assumes that the function t → ξ t is bounded and that A, q, f and p ± do not depend on t, then u depends on x · e only, that is u is a stationary one-dimensional front.
Further extensions
In the previous sections, the waves were defined as spatial transitions connecting two limiting states p− and p+. Multiple transition waves can be defined similarly.

Definition 1.17 (Waves with multiple transitions) Let k ≥ 1 be an integer and let p_1, . . . , p_k be k time-global solutions of (1.2). A generalized transition wave connecting p_1, . . . , p_k is a time-global classical solution u of (1.2) such that u ≢ p_j for all 1 ≤ j ≤ k, and there exist k
families (Ω j t ) t∈R (1 ≤ j ≤ k)
of open nonempty unbounded subsets of Ω, a family (Γ t ) t∈R of nonempty subsets of Ω and an integer n ≥ 1 such that
and u(t, x) − p j (t, x) → 0 uniformly in t ∈ R as d Ω (x, Γ t ) → +∞ and x ∈ Ω j t for all 1 ≤ j ≤ k.
Triple or more general multiple transition waves are indeed known to exist in some reaction-diffusion problems (see e.g. [11,13]). The above definition also covers the case of multiple wave trains.
On the other hand, the spatially extended pulses, as defined in Definition 1.3 with p − (t) = p + (t), correspond to the special case k = 1, p 1 = p ± (t) and Ω 1 t = Ω − t ∪ Ω + t in the above definition. We say that they are extended since, for each time t, the set Γ t is unbounded in general. The usual notion of localized pulses can be viewed as a particular case of Definition 1.17.
Namely, if, in addition, sup{ diam_Ω(Γ_t); t ∈ R } < +∞, then we say that u is a localized pulse.
In all definitions of this paper, the time interval R can be replaced with any interval I ⊂ R. However, when I ≠ R, the sets Ω±_t or Ω^j_t are not required to be unbounded; one only requires that

lim_{t→+∞} sup{ d_Ω(x, Γ_t); x ∈ Ω±_t } = +∞, or lim_{t→+∞} sup{ d_Ω(x, Γ_t); x ∈ Ω^j_t } = +∞ in the case of double or multiple transitions, if I ⊃ [a, +∞) (resp. lim_{t→−∞} sup{ d_Ω(x, Γ_t); x ∈ Ω±_t } = +∞ or lim_{t→−∞} sup{ d_Ω(x, Γ_t); x ∈ Ω^j_t } = +∞ if I ⊃ (−∞, a]).

The particular case I = [0, T) with 0 < T ≤ +∞ is used to describe the formation of waves and fronts for the solutions of Cauchy problems. For instance, consider equation
(1.1) for t ≥ 0, with a function f ∈ C 1 ([0, 1]) such that f (0) = f (1) = 0, f > 0 in (0, 1) and f ′ (0) > 0. If u 0 is in C c (R N ) and satisfies 0 ≤ u 0 ≤ 1 with u 0 ≢ 0 and if u(t, x) denotes the solution of (1.1) with initial condition u(0, ·) = u 0 , then 0 ≤ u(t, x) ≤ 1 for all t ≥ 0 and x ∈ R N and it follows easily from [1,24] that there exists a continuous increasing function [0, +∞) ∋ t → r(t) > 0 such that r(t)/t → c * > 0 as t → +∞ and

lim A→+∞ inf {u(t, x); t ≥ 0, r(t) ≥ A, 0 ≤ |x| ≤ r(t) − A} = 1, lim A→+∞ sup {u(t, x); t ≥ 0, |x| ≥ r(t) + A} = 0,
where c * > 0 is the minimal speed of planar fronts ϕ(x−ct) ranging in [0, 1] and connecting 0 and 1 for this equation (in other words, the minimal speed c * of planar fronts is also the spreading speed of the solutions u in all directions). If we define
Ω − t = {x ∈ R N ; |x| < r(t)}, Ω + t = {x ∈ R N ; |x| > r(t)} and Γ t = {x ∈ R N ; |x| = r(t)}
for all t ≥ 0, then the function u(t, x) can be viewed as a transition invasion wave connecting p − = 1 and p + = 0 in the time interval [0, +∞). We also refer to [7] for further definitions and properties of the spreading speeds of the solutions of the Cauchy problem u t = ∆u+f (u) with compactly supported initial conditions, in arbitrary domains Ω and no-flux boundary conditions. It is worth pointing out that, for the one-dimensional equation
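The spreading property just described can be observed numerically. The following sketch is not from the paper: the nonlinearity f(u) = u(1−u) (so that the minimal speed is c * = 2 √ f ′ (0) = 2), the grid parameters and the tracking of the level 1/2 are all illustrative choices. It integrates u t = u xx + f (u) from compactly supported initial data and measures the speed of the rightmost point of the level set {u = 1/2}.

```python
import numpy as np

# Finite-difference sketch (not from the paper): u_t = u_xx + f(u) with
# f(u) = u(1 - u), compactly supported initial data, explicit Euler in time.
# The level set {u = 1/2} should spread at about the minimal speed c* = 2.
dx, dt = 0.2, 0.01
x = np.arange(-100.0, 100.0 + dx, dx)
u = np.where(np.abs(x) <= 1.0, 1.0, 0.0)  # compactly supported u_0

def step(u):
    # discrete Laplacian with homogeneous Neumann conditions at both ends
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return u + dt * (lap + u * (1.0 - u))

def front(u):
    # rightmost grid point where u still exceeds the level 1/2
    return x[np.max(np.where(u >= 0.5))]

r1 = None
for n in range(int(30.0 / dt) + 1):
    if abs(n * dt - 10.0) < dt / 2:
        r1 = front(u)  # interface position at t = 10
    u = step(u)
r2 = front(u)  # interface position at t ~ 30
speed = (r2 - r1) / 20.0
print(speed)  # should be close to c* = 2
```

The measured speed typically comes out slightly below 2 because of the well-known logarithmic delay of fronts emerging from compactly supported data.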
u t = u xx + f (u) in R with C 1 ([0, 1], R) functions f such that f (0) = f (1) = 0, f (s) > 0 and f ′ (s) ≤ f (s)/s on (0, 1), there are solutions u : [0, +∞) × R → [0, 1], (t, x) → u(t, x) such that u(t, −∞) = 1, u(t, +∞) = 0 for all t ≥ 0, and lim t→+∞ ∥u x (t, ·)∥ L ∞ (R) = 0, see [19]. At each time t, u(t, ·) connects 1 to 0, but since the solutions become uniformly flatter and flatter as time runs, they are examples of solutions which are not generalized fronts connecting 1 and 0.
Time-dependent domains and other equations. We point out that all these general definitions can be adapted to the case when the domain Ω = Ω t depends on time t.
Lastly, the general definitions of transition waves which are given in this paper also hold for other types of evolution equations
F [t, x, u, Du, D 2 u, · · · ] = 0

which may not be of the parabolic type and which may be nonlocal. Here Du stands for the gradient of u with respect to all variables t and x.
Outline of the paper. The following sections are devoted to proving all the results we have stated here. Section 2 is concerned with level set properties and the intrinsic character of the global mean speed. In Section 3, we prove Theorem 1.8 on the existence of generalized transition waves for the time-dependent equation (1.11). Section 4 deals with the proof of the general time-monotonicity result (Theorem 1.10). Section 5 is concerned with the proofs of Theorems 1.11 and 1.13 on comparison of almost planar invasion fronts and reduction to pulsating fronts in periodic media. Lastly, in Section 6, we prove the remaining Theorems 1.14 and 1.15 concerned with almost planar fronts in media which are invariant or monotone in the direction of propagation.
its localization in terms of (1.8) and (1.9) is intrinsic. Thus, this gives a meaning to the "interface" in this continuous problem (even though it is not a free boundary). This section is divided into two parts, the first one dealing with the properties of the level sets and the second one with the intrinsic character of the global mean speed.
2.1 Localization of the level sets: proof of Theorem 1.2
Heuristically, the fact that u converges to two distinct constant states p ± in Ω ± t uniformly as d Ω (x, Γ t ) → +∞ will force any level set to stay at a finite distance from the interfaces Γ t , and the solution u to stay away from p ± in tubular neighborhoods of Γ t .
More precisely, let us first prove part 1 of Theorem 1.2. Formula (1.8) is almost immediate. Indeed, assume that the conclusion does not hold for some λ ∈ (p − , p + ). Then there exists a sequence (t n , x n ) n∈N in R × Ω such that u(t n , x n ) = λ for all n ∈ N and d Ω (x n , Γ tn ) → +∞ as n → +∞.
Up to extraction of some subsequence, two cases may occur: either x n ∈ Ω − tn and then u(t n , x n ) → p − as n → +∞, or x n ∈ Ω + tn and then u(t n , x n ) → p + as n → +∞. In both cases, one gets a contradiction with the fact that u(t n , x n ) = λ ∈ (p − , p + ).
Assume now that property (1.9) does not hold for some C ≥ 0. One may then assume that there exists a sequence (t n , x n ) n∈N of points in R × Ω such that
d Ω (x n , Γ tn ) ≤ C for all n ∈ N and u(t n , x n ) → p − as n → +∞ (2.1)
(the case where u(t n , x n ) → p + could be treated similarly). Since d Ω (x n , Γ tn ) ≤ C for all n, it follows from (1.7) that there exists a sequence (x̃ n ) n∈N such that

x̃ n ∈ Γ tn−τ for all n ∈ N and sup {d Ω (x n , x̃ n ); n ∈ N} < +∞.
On the other hand, from Definition 1.1, there exists d > 0 such that
∀ t ∈ R, ∀ y ∈ Ω + t , d Ω (y, Γ t ) ≥ d =⇒ u(t, y) ≥ (p − + p + )/2 .
From (1.4), there exists r > 0 such that, for each n ∈ N, there exists a point y n ∈ Ω + tn−τ satisfying d Ω (x̃ n , y n ) = r and d Ω (y n , Γ tn−τ ) ≥ d.
Therefore,
∀ n ∈ N, u(t n − τ, y n ) ≥ (p − + p + )/2 . (2.2)
But the sequence (d Ω (x n , y n )) n∈N is bounded and the function v = u − p − is nonnegative and is a classical global solution of an equation of the type
v t = ∇ x · (A(t, x)∇ x v) + q(t, x) · ∇ x v + b(t, x)v in R × Ω

for some bounded function b, with µ(t, x) · ∇ x v(t, x) = 0 on ∂Ω. Furthermore, the function v has bounded derivatives, from standard parabolic estimates. Since v(t n , x n ) → 0 as n → +∞ from (2.1), one concludes from the linear estimates that v(t n − τ, y n ) → 0 as n → +∞. 4 But

v(t n − τ, y n ) ≥ (p + − p − )/2 > 0

from (2.2). One has then reached a contradiction. This gives the desired conclusion (1.9).
To prove part 2 of Theorem 1.2, assume now that (1.8) and (1.9) hold and that there is d 0 > 0 such that the sets
{(t, x) ∈ R × Ω; x ∈ Ω + t , d Ω (x, Γ t ) ≥ d} and {(t, x) ∈ R × Ω; x ∈ Ω − t , d Ω (x, Γ t ) ≥ d} are connected for all d ≥ d 0 . Denote

m − = lim inf x∈Ω − t , d Ω (x,Γ t )→+∞ u(t, x) and M − = lim sup x∈Ω − t , d Ω (x,Γ t )→+∞ u(t, x).

One has p − ≤ m − ≤ M − ≤ p + . Call λ = (m − + M − )/2. Assume now that m − < M − . Then λ ∈ (p − , p + ) and, from (1.8), there exists C 0 ≥ 0 such that d Ω (x, Γ t ) < C 0 for all (t, x) ∈ R × Ω such that u(t, x) = λ.
Furthermore, there exist some times t 1 , t 2 ∈ R and some points
x 1 , x 2 with x i ∈ Ω − t i such that u(t 1 , x 1 ) < λ < u(t 2 , x 2 ) and d Ω (x i , Γ t i ) ≥ max(C 0 , d 0 ) for i = 1, 2. Since the set {(t, x) ∈ R × Ω; x ∈ Ω − t , d Ω (x, Γ t ) ≥ max(C 0 , d 0 )}
is connected and the function u is continuous in R × Ω, there would then exist t ∈ R and x ∈ Ω − t such that d Ω (x, Γ t ) ≥ max(C 0 , d 0 ) and u(t, x) = λ. But this is in contradiction with the choice of C 0 .

4 We use here the fact that, since the domain Ω is assumed to be globally smooth, as well as all coefficients A, q and µ of (1.2) and (1.15), in the sense given in Section 1, then, for every positive real numbers δ, ρ, σ, M, B and η > 0, there exists a positive real number ε = ε(δ, ρ, σ, M, B, η) > 0 such that, for any t 0 ∈ R, for any C 1 path P : [0, 1] → Ω whose length is less than δ, for any nonnegative classical supersolution u of u t ≥ ∇ x · (A(t, x)∇ x u) + q(t, x) · ∇ x u + b(t, x)u in the set

E = E t 0 ,P,ρ,σ = [t 0 , t 0 + ρ] × {x ∈ Ω; d Ω (x, P ([0, 1])) ≤ ρ} ∪ [t 0 , t 0 + σ] × B Ω (P (0), ρ),

satisfying (1.15) on ∂E ∩ (R × ∂Ω), ∥∇ x u∥ L ∞ (E) ≤ M , ∥b∥ L ∞ (E) ≤ B and max {u(t 0 , P (s)); s ∈ [0, 1]} ≥ η, then u(t 0 + σ, P (0)) ≥ ε.
Therefore, p − ≤ m − = M − ≤ p + and u(t, x) → m − uniformly as d Ω (x, Γ t ) → +∞ and x ∈ Ω − t .
Similarly,
u(t, x) → m + ∈ [p − , p + ] uniformly as d Ω (x, Γ t ) → +∞ and x ∈ Ω + t .
If max(m − , m + ) < p + , then there is ε > 0 and C ≥ 0 such that u(t,
x) ≤ p + − ε for all (t, x) with d Ω (x, Γ t ) ≥ C. But sup {u(t, x); d Ω (x, Γ t ) ≤ C} < p +
because of (1.9). Therefore, sup {u(t, x); (t, x) ∈ R × Ω} < p + , which contradicts the fact that the range of u is the whole interval (p − , p + ). As a consequence,
max(m − , m + ) = p + .
Similarly, one can prove that min(m − , m + ) = p − . Eventually, either m − = p − and m + = p + , or m − = p + and m + = p − , which means that u is a transition wave connecting p − and p + (or p + and p − ). That completes the proof of Theorem 1.2.
2.2 Uniqueness of the global mean speed for a given transition wave
This section is devoted to the proof of the intrinsic character of the global mean speed, when it exists, of a generalized transition wave in the general vectorial case m ≥ 1, when p + and p − are separated from each other.
Proof of Theorem 1.7. We make here all the assumptions of Theorem 1.7 and we call

Γ̃ t = ∂ Ω̃ − t ∩ Ω = ∂ Ω̃ + t ∩ Ω
for all t ∈ R. We first claim that there exists C ≥ 0 such that
d Ω (x, Γ̃ t ) ≤ C for all t ∈ R and x ∈ Γ t .
Assume not. Then there is a sequence (t n , x n ) n∈N in R × Ω such that
x n ∈ Γ tn for all n ∈ N and d Ω (x n , Γ̃ tn ) → +∞ as n → +∞.
Up to extraction of some subsequence, one can assume that x n ∈ Ω̃ − tn (the case where x n ∈ Ω̃ + tn could be handled similarly). Call
ε = inf {|p − (t, x) − p + (t, x)|; (t, x) ∈ R × Ω} > 0
and let A ≥ 0 be such that
|u(t, z) − p + (t, z)| ≤ ε/2 for all (t, z) ∈ R × Ω with d Ω (z, Γ t ) ≥ A and z ∈ Ω + t .
From the condition (1.4), there exist r > 0 and a sequence (y n ) n∈N such that y n ∈ Ω + tn , d Ω (x n , y n ) = r and d Ω (y n , Γ tn ) ≥ A for all n ∈ N. Therefore, d Ω (y n , Γ̃ tn ) → +∞ as n → +∞ and y n ∈ Ω̃ − tn for n large enough. As a consequence,
u(t n , y n ) − p − (t n , y n ) → 0 as n → +∞.
On the other hand, d Ω (y n , Γ tn ) ≥ A and y n ∈ Ω + tn , whence
|u(t n , y n ) − p + (t n , y n )| ≤ ε/2 for all n ∈ N. It follows that lim sup n→+∞ |p − (t n , y n ) − p + (t n , y n )| ≤ ε/2 .
This contradicts the definition of ε. Therefore, there exists C ≥ 0 such that
∀ t ∈ R, ∀ x ∈ Γ t , d Ω (x, Γ̃ t ) ≤ C. (2.3)
Let now (t, s) ∈ R 2 be any couple of real numbers and let η > 0 be any positive number.
There exists (x, y) ∈ Γ t × Γ s such that d Ω (x, y) ≤ d Ω (Γ t , Γ s ) + η. From (2.3), there exists (x̃, ỹ) ∈ Γ̃ t × Γ̃ s such that d Ω (x, x̃) ≤ C + η and d Ω (y, ỹ) ≤ C + η. Thus, d Ω (x̃, ỹ) ≤ d Ω (Γ t , Γ s ) + 2C + 3η and d Ω (Γ̃ t , Γ̃ s ) ≤ d Ω (Γ t , Γ s ) + 2C + 3η.
Since η > 0 was arbitrary, one gets that
d Ω (Γ̃ t , Γ̃ s ) ≤ d Ω (Γ t , Γ s ) + 2C for all (t, s) ∈ R 2 . Hence,

lim sup |t−s|→+∞ d Ω (Γ̃ t , Γ̃ s ) / |t − s| ≤ lim sup |t−s|→+∞ d Ω (Γ t , Γ s ) / |t − s| = c.
With similar arguments, by permuting the roles of the sets Ω ± t and Ω̃ ± t , one can prove that

d Ω (Γ t , Γ s ) ≤ d Ω (Γ̃ t , Γ̃ s ) + 2 C̃
for all (t, s) ∈ R 2 and for some constant C̃ ≥ 0. Thus,

c = lim inf |t−s|→+∞ d Ω (Γ t , Γ s ) / |t − s| ≤ lim inf |t−s|→+∞ d Ω (Γ̃ t , Γ̃ s ) / |t − s| .
As a conclusion, the ratio d Ω (Γ̃ t , Γ̃ s )/|t − s| converges as |t − s| → +∞, and its limit is equal to c. The proof of Theorem 1.7 is thereby complete.
Generalized transition waves for a time-dependent equation
In this section, we construct explicit examples of generalized invasion transition fronts connecting 0 and 1 for the one-dimensional equation (1.11) under the assumption (1.12). Namely, we do the

Proof of Theorem 1.8. The strategy consists in starting from a classical traveling front with speed c 1 for the nonlinearity f 1 , that is for times t ∈ (−∞, t 1 ], and then in letting it evolve and in proving that the solution eventually moves with speed c 2 at large times. The key point is to control the exponential decay of the solution when it approaches the state 0, between times t 1 and t 2 .
For the nonlinearity f 1 , there exists a family of traveling fronts ϕ 1,c (x−ct) of the equation
u t = u xx + f 1 (u),
where ϕ 1,c : R → (0, 1) satisfies ϕ 1,c (−∞) = 1 and ϕ 1,c (+∞) = 0, for each speed c ∈ [c * 1 , +∞). The minimal speed c * 1 satisfies c * 1 ≥ 2 √ f 1 ′ (0), see [1,15]. Each ϕ 1,c is decreasing and unique up to shifts (one can normalize ϕ 1,c in such a way that ϕ 1,c (0) = 1/2). Furthermore, if c > c * 1 , then ϕ 1,c (s) ∼ A 1,c e −λ 1,c s as s → +∞, while ϕ 1,c * 1 (s) ∼ (A 1,c s + B 1,c ) e −λ 1,c * 1 s as s → +∞, where λ 1,c > 0 denotes the exponential decay rate of ϕ 1,c at +∞, A 1,c ≥ 0, and B 1,c > 0 if A 1,c = 0, see [1]. Let any speed c 1 ∈ [c * 1 , +∞) be given, let ξ be any real number (which is just a shift parameter) and let u be the solution of (1.11) such that

u(t, x) = ϕ 1,c 1 (x − c 1 t + ξ) for all t ≤ t 1 and x ∈ R. Define x t = c 1 t for all t ≤ t 1 . (3.1)
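For the reader's convenience, we record the standard linearization computation behind the decay rate λ 1,c ; it is not part of the paper's argument, and the explicit formula below assumes the KPP case, where the decay at +∞ is governed by the equation linearized at u = 0.

```latex
% Ansatz u(t,x) = e^{-\lambda (x - ct)} in the linearization u_t = u_{xx} + f_1'(0)\,u:
c\,\lambda \;=\; \lambda^{2} + f_1'(0),
\qquad
\lambda^{\pm}_{c} \;=\; \frac{c \pm \sqrt{c^{2} - 4 f_1'(0)}}{2}.
```

The roots are real if and only if c ≥ 2 √ f 1 ′ (0), which is consistent with the lower bound c * 1 ≥ 2 √ f 1 ′ (0) recalled above; in the KPP case, the relevant decay rate λ 1,c for c > c * 1 is the smaller root λ − c .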
The function u satisfies
u(t, x) → 1 as x − x t → −∞, u(t, x) → 0 as x − x t → +∞, uniformly w.r.t. t ≤ t 1 . (3.2)
Let us now study the behavior of u on the time interval [t 1 , t 2 ] and next on the interval [t 2 , +∞). From the strong parabolic maximum principle, there holds 0 < u(t, x) < 1 for all (t, x) ∈ R 2 . For each t ≥ t 1 , the function u(t, ·) remains decreasing in R since f does not depend on x. Furthermore, from standard parabolic estimates, the function u satisfies the limiting conditions

u(t, −∞) = 1 and u(t, +∞) = 0 locally in t ∈ R, (3.3)

since f (t, 0) = f (t, 1) = 0. Therefore, setting
x t = x t 1 = c 1 t 1 for all t ∈ (t 1 , t 2 ], (3.4) one gets that
u(t, x) → 1 as x − x t → −∞, u(t, x) → 0 as x − x t → +∞, uniformly w.r.t. t ∈ (t 1 , t 2 ]. (3.5)
Let ε be any positive real number in (0, λ 1,c 1 ). From the definition of u and the above results, it follows that there exists a constant C ε > 0, which also depends on ξ, A 1,c 1 and B 1,c 1 , such that
u(t 1 , x) ≤ min (C ε e −(λ 1,c 1 −ε)x , 1) for all x ∈ R.
Let M be the nonnegative real number defined by
M = sup (t,s)∈[t 1 ,t 2 ]×(0,1] f (t, s)/s .
This quantity is finite since f is of class C 1 and f (t, 0) = 0 for all t. Denote
α = (λ 1,c 1 − ε) + M/(λ 1,c 1 − ε) > 0 and ū(t, x) = min (C ε e −(λ 1,c 1 −ε) (x−α(t−t 1 )) , 1) for all (t, x) ∈ [t 1 , t 2 ] × R.

The function ū is positive and it satisfies u(t 1 , ·) ≤ ū(t 1 , ·) in R. Furthermore, for all (t, x) ∈ [t 1 , t 2 ] × R, if ū(t, x) < 1, then

ū t (t, x) − ū xx (t, x) − f (t, ū(t, x)) ≥ ū t (t, x) − ū xx (t, x) − M ū(t, x) = C ε [α (λ 1,c 1 − ε) − (λ 1,c 1 − ε) 2 − M ] e −(λ 1,c 1 −ε) (x−α(t−t 1 )) = 0

from the definitions of M and α. Thus, ū is a supersolution of (1.11) on the time interval [t 1 , t 2 ] and it is above u at time t 1 . Therefore,

u(t, x) ≤ ū(t, x) ≤ C ε e −(λ 1,c 1 −ε) (x−α(t−t 1 )) for all (t, x) ∈ [t 1 , t 2 ] × R (3.6)
from the maximum principle. On the other hand, from the behavior of ϕ 1,c 1 at +∞, there exists a constant C̃ ε > 0 such that

u(t 1 , x) ≥ min (C̃ ε e −(λ 1,c 1 +ε)x , 1/2) for all x ∈ R.

Let u̲ be the solution of the heat equation u̲ t = u̲ xx for all t ≥ t 1 and x ∈ R, with value u̲(t 1 , x) = min (C̃ ε e −(λ 1,c 1 +ε)x , 1/2) for all x ∈ R at time t 1 . Since f ≥ 0, it follows from the maximum principle that

u(t, x) ≥ u̲(t, x) for all (t, x) ∈ [t 1 , +∞) × R. (3.7)
But, for all x ∈ R,
u̲(t 2 , x) = ∫ +∞ −∞ p(t 2 − t 1 , x − y) u̲(t 1 , y) dy ≥ C̃ ε ∫ +∞ x ε p(t 2 − t 1 , x − y) e −(λ 1,c 1 +ε)y dy,

where x ε is the unique real number such that C̃ ε e −(λ 1,c 1 +ε)x ε = 1/2 and p(τ, z) = (4πτ ) −1/2 e −z 2 /(4τ ) is the heat kernel. Thus, for all x ≥ x ε + √ 4(t 2 − t 1 ), there holds

u̲(t 2 , x) ≥ C̃ ε / √ 4π(t 2 − t 1 ) × ∫ x+ √ 4(t 2 −t 1 ) x− √ 4(t 2 −t 1 ) e −(x−y) 2 /(4(t 2 −t 1 )) −(λ 1,c 1 +ε)y dy ≥ 2 C̃ ε e −1−(λ 1,c 1 +ε) √ 4(t 2 −t 1 ) / √ π × e −(λ 1,c 1 +ε)x
(3.8)
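As a consistency check (not in the paper), the corresponding untruncated Gaussian integral can be evaluated exactly by completing the square, which shows that the lower bound (3.8) has the optimal exponential order in x:

```latex
\int_{-\infty}^{+\infty} p(\tau, x-y)\, e^{-\lambda y}\, dy
\;=\; e^{-\lambda x}\int_{-\infty}^{+\infty} \frac{e^{-z^{2}/(4\tau)+\lambda z}}{\sqrt{4\pi\tau}}\, dz
\;=\; e^{\lambda^{2}\tau}\, e^{-\lambda x},
\qquad z = x - y .
```

With τ = t 2 − t 1 and λ = λ 1,c 1 + ε, the heat semigroup thus maps an initial bound proportional to e −λy to a multiple of e −λx , so restricting the integral to a window of width 2 √ 4(t 2 − t 1 ) around x only costs the fixed constant factor appearing in (3.8).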
It follows from (3.6), (3.7) and (3.8) that, for all ε ∈ (0, λ 1,c 1 ), there exist two positive constants C ± ε and a real number X ε such that

C + ε e −(λ 1,c 1 +ε)x ≤ u(t 2 , x) ≤ C − ε e −(λ 1,c 1 −ε)x for all x ∈ [X ε , +∞).

Remember also that 0 < u(t 2 , x) < 1 for all x ∈ R, and that u(t 2 , −∞) = 1. Since f (t, s) = f 2 (s) for all t ≥ t 2 and s ∈ [0, 1], the classical front stability results (see e.g. [26,42]) imply that

sup x∈R |u(t, x) − ϕ 2,c 2 (x − c 2 t + m(t))| → 0 as t → +∞, (3.9)

where m ′ (t) → 0 as t → +∞, and c 2 > 0 is given by (1.14). Here, ϕ 2,c 2 denotes the profile of the front traveling with speed c 2 for the equation u t = u xx + f 2 (u), such that ϕ 2,c 2 (−∞) = 1 and ϕ 2,c 2 (+∞) = 0. Therefore, there exists t 3 > t 2 such that the map t → c 2 t − m(t) is increasing in [t 3 , +∞), and c 2 t 3 − m(t 3 ) ≥ c 1 t 1 . Define
x t = c 1 t 1 if t ∈ (t 2 , t 3 ), and x t = c 2 t − m(t) if t ∈ [t 3 , +∞). (3.10)
It follows from (3.3) and (3.9) that
u(t, x) → 1 as x − x t → −∞, u(t, x) → 0 as x − x t → +∞, uniformly w.r.t. t ∈ (t 2 , +∞). (3.11)
Eventually, setting Ω ± t = {x ∈ R; ±(x − x t ) < 0} and Γ t = {x t } for each t ∈ R, where the real numbers x t are defined in (3.1), (3.4) and (3.10), one concludes from (3.2), (3.5) and (3.11) that the function u is a generalized transition front connecting p − = 0 and p + = 1. Furthermore, since the map t → x t is nondecreasing and x t − x s → +∞ as t − s → +∞, this transition front u is an invasion of 0 by 1. The proof of Theorem 1.8 is thereby complete.
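The speed change engineered in the proof of Theorem 1.8 can be illustrated numerically. The sketch below is not the paper's construction itself: the nonlinearity r(t) u(1 − u) with r jumping from 1 to 3, the switching time, the initial profile and the grid are all illustrative assumptions. It integrates a caricature of equation (1.11) and measures the interface speed before and after the nonlinearity is switched; the front is observed to accelerate, in line with the statement that the transition front moves with speed c 1 for small times and a larger speed at large times.

```python
import numpy as np

# Illustrative sketch, not the paper's construction: integrate
# u_t = u_xx + r(t) u (1 - u), where r jumps from 1 to 3 at t = 10
# (a caricature of a nonlinearity equal to f_1 for small times and to
# f_2 for large times), and compare the interface speed before/after.
dx, dt = 0.2, 0.01
x = np.arange(-50.0, 200.0 + dx, dx)
u = 1.0 / (1.0 + np.exp(x))  # front-like initial profile from 1 to 0

def step(u, t):
    # discrete Laplacian with homogeneous Neumann conditions at both ends
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    r = 1.0 if t < 10.0 else 3.0  # time-dependent reaction strength
    return u + dt * (lap + r * u * (1.0 - u))

def front(u):
    # rightmost grid point where u still exceeds the level 1/2
    return x[np.max(np.where(u >= 0.5))]

pos = {}
for n in range(int(30.0 / dt) + 1):
    t = n * dt
    for tm in (5.0, 10.0, 20.0, 30.0):
        if abs(t - tm) < dt / 2:
            pos[tm] = front(u)
    u = step(u, t)
speed_before = (pos[10.0] - pos[5.0]) / 5.0
speed_after = (pos[30.0] - pos[20.0]) / 10.0
print(speed_before, speed_after)  # the front accelerates after the switch
```

The first measured speed is close to the minimal speed 2 of the slow nonlinearity; after the switch, the front inherits the exponential decay built up at earlier times and settles on a strictly larger speed, mirroring the role of the decay control between t 1 and t 2 in the proof.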
Monotonicity properties
This section is devoted to the proof of the time-monotonicity properties, that is Theorem 1.10. This result has its own interest and it is also one of the key points in the subsequent uniqueness and classification results. The proof uses several comparison lemmata and some versions of the sliding method with respect to the time variable. Let us first show the following
Proposition 4.1 Under the assumptions of Theorem 1.10, there holds

∀ (t, x) ∈ R × Ω, p − (t, x) < u(t, x) < p + (t, x).
Proof. We only prove the inequality p − (t, x) < u(t, x), the proof of the second inequality is similar. Remember that u and p − are globally bounded. Assume now that
m := inf {u(t, x) − p − (t, x); (t, x) ∈ R × Ω} < 0. Let (t n , x n ) n∈N be a sequence in R × Ω such that u(t n , x n ) − p − (t n , x n ) → m < 0 as n → +∞. Since p + (t, x) − p − (t, x) ≥ κ > 0 for all (t, x) ∈ R × Ω, it follows from Definition 1.1 that the sequence (d Ω (x n , Γ tn )) n∈N is bounded. From assumption (1.7), there exists a sequence of points (x̃ n ) n∈N such that the sequence (d Ω (x n , x̃ n )) n∈N is bounded and x̃ n ∈ Γ tn−τ for every n ∈ N. From Definition 1.1, there exists d ≥ 0 such that
∀ t ∈ R, ∀ z ∈ Ω + t , d Ω (z, Γ t ) ≥ d =⇒ u(t, z) ≥ p + (t, z) − κ .
From the condition (1.4), there exist r > 0 and a sequence (y n ) n∈N of points in Ω such that y n ∈ Ω + tn−τ , d Ω (y n , x̃ n ) = r and d Ω (y n , Γ tn−τ ) ≥ d for all n ∈ N.
One then gets that

u(t n − τ, y n ) ≥ p + (t n − τ, y n ) − κ (4.1)
for all n ∈ N. Call v(t, x) = p − (t, x) + m and w(t, x) = u(t, x) − v(t, x) = u(t, x) − p − (t, x) − m ≥ 0 for every (t, x) ∈ R × Ω. Since p − solves (1.2), since f (t, x, ·) is nonincreasing in (−∞, p − (t, x) + δ] for each (t, x) ∈ R × Ω, 5 and since m < 0, the function v solves

v t ≤ ∇ x · (A(x)∇ x v) + q(x) · ∇ x v + f (t, x, v) in R × Ω

5 Here, we actually just use the fact that f (t, x, ·) is nonincreasing in (−∞, p − (t, x)] for each (t, x) ∈ R × Ω.
(remember that A and f do not depend on t, but this property is actually not used here). In other words, v is a subsolution for (1.2). But u solves (1.
2) and f (t, x, s) is locally Lipschitzcontinuous in s uniformly in (t, x) ∈ R × Ω. There exists then a bounded function b such that
w t ≥ ∇ x · (A(x)∇ x w) + q(x) · ∇ x w + b(t, x)w in R × Ω.
Lastly, w satisfies µ · ∇ x w = 0 on R × ∂Ω. Since the sequences (d Ω (x n , x̃ n )) n∈N and (d Ω (y n , x̃ n )) n∈N are bounded, the sequence (d Ω (x n , y n )) n∈N is bounded as well. Thus, since w ≥ 0 in R × Ω and w(t n , x n ) → 0 as n → +∞, one gets, as in the proof of part 1 of Theorem 1.2, that w(t n − τ, y n ) → 0 as n → +∞. But w(t n − τ, y n ) satisfies
w(t n − τ, y n ) = u(t n − τ, y n ) − p − (t n − τ, y n ) − m ≥ p + (t n − τ, y n ) − κ − p − (t n − τ, y n ) − m ≥ −m > 0
for all n ∈ N because of (4.1). One has then reached a contradiction. As a conclusion, m ≥ 0, whence
u(t, x) ≥ p − (t, x) for all (t, x) ∈ R × Ω.
If u(t 0 , x 0 ) = p − (t 0 , x 0 ) for some (t 0 , x 0 ) ∈ R × Ω, then the strong parabolic maximum principle and Hopf lemma imply that u(t, x) = p − (t, x) for all x ∈ Ω and t ≤ t 0 , and then for all t ∈ R by uniqueness of the Cauchy problem for (1.2). But this is impossible since p + − p − ≥ κ > 0 in R × Ω and u(t, x) − p + (t, x) → 0 uniformly as x ∈ Ω + t and d Ω (x, Γ t ) → +∞ (notice actually that for each t ∈ R, there are some points z n ∈ Ω + t such that d Ω (z n , Γ t ) → +∞ as n → +∞, from (1.3)).
As already underlined, the proof of the inequality u < p + is similar.
Let us now turn to the Proof of Theorem 1.10. In the hypotheses (1.16) and (1.17), one can assume without loss of generality that 0 < 2δ ≤ κ, even if it means decreasing δ. In what follows, for any s ∈ R, we define u s in R × Ω by
∀ (t, x) ∈ R × Ω, u s (t, x) = u(t + s, x).
The general strategy is to prove that u s ≥ u in R × Ω for all s > 0 large enough, and then for all s ≥ 0 by sliding u with respect to the time variable. First, from Definition 1.1, there exists A > 0 such that

∀ (t, x) ∈ R × Ω, x ∈ Ω − t and d Ω (x, Γ t ) ≥ A =⇒ u(t, x) ≤ p − (t, x) + δ , x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ u(t, x) ≥ p + (t, x) − δ/2 . (4.2)

Since p + invades p − , there exists s 0 > 0 such that

∀ t ∈ R, ∀ s ≥ s 0 , Ω + t+s ⊃ Ω + t and d Ω (Γ t+s , Γ t ) ≥ 2A.
Fix any t ∈ R, s ≥ s 0 and x ∈ Ω. If x ∈ Ω + t , then x ∈ Ω + t+s and d Ω (x, Γ t+s ) ≥ 2A since any continuous path from x to Γ t+s in Ω meets Γ t . On the other hand, if x ∈ Ω − t and d Ω (x, Γ t ) ≤ A, then d Ω (x, Γ t+s ) ≥ A and x ∈ Ω + t+s . In both cases, one then has that
u s (t, x) = u(t + s, x) ≥ p + (t + s, x) − δ/2 ≥ p + (t, x) − δ/2

since p + is nondecreasing in time. To sum up,

∀ s ≥ s 0 , ∀ (t, x) ∈ R × Ω, x ∈ Ω + t or (x ∈ Ω − t and d Ω (x, Γ t ) ≤ A) =⇒ u s (t, x) = u(t + s, x) ≥ p + (t, x) − δ . (4.3)

Define

ω − A = {(t, x) ∈ R × Ω; x ∈ Ω − t and d Ω (x, Γ t ) ≥ A}.

Lemma 4.2 For all s ≥ s 0 , one has u s ≥ u in ω − A .
Proof. Fix s ≥ s 0 and define
ε * = inf {ε > 0; u s ≥ u − ε in ω − A }.
Since u is bounded, ε * is a well-defined nonnegative real number and one has
u s ≥ u − ε * in ω − A . (4.4)
One only has to prove that ε * = 0. Assume by contradiction that ε * > 0. There exist then a sequence (ε n ) n∈N of positive real numbers and a sequence of points (t n , x n ) n∈N in ω − A such that ε n → ε * as n → +∞ and u s (t n , x n ) < u(t n , x n ) − ε n for all n ∈ N. (4.5)
We first note that, when x ∈ Ω − t and d Ω (x, Γ t ) = A, then u(t,
x) ≤ p − (t, x) + δ from (4.2), while u s (t, x) ≥ p + (t, x) − δ from (4.3). Hence u s (t, x) − u(t, x) ≥ p + (t, x) − p − (t, x) − 2δ ≥ κ − 2δ ≥ 0 when x ∈ Ω − t and d Ω (x, Γ t ) = A. (4.6)
Since ∇ x u is globally bounded in R × Ω, it follows from (4.5) and the positivity of ε * that there exists ρ > 0 such that
lim inf n→+∞ d Ω (x n , Γ tn ) ≥ A + 2ρ.
Even if it means decreasing ρ, one can also assume without loss of generality that
0 < ρ < τ,
where τ is given in (1.7), and that
ρ × ( ∥(u − p + ) t ∥ L ∞ (R×Ω) + ∥∇ x (u − p + )∥ L ∞ (R×Ω) ) ≤ ε * /2 (4.7)
since u and p + have bounded derivatives. Next, we claim that the sequence (d Ω (x n , Γ tn )) n∈N is bounded. Otherwise, up to extraction of some subsequence, one has d Ω (x n , Γ tn ) → +∞ and then u(t n , x n ) − p − (t n , x n ) → 0 as n → +∞.
But, from Proposition 4.1 and the fact that p − is nondecreasing in time, one has
u(t n , x n ) − p − (t n , x n ) > ε n + u(t n + s, x n ) − p − (t n , x n ) ≥ ε n + p − (t n + s, x n ) − p − (t n , x n ) ≥ ε n → ε * > 0 as n → +∞,
which gives a contradiction. Therefore, the sequence (d Ω (x n , Γ tn )) n∈N is bounded.
Since x n ∈ Ω − tn and d Ω (x n , Γ tn ) ≥ A + ρ for n large enough (say, for n ≥ n 0 ), and since p + invades p − , it follows that x n ∈ Ω − t and d Ω (x n , Γ t ) ≥ A + ρ for all n ≥ n 0 and t ≤ t n and even that
x ∈ Ω − t and d Ω (x, Γ t ) ≥ A for all n ≥ n 0 , x ∈ B Ω (x n , ρ) and t ≤ t n . (4.8)
As a consequence, since ρ < τ , there exists a sequence of points (y n ) n∈N, n≥n 0 in Ω such that y n ∈ Ω − tn−τ +ρ and A + ρ = d Ω (y n , Γ tn−τ +ρ ) = d Ω (x n , Γ tn−τ +ρ ) − d Ω (x n , y n ) (4.9)
for all n ≥ n 0 . Thus, for each n ∈ N with n ≥ n 0 , there exists a C 1 path P n : [0, 1] → Ω − tn−τ +ρ such that P n (0) = x n , P n (1) = y n , the length of P n is equal to d Ω (x n , y n ) and
d Ω (P n (σ), Γ tn−τ +ρ ) ≥ A + ρ for all σ ∈ [0, 1].
Once again, since p + invades p − , it follows that
∀ n ≥ n 0 , ∀ σ ∈ [0, 1], ∀ x ∈ B Ω (P n (σ), ρ), ∀ t ≤ t n − τ + ρ, x ∈ Ω − t and d Ω (x, Γ t ) ≥ A.
(4.10) Together with (4.8), one gets that, for each n ≥ n 0 , the set
E n = [t n − τ, t n ] × B Ω (x n , ρ) ∪ [t n − τ, t n − τ + ρ] × {x ∈ Ω; d Ω (x, P n ([0, 1])) ≤ ρ}

is included in ω − A . As a consequence, for all n ≥ n 0 , v := u s − (u − ε * ) ≥ 0 in E n from (4.4), and
u(t, x) − ε * < u(t, x) ≤ p − (t, x) + δ for all (t, x) ∈ E n from (4.2). Thus, (u − ε * ) t = ∇ x · (A(x)∇ x (u − ε * )) + q(x) · ∇ x (u − ε * ) + f (t, x, u) ≤ ∇ x · (A(x)∇ x (u − ε * )) + q(x) · ∇ x (u − ε * ) + f (t, x, u − ε * ) in E n for all n ≥ n 0 , because f (t, x, ·) is nonincreasing in (−∞, p − (t, x) + δ].
In other words, the function u − ε * is a subsolution of (1.2) in E n for all n ≥ n 0 . As far as the function u s (t, x) = u(t + s, x) is concerned, it satisfies
u s t = ∇ x · (A(x)∇ x u s ) + q(x) · ∇ x u s + f (t + s, x, u s ) ≥ ∇ x · (A(x)∇ x u s ) + q(x) · ∇ x u s + f (t, x, u s ) for all (t, x) ∈ R × Ω because f (·, x, ξ)
is nondecreasing for all (x, ξ) ∈ Ω × R. Notice that we here use the fact that A and q are independent from the variable t. Furthermore, u s still satisfies
µ(x) · ∇ x u s (t, x) = 0 on R × ∂Ω because µ is independent of t.
In other words, u s is a supersolution of (1.2). Consequently, since the functions f (t, x, ·) are locally Lipschitz-continuous uniformly with respect to (t, x) ∈ R × Ω, the function v satisfies inequations of the type
v t ≥ ∇ x · (A(x)∇ x v) + q(x) · ∇ x v + b(t, x)v in E n
for all n ≥ n 0 , where the sequence ( ∥b∥ L ∞ (E n ) ) n∈N, n≥n 0 is bounded. On the other hand, since the sequence (d Ω (x n , Γ tn )) n∈N is bounded, it follows from assumption (1.7) that there exists a sequence of points (x̃ n ) n∈N in Ω such that x̃ n ∈ Γ tn−τ for all n ∈ N, and sup {d Ω (x n , x̃ n ); n ∈ N} < +∞.
Thus, for all n ≥ n 0 ,
d Ω (x n , y n ) = d Ω (x n , Γ tn−τ +ρ ) − (A + ρ) ≤ d Ω (x n , Γ tn−τ ) − (A + ρ) ≤ d Ω (x n , x̃ n ) − (A + ρ)

since x n ∈ Ω − tn and the sets Ω − t are non-increasing with respect to t in the sense of the inclusion (because p + invades p − ). The sequence (d Ω (x n , y n )) n∈N, n≥n 0 is then bounded. Lastly, remember that the function ∇ x v is bounded in R × Ω. As a conclusion, since v(t n , x n ) → 0 as n → +∞ (because of (4.5) and v(t n , x n ) ≥ 0), it follows from the linear parabolic estimates that

v(t n − τ, y n ) → 0 as n → +∞. (4.11)

But, because of (4.9), there exists a sequence (z n ) n∈N, n≥n 0 such that z n ∈ Ω − tn−τ +ρ , d Ω (y n , z n ) = ρ and d Ω (z n , Γ tn−τ +ρ ) = A for all n ≥ n 0 . Thus, for all n ≥ n 0 ,

u s (t n − τ, y n ) − p + (t n − τ, y n ) ≥ u s (t n − τ + ρ, z n ) − p + (t n − τ + ρ, z n ) − ε * /2 ≥ −δ − ε * /2

from (4.3) and (4.7). Moreover, u(t n − τ, y n ) ≤ p − (t n − τ, y n ) + δ for all n ≥ n 0 from (4.2) and (4.10). Eventually, for all n ≥ n 0 , there holds

v(t n − τ, y n ) = u s (t n − τ, y n ) − u(t n − τ, y n ) + ε * = u s (t n − τ, y n ) − p + (t n − τ, y n ) + p + (t n − τ, y n ) − u(t n − τ, y n ) + ε * ≥ −δ − ε * /2 + p + (t n − τ, y n ) − p − (t n − τ, y n ) − δ + ε * ≥ κ − 2δ + ε * /2 ≥ ε * /2 > 0
from (1.17) and the inequality 2δ ≤ κ.
One has then reached a contradiction with (4.11). Hence ε * = 0 and the proof of Lemma 4.2 is thereby complete.
Similarly, using now that f (t, x, ·) is nonincreasing in [p + (t, x) − δ, +∞) and that u s (t, x) ≥ p + (t, x) − δ/2 ≥ p + (t, x) − δ provided that (t, x) ∈ ω + A := (R × Ω) \ ω − A and s ≥ s 0 , we shall prove the following:

Lemma 4.3 For all s ≥ s 0 , one has u s ≥ u in ω + A .
Proof. The proof uses some of the tools of that of Lemma 4.2, but it is not just identical, because the time-sections of ω + A , namely the sets Ω + t ∪ x ∈ Ω − t ; d Ω (x, Γ t ) < A , are now nondecreasing with respect to time t in the sense of the inclusion.
Fix s ≥ s 0 and define
ε * = inf {ε > 0; u s + ε ≥ u in ω + A }.
This nonnegative real number is well-defined since u is globally bounded, and one has
w := u s + ε * − u ≥ 0 in ω + A .
Furthermore, Lemma 4.2 implies that
w ≥ ε * in ω − A . (4.12)
In particular, w is nonnegative in R × Ω.
To get the conclusion of Lemma 4.3, it is sufficient to prove that ε * = 0. Assume by contradiction that ε * > 0. There exists then a sequence (ε n ) n∈N of positive real numbers and a sequence of points (t n , x n ) n∈N in ω + A such that ε n → ε * as n → +∞, and u s (t n , x n ) + ε n < u(t n , x n ) for all n ∈ N.
If the sequence (d Ω (x n , Γ tn )) n∈N were not bounded, then, up to extraction of a subsequence, it would converge to +∞, whence
x n ∈ Ω + tn ⊂ Ω + tn+s and d Ω (x n , Γ tn+s ) ≥ d Ω (x n , Γ tn ) for large n.
Therefore, d Ω (x n , Γ tn+s ) → +∞ and u s (t n , x n ) − p + (t n + s, x n ) → 0 as n → +∞. But
u s (t n , x n ) − p + (t n + s, x n ) < u(t n , x n ) − ε n − p + (t n + s, x n ) ≤ p + (t n , x n ) − p + (t n + s, x n ) − ε n ≤ −ε n → −ε * < 0 as n → +∞
from Proposition 4.1 and since p + is nondecreasing in time. This gives a contradiction. Thus, the sequence (d Ω (x n , Γ tn )) n∈N is bounded. From (1.7), there exists then a sequence (x̃ n ) n∈N in Ω such that x̃ n ∈ Γ tn−τ for all n ∈ N, and sup {d Ω (x n , x̃ n ); n ∈ N} < +∞.
Because of (1.4), there exist r > 0 and a sequence (y n ) n∈N in Ω such that
y n ∈ Ω − tn−τ , d Ω (x̃ n , y n ) = r and d Ω (y n , Γ tn−τ ) ≥ A for all n ∈ N.
There exists then a sequence (z n ) n∈N in Ω such that
z n ∈ Ω − tn−τ and A = d Ω (z n , Γ tn−τ ) = d Ω (y n , Γ tn−τ ) − d Ω (y n , z n ) for all n ∈ N. (4.13)
Since d Ω (y n , z n ) ≤ d Ω (y n , Γ tn−τ ) ≤ d Ω (y n , x̃ n ) = r and since the sequence (d Ω (x n , x̃ n )) n∈N is bounded, one gets finally that the sequence (d Ω (x n , z n )) n∈N is bounded. Choose now ρ > 0 so that
ρ ∥(u s − u) t ∥ L ∞ (R×Ω) + 2 ρ ∥∇ x (u s − u)∥ L ∞ (R×Ω) < ε * (4.14)
and K ∈ N \ {0} so that K ρ ≥ max (τ, sup {d Ω (x n , z n ); n ∈ N}). For each n ∈ N, there exists then a sequence of points (X n,0 , X n,1 , . . . , X n,K ) in Ω such that X n,0 = x n , X n,K = z n and d Ω (X n,i , X n,i+1 ) ≤ ρ for each 0 ≤ i ≤ K − 1.
For each n ∈ N and 0 ≤ i ≤ K − 1, set
E n,i = [t n − (i + 1)τ /K, t n − iτ /K] × B Ω (X n,i , 2 ρ).
Since w(t n , x n ) → 0 as n → +∞, it follows from (4.14) and (4.15) that w < ε * in E n,0 for large n, whence E n,0 ⊂ ω + A from (4.12). Consequently, u s (t, x) + ε * > u s (t, x) ≥ p + (t, x) − δ in E n,0 for large n from (4.3). Since f (t, x, ·) is nonincreasing in [p + (t, x) − δ, +∞) for all (t, x) ∈ R × Ω and since u s is a supersolution of (1.2), it follows then as in the proof of Lemma 4.2 that the nonnegative function w satisfies inequations of the type
w t ≥ ∇ x · (A(x)∇ x w) + q(x) · ∇ x w + b(t, x)w in E n,0
for n large enough, where the sequence ( ∥b∥ L ∞ (E n,0 ) ) n∈N is bounded. Remember also that µ(x) · ∇ x w(t, x) = 0 for all (t, x) ∈ R × ∂Ω, and that ∇ x w is bounded in R × Ω. Since w(t n , X n,0 ) = w(t n , x n ) → 0 as n → +∞, one concludes from the linear parabolic estimates that w(t n − τ /K, X n,1 ) → 0 as n → +∞.
An immediate induction yields w(t n − iτ /K, X n,i ) → 0 as n → +∞ for each i = 1, . . . , K. In particular, for i = K, w(t n − τ, z n ) → 0 as n → +∞.
But z n ∈ Ω − tn−τ and d Ω (z n , Γ tn−τ ) = A for all n ∈ N. As a consequence, for all n ∈ N, (t n − τ, z n ) ∈ ω − A and w(t n − τ, z n ) ≥ ε * from (4.12). One has then reached a contradiction, which means that ε * = 0. That completes the proof of Lemma 4.3.
End of the proof of Theorem 1.10. It follows from Lemmata 4.2 and 4.3 that
u s ≥ u in R × Ω for all s ≥ s 0 . Now call s * = inf {s > 0; u σ ≥ u in R × Ω for all σ ≥ s}.
One has 0 ≤ s * ≤ s 0 and one shall prove that s * = 0. Assume that s * > 0. Since u s * ≥ u in R × Ω, two cases may occur: either inf u s * (t,
x) − u(t, x); d Ω (x, Γ t ) ≤ A > 0 or inf u s * (t, x) − u(t, x); d Ω (x, Γ t ) ≤ A = 0. Case 1: assume that inf u s * (t, x) − u(t, x); d Ω (x, Γ t ) ≤ A > 0.
Since u t is globally bounded, there exists η 0 ∈ (0, s * ) such that (4.2). Therefore, the same arguments as in Lemma 4.2 imply that
∀ η ∈ [0, η 0 ], ∀ (t, x) ∈ R × Ω, d(x, Γ t ) ≤ A =⇒ u s * −η (t, x) ≥ u(t, x) . (4.16) For each η ∈ [0, η 0 ], one then has u s * −η (t, x) ≥ u(t, x) for all (t, x) ∈ R × Ω such that x ∈ Ω − t and d Ω (x, Γ t ) = A, while u(t, x) ≤ p − (t, x) + δ if x ∈ Ω − t and d Ω (x, Γ t ) ≥ A (i.e. (t, x) ∈ ω − A ) from∀ η ∈ [0, η 0 ], u s * −η ≥ u in ω − A . (4.17)
On the other hand,
x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ u s * (t, x) ≥ u(t, x) ≥ p + (t, x) − δ 2
from (4.2). Hence, even if it means decreasing η 0 > 0, one can assume without loss of generality that
∀η ∈ [0, η 0 ], x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ u s * −η (t, x) ≥ p + (t, x) − δ .
Notice that this is the place where we use the choice of δ/2 (< δ) in the second property of (4.2). Furthermore, remember from (4.16) and (4.17) that, for all η ∈ [0, η 0 ], u s * −η (t, x) ≥ u(t, x) for all (t, x) ∈ R × Ω such that x ∈ Ω − t , or x ∈ Ω + t and d Ω (x, Γ t ) ≤ A. As in Lemma 4.3, one then gets that
∀η ∈ [0, η 0 ], x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ u s * −η (t, x) ≥ u(t, x) .
One concludes that u s * −η ≥ u in R × Ω for all η ∈ [0, η 0 ]. That contradicts the minimality of s * and case 1 is then ruled out.
Case 2: assume that
inf u s * (t, x) − u(t, x); d Ω (x, Γ t ) ≤ A = 0.
There exists then a sequence (t n , x n ) n∈N in R × Ω such that
d Ω (x n , Γ tn ) ≤ A and u s * (t n , x n ) − u(t n , x n ) → 0 as n → +∞.
Since u s * is a supersolution of (1.2) in R × Ω (as already noticed in the proof of Lemma 4.2) and since u s * ≥ u in R × Ω, it follows from the linear parabolic estimates that u(t n , x n ) − u(t n − s * , x n ) = u s * (t n − s * , x n ) − u(t n − s * , x n ) → 0 as n → +∞.
By immediate induction, one has that u(t n , x n ) − u(t n − ks * , x n ) → 0 as n → +∞ (4.18) for each k ∈ N. Fix any ε > 0. Let B ε > 0 be such that
∀ (t, x) ∈ R × Ω, x ∈ Ω − t and d Ω (x, Γ t ) ≥ B ε =⇒ u(t, x) ≤ p − (t, x) + ε .
On the other hand, since p + invades p − and since the sequence (d Ω (x n , Γ tn )) n∈N is bounded, there exists m ∈ N such that
x n ∈ Ω − tn−ms * and d Ω (x n , Γ tn−ms * ) ≥ B ε for all n ∈ N.
Hence, u(t n − ms * , x n ) ≤ p − (t n − ms * , x n ) + ε ≤ p − (t n , x n ) + ε for all n ∈ N since p − is nondecreasing in time. Together with (4.18) applied to k = m, one concludes that lim sup n→+∞ u(t n , x n ) − p − (t n , x n ) ≤ ε.
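For readability, the lim sup bound above can be written as a three-term decomposition (added here; each bracket is controlled by the facts just established):

```latex
\begin{aligned}
u(t_n,x_n)-p^-(t_n,x_n)
&= \big(u(t_n,x_n)-u(t_n-ms^*,x_n)\big)\\
&\quad+\big(u(t_n-ms^*,x_n)-p^-(t_n-ms^*,x_n)\big)
 +\big(p^-(t_n-ms^*,x_n)-p^-(t_n,x_n)\big),
\end{aligned}
```

where the first bracket tends to 0 by (4.18) with k = m, the second is at most ε, and the third is nonpositive since p^− is nondecreasing in time.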
But u ≥ p − from Proposition 4.1, and ε > 0 was arbitrary. One obtains that u(t n , x n ) − p − (t n , x n ) → 0 as n → +∞. (4.19) Let now B > 0 be such that
∀ (t, x) ∈ R × Ω, x ∈ Ω + t and d Ω (x, Γ t ) ≥ B =⇒ u(t, x) ≥ p + (t, x) − κ 2 ,
where κ > 0 has been defined in (1.17). From assumption (1.7), and since the sequence (d Ω (x n , Γ tn )) n∈N is bounded, there exists a sequence ( x n ) n∈N in Ω such that
x̃_n ∈ Γ_{t_n−τ} for all n ∈ N, and sup{ d_Ω(x_n, x̃_n); n ∈ N } < +∞.
Because of (1.4), there exist r > 0 and a sequence (y_n)_{n∈N} in Ω such that y_n ∈ Ω^+_{t_n−τ}, d_Ω(y_n, x̃_n) = r and d_Ω(y_n, Γ_{t_n−τ}) ≥ B for all n ∈ N. Thus, u(t_n − τ, y_n) ≥ p^+(t_n − τ, y_n) − κ/2 for all n ∈ N.
Remember now that both u ≥ p − are two bounded solutions of (1.2) and that f (t, x, ξ) is locally Lipschitz-continuous in ξ, uniformly with respect to (t, x) ∈ R × Ω. Notice also that the sequence (d Ω (x n , y n )) n∈N is bounded. Since u(t n , x n ) − p − (t n , x n ) → 0 as n → +∞ because of (4.19), one concludes that
u(t n − τ, y n ) − p − (t n − τ, y n ) → 0 as n → +∞. But u(t n − τ, y n ) − p − (t n − τ, y n ) ≥ p + (t n − τ, y n ) − κ 2 − p − (t n − τ, y n ) ≥ κ 2 > 0
owing to the definition of κ. One has then reached a contradiction and case 2 is then ruled out too. As a consequence, s * = 0 and u s ≥ u in R × Ω for all s ≥ 0.
Let us now prove that the inequality is strict if s > 0. Choose any s > 0 and assume that u s (t 0 , x 0 ) = u(t 0 , x 0 ) for some (t 0 , x 0 ) ∈ R × Ω.
Since u s (≥ u) is a supersolution of (1.2), one gets that u s (t, x) = u(t, x) for all t ≤ t 0 and x ∈ Ω from the strong parabolic maximum principle and Hopf lemma. Fix any t ≤ t 0 and x ∈ Ω. For all k ∈ N, one then has
0 ≤ u(t, x) − p^−(t, x) = u(t − ks, x) − p^−(t, x) ≤ u(t − ks, x) − p^−(t − ks, x)
because p − is nondecreasing in time. But the right-hand side converges to 0 as k → +∞, because s > 0 and because of Definition 1.4 (here, p + invades p − ). It follows that u(t, x) = p − (t, x) for all t ≤ t 0 and x ∈ Ω, which is impossible because of Proposition 4.1. As a conclusion, u s (t, x) > u(t, x) for all (t, x) ∈ R × Ω and s > 0. That completes the proof of Theorem 1.10.
5 Uniqueness of the mean speed, comparison of almost planar fronts and reduction to pulsating fronts
In this section, we prove, under appropriate assumptions, the uniqueness of the speed among all almost-planar invasion fronts, and that in some standard situations the transition fronts reduce to the usual planar or pulsating fronts. Let us first proceed with the proof of Theorem 1.11. Notice first that c and c̃ are (strictly) positive. Indeed,
d_Ω(Γ_t, Γ_s), d_Ω(Γ̃_t, Γ̃_s) → +∞ as |t − s| → +∞,

and the quantities d_Ω(Γ_t, Γ_s) − c|t − s| and d_Ω(Γ̃_t, Γ̃_s) − c̃|t − s| are assumed to be bounded uniformly with respect to (t, s) ∈ R². One shall prove that c = c̃ and that, up to shift in time, one of the fronts lies above the other. Assume that c̃ < c (the other case can be treated similarly by permuting the roles of u and ũ). Define

v(t, x) = ũ((c/c̃) t, x)

and notice that

v_t(t, x) = (c/c̃) ũ_t((c/c̃) t, x) ≥ ũ_t((c/c̃) t, x) = ∇_x · (A(x)∇_x v(t, x)) + q(x) · ∇_x v(t, x) + f(x, v(t, x))

because c/c̃ ≥ 1 and ũ_t ≥ 0 from Theorem 1.10. We also use the fact that both A, q and f are independent of t. Furthermore, µ(x) · ∇_x v(t, x) = 0 on R × ∂Ω. Therefore, the function v, as well as all its time-shifts, is a supersolution for (1.2). It also follows from Definition 1.1 that

v(t, x) − p^±(x) → 0 uniformly as x ∈ Ω̃^±_{ct/c̃} and d_Ω(x, Γ̃_{ct/c̃}) → +∞, (5.1)

where Ω̃^±_{ct/c̃} = {x ∈ Ω, ±(x · e − ξ̃_{ct/c̃}) < 0} and Γ̃_{ct/c̃} = {x ∈ Ω, x · e = ξ̃_{ct/c̃}}. Remember that the quantities

d_Ω(Γ̃_{ct/c̃}, Γ̃_{cs/c̃}) − c̃ |(c/c̃)t − (c/c̃)s| = d_Ω(Γ̃_{ct/c̃}, Γ̃_{cs/c̃}) − c|t − s|

are bounded independently of (t, s) ∈ R². As a consequence, the map

t → d_Ω(Γ_t, Γ_0) − d_Ω(Γ̃_{ct/c̃}, Γ̃_0)

is bounded in R. Furthermore, both u and ũ are almost planar invasion fronts (p^+ invades p^−) in the same direction e, whence the maps t → ξ_t and t → ξ̃_t are nondecreasing. Eventually, one gets that

sup{ d_Ω(Γ̃_{ct/c̃}, Γ_t); t ∈ R } < +∞ and sup{ |ξ̃_{ct/c̃} − ξ_t|; t ∈ R } < +∞. (5.2)
On the other hand, Definition 1.1 applied to u implies that there exists A > 0 such that
∀ (t, x) ∈ R × Ω, x ∈ Ω − t and d Ω (x, Γ t ) ≥ A =⇒ u(t, x) ≤ p − (x) + δ x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ u(t, x) ≥ p + (x) − δ 2 . (5.3)
Since u and u are almost planar in the same direction e and since u is an invasion of p − by p + , properties (5.1) and (5.2) yield the existence of s 0 > 0 such that, for all s ≥ s 0 and for all (t, x) ∈ R × Ω,
x ∈ Ω + t or x ∈ Ω − t and d Ω (x, Γ t ) ≤ A =⇒ v s (t, x) = v(t + s, x) ≥ p + (x) − δ .
Choose any s ≥ s 0 . Since p − ≤ u, v ≤ p + (from Proposition 4.1) and
0 < 2δ ≤ κ := inf p + (t, x) − p − (t, x); (t, x) ∈ R × Ω
(even if it means decreasing δ without loss of generality), the arguments used in Lemma 4.2 imply that
u(t, x) ≤ v s (t, x) in ω − A , i.e. for all x ∈ Ω − t such that d Ω (x, Γ t ) ≥ A.
Therefore, the arguments used in the proof of Lemma 4.3 similarly imply that
u(t, x) ≤ v s (t, x) for all (t, x) ∈ R × Ω \ ω − A .
Thus, u ≤ v s in R × Ω for all s ≥ s 0 .
Call now s * = inf {s ∈ R; u ≤ v s in R × Ω}.
One has s* ≤ s_0 and s* > −∞ because p^−(x) < u(t, x) < p^+(x) for all (t, x) ∈ R × Ω (from Theorem 1.10) and

v^s(0, x_0) = ũ((c/c̃) s, x_0) → p^−(x_0) as s → −∞
for all x 0 ∈ Ω (see Definition 1.4). There holds
u ≤ v s * in R × Ω.
In particular,
x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ v s * (t, x) ≥ u(t, x) ≥ p + (x) − δ 2 . (5.4) Assume now that inf v s * (t, x) − u(t, x); d Ω (x, Γ t ) ≤ A > 0. (5.5)
The same property then holds when s * is replaced with s * − η for any η ∈ [0, η 0 ] and η 0 > 0 small enough, since v t (like u t ) is globally bounded. From (5.4), one can assume that η 0 > 0 is small enough so that
x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ v s * −η (t, x) ≥ p + (x) − δ for all η ∈ [0, η 0 ]. The first property of (5.3) implies, as in Lemma 4.2, that v s * −η (t, x) ≥ u(t, x) for all η ∈ [0, η 0 ] and (t, x) ∈ R × Ω with x ∈ Ω − t and d Ω (x, Γ t ) ≥ A.
The above inequality then holds for all (t, x) ∈ R × Ω such that x ∈ Ω − t , or x ∈ Ω + t and d Ω (x, Γ t ) ≤ A. As in Lemma 4.3, one then gets that
v s * −η (t, x) ≥ u(t, x) for all η ∈ [0, η 0 ] and (t, x) ∈ R × Ω with x ∈ Ω + t and d Ω (x, Γ t ) ≥ A. Eventually, v s * −η ≥ u in R × Ω for all η ∈ [0, η 0 ].
That contradicts the minimality of s * and assumption (5.5) is false.
Therefore, inf v s * (t, x) − u(t, x); d Ω (x, Γ t ) ≤ A = 0.
Then, there exists a sequence (t n , x n ) ∈ R × Ω such that d Ω (x n , Γ tn ) ≤ A for all n ∈ N and v s * (t n , x n ) − u(t n , x n ) → 0 as n → +∞.
Because of (1.7), there exists a sequence ( x n ) n∈N in Ω such that
x n ∈ Γ tn−τ for all n ∈ N, and sup d Ω (x n , x n ); n ∈ N < +∞.
Since v s * is a supersolution of (1.2) and v s * ≥ u in R × Ω, it follows from the linear parabolic estimates that
max{ |v^{s*}(t, x) − u(t, x)|; t_n − τ − 1 ≤ t ≤ t_n − τ, d_Ω(x, x̃_n) ≤ 1 } → 0 as n → +∞

and, since the functions v^{s*}_t, v^{s*}_{x_i}, v^{s*}_{x_i x_j}, u_t, u_{x_i} and u_{x_i x_j} are globally Hölder continuous in R × Ω for all 1 ≤ i, j ≤ N, one gets that

|v^{s*}_t(t_n − τ, x̃_n) − u_t(t_n − τ, x̃_n)| + |v^{s*}_{x_i}(t_n − τ, x̃_n) − u_{x_i}(t_n − τ, x̃_n)| + |v^{s*}_{x_i x_j}(t_n − τ, x̃_n) − u_{x_i x_j}(t_n − τ, x̃_n)| → 0 as n → +∞

for all 1 ≤ i, j ≤ N. But

(c̃/c) v^{s*}_t = ∇_x · (A(x)∇_x v^{s*}) + q(x) · ∇_x v^{s*} + f(x, v^{s*}),
u_t = ∇_x · (A(x)∇_x u) + q(x) · ∇_x u + f(x, u).

Therefore, (c̃/c − 1) u_t(t_n − τ, x̃_n) → 0 as n → +∞, whence u_t(t_n − τ, x̃_n) → 0 as n → +∞, because 0 < c̃ < c.
On the other hand, there exists A > 0 such that
x ∈ Ω + t and d Ω (x, Γ t ) ≥ A =⇒ u(t, x) ≥ p + (x) − κ 3 ,
where κ was defined in (1.17). From (1.7), there exists a sequence (y n ) n∈N in Ω such that y n ∈ Γ tn−2τ for all n ∈ N, and sup d Ω ( x n , y n ); n ∈ N < +∞.
Because of (1.4), there exist r > 0 and a sequence (z n ) n∈N in Ω such that
z n ∈ Ω + tn−2τ , d Ω (z n , y n ) = r and d Ω (z n , Γ tn−2τ ) ≥ A for all n ∈ N. Thus, u(t n − 2τ, z n ) ≥ p + (z n ) − κ 3 for all n ∈ N. (5.6)
Since the sequence (d Ω (z n , x n )) n∈N is bounded, since u t (t n − τ, x n ) → 0 as n → +∞ and since the globally C 1 (R × Ω) nonnegative function u t satisfies
(u_t)_t = ∇_x · (A(x)∇_x u_t) + q(x) · ∇_x u_t + f_u(x, u) u_t in R × Ω

with ‖f_u(·, u(·, ·))‖_{L∞(R×Ω)} < +∞ and µ(x) · ∇_x u_t = 0 on R × ∂Ω, the linear parabolic estimates imply that u_t(t_n − 2τ, z_n) → 0 as n → +∞.
Let now ε be any positive real number. Since the function u t is globally C 1 (R × Ω), there exist σ > 0 and n 0 ∈ N such that 0 ≤ max u t (t, z n ); t ∈ [t n − 2τ − σ, t n − 2τ ] ≤ ε for all n ≥ n 0 .
Remember that y n ∈ Γ tn−2τ and d Ω (z n , Γ tn−2τ ) ≤ d Ω (z n , y n ) = r. Since u is an invasion front of p − by p + , there exists σ > 0 (σ is independent of n and ε) such that
u(t n − 2τ − σ , z n ) ≤ p − (z n ) + κ 3 for all n ∈ N. (5.7)
Since u t (t n − 2τ, z n ) → 0 as n → +∞ and u t ≥ 0 in R × Ω, it follows that, if σ ≥ σ, then
0 ≤ max u t (t, z n ); t ∈ [t n − 2τ − σ , t n − 2τ − σ] → 0 as n → +∞,
and then is less than ε for n ≥ n 1 (for some n 1 ∈ N). Therefore, in both cases σ ≥ σ or σ ≤ σ, one has 0 ≤ max u t (t, z n ); t ∈ [t n − 2τ − σ , t n − 2τ ] ≤ ε for all n ≥ max(n 0 , n 1 ).
Hence
u(t n − 2τ − σ , z n ) ≤ u(t n − 2τ, z n ) ≤ u(t n − 2τ − σ , z n ) + σ ε
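The right-hand inequality above is just the fundamental theorem of calculus applied to t ↦ u(t, z_n), together with the bound 0 ≤ u_t(t, z_n) ≤ ε on [t_n − 2τ − σ′, t_n − 2τ] for n large (spelled out here for convenience):

```latex
u(t_n-2\tau,z_n)-u(t_n-2\tau-\sigma',z_n)
 = \int_{t_n-2\tau-\sigma'}^{t_n-2\tau} u_t(t,z_n)\,dt
 \;\le\; \sigma'\,\varepsilon .
```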
for n large enough, and then
u(t n − 2τ, z n ) − u(t n − 2τ − σ , z n ) → 0 as n → +∞
because ε > 0 was arbitrary and σ′ was independent of ε. But

u(t_n − 2τ, z_n) − u(t_n − 2τ − σ′, z_n) ≥ (p^+(z_n) − κ/3) − (p^−(z_n) + κ/3) ≥ κ/3 > 0

for all n ∈ N because of (5.6), (5.7) and of the definition of κ in (1.17). One has then reached a contradiction.
As a consequence, c̃ ≥ c.
The other inequality follows by reversing the roles of u and ũ. Thus, c = c̃.
The above arguments also imply that, for u and ũ as in Theorem 1.11, there exists (the smallest) T ∈ R such that ũ(t + T, x) ≥ u(t, x) for all (t, x) ∈ R × Ω. The strong parabolic maximum principle and Hopf lemma imply that either the inequality is strict everywhere, or the two functions u and ũ(· + T, ·) are identically equal. That completes the proof of Theorem 1.11.
Let us now turn to the proof of the reduction of almost planar invasion fronts to pulsating fronts in periodic media.
Proof of Theorem 1.13. To prove part (i), fix k ∈ L 1 Z × · · · × L N Z. By periodicity, the function
ũ(t, x) = u(t, x + k)

is a solution of (1.2). Furthermore, ũ, like u, satisfies all assumptions of Theorem 1.11. Thus, there exists (the smallest) T ∈ R such that

ũ(t + T, x) = u(t + T, x + k) ≥ u(t, x) for all (t, x) ∈ R × Ω (5.8)
and there exists a sequence of points (t n , x n ) n∈N in R × Ω such that (d Ω (x n , Γ tn )) n∈N is bounded and u(t n + T, x n + k) − u(t n , x n ) → 0 as n → +∞. (5.9) It shall then follow that lim inf
n→+∞ |u(t n , x n ) − p ± (x n )| > 0. (5.10)
Indeed, assume for instance that, up to extraction of some subsequence, u(t n , x n )−p − (x n ) → 0 as n → +∞ (the case u(t n , x n ) − p + (x n ) → 0 as n → +∞ could be handled similarly). Then max u(t n − τ, y) − p − (y); d Ω (y, x n ) ≤ C → 0 as n → +∞ for any C ≥ 0, from the linear parabolic estimates applied to the nonnegative function u−p − (remember that τ > 0 is given in (1.7)). But there is a sequence (y n ) n∈N in Ω such that (d Ω (y n , x n )) n∈N is bounded , y n ∈ Ω + tn−τ and u(t n − τ, y n ) ≥ p + (y n ) − κ 2 for all n ∈ N (one uses the facts that the sequence (d Ω (x n , Γ tn )) n∈N is bounded and that (1.4) is automatically satisfied by periodicity of Ω). One then gets a contradiction as n → +∞. Thus, (5.10) holds. Write
x_n = x̄_n + x̃_n for all n ∈ N, where x̄_n ∈ L_1Z × · · · × L_NZ and x̃_n ∈ ([0, L_1] × · · · × [0, L_N]) ∩ Ω. Set u_n(t, x) = u(t + t_n, x + x̄_n)
for all n ∈ N and (t, x) ∈ R × Ω. The functions u n satisfy the same equation (1.2) with the same boundary conditions (1.15) as u, since the domain Ω is periodic and the coefficients A, q, f and µ are periodic and independent of t. Up to extraction of a subsequence one can assume that x n → x ∞ ∈ Ω as n → +∞ and that, from standard parabolic estimates, u n (t, x) → u ∞ (t, x) locally uniformly in R×Ω, where u ∞ solves (1.2) and (1.15). Furthermore,
u ∞ (t + T, x + k) ≥ u ∞ (t, x) for all (t, x) ∈ R × Ω from (5.8), and u ∞ (T, x ∞ + k) = u ∞ (0, x ∞ )
from (5.9). It follows then from the strong maximum principle, Hopf lemma and the uniqueness of the solution of the Cauchy problem for (1.2) and (1.15), that
u ∞ (t + T, x + k) = u ∞ (t, x) for all (t, x) ∈ R × Ω. (5.11) Furthermore, u ∞ (0, x ∞ ) = p ± (x ∞ ) (5.12)
from (5.10).
On the other hand, as already noticed in the proof of Theorem 1.11, the global mean speed c is positive. Since the quantities d Ω (Γ t , Γ s ) − c|t − s| are bounded independently of (t, s) ∈ R 2 , since Ω ± t = {x ∈ Ω, ±(x · e − ξ t ) < 0} and since t → ξ t is nondecreasing (because p + invades p − ), it follows from the definition of γ = γ(e) in (1.21) that there exists M ≥ 0 such that
ξ t − c γ −1 t ≤ M for all t ∈ R.
(5.13) But, from Definition 1.1, since the geodesic distance is not smaller than the Euclidean distance, one has that
u n (t, x) − p ± (x) = u(t + t n , x + x n ) − p ± (x + x n ) → 0 as (x + x n ) · e − ξ t+tn → ∓∞,
uniformly with respect to n and (t, x). Write (x + x n ) · e − ξ t+tn = x · e − ξ t + x n · e − ξ tn − x n · e + ξ t + ξ tn − ξ t+tn .
The sequence (x n · e − ξ tn ) n∈N is bounded because (d Ω (x n , Γ tn )) n∈N is bounded. The quantities ξ t + ξ tn − ξ t+tn are bounded independently of t and n because of (5.13). Lastly, the sequence (x n · e) n∈N is also bounded. Finally, one gets that
u ∞ (t, x) − p ± (x) → 0 as x · e − ξ t → ∓∞
uniformly with respect to (t, x).
Assume now, by contradiction, that T > γ(k · e)/c (one shall actually prove that T = γ(k · e)/c). Since (x ∞ + mk) · e − ξ mT → ∓∞ as m ∈ Z and m → ±∞ because of our assumption and because of (5.13), it follows that
u ∞ (mT, x ∞ + mk) − p ± (x ∞ + mk) → 0 as m ∈ Z and m → ±∞.
But p ± (x ∞ + mk) = p ± (x ∞ ) for all m ∈ Z by periodicity of p ± , and u ∞ (mT, x ∞ + mk) = u ∞ (0, x ∞ ) for all m ∈ Z because of (5.11). One finally gets a contradiction with (5.12).
Therefore, the inequality T > γ(k · e)/c was impossible, whence T ≤ γ(k · e)/c and
u(t + γ(k · e)/c, x + k) ≥ u(t, x) for all (t, x) ∈ R × Ω.

Similarly, by fixing the function u(t, x + k) and sliding u(t, x) with respect to t, one can prove that u(t − γ(k · e)/c, x) ≥ u(t, x + k) for all (t, x) ∈ R × Ω.

As a consequence, u(t + γ(k · e)/c, x + k) = u(t, x) for all (t, x) ∈ R × Ω, (5.14)
namely u is a pulsating traveling front in the sense of (1.20). Its global mean speed is equal to c γ −1 in the sense of (1.20), but it is equal to c in the more intrinsic sense of Definition 1.6. Let now u and v be two fronts satisfying all assumptions of part (i) of Theorem 1.13. One shall prove that u and v are equal up to shift in time. From Theorem 1.11, there exists (the smallest) T ∈ R such that v(t + T, x) ≥ u(t, x) for all (t, x) ∈ R × Ω and there exists a sequence (t n , x n ) n∈N in R × Ω such that (d Ω (x n , Γ tn )) n∈N is bounded, and v(t n + T, x n ) − u(t n , x n ) → 0 as n → +∞.
Since both u and v satisfy (5.14) for all k ∈ L 1 Z × · · · × L N Z, one can assume without loss of generality that the sequence (x n ) n∈N is bounded. But since the sequence (d Ω (x n , Γ tn )) n∈N is itself bounded and since u is an invasion front, the sequence (t n ) n∈N is then bounded as well.
Up to extraction of some subsequence, one can then assume that (t n , x n ) → (t, x) ∈ R × Ω, whence v(t + T, x) = u(t, x).
The strong parabolic maximum principle and Hopf lemma then yield v(t + T, x) = u(t, x) for all (t, x) ∈ R × Ω, which completes the proof of part (i) of Theorem 1.13.
To prove part (ii), assume, without loss of generality, that e = e 1 = (1, 0, . . . , 0). Fix any σ ∈ R\{0}. The data Ω, A, q, f , µ and p ± are then periodic with respect to the positive vector (|σ|, L 2 , . . . , L N ). Part (i) applied to k = (σ, 0, . . . , 0) then implies that
u(t + γσ/c, x) = u(t, x_1 − σ, x_2, . . . , x_N)
for all (t, x) ∈ R × Ω, where γ = γ(e) = 1 since Ω is invariant in the direction e. Since this property holds for any σ ∈ R\{0} (and also for σ = 0 obviously), it follows that
u(t, x) = φ(x 1 − ct, x ) for all (t, x) ∈ R × Ω,
where x = (x 2 , . . . , x N ) and the function φ : Ω → R is defined by
φ(ζ, x′) = u(−ζ/c, 0, x′) for all (ζ, x′) ∈ Ω.
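As a consistency check (not in the original argument), the translation identity u(t + σ/c, x) = u(t, x_1 − σ, x′) obtained above, applied with σ = x_1, confirms that this φ reproduces u:

```latex
\phi(x_1-ct,\,x') = u\Big(t-\tfrac{x_1}{c},\,0,\,x'\Big)
 = u\Big(\big(t-\tfrac{x_1}{c}\big)+\tfrac{x_1}{c},\,x_1,\,x'\Big)
 = u(t,\,x_1,\,x').
```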
The function φ is then decreasing in ζ since u is increasing in t and c > 0.
Lastly, part (iii) is a consequence of part (ii) and of Theorem 1.15. Namely, part (ii) implies that u depends only on x · e − ct and on the variables x which are orthogonal to e, and Theorem 1.15 (its proof will be done in Section 6) implies that u does not depend on x . 6 Therefore,
u(t, x) = φ(x · e − ct) for all (t, x) ∈ R × R N ,
where the function φ : R → R is defined by φ(ζ) = u(−ζ/c, 0, . . . , 0) for all ζ ∈ R, is decreasing in R and satisfies φ(∓∞) = p ± . The proof of Theorem 1.13 is now complete.
6 The case of media which are invariant or monotone in the direction of propagation
In this section, we assume that the domain is invariant in a direction e and we prove that, under appropriate conditions on the coefficients of (1.2), the almost planar fronts, which may not be invasions, do not depend on the transverse variables or have a constant profile in the direction e. We start with the Proof of Theorem 1.15. Up to rotation of the frame, one can assume without loss of generality that e = e 1 = (1, 0, . . . , 0). We shall then prove that u is decreasing in x 1 and that it does not depend on the variable x = (x 2 , . . . , x N ). First, notice that the same arguments as in Proposition 4.1 yield the inequalities (1.23). The proof is even simpler here due to the facts that Γ t = {x 1 = ξ t } and that assumption (1.22) is made.
Actually, because of (1.22) and Definition 1.1, one can assume without loss of generality in the sequel that the map t → ξ t is uniformly continuous in R.
Fix any vector θ ∈ R N −1 and call v(t, x) = u(t, x 1 , x + θ).
Since the coefficients of (1.2) are assumed to be independent of x , the function v is a solution of the same equation (1.2) as u, with the same choice of sets (Ω ± t ) t∈R and (Γ t ) t∈R . Let A ≥ 0 be such that
∀ (t, x) ∈ R × R N , x 1 − ξ t ≥ A =⇒ u(t, x) ≤ p − (t, x 1 ) + δ x 1 − ξ t ≤ −A =⇒ u(t, x) ≥ p + (t, x 1 ) − δ 2 . (6.1) For all ξ ≥ 2A and x 1 − ξ t ≤ A, one has v ξ (t, x) := v(t, x 1 − ξ, x ) ≥ p + (t, x 1 − ξ) − δ 2 ≥ p + (t, x 1 ) − δ ≥ p − (t, x 1 ) + δ (6.2)
because p + is nonincreasing in x 1 and one can assume, without loss of generality, that 0 < 2δ ≤ κ, under the notation used in (1.16) and (1.17).
Lemma 6.1 For all ξ ≥ 2A, there holds v ξ (t, x) ≥ u(t, x) for all (t, x) ∈ R × R N such that x 1 − ξ t ≥ A (6.3) and v ξ (t, x) ≥ u(t, x) for all (t, x) ∈ R × R N such that x 1 − ξ t ≤ A. (6.4)
Proof. Fix any ξ ≥ 2A. We will just prove property (6.3), the proof of the second one being similar. Since u is bounded, the nonnegative real number
ε * = inf ε > 0; v ξ (t, x) ≥ u(t, x) − ε for all (t, x) ∈ R × R N with x 1 − ξ t ≥ A is well-defined. Observe that v ξ (t, x) ≥ u(t, x) − ε * for all (t, x) ∈ R × R N with x 1 − ξ t ≥ A. (6.5)
Assume by contradiction that ε * > 0. Then there exist a sequence (ε n ) n∈N of positive real numbers and a sequence (t n , x n ) n∈N = (t n , x 1,n , x n ) n∈N in R × R N such that ε n → ε * as n → +∞, and x 1,n − ξ tn ≥ A, v ξ (t n , x n ) < u(t n , x n ) − ε n for all n ∈ N. (6.6)
Since v ξ (t, x) ≥ u(t, x) when x 1 − ξ t = A from (6.1) and (6.2), and since u is globally
C 1 (R × R N ), there exists κ > 0 such that v ξ (t, x) ≥ u(t, x) − ε * 2 for all (t, x) ∈ R × R N such that |x 1 − ξ t − A| < κ. (6.7)
In particular, there holds x 1,n − ξ tn ≥ A + κ for large n.
Furthermore, we claim that the sequence (x 1,n − ξ tn ) n∈N is bounded. Otherwise, up to extraction of a subsequence, it would converge to +∞. Thus, v ξ (t n , x n ) − p − (t n , x 1,n − ξ) = u(t n , x 1,n − ξ, x n + θ) − p − (t n , x 1,n − ξ) → 0 as n → +∞ and u(t n , x n ) − p − (t n , x 1,n ) → 0 as n → +∞.
Since ξ ≥ 0 and p − (t, x 1 ) is nonincreasing with respect to x 1 , it would then follow that lim inf n→+∞ v ξ (t n , x n ) − u(t n , x n ) ≥ 0, which contradicts (6.6). Thus, the sequence (x 1,n − ξ tn ) n∈N is bounded. Remember now that, because of (1.22) and Definition 1.1, the function t → ξ t can be assumed to be uniformly continuous. In particular, the sequence (ξ tn − ξ tn−1 ) n∈N is bounded, whence the sequence (x 1,n − ξ tn−1 ) n∈N is bounded as well. Moreover, there exists a real number ρ such that 0 < ρ ≤ κ 4 and |ξ s − ξ s | ≤ κ 2 for all (s, s ) ∈ R 2 such that |s − s | ≤ ρ. (6.8)
Choose now K ∈ N\{0} such that K ρ ≥ max{ 1, sup{ |x_{1,n} − ξ_{t_n−1} − A|; n ∈ N } }. (6.9) For each n ∈ N and i = 0, . . . , K, set

x̄_{n,i} = x_{1,n} + (i/K)(ξ_{t_n−1} + A − x_{1,n}) and E_{n,i} = [t_n − (i+1)/K, t_n − i/K] × [x̄_{n,i} − 2ρ, x̄_{n,i} + 2ρ] × {x′ ∈ R^{N−1}; |x′ − x′_n| ≤ 1}.
Observe that | x n,i+1 − x n,i | ≤ ρ for all 0 ≤ i ≤ K − 1, from (6.9). Furthermore, since x 1,n − ξ tn > A + κ for large n, say for n ≥ n 0 , it follows from (6.8) and (6.9) that
x n,0 − 2 ρ = x 1,n − 2 ρ ≥ ξ t + A for all t n − 1 K ≤ t ≤ t n and for all n ≥ n 0 . Consequently, E n,0 ⊂ (t, x) ∈ R × R N ; x 1 − ξ t ≥ A} for all n ≥ n 0 . (6.10)
Thus, w := v ξ − (u − ε * ) ≥ 0 in E n,0 and u(t, x) − ε * < u(t, x) ≤ p − (t, x 1 ) + δ in E n,0 for all n ≥ n 0 from (6.1) and (6.5). Since f (t, x 1 , ·) is nonincreasing in (−∞, p − (t, x 1 ) + δ], it follows that u − ε * is a subsolution of (1.2) in E n,0 for all n ≥ n 0 , while v ξ is a supersolution of (1.2) in R × R N , because A and q only depend on t, and f (t, x 1 , s) is nonincreasing in x 1 . Finally, for all n ≥ n 0 , the globally C 1 (R × R N ) function w is nonnegative in E n,0 , it satisfies inequations of the type
w t ≥ ∇ x · (A(t)∇ x w) + q(t) · ∇ x w + b(t, x)w in E n,0
where the sequence ( b L ∞ (E n,0 ) ) n∈N is bounded. Since w(t n , x n,0 , x n ) = w(t n , x n ) → 0 as n → +∞, one finally concludes from the linear parabolic estimates that w t n − 1 K , x n,1 , x n → 0 as n → +∞. (6.11)
But since x n,1 − ξ tn−1/K ≥ x n,0 − ρ − ξ tn−1/K ≥ A from (6.10) for all n ≥ n 0 , it follows from (6.7) and (6.11) that x n,1 − ξ tn−1/K ≥ A + κ for n large enough. By repeating the arguments inductively, one concludes that
x n,i − ξ tn−i/K ≥ A + κ for all i = 1, . . . , K and for n large enough.
One gets a contradiction at i = K, since x n,K = ξ tn−1 + A. As a conclusion, the assumption ε * > 0 was false. Hence, the claim (6.3) is proved, and, as already emphasized, the proof of (6.4) follows the same scheme.
End of the proof of Theorem 1.15. Lemma 6.1 yields v ξ ≥ u in R × R N for all ξ ≥ 2A.
Now define
ξ* = inf{ ξ′ > 0; v^ξ ≥ u in R × R^N for all ξ ≥ ξ′ }.
One has 0 ≤ ξ * ≤ 2A, and v ξ * (t, x) ≥ u(t, x) for all (t, x) ∈ R × R N . Assume now that ξ * > 0. Two cases may occur: Case 1: assume here that
inf v ξ * (t, x) − u(t, x); |x 1 − ξ t | ≤ A > 0.
From the boundedness of u x 1 , there exists then η 0 ∈ (0, ξ * ) such that v ξ * −η (t, x) ≥ u(t, x) for all η ∈ [0, η 0 ] and |x 1 − ξ t | ≤ A. (6.12)
Since v ξ * (t, x) ≥ u(t, x) ≥ p + (t, x 1 ) − δ 2 for all x 1 − ξ t ≤ −A, one can assume that η 0 > 0 is small enough so that v ξ * −η (t, x) ≥ p + (t, x 1 ) − δ for all x 1 − ξ t ≤ −A.
Applying again the arguments used in Lemma 6.1, one then concludes that, for all η ∈ [0, η 0 ], there holds v ξ * −η (t, x) ≥ u(t, x) for all (t, x) ∈ R × R N such that
x 1 − ξ t ≤ −A or x 1 − ξ t ≥ A.
Eventually, together with (6.12), v ξ * −η ≥ u in R × R N for all η ∈ [0, η 0 ], which contradicts the minimality of ξ * . Thus, case 1 is ruled out. Case 2: one then has
inf v ξ * (t, x) − u(t, x); |x 1 − ξ t | ≤ A = 0.
There exists then a sequence (t n , x n ) n∈N = (t n , x 1,n , x n ) n∈N in R × R N such that |x 1,n − ξ tn | ≤ A for all n ∈ N, u(t n , x 1,n − ξ * , x n + θ) − u(t n , x n ) = v ξ * (t n , x n ) − u(t n , x n ) → 0 as n → +∞.
Fix now any σ > 0 and m ∈ N\{0}. Since v ξ * ≥ u and v ξ * is a supersolution of (1.2) in R × R N , the linear parabolic estimates then imply that u t n − σ m , x 1,n − 2ξ * , x n + 2θ − u t n − σ m , x 1,n − ξ * , x n + θ = v ξ * t n − σ m , x 1,n − ξ * , x n + θ − u t n − σ m , x 1,n − ξ * , x n + θ −→ 0 as n → +∞.
By immediate induction, one gets that u t n − k σ m , x 1,n − (k + 1)ξ * , x n + (k + 1)θ − u t n − k σ m , x 1,n − kξ * , x n + kθ −→ n→+∞ 0, for each k = 1, . . . , m. Therefore, lim sup n→+∞ u(t n − σ, x 1,n − (m + 1)ξ * , x n + (m + 1)θ) − u(t n , x n ) ≤ σ u t L ∞ (R×R N ) .
Similarly, by considering the points (t n − kσ/m, x 1,n + kξ * , x n + kθ), one gets that lim sup n→+∞ u(t n − σ, x 1,n + (m − 1)ξ * , x n + (m − 1)θ) − u(t n , x n ) ≤ σ u t L ∞ (R×R N ) .
Hence, lim sup n→+∞ u(t n − σ, x 1,n − (m + 1)ξ * , x n + (m + 1)θ) −u(t n − σ, x 1,n + (m − 1)ξ * , x n + (m − 1)θ) ≤ 2σ u t L ∞ (R×R N ) .
(6.13)
Choose now σ > 0 such that 2σ u t L ∞ (R×R N ) ≤ κ 4 . (6.14)
But |x 1,n − ξ tn | ≤ A for all n ∈ N and the sequence (ξ tn − ξ tn−σ ) n∈N is bounded from the assumption made in Theorem 1.15. Therefore, the sequence (x 1,n − ξ tn−σ ) n∈N is bounded. Let C ≥ 0 be such that
x 1 ≥ ξ t + C =⇒ u(t, x) ≤ p − (t, x 1 ) + κ 4 x 1 ≤ ξ t − C =⇒ u(t, x) ≥ p + (t, x 1 ) − κ 4 .
Since ξ * is assumed to be positive, there exists m ∈ N\{0} such that x 1,n + (m − 1)ξ * ≥ ξ tn−σ + C and x 1,n − (m + 1)ξ * ≤ ξ tn−σ − C for all n ∈ N.
Thus, u(t n − σ, x 1,n + (m − 1)ξ * , x n + (m − 1)θ) ≤ p − (t n − σ, x 1,n + (m − 1)ξ * ) + κ 4
≤ p − (t n − σ, x 1,n ) + κ 4 and u(t n − σ, x 1,n − (m + 1)ξ * , x n + (m + 1)θ) ≥ p + (t n − σ, x 1,n − (m + 1)ξ * ) − κ 4
≥ p + (t n − σ, x 1,n ) − κ 4 for all n ∈ N, because p ± are nonincreasing in x 1 . Hence, u(t n − σ, x 1,n − (m + 1)ξ * , x n + (m + 1)θ) − u(t n − σ, x 1,n + (m − 1)ξ * , x n + (m − 1)θ)
≥ p + (t n − σ, x 1,n ) − p − (t n − σ, x 1,n ) − κ 2 ≥ κ 2 for all n ∈ N,
by definition of κ. Therefore, lim sup n→+∞ u(t n − σ, x 1,n − (m + 1)ξ * , x n + (m + 1)θ)
−u(t n − σ, x 1,n + (m − 1)ξ * , x n + (m − 1)θ) ≥ κ 2 ,
while it is less than or equal to κ/4 from (6.13) and (6.14).
One has then reached a contradiction, which means that ξ * = 0. Then, v(t, x 1 − ξ, x + θ) ≥ u(t, x 1 , x ) for all (t, x 1 , x ) ∈ R × R N , ξ ≥ 0 and θ ∈ R N −1 .
As a consequence, u is nonincreasing in x 1 and it does not depend on x . Furthermore, the strong parabolic maximum principle, together with the same arguments as above, implies that u is actually decreasing in x 1 . That completes the proof of Theorem 1.15.
Proof of Theorem 1.14. Assume that all assumptions made in Theorem 1.14 are satisfied. Up to rotation of the frame, one can assume without loss of generality that e = e_1 = (1, 0, . . . , 0). Consider first the case where c > 0. There exists ε ∈ {−1, 1} such that (ξ_t − ξ_s)/(t − s) → ε c as t − s → ±∞ and sup{ |ξ_t − ε c t|; t ∈ R } < +∞.
The function v(t, x) = u(t, x + ε c t e) = u(t, x 1 + ε c t, x )
is well-defined for all (t, x) ∈ R × Ω (because Ω is invariant in the direction e) and it satisfies
v t = ∇ x · (A(x )∇ x v) + q(x ) · ∇ x v + ε c v x 1 + f (x , v) in R × Ω, µ(x ) · ∇ x v = 0 on R × ∂Ω,
because A, q, µ and f are independent of x 1 (and of t). Furthermore, since p ± only depend on x , v is a transition front connecting p − and p + , with the sets Ω ± t = x ∈ Ω, ±x 1 < 0 and Γ t = x ∈ Ω, x 1 = 0 .
With the same type of arguments as in the proof of Theorem 1.15 above, one can then fix any ζ ∈ R and slide v(t + ζ, x) with respect to v in the x 1 -direction. It follows then that v(t + ζ, x 1 − ξ, x ) ≥ v(t, x 1 , x ) for all (t, x) ∈ R × Ω, ξ ≥ 0 and ζ ∈ R.
Therefore, v is independent of t and it is nonincreasing in x 1 . As above, v is then decreasing in x 1 . That gives the required conclusion in the case where c > 0.
In the case where c = 0, the function t → ξ t is then bounded. Because of Definition 1.1, one can then assume, without loss of generality, that ξ t = 0 for all t ∈ R. The functions p ± and f may depend on x 1 , but are assumed to be nonincreasing in x 1 . For any ζ ∈ R and ξ ≥ 0, the function u(t + ζ, x 1 − ξ, x ) is then a supersolution of the equation (1.2) which is satisfied by u. One can then slide u(t + ζ, x 1 , x ) with respect to u in the (positive) x 1 -direction, and it follows as in the proof of Theorem 1.15 that u(t + ζ, x 1 − ξ, x ) ≥ u(t, x 1 , x ) for all (t, x) ∈ R × Ω, ξ ≥ 0 and ζ ∈ R.
As usual, one concludes that u does not depend on t and is decreasing in x 1 .
Ω
∩ B(y, r 0 ) = y + R y {x ∈ R N ; (x 1 , . . . , x N −1 ) ∈ B N −1 2r 0 , φ y (x 1 , . . . , x N −1 ) < x N } ∩B(y, r 0 ),where B(y, r 0 ) = {x ∈ R N ; |x − y| < r 0 }, | | denotes the Euclidean norm in R N and, for any s > 0, B N −1 s
Theorem 1.8 For equation (1.11) under the assumption (1.12), there exist transition invasion fronts connecting p − = 0 and p + = 1, for which
(1.11) with the same function f , according to the starting speed c 1 : for instance, there are examples of functions f 1 and f 2 for which
Definition 1.18 (Localized pulses) In Definition 1.17, if k = 1 and if
ϕ_{1,c}(s) ∼ A_{1,c} e^{−λ_{1,c} s} as s → +∞, where A_{1,c} is a positive constant and λ_{1,c} > 0 has been defined in (1.13). If c = c*_1 and c*_1 > 2√(f_1′(0)), then the same property holds. If c = c*_1 and c*_1 = 2√(f_1′(0)), then ϕ_{1,c}(s) ∼ (A_{1,c} s + B_{1,c}) e^{−λ_{1,c} s} as s → +∞,
Proposition 4.1 Under the assumptions of Theorem 1.10, one has
Lemma 4.3 For all s ≥ s 0 , one has
2. Conversely, if (1.8) and (1.9) hold for some choices of sets (Ω ± t , Γ t ) t∈R satisfying (1.3), (1.4) and (1.5), and if there is d 0 > 0 such that the sets
Therefore, u and its derivatives u t , u xi and u xixj , for all 1 ≤ i, j ≤ N , are bounded and globally Hölder continuous in R × Ω.
∀ t ∈ R, ∀ j ≠ j′ ∈ {1, . . . , k}, Ω j t ∩ Ω j′ t = ∅,
∀ t ∈ R, ∪ 1≤j≤k (∂Ω j t ∩ Ω) = Γ t and Γ t ∪ ( ∪ 1≤j≤k Ω j t ) = Ω,
∀ t ∈ R, ∀ j ∈ {1, . . . , k}, sup { d Ω (x, Γ t ); x ∈ Ω j t } = +∞,
∀ A ≥ 0, ∃ r > 0, ∀ t ∈ R, ∀ x ∈ Γ t , ∃ 1 ≤ j ≠ j′ ≤ k, ∃ y j ∈ Ω j t , ∃ y j′ ∈ Ω j′ t , d Ω (x, y j ) = d Ω (x, y j′ ) = r and min ( d Ω (y j , Γ t ), d Ω (y j′ , Γ t ) ) ≥ A,
if N = 1 then Γ t is made of at most n points; if N ≥ 2 then (1.5) is satisfied,
Intrinsic character of the interface localization and the global mean speed

Given a generalized transition wave u, we can view the set Γ t as the continuous interface of u at time t. Of course this set is not uniquely defined, however, as we shall prove here,
Notice that, in this part (iii), one can assume without loss of generality that ξ t = c t for all t ∈ R, because of Definition 1.1.
DOI: 10.1093/mnras/stz809
arXiv: 1905.08972 (https://arxiv.org/pdf/1905.08972v1.pdf)
GASP XIX: AGN and their outflows at the center of jellyfish galaxies
Mario Radovich (1), Bianca Poggianti (1), Yara L. Jaffé (2), Alessia Moretti (1), Daniela Bettoni (1), Marco Gullieuszik (1), Benedetta Vulcani (1), Jacopo Fritz (3)

(1) INAF-Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, IT-35122 Padova, Italy
(2) Instituto de Física y Astronomía, Facultad de Ciencias, Universidad de Valparaíso, Avda. Gran Bretaña 1111, Casilla 5030, Valparaíso, Chile
(3) Instituto de Radioastronomía y Astrofísica, UNAM, Campus Morelia, A.P. 3-72, C.P. 58089, Mexico
Accepted 2019 March 15. Received 2019 March 15; in original form 2019 February 5.
MNRAS 000, 1-20 (2019). Preprint 23 May 2019. Compiled using MNRAS LaTeX style file v3.0.
Key words: galaxies: clusters: general - galaxies: active
The GASP survey, based on MUSE data, is unveiling the properties of the gas in the so-called "jellyfish" galaxies: these are cluster galaxies with spectacular evidence of gas stripping by ram pressure. In a previous paper, we selected the seven GASP galaxies with the most extended tentacles of ionized gas, and based on individual diagnostic diagrams concluded that at least five of them present clear evidence for an Active Galactic Nucleus. Here we present a more detailed analysis of the emission lines properties in these galaxies. Our comparison of several emission line ratios with both AGN and shock models show that photoionization by the AGN is the dominant ionization mechanism. This conclusion is strengthened by the analysis of Hβ luminosities, the presence of nuclear iron coronal lines and extended (> 10 kpc) emission line regions ionized by the AGN in some of these galaxies. From emission line profiles, we find the presence of outflows in four galaxies, and derive mass outflow rates, timescales and kinetic energy of the outflows.
1 INTRODUCTION
It is now widely accepted that there is a strong connection between the presence of an Active Galactic Nucleus (AGN) and the host galaxy properties, based both on cosmological models and observational results from wide-field surveys (see e.g. Heckman & Best 2014, and references therein). However, the way this interaction occurs is still unclear, and it may actually be the outcome of a wide range of different physical processes (e.g. merging, bars). A major improvement in our understanding of the complex environment around AGN is given by the availability of Integral Field Spectroscopy (IFU), which makes it possible to map emission line fluxes and kinematics tracing the AGN and its surroundings (see e.g. Venturi et al. 2018; Ilha et al. 2019; Mingozzi et al. 2019).
An important issue is the effect of the environment on the presence of the AGN: it is still debated (see e.g. Marziani et al. 2017, and references therein) whether or not a dense galaxy environment such as in galaxy clusters has any effect on the presence of AGN. Early spectroscopic studies (Dressler et al. 1985) suggested that the fraction of AGN in clusters (∼1%) is significantly lower than in a field environment (∼5%). Later studies based on X-ray data (Pimbblet et al. 2013) did not however confirm this result, showing comparable fractions of AGN in cluster and field, though there may be an effect related to distance from the cluster centre (Ehlert et al. 2014). In this context, Marshall et al. (2018) used semi-analytic galaxy evolution models to show that both star formation and AGN can be triggered by the ram pressure as galaxies move through the intracluster medium, in galaxies located at distances from the cluster centre larger than the virial radius; at smaller distances, where the ram pressure is higher, models suggest that the gas is stripped from the galaxy and cannot feed the AGN. Ramos-Martínez et al. (2018) analyzed the role of the galactic magnetic field in the gas stripping using 3D magnetohydrodynamic simulations: they found that the magnetic field can contribute to generate a gas inflow to the central parts of the galaxies, triggering star formation and maybe feeding the AGN.
(Footnote, displaced by extraction: E-mail: [email protected])
Conversely, the presence of the AGN may impact the surrounding environment in many ways (see e.g. Fabian 2012, for a review): in particular, AGN are able to drive outflows of ionized gas and impact on the galaxy environment on scales that may range from a few kpc (see e.g. Bing et al. 2019) for the less luminous AGN to tens of kpc for the brightest AGN (Harrison et al. 2014).

[Displaced Table 1 caption fragment: ...Poggianti et al. (2017b) and, for galaxies classified as AGN or LINER, the estimated AGN sizes and the observed and dereddened [OIII]λ5007 luminosities within r NLR.]

The GAs Stripping Phenomena (GASP) survey (Poggianti et al. 2017a, P17a hereafter) is aimed at studying with the MUSE Integral Field spectrograph on VLT the properties of the so-called jellyfish galaxies in clusters, whose tentacles of UV and optically bright material, which make them similar to a jellyfish (Smith et al. 2010), are thought to originate via ram-pressure stripping by the intra-cluster medium (Ebeling et al. 2014; Fumagalli et al. 2014; Rawle et al. 2014; Fossati et al. 2016). Poggianti et al. (2017b) (P17b hereafter) showed that at least five and possibly six of seven galaxies with the strongest evidence of gas stripping and the most favourable conditions for ram pressure (Jaffé et al. 2018) host an AGN, suggesting a connection between ram pressure stripping and AGN triggering. In P17b the
[OIII]λ5007/Hβ line ratios were used to select the most likely mechanism that ionized the gas: radiation from hot young stars in star-forming regions, from an AGN, a combination of them (composite), and either lowluminosity AGN or shocks (LINERs), using as reference the classification by Kewley et al. (2006). As already shown in Poggianti et al. (2019), adding other line ratio diagnostic diagrams such as [OI]λ6300/Hα and [SII]λλ6716,6731/Hα vs.
[OIII]λ5007/Hβ (Veilleux & Osterbrock 1987) can provide a more detailed description of the physical processes at work. In this paper we expand the work by P17b and critically scrutinize those results using all three main diagnostic diagrams simultaneously and comparing observed line ratios with photoionization and shock models. Moreover, we inspect additional features such as coronal Fe lines and analyze separately the extended extranuclear AGN-powered emission regions. Finally, we discuss the presence and properties of outflows.
The paper is structured as follows. A short summary of the data and how they were analyzed is given in Sect. 2. In Sect. 3, observed emission line ratios are compared with both photoionization and shock models, to confirm that photoionization from the AGN is required to reproduce the line ratios and to derive the best-fit model parameters. As further probes of the AGN, we estimate the maximum contribution to the observed Hβ luminosity from shock models; in some cases, we detect the presence of high-ionization iron coronal lines (JO201 and JO135) and of extended (> 10 kpc) AGN-like emission lines (JO204 and JO135): this is used in Sect. 3.4 to derive the number of ionizing photons that should be emitted by the AGN. In Sect. 4 we analyze the [OIII]λ5007 line as a tracer of outflows around the AGN and derive their size, outflow rates, timescales and kinetic energy. Conclusions are given in Sect. 5. The concordance cosmology model was adopted: H0 = 70 km s −1 Mpc −1 , Ωm = 0.3, ΩΛ = 0.7.
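For reference, the luminosity distances implied by this cosmology can be computed with a short numerical integration; the following is an illustrative sketch (the function names are assumptions, not part of the GASP pipeline):

```python
import math

# Flat LCDM cosmology adopted in the text:
# H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7.
C_KMS = 299792.458  # speed of light [km/s]
H0, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7

def _inv_E(z):
    """1/E(z), with E(z) = H(z)/H0 for a flat LCDM model."""
    return 1.0 / math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lum_dist_mpc(z, n=1000):
    """Luminosity distance d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z').

    Composite Simpson rule with n (even) subintervals.
    """
    h = z / n
    s = _inv_E(0.0) + _inv_E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * _inv_E(i * h)
    comoving = (C_KMS / H0) * (h / 3.0) * s
    return (1.0 + z) * comoving
```

For instance, at z = 0.1 this gives roughly 460 Mpc, consistent with standard cosmology calculators for these parameters.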
GALAXY SAMPLE, OBSERVATIONS AND DATA ANALYSIS
In this paper we analyze the seven galaxies in P17b (Table 1 and Fig. 1), which are all characterized by tails of ionized gas at least as long as the stellar galaxy diameter. These galaxies represent extreme cases where the cluster environment strongly acts on the gas and possibly on the AGN. Spectra and individual emission lines for the central spaxel of each galaxy, selected as discussed later, are displayed in Fig. 2. In the following description, a reference is given in square brackets after the galaxy name for those galaxies studied individually in a GASP paper. JO201, or Kaz 364 [P17a, Bellhouse et al. (2017), George et al. (2018)] was also classified by Arnold et al. (2009) (Fig. 2). AGN-like emission is present up to ∼ 10 kpc (P17b), see Sect. 3.4. JW100, or IC 5337, was classified as an AGN by Wong et al. (2008), based on X-ray Chandra observations. It was classified as a head-tail radio source by Gitti (2013), who detected radio emission in Very Large Array radio measurements at 1.4 and 4.8 GHz: the peak of the radio emission coincides with the MUSE center. Double-peaked profiles are detected in the region around the nucleus (P17b). JO175, JO206 [P17a], JO194: in these galaxies emission lines appear as single-component Gaussians. We refer to P17a for a detailed description of the GASP survey, data and adopted reduction techniques. Observations were obtained with the MUSE spectrograph in wide-field mode with natural seeing (Bacon et al. 2010). One or two MUSE pointings per galaxy, each with a 2700 sec exposure and covering a 1'x1' field of view, are sampled with 0.2"x0.2" pixels over the spectral range 4800-9300Å with a spectral resolution FWHM ∼ 2.6Å. Data were taken under clear dark sky conditions, with < 1" seeing (Table 1).
We remind the reader that the fitting of emission lines in GASP was done using KubeViz (Fossati et al. 2016). Velocity and velocity dispersion were derived from the fit of the lineset consisting of Hα and the [NII]λλ6548,6583 doublet, and used for all other lines; as necessary, one or two Gaussian components were adopted. Emission line velocity dispersions (σobs) were corrected for the instrumental component: σ = √(σobs² − σinst²); σinst was derived at each wavelength using a third-order polynomial fit of the MUSE resolution curve (Fumagalli et al. 2014). In the following, we will use the data cube average filtered with a 5x5 pixel kernel in the spatial direction, unless otherwise stated, having subtracted the underlying stellar spectrum fitted with the SINOPSIS code (Fritz et al. 2017). The stellar kinematics is derived with the pPXF code (Cappellari & Emsellem 2004) using stellar population templates from Vazdekis et al. (2010), see P17a for details. The galaxy center was defined as
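The instrumental correction described here can be sketched in a few lines; this is an illustration, not the actual KubeViz code, and the polynomial coefficients standing in for the MUSE resolution curve are placeholders:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def sigma_inst_kms(wavelength_A, poly_coeffs):
    """Instrumental dispersion [km/s] at a given wavelength [Angstrom].

    poly_coeffs: third-order polynomial coefficients (highest power first)
    giving the spectral-resolution FWHM [Angstrom] vs wavelength; here an
    assumed parametrization of the resolution curve.
    """
    fwhm_A = np.polyval(poly_coeffs, wavelength_A)
    sigma_A = fwhm_A / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    return C_KMS * sigma_A / wavelength_A

def sigma_corrected(sigma_obs_kms, wavelength_A, poly_coeffs):
    """sigma = sqrt(sigma_obs^2 - sigma_inst^2); NaN where unresolved."""
    s_inst = sigma_inst_kms(wavelength_A, poly_coeffs)
    diff = sigma_obs_kms ** 2 - s_inst ** 2
    return np.where(diff > 0, np.sqrt(np.clip(diff, 0, None)), np.nan)
```

With a constant FWHM of 2.6 Å, an observed dispersion of 100 km/s at Hα corresponds to an intrinsic dispersion of about 86 km/s.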
IONIZATION MECHANISMS
The left panels of Fig. 4 present the classification in HII-regions, Composite, AGN and Liners based on [NII]λ6583/Hα vs.
[OIII]λ5007/Hβ as in P17b, whose main conclusions are summarized here. In some cases (JW100, JO135 1 , JO201, JO204) one Gaussian was not enough to fit the emission line profiles and a double-Gaussian fit was adopted: in these cases, the classification displayed in Fig. 4 refers to the narrow component.
Based on the P17b analysis, JO201, JO204, JO206, JO135 and JW100 present AGN-like line ratios in the inner kpcs. None of them shows broad (> 5000 km s −1 ), permitted lines typical of the Broad Line Region in AGN; the observed emission lines are therefore produced in the Narrow Line Region (NLR). Extended AGN-like emission over several kpcs is observed in JO204 and JO135, and it can be attributed to anisotropic ionization from the AGN (the so-called ionization cones). In JO175 the emission is mostly due to star formation, while in JO194, composite line ratios are detected throughout the galaxy.
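The classification scheme referenced above rests on standard [NII]-BPT demarcation curves; the following minimal sketch uses the widely adopted Kauffmann et al. (2003) and Kewley et al. (2001) lines that underlie the Kewley et al. (2006) scheme (an illustration of the idea, not the exact implementation used in P17b):

```python
def kewley01(log_nii_ha):
    """Kewley et al. (2001) maximum-starburst line (valid for x < 0.47)."""
    return 0.61 / (log_nii_ha - 0.47) + 1.19

def kauffmann03(log_nii_ha):
    """Kauffmann et al. (2003) pure star-formation line (valid for x < 0.05)."""
    return 0.61 / (log_nii_ha - 0.05) + 1.3

def classify(log_nii_ha, log_oiii_hb):
    """Classify one spaxel from its log [NII]6583/Ha, log [OIII]5007/Hb."""
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann03(log_nii_ha):
        return "HII"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley01(log_nii_ha):
        return "Composite"
    return "AGN/LINER"
```

Spaxels below the Kauffmann curve are star-forming, those between the two curves are composite, and those above the Kewley curve are AGN or LINER (the AGN/LINER split requires an additional demarcation not sketched here).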
We define the size of the Narrow Line Region in the AGN candidates as in Bae et al. (2017), that is weighting the projected distances from central spaxel on the [OIII]λ5007 fluxes, over the spaxels classified as AGN in P17b:
r NLR = Σ r<5 kpc r f [OIII] (r) / Σ r<5 kpc f [OIII] (r),    (1)
r being the distance from the central spaxel and f [OIII] the line flux at that distance. The NLR size so defined is of the order of 1 kpc for all galaxies: as the typical seeing was ∼ 0.8-1 arcsec, the NLR is unresolved. To have an estimate of the maximum extension of the AGN-ionized region, we also computed the 95% percentile of the distances for the AGN spaxels (rAGN in Table 1), keeping in mind that these values may be biased by spaxels with fainter [OIII] fluxes, where the measurement uncertainties may produce a wrong classification. The AGN emission can be therefore described as the sum of a pointlike source producing the bulk of the emission and a fainter, extended (r > 1 kpc) emission.
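Equation (1) and the 95th-percentile estimate translate into a few lines of array code; this is a sketch under the assumption that per-spaxel distances, [OIII] fluxes and an AGN mask are available, not the actual GASP implementation:

```python
import numpy as np

def nlr_size(r_kpc, f_oiii, is_agn):
    """Flux-weighted NLR radius (Eq. 1) and 95th-percentile AGN extent.

    r_kpc:  projected distance of each spaxel from the central spaxel [kpc]
    f_oiii: [OIII]5007 flux of each spaxel
    is_agn: boolean mask of spaxels classified as AGN
    """
    m = is_agn & (r_kpc < 5.0)             # AGN spaxels within r < 5 kpc
    r_nlr = np.sum(r_kpc[m] * f_oiii[m]) / np.sum(f_oiii[m])
    r_agn = np.percentile(r_kpc[m], 95.0)  # unweighted distance percentile
    return r_nlr, r_agn
```

Because r_NLR is flux-weighted, a bright pointlike nucleus dominates the sum and pulls the estimate to ∼1 kpc even when fainter AGN-classified spaxels extend much farther out, which is why the percentile-based r_AGN is reported separately.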
3.1 Photoionization and shock models
We now complement the classification done in P17b with a more detailed analysis: we simultaneously consider the line-ratio diagnostic diagrams (Veilleux & Osterbrock 1987; Kewley et al. 2006) and compare them to the predictions from photoionization and shock models.
In those cases where two components were required for the fit, we considered the summed fluxes for the comparison with models. We proceed as follows:
1. we selected only spaxels classified in P17b as either AGN, composite or LINERs;
2. for each spaxel we ran NebulaBayes (Thomas et al. 2018), a python code that adopts a Bayesian approach to select the model optimally fitting the target emission line fluxes.
[Displaced Fig. 3 caption: For each galaxy the plots show the spatial distribution in the central 10 x 10 of left: extinction (Av); right: log ne. For all galaxies, 1 arcsec is ∼ 1 kpc (see Table 1).]

NebulaBayes includes grids where constant gas pressure photoionization models are computed with the MAPPINGS V code, for HII regions and AGN. A full discussion of the assumptions and parameters of these models is given in Thomas et al. (2018); we summarize here the main aspects. For HII regions, the ionizing continuum is defined by the SLUG2 (Krumholz et al. 2015) stellar population synthesis code, with five metallicities (Z = 0.0004, 0.004, 0.008, 0.02, 0.05). For AGN, the ionizing continuum is described in Thomas et al. (2016), and is parametrized by the energy of the peak of the accretion disk emission (E peak ), the photon index of the inverse Compton scattered power-law tail (Γ), and the proportion of the total flux in the non-thermal tail (pNT). In the grid models, the latter two parameters are fixed (Γ = 2, pNT = 0.15).
For both HII regions and AGN, the other model free parameters are: the metallicity (12 + log O/H), the ionization parameter (U) and the gas pressure (log P/k), with P/k ∼ 2.4 n e T, see e.g. Kakkad et al. (2018).
Considering the environment of these galaxies and the presence of outflows in the nuclear regions, it is important to understand what may be the contribution from shocks, and whether shocks alone can produce the observed line ratios. As extensively discussed by Allen et al. (2008), in the so-called fast shock models the cooling of the hot gas behind the shock front produces high energy photons which ionize the pre-shocked gas (precursor). When the shock velocity is > 170 km s −1 , the contribution from the photoionized gas in the precursor starts to become increasingly important and both high and low ionization lines are present in the observed spectrum. Varying the input model parameters, that is the pre-shock density, n, the shock velocity, v s , the pre-shock transverse magnetic field, B, and the gas atomic abundances, it is possible to produce a wide range of emission line ratios, from HII-like regions to LINERs and AGN.

[Displaced Fig. 4 caption: Color-coded maps in a region of 100x100 spaxels around the nucleus. Left: classification from P17b (HII, composite, AGN, LINER). Right: NB models: HII; shock with n = 0.1 cm −3 , solar abundances; shock with n = 1 cm −3 , solar abundances (M); shock with n = 1 cm −3 , 2x solar abundances (R); AGN. Spaxels classified as HII in P17b were not fitted with NB.]
Since shock model libraries are not directly available in NebulaBayes, we adapted the Allen et al. (2008) fast shock grids so that they could be used in NebulaBayes. From these grids, we selected the models with n = 0.1, 1, 10 cm−3, for which solar abundances are available at all three densities and 2× solar abundances at n = 1 cm−3.
For each model type (shock, HII and AGN), NebulaBayes was run spaxel by spaxel, providing as output the best-fit model line ratios and the χ²: the optimal model was selected as the one giving the lowest χ². We stress that different factors may contribute to produce a wrong classification for a given spaxel, for instance the uncertainties on the line measurements, the limited number of parameters in the models, and the fact that in many cases there may be a simultaneous contribution from different ionizing mechanisms.
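The per-spaxel selection can be sketched as follows. This is a minimal stand-in for the NebulaBayes machinery, not its actual API; the line-ratio names and toy grid points are hypothetical.

```python
def classify_spaxel(obs, err, model_grids):
    """Return the mechanism whose best grid point gives the lowest chi^2,
    together with the per-mechanism minimum chi^2 values."""
    best = {}
    for mechanism, grid in model_grids.items():
        best[mechanism] = min(
            sum((obs[k] - point[k]) ** 2 / err[k] ** 2 for k in obs)
            for point in grid
        )
    label = min(best, key=best.get)
    return label, best

# toy example: an observation close to the 'AGN' grid point
obs = {"log_NII_Ha": -0.1, "log_OIII_Hb": 0.9}
err = {"log_NII_Ha": 0.05, "log_OIII_Hb": 0.05}
grids = {
    "AGN":   [{"log_NII_Ha": -0.05, "log_OIII_Hb": 0.85}],
    "shock": [{"log_NII_Ha": -0.40, "log_OIII_Hb": 0.30}],
    "HII":   [{"log_NII_Ha": -0.60, "log_OIII_Hb": 0.10}],
}
label, chi2 = classify_spaxel(obs, err, grids)
```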
3.2 Results: Shock vs. AGN

The maps of the best-fit classification from NebulaBayes, compared with the classification derived as in P17b, are displayed in Fig. 4.
Fig. 5 presents AGN and shock model grids overlaid on the observed spaxel line ratios; the latter are color coded with the distance from the galaxy center. For AGN models, we display those with log E_peak offsets of [−0.5, 0, +0.5] around the best-fit value of the central spaxel, and for different values of log U. The abundances derived from the NB fits are super-solar (12 + log O/H > 9) in the AGN-dominated nuclei, and solar (12 + log O/H = 8.76) outside. For fast shock models, we plot a grid of varying velocities and magnetic field values, fixing the best-fit density and metallicity.

Fast shock models in individual diagnostic diagrams can produce a wide range of line ratios, covering both the HII and AGN regions of the diagrams. For the most extreme cases like JO135 and JO201, which have log [OIII]/Hβ ∼ 1, in order to reach the observed line ratios the shock models require v_sh > 500 km s−1, while in our case the width of the strongest component is σv ∼ 100 km s−1. When log [OIII]λ5007/Hβ is low (< 0.5) the shock velocities do not need to be so extreme. However, as shown in Fig. 5, for JO135, JO201, JO204, JO206 and JW100, shock models produce either too high [OI]λ6300/Hα ratios or too low [NII]λ6583/Hα ratios, while AGN photoionization models reproduce all diagnostic ratios, as found for typical AGN by Allen et al. (2008).
For JO194 the emission line ratios fall on the LINER side of the diagrams, with log [OIII]λ5007/Hβ ≤ 0.2, where optical lines alone are not able to clearly separate between AGN and other ionization mechanisms (see e.g. Belfiore et al. 2016). Around the central spaxel there is a very small region, a few spaxels in size, where AGN models produce a lower χ² than shock models. However, as displayed in Fig. 4, the line ratios move to the SF region of the diagrams already within 2 kpc; line ratios similar to those observed in the nucleus are also present up to ∼ 5 kpc. Considering the uncertainties, due e.g. to the fact that here we do not consider mixed models where both SF and AGN or shocks contribute to the line ratios, we conclude that we can neither confirm nor discard the presence of the AGN; in P17b the AGN option was favoured, considering the high Chandra X-ray luminosity (L_0.3−8keV = 1.4 × 10^41 erg s−1). Since we detect a strong extinction (Av > 2.5) in the nuclear regions, it is possible that AGN emission is obscured by dust in the optical.
For JO175, line ratios are consistent with star formation, as well as with some of the shock models as suggested by the high [OI]λ6300/Hα ratio, but not with an AGN.
As a further test, we compare the observed, dust corrected L(Hβ) luminosity within rNLR with the value derived from the Allen et al. (2008) library, selecting models with n = 1 cm −3 and best-fit values in the central spaxel for B and v sh . We estimate the maximum contribution from shocks to be negligible (< 3%) for JO135, < 20% for JO194, JO201, JO204 and JO206, < 40% for JW100.
We conclude that, in agreement with P17b, the NebulaBayes results confirm that the central spaxels of all galaxies except JO175 are best fitted by AGN models, whose parameters are given in Table 2. For JO175, nuclear line ratios can be fitted either by SF or by shocks: the HII-like [NII]λ6583/Hα and [SII]λλ6716+6731/Hα ratios, combined with the high [OI]λ6300/Hα, agree well with shock models (either fast or slow).
Finally, four galaxies (JO201, JO204, JO206 and JW100) also show AGN line ratios in the circumnuclear regions (< 5kpc), with an expected decrease of the ionization parameter, and a decrease of E peak consistent with an increasing contribution from HII (composite) regions. In fact, as discussed in Thomas et al. (2018), variations in E peak may be due either to screening by gas and dust (hardening the ionizing continuum and thus increasing E peak ), or to contamination from shock or HII regions (softening the continuum and thus decreasing E peak ).
3.3 Coronal lines
The high ionization (coronal) line [Fe VII]λ6087 (Fig. 2, Fig. 6) is detected in the inner kpc region of JO201 and JO135, with a peak SN of ∼ 20, corresponding to [Fe VII]λ6087/Hα ∼ 0.05. The weaker lines [Fe VIII]λ5721 and [Fe X]λ6375 (blended with [OI]λ6363) are also present in the same regions. We report the presence in JO206 of a faint feature in the [Fe VII]λ6087 spectral region, but the SN < 3 is too low for a secure identification. Both in JO135 and JO201, the [Fe VII]λ6087 line is characterized by a profile similar to that of the low ionization lines, but it has a redshifted peak with respect to the Balmer recombination lines, in particular in JO201, suggesting that these lines are produced in different regions. This is confirmed by the fact that, compared to the other lines, its emission is concentrated (Fig. 6) in a region whose size is close to the size of the PSF (FWHM ∼ 1 arcsec). Adopting the definition of the NLR size presented before, we obtain rNLR(FeVII) < 0.5 kpc, and rAGN(FeVII) ∼ 0.5 kpc (JO135), 1.2 kpc (JO201): the emission is therefore unresolved in JO135, while in JO201 there is a faint extended emission that could however be an artifact due to the PSF wings. The intensities of coronal lines are not available in the NebulaBayes grids, since MAPPINGS does not accurately model these lines (Davies et al. 2016). Different models were presented to reproduce coronal lines in AGN. Mingozzi et al. (2019) reported the presence of Fe coronal lines in a sample of AGN with outflows, observed with MUSE: they attributed them to the inner, optically thin and highly ionized regions of the outflows. Korista & Ferland (1989) attributed them to a low-density (ne ∼ 1 cm−3) ISM heated by the AGN radiation, in a region whose size is similar to or larger than the NLR (∼ 1-2 kpc). Ferguson et al.
(1997); Komossa & Schulz (1997) showed that multi-component photoionization models, where emission lines are produced in an ensemble of clouds with different gas densities and distances from the center, are able to successfully fit the observed values: in particular, in these models coronal lines are mostly produced by hot (T ∼ 10^5 K) gas heated by the AGN in the high-ionization inner regions (see also Mazzalay et al. 2010). Thomas et al. (2017) reported the existence of a correlation between [Fe VII]λ6087/Hα and [OIII]λ5007/Hβ in the Seyfert galaxies in their sample, which they interpreted as an effect of a radiation pressure dominated environment, where Compton heating in the central regions triggers the production of coronal lines (Davies et al. 2016). Consistently, both JO201 and JO135 show a high [OIII]λ5007/Hβ (> 10) ratio in the inner regions where the emission of [FeVII] is observed. The absence of this line in the other galaxies may be due either to orientation effects preventing us from seeing the inner regions, or to an intrinsic difference in the properties of the ionized gas.
3.4 Extranuclear regions
AGN-like regions are detected in JO135 and JO204 (Gullieuszik et al. 2017) up to ∼ 20 kpc from the nucleus (Fig. 7), with high values of [OIII]/Hβ ∼ 10 (Fig. 5). This is typical of the so-called Extended Emission Line Regions (EELR) (see e.g. Yoshida et al. 2004;Maddox 2018), where the gas ionized by the AGN extends over scales of tens of kiloparsecs.
In these regions the [SII]λ6716/λ6731 ratio, close to ∼ 1.4, indicates low values of the electron density (ne < 50 cm−3), and the emission line widths are close to the instrumental value. The line ratios could be reproduced by shock models with n ∼ 0.1 cm−3, but the required shock velocity, v_sh > 400 km s−1, is too high compared to the observed values: this rules out the presence of fast shocks (see also Fu & Stockton 2009), and favors photoionization from the AGN (Fig. 5).
In JO204, the required ionization parameter in the outer regions (∼ 15 kpc) is close to the value derived in the nuclear (r < 2 kpc) regions (log U ∼ −3). A nearly constant ionization parameter implies that the photon flux and the gas density should both decrease as r^−2. This is consistent if there is a coupling between the radiation and the illuminated gas, as in the case of radiation pressure mechanisms (Thomas et al. 2018). The best-fit AGN models indicate for JO204 log P/k ∼ 7 in the nucleus (r ∼ 1 kpc) and log P/k ∼ 4.6 at 15 kpc: this would be consistent with nH ∼ r^−2, suggesting that the low-density gas in the ISM may be ionized by anisotropic radiation from the AGN. In order to verify whether we can reproduce the observed Hβ luminosities, we proceed as follows. For a gaseous cloud, the rate (photons s−1) of ionizing photons required to produce the observed, dereddened Hβ luminosity (erg s−1) is (Osterbrock & Ferland 2006):
Q(H0) = [L(Hβ) / (hν_Hβ)] × [αB(H0, T) / α_eff_Hβ(H0, T)] ≈ 2.09 × 10^12 L(Hβ)   (2)
where L(Hβ) is the observed, dereddened Hβ luminosity (erg s−1); in the Case B approximation, αB(H0, T) = 2.59 × 10^−13 cm³ s−1 and α_eff_Hβ(H0, T) = 3.03 × 10^−14 cm³ s−1 (T ∼ 10^4 K).
We can thus derive the rate of ionizing photons that should be emitted by the nucleus:
Q(H0)_nuc = Q(H0) (Ω/4π)^−1   (3)
where Ω is the solid angle covered by the extra-nuclear region, that is Ω = A/d², for a region of area A and projected distance d from the nucleus. From the Hβ fluxes measured in the EENLR of JO204 at 15 kpc, we obtain Q(H0)_nuc ∼ 10^54 photons s−1: using this value to compute the ionization parameter, we obtain log U ∼ −3 for nH ∼ 1 cm−3, in agreement with the value expected from photoionization models. Similar results are obtained for JO135.
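Equations (2) and (3) can be checked numerically; a sketch follows, with the atomic constants taken from the text. The final log U estimate uses the standard definition U = Q/(4π r² c n_H), which is our assumption for how the in-text value was obtained.

```python
import math

H = 6.626e-27          # Planck constant [erg s]
C_LIGHT = 2.998e10     # speed of light [cm s^-1]
KPC = 3.086e21         # [cm]
NU_HBETA = C_LIGHT / 4861.3e-8          # Hbeta frequency [Hz]
ALPHA_B = 2.59e-13     # case-B recombination coefficient [cm^3 s^-1] (T ~ 1e4 K)
ALPHA_HBETA = 3.03e-14 # effective Hbeta recombination coefficient [cm^3 s^-1]

def Q_ionizing(L_Hbeta):
    """Eq. (2): ionizing photon rate [s^-1] for a dereddened Hbeta luminosity [erg/s]."""
    return L_Hbeta / (H * NU_HBETA) * ALPHA_B / ALPHA_HBETA

def Q_nuclear(Q_region, area_kpc2, dist_kpc):
    """Eq. (3): photon rate the nucleus must emit, given the solid angle
    Omega = A / d^2 subtended by the extranuclear region."""
    omega = area_kpc2 / dist_kpc ** 2
    return Q_region * (omega / (4.0 * math.pi)) ** -1

def log_U(Q_nuc, r_kpc, n_H=1.0):
    """Dimensionless ionization parameter at distance r from the nucleus."""
    r = r_kpc * KPC
    return math.log10(Q_nuc / (4.0 * math.pi * r ** 2 * C_LIGHT * n_H))

# the coefficient of eq. (2), and the EENLR consistency check of the text:
coeff = Q_ionizing(1.0)                  # ~ 2.09e12
logU_15kpc = log_U(1e54, 15.0, n_H=1.0)  # ~ -3, as expected from the models
```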
GAS PROPERTIES: DISK AND OUTFLOW COMPONENTS
As discussed before, in four galaxies (JO201, JO204, JW100 and JO135) emission lines in the circumnuclear regions are characterized by complex profiles, that require at least two Gaussian components to be fitted. In order to disentangle the contribution to the emission lines from gas in disk and other components (e.g. outflows), we focus on the [OIII] λ5007 line. Compared to the Hα+[NII] lineset used in P17b,
[OIII] is more suitable for this analysis, as it is not affected by the presence of other nearby emission lines (the [NII] doublet in the case of Hα) or by possible residual broad components in the inner AGN regions, as may be the case for permitted lines, and it better traces the ionized gas in the outflows (see e.g. Bae & Woo 2014, and refs. therein). To this end, we selected a region of ∼ 10″ × 10″ around the center of each galaxy and fitted the [OIII]λ5007 line, using the functions available in the Python library lmfit. For each spaxel, we made the fit adopting both one and two Gaussian components. The two-component solution was chosen if it gave an appreciable improvement to the fit compared to the one-component solution: based on visual inspection, we defined this condition as χ²(n=1) > 1.5 χ²(n=2), χ² being the chi-square of the fit (see Davis et al. 2012, for a similar approach); in addition, we requested that the flux in each component be at least 10% of the summed flux. We also discarded those fits where S/N([OIII]) < 5, where the signal S is the total line flux and the noise N is the standard deviation of the fitting residuals.
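The component-selection rule can be sketched as follows. This is a toy implementation using scipy in place of lmfit; the 1.5 threshold and the 10% flux condition follow the text, while the synthetic profile and starting guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def two_gauss(x, a1, c1, s1, a2, c2, s2):
    return gauss(x, a1, c1, s1) + gauss(x, a2, c2, s2)

def choose_n_components(v, flux, err):
    """Fit 1 and 2 Gaussians; keep 2 only if chi2(n=1) > 1.5 chi2(n=2) and
    each component carries at least 10% of the summed flux."""
    p1, _ = curve_fit(gauss, v, flux, p0=[flux.max(), 0.0, 150.0])
    chi2_1 = np.sum(((flux - gauss(v, *p1)) / err) ** 2)
    p2, _ = curve_fit(two_gauss, v, flux,
                      p0=[flux.max(), 0.0, 100.0, 0.3 * flux.max(), -150.0, 300.0])
    chi2_2 = np.sum(((flux - two_gauss(v, *p2)) / err) ** 2)
    f1, f2 = abs(p2[0] * p2[2]), abs(p2[3] * p2[5])  # Gaussian flux ~ amp * sigma
    two_ok = chi2_1 > 1.5 * chi2_2 and min(f1, f2) / (f1 + f2) >= 0.10
    return (2 if two_ok else 1), chi2_1, chi2_2

# synthetic narrow 'disk' line plus a broad blueshifted 'outflow' component
rng = np.random.default_rng(1)
v = np.linspace(-1500.0, 1500.0, 301)
flux = gauss(v, 1.0, 0.0, 100.0) + gauss(v, 0.4, -200.0, 350.0)
flux += rng.normal(0.0, 0.01, v.size)
n_comp, chi2_1, chi2_2 = choose_n_components(v, flux, err=np.full_like(v, 0.01))
```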
From the velocities measured at 10%, 50% and 90% of the cumulative flux, we used the definitions in Liu et al. (2013) to estimate the parameters introduced by Whittle (1985): the peak velocity of the [OIII] line (v_pk), the median velocity (v50), the width W80 = v90 − v10, and the asymmetry

Asym = [(v90 − v50) − (v50 − v10)] / W80.

In this definition, positive/negative values of Asym indicate red/blue asymmetric lines. We used the fit to model the line profile and compute these parameters. In this way we do not assign any physical meaning to the decomposition, using the fit only to reduce the effect of the noise on the estimate of the profile parameters (Liu et al. 2013; Harrison et al. 2014; Balmaverde et al. 2016).
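The non-parametric profile quantities can be computed directly from the modeled line profile; a short numpy sketch (the function names are ours):

```python
import numpy as np

def profile_params(v, flux):
    """v10, v50, v90 from the cumulative line flux, plus W80 and the
    Whittle (1985) asymmetry as defined in the text."""
    cdf = np.cumsum(flux)
    cdf = cdf / cdf[-1]
    v10, v50, v90 = np.interp([0.10, 0.50, 0.90], cdf, v)
    w80 = v90 - v10
    asym = ((v90 - v50) - (v50 - v10)) / w80
    return v10, v50, v90, w80, asym

# sanity check on a pure Gaussian: W80 = 2 * 1.2816 * sigma and Asym = 0
sigma = 150.0
v = np.linspace(-2000.0, 2000.0, 4001)
flux = np.exp(-0.5 * (v / sigma) ** 2)
_, _, _, w80, asym = profile_params(v, flux)
```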
The spatial distribution of these parameters for the galaxies showing two emission line components (JO135, JO201, JO204 and JW100) is displayed in Fig. 8, where the velocity measured for stars is also displayed as reference.
For a more quantitative analysis, we then analyzed (Fig. 9) the velocity and velocity dispersion of the two fitted components, bearing in mind that the fit may be degenerate, in particular when the components are close, or in the outer regions where the line is fainter and noise may introduce spurious features.
In the presence of an outflow we expect to see a clear separation in both velocity and velocity dispersion between a (primary) component, kinematically dominated by the gravitational potential traced by the stellar component, and a secondary (slightly broader) component with a velocity offset showing its non-gravitational origin (Woo et al. 2016;Karouzos et al. 2016a). To check if this is the case, Fig. 10 shows the radially binned values of the deviations from the stellar values of the velocity and velocity dispersion.
As discussed in Crenshaw et al. (2010), the observed line profiles can be explained by a combination of biconical outflows and extinction from dust in the inner galaxy disk. Biconical outflows are most often observed as broad, blueshifted components in the emission lines, as the redshifted, receding part of the outflow more likely lies behind the galaxy disk and is thus suppressed by dust. In some cases, instead, red asymmetric lines are observed: this can still happen when the inclination of the disk is such as to hide the approaching part of the outflow. In addition, Lena et al. (2015, see their Fig. 15) proposed a model where dust is embedded in the outflowing clouds: in this case, blueshifted clouds at small distances from the center preferentially show their non-ionized face and are thus fainter compared to redshifted clouds, which instead show their ionized face. At large projected distances, an increasing fraction of the ionized face is visible in the blueshifted clouds, which are then brighter. Fig. 8, Fig. 9 and Fig. 10 demonstrate that the four galaxies with double components have an outflow, and that their outflow properties are quite diverse, as we discuss in the following.
Outflow properties for individual galaxies
JO135: within a radius ∼ 1 kpc from the center, it presents a quite broad [OIII]λ5007 line (W80 ∼ 1000 km s−1), which is asymmetric in the red (Fig. 8); in the same region, there is an increased dust extinction (Av ∼ 2), that also extends in the outer regions (Fig. 3). The presence of dust could point to the model proposed by Lena et al. (2015), to explain the dominant redshifted component in the outflow. Outside this region, the velocity pattern follows the stellar one. This can also be seen in the radial plots of the fitted components (Fig. 10), where at r > 1 kpc the velocity and velocity dispersion of the narrow component are very close to the stellar values.

JO201: a broader component (W80 ∼ 600 km s−1) is detected in the inner kpc, with a small blue asymmetry. In the same region, we measure an increase of the dust extinction (Av ∼ 1), as well as of the electron density (ne ∼ 10^3 cm−3). The velocity and velocity dispersion of the narrow component agree with the stellar values: we therefore identify the narrow component with gas in the disk and the broad blueshifted component with the outflow. The low asymmetry in [OIII] may imply a spherical or wide-angle outflow (Liu et al. 2013), allowing us to see the inner outflow regions and hence the [FeVII]λ6087 line, as discussed in Sect. 3.3.
JO204: both the peak and median velocities differ from the stellar velocities (see also Gullieuszik et al. 2017) within a region of size ∼ 2 kpc. In a narrow central strip around the center, emission line profiles are complex (see Fig. 2), thus producing higher values of W80 (∼ 900 km s−1). Outside this strip, we identify two regions, one (NW) with blue asymmetric lines and one (SE) with red asymmetric lines. From the two-component fits, we find a narrow component, where σv is close to the stellar values, and a significantly broader, but fainter, one (σv ≤ 500 km s−1). Since up to a distance of 1 kpc both components have a velocity that is significantly different from the stellar value, we interpret both of them as produced by the two sides of a biconical outflow; at larger distances (> 1 kpc) the gas is most likely dominated by the disk component.

JW100: we detect two line components, both of which are narrow (σv < 200 km s−1), blue- and redshifted, with a velocity offset compared to the stellar velocity of v ∼ ±200 km s−1 up to a radius r ∼ 2.5 kpc. The two components are emitted from distinct regions, with the exception of a small area around the center where a double-peaked profile is observed. As for JO204, we interpret this as a biconical outflow, extending to a distance of ∼ 2 kpc.
Outflow: size, mass and energy
From the secondary (broader) components, we computed the radius containing a fraction f (f = 50% and 90%) of the total broad [OIII] flux: Σ_(r<r_f) F_[OIII],broad = f Σ_(r<r_max) F_[OIII],broad, r_max being the outer radius where the presence of the second component is significant. We adopted r90 as the outflow size, as it agrees with a dynamical definition of the outflow size (Karouzos et al. 2016a), that is, the radius where the velocity and velocity dispersion of the outflow component start to decline and approach the stellar values.
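The flux-percentile radius can be computed by sorting spaxels by distance and accumulating the broad-component flux; an illustrative numpy sketch:

```python
import numpy as np

def flux_radius(r, flux, frac=0.90):
    """Radius enclosing `frac` of the total flux of the broad component."""
    order = np.argsort(r)
    cdf = np.cumsum(flux[order]) / flux.sum()
    return np.interp(frac, cdf, r[order])

# toy check: a uniform-brightness disk of unit radius has enclosed flux
# growing as r^2, so r90 should approach sqrt(0.9) ~ 0.949
x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
r = np.hypot(x, y).ravel()
inside = r <= 1.0
r90 = flux_radius(r[inside], np.ones(inside.sum()))
```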
The mass related to the outflow is (Carniani et al. 2015):

M_out = 8 × 10^7 M⊙ × C × (L_[OIII] / 10^44 erg s−1) × (⟨ne⟩ / 500 cm−3)^−1,

with C = ⟨ne⟩² / ⟨ne²⟩ ∼ 1. A typical density ne = 500 cm−3 was assumed. The outflow kinetic energy is E_kin = (1/2) M_out v_out², v_out being the bulk velocity of the outflow. As discussed e.g. in Karouzos et al. (2016b), different choices to estimate v_out were made in the literature, reflecting different strategies to take into account geometrical effects (e.g. projection, opening angle of the outflow). Here we adopt the approximation (Karouzos et al. 2016b; Harrison et al. 2018) v_out² = v_rad² + σ², where v_rad is the measured radial velocity and σ the [OIII] velocity dispersion, corrected (Karouzos et al. 2016b) for the contribution from the gravitational potential by subtracting in quadrature the stellar velocity dispersion. The outflow mass was derived summing over all spaxels within r90 the contributions from both line components since, with the possible exception of JO201, we were not able to unambiguously separate the disk and outflow contributions to [OIII]. From the mean bulk velocity and the size of the outflow we can then compute the outflow lifetime, t_out = r_out/v_out, the outflow mass rate, Ṁ_out = M_out/t_out, the kinetic energy rate, Ė_kin = E_kin/t_out, and the outflow efficiency, η = Ė_kin/L_AGN, where L_AGN is the bolometric luminosity. In Karouzos et al. (2016b) the bolometric AGN luminosity was computed as L_AGN = 3500 L_[OIII] erg s−1 (Heckman et al. 2004), L_[OIII] being the dust-uncorrected luminosity, for comparison with other literature samples. We adopted the same choice and used the [OIII] luminosities in Table 1, which include both emission line components.

Figure 10. Velocity and velocity dispersion of the narrow (green) and broad (red) components of [OIII]λ5007. The velocity of the stars is subtracted from the velocity, while Δσv is the velocity dispersion from which the stellar velocity dispersion was subtracted in quadrature. For the velocity dispersion, a value of 0 was assigned if the velocity dispersion was lower than the stellar velocity dispersion.
The results obtained from the above analysis are displayed in Table 3. We emphasize the uncertainties related both to the outflow kinetic energy and to the estimate of the bolometric luminosity. Nevertheless, the outflow mass rates and kinetic energies that we obtain are comparable with the values obtained by Karouzos et al. (2016b) in a sample of AGN having similar [OIII] luminosities (L_[OIII] < 10^42 erg s−1). Consistently with these results, we derive low efficiencies, η ≲ 0.01%, suggesting that the outflow is not able to impact the host galaxy environment on large scales. As discussed by e.g. Bing et al. (2019), moderate luminosity AGN may however suppress the star formation in the inner few kpc. For the galaxies in our sample, the outflow lifetime derived above is t_out ∼ 10^7 yr and the outflow velocity is v_out ∼ 300 km s−1: this corresponds to a distance of ∼ 3 kpc up to which the outflow can propagate from the center, in agreement with the outflow size (r90 in Table 3). Evidence for star formation suppression around the AGN in JO201, based on NUV and CO data, will be presented in George et al. (2019, submitted). For comparison, from the literature, the mass outflow and energy rates in the brightest AGN (L_[OIII] > 10^43 erg s−1) can be as high as 10^4 M⊙ yr−1 and Ė_kin ∼ 10^45 erg s−1 (Liu et al. 2013), thus being able to impact on much larger scales in the host galaxy environment.
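The outflow energetics chain above (Carniani et al. 2015; Karouzos et al. 2016b) can be condensed into a single helper. This is a sketch; the input values in the example are round numbers of the order of those discussed in the text, not actual Table 3 entries.

```python
import math

M_SUN = 1.989e33   # [g]
KPC = 3.086e21     # [cm]
YR = 3.156e7       # [s]

def outflow_energetics(L_oiii, r_out_kpc, v_rad, sigma, sigma_star,
                       ne=500.0, C=1.0):
    """Outflow mass [M_sun], bulk velocity [km/s], kinetic energy [erg],
    lifetime [yr], mass rate [M_sun/yr], energy rate [erg/s], efficiency."""
    m_out = 8e7 * C * (L_oiii / 1e44) / (ne / 500.0)
    dsig2 = max(sigma ** 2 - sigma_star ** 2, 0.0)   # stellar dispersion removed
    v_out = math.sqrt(v_rad ** 2 + dsig2)            # [km/s]
    e_kin = 0.5 * m_out * M_SUN * (v_out * 1e5) ** 2
    t_out = r_out_kpc * KPC / (v_out * 1e5)          # [s]
    return {"M_out": m_out, "v_out": v_out, "E_kin": e_kin,
            "t_out_yr": t_out / YR, "Mdot": m_out / (t_out / YR),
            "Edot": e_kin / t_out, "eta": (e_kin / t_out) / (3500.0 * L_oiii)}

res = outflow_energetics(L_oiii=1e41, r_out_kpc=2.0,
                         v_rad=200.0, sigma=300.0, sigma_star=150.0)
```

With these inputs the lifetime comes out at a few 10^6 yr and the efficiency well below 0.01%, of the same order as the values quoted in the text.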
In this paper we have carried out a detailed investigation of the seven jellyfish galaxies presented in P17b, where, based on the [NII]/Hα ratio, it was found that at least five of them (JO201, JO204, JO206, JO135 and JW100) host an AGN. We first performed a detailed comparison with photoionization and shock models, taking into account several diagnostic diagrams simultaneously. We concluded that, while shock models can play a role in the ionization of the gas, AGN models are required to explain the line ratios observed in the nuclear regions of these five galaxies: this conclusion is corroborated by an analysis of the Hβ luminosity. The presence of iron coronal lines in the nuclei of JO201 and JO135 indicates the existence of hot (T ∼ 10^5 K) gas heated by the AGN. JO204 and JO135 also present Extended Emission Line Regions of > 10 kpc that are ionized by the AGN. In JO194, which was classified as a LINER in P17b, line ratios in the central spaxels are better reproduced by an AGN model, though shock models may also marginally reproduce the observed ratios. Finally, in JO175 the [NII]λ6583/Hα ratio is typical of star forming regions, but we still observe a high [OI]λ6300/Hα ratio that could point to the presence of shocks.
We then focused on the [OIII]λ5007 line profile, which is a good tracer of the presence of outflowing gas. Four of the galaxies hosting an AGN present complex emission line profiles, with at least two Gaussian components with different line widths that testify to the presence of AGN outflows. We have studied the properties and energetics of the outflows, with results comparable to those reported for other AGN of similar luminosity (L_[OIII] < 10^42 erg s−1) with outflows. Finally, we derived conclusions on possible AGN feedback effects on the circumnuclear regions (∼ 3 kpc).
This study confirms on much more solid ground the conclusions from P17b regarding the presence of an AGN in several GASP jellyfish galaxies with the longest tails, that suggests a causal connection between ram pressure stripping and AGN activity. Moreover, in this work we demonstrate the presence of outflows and derive their properties. The current sample is too small to draw final conclusions regarding the AGN-ram pressure connection, and a detailed analysis of the whole GASP sample will be presented in a forthcoming paper, investigating the evidence for AGN activity in the other jellyfish galaxies with long tails but also for different stripping stages and as a function of the galaxy mass.
Figure 1. VRI images built from the MUSE cubes for the seven galaxies analyzed in the paper.
Figure 2. The plots show, for the central spaxel of each galaxy: left, the uncorrected spectra with the identification of the main emission lines; right, the normalized line profiles after the subtraction of the underlying stellar spectrum as described in the text; the dashed lines indicate the assumed galaxy velocity. A meaningful detection (SN > 3) of [FeVII]λ6087 is seen only in JO135 and JO201.
Figure 2 – continued

[…] the centroid of the continuum map obtained by the KubeViz best fit model in the Hα region.
Figure 3. For each galaxy the plots show the spatial distribution in the central 10″ × 10″ of left: extinction (Av); right: log ne. For all galaxies, 1 arcsec is ∼ 1 kpc (see Table 1).
Figure 4. Color coded maps in a region of 100 × 100 spaxels around the nucleus. Left: classification from P17b (HII, composite, AGN, LINER). Right: NB models: HII; shock with n = 0.1 cm−3, solar abundances; shock with n = 1 cm−3, solar abundances (M); shock with n = 1 cm−3, 2× solar abundances (R); AGN. Spaxels classified as HII in P17b were not fitted with NB.
Figure 4 – continued
Figure 5. Observed emission line ratios color coded with the projected distance from the center; the empty square displays the value measured in the central spaxel. The black solid and dashed ([NII]/Hα panel) curves indicate the empirical SF/Composite/AGN classification by Kewley et al. (2006). For each galaxy, overlaid are best-fit AGN (not for JO175) and shock models. AGN models: the red lines display models for different values of log U, adopting the best-fit values of log P/k, 12 + log O/H and E_peak in the central spaxels; models with an offset of ±0.25 in E_peak are displayed in cyan and blue respectively. For JO204 and JO135, the green line shows AGN models in the EENLR. Shock models: the lines display models for v_sh = 100-700 km s−1, with pre-shock density n = 0.1 cm−3 and solar abundances (JO135, JO201, JO206), or n = 1 cm−3 and 2× solar abundances (JO204, JW100, JO175, JO194), and magnetic field B as in the legend. Panels show, for each galaxy, the observed ratios vs. AGN models and vs. shock models (shock models only for JO175).
Figure 6. The contour maps showing emission at SN ∼ 5-20 in the [Fe VII]λ6087 line are overplotted on the [OIII]λ5007/Hβ map for the nuclear AGN regions of JO135 (left) and JO201 (right). The cross displays the position of the peak in [OIII]λ5007.
Figure 7. [OIII]λ5007/Hβ maps showing the AGN-like extranuclear emission in JO135 (left) and JO204 (right). The coordinates are centered on the position of the central spaxel.
Figure 8. For each galaxy hosting an AGN, the plots display the spatial distribution of the stellar velocity [km s−1] (first panel) and of the following parameters (see text for details) derived from the [OIII] line: peak and median velocity [km s−1]; W80 [km s−1]; asymmetry (right panel). In each plot, North is up and East is at left.
Figure 9. Spatial distribution of velocity and velocity dispersion from [OIII]λ5007 for galaxies with two components.
Table 2. Best-fit parameters for the nuclear AGN photoionization models.

id      log P/k   log E_peak   12 + log O/H   log U
JO135   6.6       -1.2         9.16           -2.49
JO201   7.0       -1.5         8.99           -2.77
JO204   7.0       -1.5         8.99           -3.06
JO206   6.6       -1.5         8.87           -3.06
JW100   7.0       -1.5         9.15           -3.06
JO194   6.2       -1.7         9.30           -3.34
MNRAS 000, 1-20(2019)
Compared to P17b, we improved the fitting of the lines of the central spaxels of JO135.
We remind the reader that the pre-shock density is not directly related to the electron density measured e.g. from the [SII]λλ6716,6731 lines, which gives the post-shock density (see e.g. Dopita & Sutherland 1995).
We neglect the factor due to the unknown projection, but here we are interested in orders of magnitude.
ACKNOWLEDGEMENTSThis work is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO program 196.B-0578. We acknowledge financial support from PRIN-SKA ESKAPE-HI (PI L.Hunt). Y.J. acknowledges financial support from CONICYT PAI (Concurso Nacional de Inserción en la Academia 2017) No. 79170132 and FONDECYT Iniciación 2018 No. 11180558. We acknowledge the usage of the following Python libraries: AstroPy, lmfit, NebulaBayes, lineid_plot, mpdaf.This paper has been typeset from a T E X/L A T E X file prepared by the author.
REFERENCES

Allen M. G., Groves B. A., Dopita M. A., Sutherland R. S., Kewley L. J., 2008, ApJS, 178, 20
Arnold T. J., Martini P., Mulchaey J. S., Berti A., Jeltema T. E., 2009, ApJ, 707, 1691
Bacon R., et al., 2010, in Ground-based and Airborne Instrumentation for Astronomy III, p. 773508
Bae H.-J., Woo J.-H., 2014, ApJ, 795, 30
Bae H.-J., Woo J.-H., Karouzos M., Gallo E., Flohic H., Shen Y., Yoon S.-J., 2017, ApJ, 837, 91
Balmaverde B., et al., 2016, A&A, 585, A148
Belfiore F., et al., 2016, MNRAS, 461, 3111
Bellhouse C., et al., 2017, ApJ, 844, 49
Bing L., et al., 2019, MNRAS, 482, 194
Cappellari M., Emsellem E., 2004, PASP, 116, 138
Carniani S., et al., 2015, A&A, 580, A102
Crenshaw D. M., Schmitt H. R., Kraemer S. B., Mushotzky R. F., Dunn J. P., 2010, ApJ, 708, 419
Davies R. L., et al., 2016, ApJ, 824, 50
Davis T. A., et al., 2012, MNRAS, 426, 1574
Dopita M. A., Sutherland R. S., 1995, ApJ, 455, 468
Dressler A., Thompson I. B., Shectman S. A., 1985, ApJ, 288, 481
Ebeling H., Stephenson L. N., Edge A. C., 2014, ApJ, 781, L40
Ehlert S., et al., 2014, MNRAS, 437, 1942
Fabian A. C., 2012, ARA&A, 50, 455
Ferguson J. W., Korista K. T., Baldwin J. A., Ferland G. J., 1997, ApJ, 487, 122
Fossati M., Fumagalli M., Boselli A., Gavazzi G., Sun M., Wilman D. J., 2016, MNRAS, 455, 2028
Fritz J., et al., 2017, ApJ, 848, 132
Fu H., Stockton A., 2009, ApJ, 690, 953
Fumagalli M., Fossati M., Hau G. K. T., Gavazzi G., Bower R., Sun M., Boselli A., 2014, MNRAS, 445, 4335
George K., et al., 2018, MNRAS, 479, 4126
Gitti M., 2013, MNRAS, 436, L84
Gullieuszik M., et al., 2017, ApJ, 846, 27
Harrison C. M., Alexander D. M., Mullaney J. R., Swinbank A. M., 2014, MNRAS, 441, 3306
Harrison C. M., Costa T., Tadhunter C. N., Flütsch A., Kakkad D., Perna M., Vietri G., 2018, Nature Astronomy, 2, 198
Heckman T. M., Best P. N., 2014, ARA&A, 52, 589
Heckman T. M., Kauffmann G., Brinchmann J., Charlot S., Tremonti C., White S. D. M., 2004, ApJ, 613, 109
Ilha G. S., et al., 2019, MNRAS, 484, 252
Jaffé Y. L., et al., 2018, MNRAS, 476, 4753
Kakkad D., et al., 2018, A&A, 618, A6
Karouzos M., Woo J.-H., Bae H.-J., 2016a, ApJ, 819, 148
. M Karouzos, J.-H Woo, H.-J Bae, 10.3847/1538-4357/833/2/171ApJ. 833171Karouzos M., Woo J.-H., Bae H.-J., 2016b, ApJ, 833, 171
. L J Kewley, B Groves, G Kauffmann, T Heckman, 10.1111/j.1365-2966.2006.10859.x372961MN-RASKewley L. J., Groves B., Kauffmann G., Heckman T., 2006, MN- RAS, 372, 961
. S Komossa, H Schulz, A&A. 32331Komossa S., Schulz H., 1997, A&A, 323, 31
. K T Korista, G J Ferland, 10.1086/167739ApJ. 343678Korista K. T., Ferland G. J., 1989, ApJ, 343, 678
. M R Krumholz, M Fumagalli, R L Da Silva, T Rendahl, J Parra, 10.1093/mnras/stv1374MNRAS. 4521447Krumholz M. R., Fumagalli M., da Silva R. L., Rendahl T., Parra J., 2015, MNRAS, 452, 1447
. Lena D , 10.1088/0004-637X/806/1/84ApJ. 80684Lena D., et al., 2015, ApJ, 806, 84
. G Liu, N L Zakamska, J E Greene, N P H Nesvadba, X Liu, 10.1093/mnras/stt1755MNRAS. 4362576Liu G., Zakamska N. L., Greene J. E., Nesvadba N. P. H., Liu X., 2013, MNRAS, 436, 2576
. N Maddox, 10.1093/mnras/sty2201MNRAS. 4805203Maddox N., 2018, MNRAS, 480, 5203
. M A Marshall, S S Shabala, M G H Krause, K A Pimbblet, D J Croton, 10.1093/mnras/stx2996Owers M. S. 4743615MNRASMarshall M. A., Shabala S. S., Krause M. G. H., Pimbblet K. A., Croton D. J., Owers M. S., 2018, MNRAS, 474, 3615
. P Marziani, 10.1051/0004-6361/201628941A&A. 59983Marziani P., et al., 2017, A&A, 599, A83
. X Mazzalay, A Rodríguez-Ardila, S Komossa, 10.1111/j.1365-2966.2010.16533.xMNRAS. 4051315Mazzalay X., Rodríguez-Ardila A., Komossa S., 2010, MNRAS, 405, 1315
. M Mingozzi, 10.1051/0004-6361/201834372A&A. 622146Mingozzi M., et al., 2019, A&A, 622, A146
. R Nevin, J Comerford, F Müller-Sánchez, R Barrows, M Cooper, 10.3847/0004-637X/832/1/67ApJ. 83267Nevin R., Comerford J., Müller-Sánchez F., Barrows R., Cooper M., 2016, ApJ, 832, 67
Astrophysics of gaseous nebulae and active galactic nuclei. D E Osterbrock, G J Ferland, Osterbrock D. E., Ferland G. J., 2006, Astrophysics of gaseous nebulae and active galactic nuclei
. K A Pimbblet, S S Shabala, C P Haines, A Fraser-Mckelvie, D J E Floyd, 10.1093/mnras/sts470MNRAS. 4291827Pimbblet K. A., Shabala S. S., Haines C. P., Fraser-McKelvie A., Floyd D. J. E., 2013, MNRAS, 429, 1827
. B M Poggianti, ApJ. 84448Poggianti B. M., et al., 2017a, ApJ, 844, 48
. B M Poggianti, Nature. 548304Poggianti B. M., et al., 2017b, Nature, 548, 304
. B M Poggianti, 10.1093/mnras/sty2999MNRAS. 4824466Poggianti B. M., et al., 2019, MNRAS, 482, 4466
. B Proxauf, S Öttl, S Kimeswenger, 10.1051/0004-6361/201322581A&A. 56110Proxauf B.,Öttl S., Kimeswenger S., 2014, A&A, 561, A10
. M Ramos-Martínez, G C Gómez, Pérez-Villegasá, 10.1093/mnras/sty3934763781MN-RASRamos-Martínez M., Gómez G. C., Pérez-VillegasÁ., 2018, MN- RAS, 476, 3781
. T D Rawle, 10.1093/mnras/stu868MNRAS. 442196Rawle T. D., et al., 2014, MNRAS, 442, 196
. R J Smith, 10.1111/j.1365-2966.2010.17253.xMNRAS. 4081417Smith R. J., et al., 2010, MNRAS, 408, 1417
. A D Thomas, B A Groves, R S Sutherland, M A Dopita, L J Kewley, C Jin, 10.3847/1538-4357/833/2/266ApJ. 833266Thomas A. D., Groves B. A., Sutherland R. S., Dopita M. A., Kewley L. J., Jin C., 2016, ApJ, 833, 266
. A D Thomas, 10.3847/1538-4365/aa855aApJS. 23211Thomas A. D., et al., 2017, ApJS, 232, 11
. A D Thomas, M A Dopita, L J Kewley, B A Groves, R S Sutherland, A M Hopkins, G A Blanc, 10.3847/1538-4357/aab3dbApJ. 85689Thomas A. D., Dopita M. A., Kewley L. J., Groves B. A., Suther- land R. S., Hopkins A. M., Blanc G. A., 2018, ApJ, 856, 89
. A Vazdekis, P Sánchez-Blázquez, J Falcón-Barroso, A J Cenarro, M A Beasley, N Cardiel, J Gorgas, R F Peletier, 10.1111/j.1365-2966.2010.16407.xMNRAS. 4041639Vazdekis A., Sánchez-Blázquez P., Falcón-Barroso J., Cenarro A. J., Beasley M. A., Cardiel N., Gorgas J., Peletier R. F., 2010, MNRAS, 404, 1639
. S Veilleux, D E Osterbrock, 10.1086/191166The Astrophysical Journal Supplement Series. 63295Veilleux S., Osterbrock D. E., 1987, The Astrophysical Journal Supplement Series, 63, 295
. G Venturi, 10.1051/0004-6361/201833668A&A. 61974Venturi G., et al., 2018, A&A, 619, A74
. M Whittle, 10.1093/mnras/213.1.1MNRAS. 2131Whittle M., 1985, MNRAS, 213, 1
. K.-W Wong, C L Sarazin, E L Blanton, T H Reiprich, 10.1086/588272ApJ. 682155Wong K.-W., Sarazin C. L., Blanton E. L., Reiprich T. H., 2008, ApJ, 682, 155
. J.-H Woo, H.-J Bae, D Son, M Karouzos, 10.3847/0004-637X/817/2/108ApJ. 817108Woo J.-H., Bae H.-J., Son D., Karouzos M., 2016, ApJ, 817, 108
. M Yoshida, 10.1086/380221AJ. 12790Yoshida M., et al., 2004, AJ, 127, 90
| [] |
[
"ON BOSE-CONDENSATION OF WAVE-PACKETS IN HEAVY ION COLLISIONS",
"ON BOSE-CONDENSATION OF WAVE-PACKETS IN HEAVY ION COLLISIONS"
] | [
"T Csörgő [email protected] \nMTA KFKI RMKI\nPOB 49H-1525, 114BudapestHungary\n",
"J Zimányi s:[email protected] \nMTA KFKI RMKI\nPOB 49H-1525, 114BudapestHungary\n"
] | [
"MTA KFKI RMKI\nPOB 49H-1525, 114BudapestHungary",
"MTA KFKI RMKI\nPOB 49H-1525, 114BudapestHungary"
] | [] | A recently obtained exact analytic solution to the wave-packet version of the pion -laser model is presented. In the rare gas limit, a thermal behaviour is found while the dense gas limiting case corresponds to Bose-Einstein condensation of wave-packets to the wave-packet mode with the smallest possible energy. | null | [
"https://arxiv.org/pdf/hep-ph/9811283v1.pdf"
] | 119,081,042 | hep-ph/9811283 | c2683a29f6b7f2f2e1d4c1958ea63c9d87a504b6 |
ON BOSE-CONDENSATION OF WAVE-PACKETS IN HEAVY ION COLLISIONS
Nov 1998
T Csörgő [email protected]
MTA KFKI RMKI
POB 49, H-1525 Budapest 114, Hungary
J Zimányi s:[email protected]
MTA KFKI RMKI
POB 49, H-1525 Budapest 114, Hungary
arXiv:hep-ph/9811283v1, 9 Nov 1998
A recently obtained exact analytic solution to the wave-packet version of the pion -laser model is presented. In the rare gas limit, a thermal behaviour is found while the dense gas limiting case corresponds to Bose-Einstein condensation of wave-packets to the wave-packet mode with the smallest possible energy.
Introduction
The study of the statistical properties of quantum systems has a long history with important recent developments. In high energy physics, quantum statistical correlations are studied in order to infer the space-time dimensions of the intermediate state formed in elementary particle reactions. In high energy heavy ion collisions hundreds of bosons are created, and the correct theoretical description of their correlations is difficult. In this conference contribution we present a contradiction-free treatment of multi-particle Bose-Einstein correlations for an arbitrarily large number of particles.
Model Assumptions:
A model system is described as

\hat{\rho} = \sum_{n=0}^{\infty} p_n \hat{\rho}_n , \qquad (1)
the density matrix of the whole system being normalized to one. Here \hat{\rho}_n is the density matrix for events with fixed particle number n, which is also normalized to one. The probability for such an event is p_n. The multiplicity distribution is described by the set \{p_n\}_{n=0}^{\infty}. The density matrix of a system with a fixed number of boson wave packets has the form

\hat{\rho}_n = \int d\alpha_1 \ldots d\alpha_n \, \rho_n(\alpha_1, \ldots, \alpha_n)\, |\alpha_1, \ldots, \alpha_n\rangle \langle \alpha_1, \ldots, \alpha_n| , \qquad (2)
where |\alpha_1, \ldots, \alpha_n\rangle denote properly normalized n-particle wave-packet boson states.
The wave packet creation operator is given as
\hat{\alpha}_i^{\dagger} = \int \frac{d^3 p}{(\pi \sigma^2)^{3/4}}\, e^{-\frac{(p-\pi_i)^2}{2\sigma_i^2} - i \xi_i (p - \pi_i) + i \omega(p)(t - t_i)}\, \hat{a}^{\dagger}(p) . \qquad (3)
The commutator

[\hat{\alpha}_i, \hat{\alpha}_j^{\dagger}] = \langle \alpha_i | \alpha_j \rangle \qquad (4)

vanishes only in the case when the wave packets do not overlap.
Here \alpha_i = (\xi_i, \pi_i, \sigma_i, t_i) refers to the wave-packet center in coordinate space, the center in momentum space, the width in momentum space and the production time, respectively.

For simplicity we assume that \alpha_i = (\pi_i, \xi_i, \sigma, t_0), i.e. a common width and production time. We call attention to the fact that although one cannot attribute exactly defined values to position and momentum at the same time, the parameters \pi_i, \xi_i can be defined precisely.
The n-boson states, normalized to unity, are given as

|\alpha_1, \ldots, \alpha_n\rangle = \left( \sum_{\sigma^{(n)}} \prod_{i=1}^{n} \langle \alpha_i | \alpha_{\sigma_i} \rangle \right)^{-1/2} \hat{\alpha}_n^{\dagger} \ldots \hat{\alpha}_1^{\dagger} |0\rangle . \qquad (5)
Here \sigma^{(n)} denotes the set of all permutations of the indices \{1, 2, \ldots, n\}, and the subscript \sigma_i denotes the index that replaces the index i in a given permutation from \sigma^{(n)}. The normalization factor contains a sum of n! terms, each containing n different \alpha_i parameters. Thus further calculation with these normalized states seems extremely difficult.
A New Type of Density Matrix
There is one special density matrix, however, for which one can overcome the difficulty related to the non-vanishing overlap of many hundreds of wave-packets, even in an explicit analytical manner. This density matrix is the product of uncorrelated single-particle matrices, multiplied by a correlation factor related to induced emission:

\rho_n(\alpha_1, \ldots, \alpha_n) = \frac{1}{\mathcal{N}(n)} \prod_{i=1}^{n} \rho_1(\alpha_i) \sum_{\sigma^{(n)}} \prod_{k=1}^{n} \langle \alpha_k | \alpha_{\sigma_k} \rangle . \qquad (6)
Normalization to one yields \mathcal{N}(n). The weight factors describe induced emission:

\frac{\rho_n(\alpha_1, \ldots, \alpha_n)}{\prod_{j=1}^{n} \rho_1(\alpha_j)} = \frac{1}{\mathcal{N}(n)} \sum_{\sigma^{(n)}} \prod_{k=1}^{n} \langle \alpha_k | \alpha_{\sigma_k} \rangle \qquad \text{(overlap)} . \qquad (7)
This ratio is maximal if all packets are emitted with the same \alpha_1:

\frac{\rho_n(\alpha_1, \ldots, \alpha_1)}{[\rho_1(\alpha_1)]^n} = \frac{n!}{\mathcal{N}(n)} \qquad \text{(full overlap)} , \qquad (8)
and minimal if the overlap is negligible,

\frac{\rho_n(\alpha_1, \ldots, \alpha_n)}{\prod_{j=1}^{n} \rho_1(\alpha_j)} = \frac{1}{\mathcal{N}(n)} \qquad \text{(no overlap)} . \qquad (9)
Thus we have a wildly fluctuating weight: e.g. for 800 pions the induced emission weight fluctuates in the range [1, 800!] \simeq [1, 10^{1977}]. With this special density matrix one can proceed with the calculations.
Single-Particle Density Matrix:
For the sake of simplicity we assume a factorizable Gaussian form for the distribution of the parameters of the single-particle states:

\rho_1(\alpha) = \rho_x(\xi)\, \rho_p(\pi)\, \delta(t - t_0), \qquad
\rho_x(\xi) = \frac{1}{(2\pi R^2)^{3/2}} \exp\!\left(-\frac{\xi^2}{2R^2}\right), \qquad
\rho_p(\pi) = \frac{1}{(2\pi m T)^{3/2}} \exp\!\left(-\frac{\pi^2}{2mT}\right). \qquad (10)
These expressions are given in the frame where we have a non-expanding static source at rest. The multiplicity distribution when Bose-Einstein effects are switched off (denoted by p_n^{(0)}) is a free choice in the model. We assume independent emission,

p_n^{(0)} = \frac{n_0^n}{n!} \exp(-n_0) . \qquad (11)
This completes the specification of the model.
4 Solution of Recurrences:
The probability of finding events with multiplicity n, as well as the single-particle and the two-particle momentum distributions in such events, are given as

N_1^{(n)}(k) = \sum_{i=1}^{n} \frac{p_{n-i}}{p_n}\, G_i(k, k) , \qquad (12)

N_2^{(n)}(k_1, k_2) = \sum_{l=2}^{n} \sum_{m=1}^{l-1} \frac{p_{n-l}}{p_n} \left[ G_m(k_1, k_1) G_{l-m}(k_2, k_2) + G_m(k_1, k_2) G_{l-m}(k_2, k_1) \right] . \qquad (13)
In ref. 3 recurrence relations were given for the construction of the functions G_m(k_1, k_2). In refs. 1, 2 an explicit analytic form was obtained for these functions. We now present this solution.
To arrive at this solution we introduce the following auxiliary quantities:

\gamma_{\pm} = \frac{1}{2}\left( 1 + x \pm \sqrt{1 + 2x} \right), \qquad x = R_p^2 \sigma_T^2 . \qquad (14)
Using the notation

\sigma_T^2 = 2 m T_p, \qquad T_p = T + \frac{\sigma^2}{2m}, \qquad R_p^2 = R^2 + \frac{mT}{\sigma^2 \sigma_T^2} , \qquad (15)
we arrive at the formulae of the plane-wave pion laser model of Pratt, ref. 3, if we replace R in ref. 3 with R_p and T in ref. 3 with T_p. The general analytical solution of the model is given through the generating function of p_n,

G(z) = \sum_{n=0}^{\infty} p_n z^n = \exp\!\left( \sum_{n=1}^{\infty} C_n (z^n - 1) \right) , \qquad (16)
where C_n is the combinant^{4,5} of order n,

C_n = \frac{n_0^n}{n} \left( \gamma_+^{n/2} - \gamma_-^{n/2} \right)^{-3} , \qquad (17)
and the general analytic solution for the functions G_n(k_1, k_2) reads as:

G_n(k_1, k_2) = j_n \exp\!\left\{ -\frac{b_n}{2} \left[ \left( \gamma_+^{n/2} k_1 - \gamma_-^{n/2} k_2 \right)^2 + \left( \gamma_+^{n/2} k_2 - \gamma_-^{n/2} k_1 \right)^2 \right] \right\} , \qquad (18)

j_n = n_0^n \left( \frac{b_n}{\pi} \right)^{3/2}, \qquad b_n = \frac{1}{\sigma_T^2}\, \frac{\gamma_+ - \gamma_-}{\gamma_+^n - \gamma_-^n} . \qquad (19)
The detailed proof that the analytic solution to the multi-particle wave-packet model is indeed given by the above equations is described in refs. 1 , 2 .
With our analytic solution we decreased the algorithmic complexity of the problem of an n-fold symmetrized boson system from the original n! to 1.
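The building blocks of this closed-form solution are simple to evaluate. The following Python sketch (entirely our own illustration; the momenta are taken as scalars, i.e. a single momentum component, the (b_n/π)^{3/2} normalization of Eq. (19) is kept as written, and the names are not from the paper) implements γ_± of Eq. (14), the combinants C_n of Eq. (17) and the functions G_n of Eqs. (18)-(19):

```python
import math

def gamma_pm(x):
    # Eq. (14): gamma_± = (1 + x ± sqrt(1 + 2x)) / 2, with x = R_p^2 sigma_T^2
    s = math.sqrt(1.0 + 2.0 * x)
    return 0.5 * (1.0 + x + s), 0.5 * (1.0 + x - s)

def combinant(n, n0, x):
    # Eq. (17): C_n = (n0^n / n) (gamma_+^{n/2} - gamma_-^{n/2})^{-3}
    gp, gm = gamma_pm(x)
    return n0 ** n / n / (gp ** (n / 2.0) - gm ** (n / 2.0)) ** 3

def G_n(n, k1, k2, n0, x, sigma_T):
    # Eqs. (18)-(19), evaluated for one momentum component
    gp, gm = gamma_pm(x)
    b_n = (gp - gm) / (gp ** n - gm ** n) / sigma_T ** 2
    j_n = n0 ** n * (b_n / math.pi) ** 1.5
    arg = (gp ** (n / 2.0) * k1 - gm ** (n / 2.0) * k2) ** 2 \
        + (gp ** (n / 2.0) * k2 - gm ** (n / 2.0) * k1) ** 2
    return j_n * math.exp(-0.5 * b_n * arg)
```

Two useful consistency checks follow from the identity (√γ_+ − √γ_-)² = 1: the lowest combinant reduces to C_1 = n_0 (the Poisson limit), and G_1(k, k) reproduces the first, unsymmetrized term of Eq. (29).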
Multiplicity Distribution:
The mean multiplicity and the second factorial moment are defined as follows:

\langle n \rangle = \sum_{n} n\, p_n = \sum_{i=1}^{\infty} i\, C_i , \qquad (20)

\langle n(n-1) \rangle = \sum_{n} n(n-1)\, p_n = \langle n \rangle^2 + \sum_{i=2}^{\infty} i(i-1)\, C_i . \qquad (21)
The large-n behavior of the multiplicity distribution p_n depends on n_0/\gamma_+^{3/2}. This multiplicity distribution has an interesting property. To see this, we define the quantity

n_c = \gamma_+^{3/2} = \left( \frac{1 + x + \sqrt{1 + 2x}}{2} \right)^{3/2} . \qquad (22)

One can show that \langle n \rangle is finite if n_0 < n_c, and infinite if n_0 > n_c. Hence n_c is a critical value for the parameter n_0.
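This critical behaviour is easy to check numerically: below n_c the combinant sum of Eq. (20) converges, above it the partial sums grow without bound. A sketch (our own; the series is truncated at nmax and rewritten term-by-term in a ratio form that avoids floating-point overflow):

```python
import math

def gamma_plus(x):
    return 0.5 * (1.0 + x + math.sqrt(1.0 + 2.0 * x))

def mean_multiplicity(n0, x, nmax=400):
    # Eq. (20): <n> = sum_i i*C_i with Eq. (17), i.e. i*C_i = n0^i (gp^{i/2} - gm^{i/2})^{-3};
    # each term is written as q^i / (1 - r^{i/2})^3 with q = n0/gp^{3/2}, r = gm/gp < 1
    gp = gamma_plus(x)
    gm = 1.0 + x - gp          # gamma_+ + gamma_- = 1 + x
    r = gm / gp
    q = n0 / gp ** 1.5
    total = 0.0
    for i in range(1, nmax + 1):
        total += q ** i / (1.0 - r ** (i / 2.0)) ** 3
    return total
```

For large i each term behaves like (n_0/γ_+^{3/2})^i, which makes the convergence criterion n_0 < n_c = γ_+^{3/2} of Eq. (22) explicit.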
6 Solution for the Inclusive Distributions:
In the previous sections we obtained results for the multiplicity distribution and for the exclusive momentum distributions. To obtain the inclusive distributions we introduce the auxiliary quantity

G(k_1, k_2) = \sum_{i=1}^{\infty} G_i(k_1, k_2) . \qquad (23)
(Higher order Bose-Einstein symmetrization effects are negligible if the first term dominates the above infinite sum, i.e. if G(k_1, k_2) \simeq G_1(k_1, k_2).) Now the averaging over the exclusive distributions can be performed,

N_1(k) = \sum_{n=1}^{\infty} p_n N_1^{(n)}(k) = G(k, k) , \qquad (24)
and the two-particle inclusive correlation function can be evaluated as

C_2(k_1, k_2) = \frac{N_2(k_1, k_2)}{N_1(k_1) N_1(k_2)} = 1 + \frac{G(k_1, k_2)\, G(k_2, k_1)}{G(k_1, k_1)\, G(k_2, k_2)} . \qquad (25)
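Given the functions G_n of Eqs. (18)-(19), Eqs. (23) and (25) can be evaluated directly by truncating the sum over n. A sketch (our own; scalar momenta, and parameters chosen with n_0 < n_c so that the truncated series converges rapidly):

```python
import math

def gamma_pm(x):
    s = math.sqrt(1.0 + 2.0 * x)
    return 0.5 * (1.0 + x + s), 0.5 * (1.0 + x - s)

def G(k1, k2, n0, x, sigma_T, nmax=60):
    # Eq. (23): G(k1,k2) = sum_n G_n(k1,k2), truncated at nmax
    gp, gm = gamma_pm(x)
    total = 0.0
    for n in range(1, nmax + 1):
        b_n = (gp - gm) / (gp ** n - gm ** n) / sigma_T ** 2
        j_n = n0 ** n * (b_n / math.pi) ** 1.5
        arg = (gp ** (n / 2.0) * k1 - gm ** (n / 2.0) * k2) ** 2 \
            + (gp ** (n / 2.0) * k2 - gm ** (n / 2.0) * k1) ** 2
        total += j_n * math.exp(-0.5 * b_n * arg)
    return total

def C2(k1, k2, n0, x, sigma_T):
    # Eq. (25): inclusive two-particle correlation function
    g12 = G(k1, k2, n0, x, sigma_T)
    return 1.0 + g12 * G(k2, k1, n0, x, sigma_T) / (
        G(k1, k1, n0, x, sigma_T) * G(k2, k2, n0, x, sigma_T))
```

By construction the intercept is C_2(k, k) = 2, and the correlation decays to 1 at large relative momentum, in line with the rare-gas form of Eq. (31).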
This is an exact result, obtained without the random phase approximation and without assuming that the number of sources is large. However, this result is valid only when Bose-Einstein condensation does not play a role, i.e. when n 0 < n c = γ 3/2 + .
7 Rare gas limiting case:
For large source sizes or large effective temperatures, x \gg 1, we have:

G_n(k_1, k_2) \propto \left( \frac{2}{x} \right)^{\frac{3}{2}(n-1)} \exp\!\left[ -\frac{n}{2\sigma_T^2}\left(k_1^2 + k_2^2\right) - \frac{R_p^2}{2n}\left(k_1 - k_2\right)^2 \right] , \qquad (26)

C_n = \frac{n_0^n}{n^4} \left( \frac{2}{x} \right)^{\frac{3}{2}(n-1)} . \qquad (27)
From this equation we can see that the effective temperatures and the effective radii are decreased by a factor of 1/n for the n-th order symmetrization effects. The multiplicity distribution is shifted to higher multiplicities:

p_n = \frac{n_0^n}{n!} \exp(-n_0) \left[ 1 + \frac{n(n-1) - n_0^2}{2\,(2x)^{3/2}} \right] . \qquad (28)
The momentum distribution is obtained in this approximation by an expansion in the small parameter \epsilon = (2/x)^{3/2}. In this approximation we get

N_1(k) = \frac{n_0}{(\pi \sigma_T^2)^{3/2}} \exp\!\left( -\frac{k^2}{\sigma_T^2} \right) + \frac{n_0^2}{(\pi \sigma_T^2 x)^{3/2}} \exp\!\left( -\frac{2 k^2}{\sigma_T^2} \right) , \qquad (29)

N_1^{(n)}(k) = \frac{n}{(\pi \sigma_T^2)^{3/2}} \exp\!\left( -\frac{k^2}{\sigma_T^2} \right) \left[ 1 + \frac{n-1}{(2x)^{3/2}} \left( 2^{3/2} \exp\!\left( -\frac{k^2}{\sigma_T^2} \right) - 1 \right) \right] . \qquad (30)
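The low-momentum enhancement described here can be made quantitative with Eq. (29). A tiny sketch (our own illustration; scalar momentum, arbitrary units):

```python
import math

def N1_rare_gas(k, n0, x, sigma_T):
    # Eq. (29): inclusive single-particle spectrum in the rare gas limit
    a = n0 / (math.pi * sigma_T ** 2) ** 1.5 * math.exp(-k ** 2 / sigma_T ** 2)
    b = n0 ** 2 / (math.pi * sigma_T ** 2 * x) ** 1.5 * math.exp(-2.0 * k ** 2 / sigma_T ** 2)
    return a + b

def enhancement(k, n0, x, sigma_T):
    # ratio of the full spectrum to its Poissonian (first) term:
    # 1 + (n0 / x^{3/2}) exp(-k^2 / sigma_T^2), largest at k = 0
    a = n0 / (math.pi * sigma_T ** 2) ** 1.5 * math.exp(-k ** 2 / sigma_T ** 2)
    return N1_rare_gas(k, n0, x, sigma_T) / a
```

The ratio exceeds one everywhere and is largest at k = 0, which is the pile-up of pions in the low momentum modes referred to in the text.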
The single-particle inclusive and exclusive momentum distributions are enhanced at low momentum. The influence of the wave packet size on the pion multiplicity distribution is shown in Fig. 1. The effect of the wave packet width on the momentum distribution is displayed in Fig. 2.
Correlation functions:
In the highly condensed Bose gas limit, x \ll 1 and n_0 \gg n_c, a kind of lasing behaviour and an optically coherent behaviour is obtained, which is characterized^2 by the disappearance of the bump in the correlation function:

C_2(k_1, k_2) = 1 .
On the other hand, in the rare gas limit we get

C_2(k_1, k_2) = 1 + \exp\!\left( -R_p^2 \Delta k^2 \right) . \qquad (31)
Exclusive Correlation Functions:
In the highly condensed limiting case, x \ll 1 and n_0 \gg n_c, the exclusive and inclusive correlation functions coincide, C_2(k_1, k_2) = 1.

In the rare gas limit, x \gg 1, the exclusive and inclusive correlations are different, and the exclusive correlation has a Gaussian form with a momentum-dependent intercept parameter \lambda_K and side-wards and outwards radius parameters R_{K,s} and R_{K,o},

C_2^{(n)}(k_1, k_2) = 1 + \lambda_K \exp\!\left( -R_{K,s}^2 \Delta k_s^2 - R_{K,o}^2 \Delta k_o^2 \right) .

9 Highlights:
In this paper we presented a consistent quantum mechanical description of the correlations caused by the multi-particle Bose-Einstein symmetrization of a system with a large number of bosons.
We introduced the overlapping multi-particle wave-packet density matrix describing stimulated emission. We reduced the algorithmic complexity of the description of the n-boson states from n! to 1 by obtaining an explicit analytical solution to the problem.
We have shown that the radius and intercept parameters depend on the mean momentum of the two pions even for static sources, due to multi-particle symmetrization effects.
We have found an enhancement of the wave-packets in the low momentum modes, due to multi-particle Bose-Einstein symmetrization. When all pions are in the lowest momentum state, the system may be interpreted as a pion-laser.
Figure 1: As the size of the wave packets is changed, the overlap changes and stimulated emission results in larger multiplicities.
Figure 2: Stimulated emission results in an enhancement of pions in the low momentum modes.
Figure 3: C_2(k_1, k_2) for wave packet sizes \sigma_x = 1.0, 2.5, 4.0 and 15.0 fm (solid, dashed, dotted and dense-dotted lines); |k_1 + k_2|/2 = 50 MeV, out component. The number of pions was set to n = 800.
Figure 4: Momentum-dependent reduction of the intercept parameter \lambda_K and of the side-wards and outwards radius parameters R_{K,s} and R_{K,o} from their static values of 1 and R_p, respectively.
Acknowledgments

The present work was supported in part by the US-Hungarian Joint Fund MAKA 652/1998, by the National Scientific Research Fund (OTKA, Hungary) Grant 024094, and by NWO (Netherlands) - OTKA grant N25487. The authors thank these sponsoring organizations for their aid.

In the exclusive correlation function of the rare gas limit, K = 0.5(k_1 + k_2), \Delta k_s = \Delta k - K (\Delta k \cdot K)/(K \cdot K) and \Delta k_o = K (\Delta k \cdot K)/(K \cdot K) are the kinematic variables, and the momentum-dependent parameters are given accordingly. The dependence of the parameters of the correlation functions on the mean momentum of the two pions, K, is shown in Fig. 2. The influence of the wave packet size on the two-particle correlation is illustrated in Fig. 3. (These limiting cases are discussed in more detail in ref. 1.)
1. J. Zimányi and T. Csörgő, http://xxx.lanl.gov/hep-ph/9705432
2. T. Csörgő and J. Zimányi, Phys. Rev. Lett. 80 (1998) 916-918
3. S. Pratt, Phys. Lett. B301 (1993) 159; W. Q. Chao, C. S. Gao and Q. H. Zhang, J. Phys. G: Nucl. Part. Phys. 21 (1995) 847
4. M. Gyulassy and S. K. Kaufmann, Phys. Rev. Lett. 40 (1978) 298
5. S. Hegyi, Phys. Lett. B309 (1993) 443-450
| [] |
[
"KALMAN FILTER BASED TRACKER STUDY FOR LEPTON FLAVOR VIOLATION EXPERIMENTS",
"KALMAN FILTER BASED TRACKER STUDY FOR LEPTON FLAVOR VIOLATION EXPERIMENTS"
] | [
"Rashid M Djilkibaev \nDepartment of Physics\nNew York University\n10003New YorkNY\n",
"Rostislav V Konoplich \nDepartment of Physics\nNew York University\n10003New YorkNY\n\nManhattan College\n10471Riverdale, New YorkNY\n"
] | [
"Department of Physics\nNew York University\n10003New YorkNY",
"Department of Physics\nNew York University\n10003New YorkNY",
"Manhattan College\n10471Riverdale, New YorkNY"
] | [] | A tracking detector is proposed for lepton flavor violation experiments (µ → e conversion, µ → e + γ, µ → 3e) consisting of identical chambers which can be reconfigured to meet the requirements for all three experiments. A pattern recognition and track reconstruction procedure based on the Kalman filter technique is presented for this detector.The pattern recognition proceeds in two stages. At the first stage only hit straw tube center coordinates, without drift time information, are used to reduce the background to a manageable level. At the second stage the drift time information is incorporated and a deterministic annealing filter is applied to reach the final level of background suppression. The final track momentum reconstruction is provided by a combinatorial drop filter which is effective in hit-to-track assignment.The momentum resolution of the tracker in measuring monochromatic leptons is found to be σ p = 0.17 and 0.26 MeV for the µ → e conversion and µ + → e + + γ processes, respectively. The tracker reconstruction resolution for the total scalar lepton momentum is σ p = 0.33 MeV for the µ → 3e process. The obtained tracker resolutions allow an increase in sensitivity to the branching ratios for these processes by a few orders of magnitude over current experimental limits. | 10.1088/1748-0221/4/08/p08004 | [
"https://arxiv.org/pdf/0904.3792v2.pdf"
] | 6,775,916 | 0904.3792 | 48ac689dbfe08bc3c99596644dbde77f66070fde |
KALMAN FILTER BASED TRACKER STUDY FOR LEPTON FLAVOR VIOLATION EXPERIMENTS
19 Aug 2009 August 19, 2009
Rashid M Djilkibaev
Department of Physics
New York University
10003New YorkNY
Rostislav V Konoplich
Department of Physics
New York University
10003New YorkNY
Manhattan College
10471Riverdale, New YorkNY
A tracking detector is proposed for lepton flavor violation experiments (µ → e conversion, µ → e + γ, µ → 3e) consisting of identical chambers which can be reconfigured to meet the requirements for all three experiments. A pattern recognition and track reconstruction procedure based on the Kalman filter technique is presented for this detector.The pattern recognition proceeds in two stages. At the first stage only hit straw tube center coordinates, without drift time information, are used to reduce the background to a manageable level. At the second stage the drift time information is incorporated and a deterministic annealing filter is applied to reach the final level of background suppression. The final track momentum reconstruction is provided by a combinatorial drop filter which is effective in hit-to-track assignment.The momentum resolution of the tracker in measuring monochromatic leptons is found to be σ p = 0.17 and 0.26 MeV for the µ → e conversion and µ + → e + + γ processes, respectively. The tracker reconstruction resolution for the total scalar lepton momentum is σ p = 0.33 MeV for the µ → 3e process. The obtained tracker resolutions allow an increase in sensitivity to the branching ratios for these processes by a few orders of magnitude over current experimental limits.
Introduction
The goal of this work is to demonstrate a pattern recognition and track reconstruction procedure for a tracking detector that provides sufficient resolution for new lepton flavor violation experiments, sensitive to branching ratios a few orders of magnitude lower than current experimental limits. The observation of one of the three processes µ → e conversion, µ → e + γ and µ → 3e would provide the first direct evidence for lepton flavor violation in the charged lepton sector [1,2] and would require new physics beyond the Standard Model. To reach such high sensitivity a substantial improvement in muon beam intensity [3] is required, which in turn calls for new designs of tracking detectors. The straw tube tracker proposed in this work consists of a set of identical chambers; a simple reconfiguration of the chambers allows meeting the requirements of all three lepton flavor violation experiments.
The signature of the µ → e conversion (µ − + N → e − + N) process is clear: a single monochromatic electron in the final state with energy close to the muon mass m µ . The µ + → e + + γ process has in the final state a monochromatic positron and a photon with an energy equal to half the muon mass. It is assumed that an external trigger is provided for the tracker. Photon reconstruction is not considered in this article because this study is devoted to pattern recognition and track reconstruction of electrons and positrons. The µ + → e + + e + + e − process has in the final state an electron and two positrons with energies from 0 to m µ /2 and an average energy equal to one third of the muon mass.
In this work a pattern recognition and tracker reconstruction study was done for the µ → e conversion process, in the presence of background. A tracker resolution study for charged particles in µ + → e + + γ and µ + → e + +e + +e − processes was done without background. Our analysis is based on a full GEANT3 [4] simulation taking into account individual straw structure.
Tracker description
Muons stopping in a target produce electrons or positrons depending on the physical process. These charged particles move in a uniform magnetic field following helical trajectories. A tracking detector is used to measure the particle trajectory and thereby determine its momentum. The proposed tracker ( Fig. 1) consists of chambers with straw tubes placed transverse to a uniform magnetic field, which is along the tracker axis. Each straw tube measures the drift time corresponding to the radial distance at the closest approach of a charged particle from the sense wire. The tracker is operated in vacuum. The high muon beam intensity forces a tracker design without matter in a cylindrical central zone, to let the beam pass through the tracker. To get more redundancy for very rare signal events it is required that the measured trajectory should have two full turns in the tracker. This requirement sets limit on the minimal tracker length of about 300 cm.
A tracker plane consists of two trapezoidal straw tube chambers (see Fig. 1). The planes are distributed uniformly over the length of the tracker. Rotating consecutive planes by an angle of 30 o gives an effective "stereo" of crossed directions for 12 different views. Each chamber consists of one layer of straw tubes (60 straws) of 5 mm diameter. The straws are assumed to have wall thickness 25 µm and are constructed of kapton. Isobutane (C 4 H 10 ) gas is assumed to fill the tubes. The tracker design allows changing the central zone radius, by moving the chambers closer to or further from the central axis, and changing the number of chambers along the tracker axis. This is needed to meet the different requirements for registration of electrons or positrons in the final state of each lepton flavor violation experiment.
Two tracker chamber layouts and magnetic field configurations are studied: one for the µ → e conversion process and the second for µ → e + γ and µ → e + e + e processes. The first tracker layout contains 108 planes and has the central zone radius of 38 cm. The second tracker layout contains 54 planes and has the central zone radius of 10 cm. The tracker length for both layouts is the same, 300 cm. In the two layouts the tracker is immersed in a magnetic field of 1 T and 0.5 T, respectively.
Pattern recognition
Track reconstruction in lepton flavor violation experiments faces a significant amount of background hits in a tracker because of the high beam intensity needed to detect rare processes. A two stage procedure was developed to provide the pattern recognition in the tracker and to suppress background hits. At the first stage of pattern recognition only information on the centers of hit straws is used. The result of this stage is a significant suppression of background hits. Also, an approximate helix fitted to the straw hit centers is found and used in the next stage of pattern recognition. To improve the suppression of background hits the second stage of pattern recognition uses drift time. A deterministic annealing filter (DAF) [5,6] is applied at the second stage. The DAF effectively suppresses backgrounds by dealing simultaneously with multiple competing hits.
Pattern recognition without drift time information
A detailed pattern recognition and tracker reconstruction study was done for monochromatic electrons of 105 MeV from the µ → e conversion process in the presence of background hits. In a uniform magnetic field the trajectory of a charged particle is a helix described by 5 parameters. The construction of the tracker allows significant simplification of the initial step of the pattern recognition by reducing the problem to two dimensional tracker views. Twelve tracker views are formed by planes through the tracker axis normal to the straws. In a typical event, a two-dimensional projection of the helical trajectory, a sine curve, is observed in a few views. There are approximately 10 hits in each view, and 30 hits in the average in the event. The average number of background hits is 300 per event, which corresponds to an average straw hit rate of about 800 kHz. The hits are grouped in lobes (see Fig. 2) with a typical gap between lobes of about 70 cm. As seen in Fig. 2 the sensitive area of the tracker measures a small part of a track, but combining hits from two lobes significantly improves the precision of the momentum reconstruction. The sinusoidal projection of a helix in each view is described by four parameters: x ′ 0 , z ′ 0 , P L , P T ,
x_i' = x_0' + a P_T \cos\!\left( \frac{z_i - z_0'}{a P_L} \right) \qquad (1)

where P_L and P_T are the longitudinal and transverse momenta. The constant a equals 1/(2.998 B), where B is the magnetic field measured in Tesla. The coordinate z is common to all projections. The parameters x_0' and z_0' are defined for each given view such that the particle is created at the target with coordinates (x_0' + a P_T, z_0'). The reconstruction algorithm starts from a single tracker view. Four hits from this view are used to set up a system of equations for the helix parameters in the coordinate system related to that view. The system of four equations is solved to get the parameters P_L, P_T, z_0', x_0' [7]. All possible four-hit combinations are considered. Only the combinations that survive a cut on the particle momentum (±20% of the expected value) are retained for subsequent analysis.
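As a rough illustration of solving Eq. (1) for the four view parameters, the following Python sketch uses a generic Newton iteration (entirely our own; it is not the actual four-hit solver of ref. [7], the constant, hit positions and starting values are illustrative, and it needs a reasonable starting point to converge):

```python
import math

A_CONST = 1.0 / 2.998  # a = 1/(2.998 B) for B = 1 T, in the units of the text

def view_model(z, x0, z0, pl, pt):
    # Eq. (1): sinusoidal projection of the helix in one tracker view
    return x0 + A_CONST * pt * math.cos((z - z0) / (A_CONST * pl))

def solve4(J, r):
    # solve the 4x4 linear system J d = r by Gaussian elimination with pivoting
    n = 4
    M = [row[:] + [r[i]] for i, row in enumerate(J)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            for j in range(c, n + 1):
                M[k][j] -= f * M[c][j]
    d = [0.0] * n
    for i in range(n - 1, -1, -1):
        d[i] = (M[i][n] - sum(M[i][j] * d[j] for j in range(i + 1, n))) / M[i][i]
    return d

def fit_four_hits(hits, guess, iters=40):
    # Newton iteration for (x0', z0', P_L, P_T) from four (z_i, x_i') hits
    x0, z0, pl, pt = guess
    for _ in range(iters):
        J, r = [], []
        for z, x in hits:
            u = (z - z0) / (A_CONST * pl)
            J.append([1.0,                                    # d/dx0'
                      (pt / pl) * math.sin(u),                # d/dz0'
                      pt * math.sin(u) * (z - z0) / pl ** 2,  # d/dP_L
                      A_CONST * math.cos(u)])                 # d/dP_T
            r.append(x - view_model(z, x0, z0, pl, pt))
        dx0, dz0, dpl, dpt = solve4(J, r)
        x0, z0, pl, pt = x0 + dx0, z0 + dz0, pl + dpl, pt + dpt
    return x0, z0, pl, pt
```

With four well-separated hits generated from known parameters, the iteration recovers (x_0', z_0', P_L, P_T) to machine precision.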
Adding a fifth hit from a different view than the four hit combination allows defining a helical 3-D trajectory for the five-hit combination.
The following approach is used to select signal hits. If we select a combination with five signal hits the defined helical trajectory should correlate in space with other signal hits. For further analysis a five-hit combination is selected if the helical trajectory matches at least 15 additional hits in the road (±1 straw). A hit is rejected if it is not in any selected five-hit combinations. Due to the strong spatial correlations between the signal hits in comparison with the un-correlated background hits, the number of background hits is reduced drastically by applying this road requirement. A fit is applied to reconstruct an average helical trajectory on the basis of the list of the selected tracker hits. This fit does not take into account multiple scattering.
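The road requirement amounts to a simple counting predicate. A schematic sketch (our own; the trajectory is a stand-in callable for one view, and the numerical tolerance stands in for the ±1 straw road):

```python
import math

def road_count(traj, hits, tol):
    # number of hits whose view coordinate lies within ±tol of the candidate trajectory
    return sum(1 for z, x in hits if abs(traj(z) - x) <= tol)

def passes_road_cut(traj, hits, tol=0.5, min_hits=15):
    # keep a five-hit candidate only if enough additional hits line up with it
    return road_count(traj, hits, tol) >= min_hits
```

Because signal hits are strongly correlated with the trajectory while background hits are not, candidates built from background rarely accumulate the required number of road hits.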
At the first stage of pattern recognition procedure the overall background rejection factor is about 130. The electron momentum from the µ → e conversion process can be reconstructed in this stage with a standard deviation σ p = 0.45 MeV corresponding to relative momentum resolution σ p /p = 0.5%. Figure 3 shows the multiple view superposition of tracker hits for the sample event with 29 signal and 260 background hits before and after the pattern recognition procedure. There are no missed signal hits and two surviving background hits in this case.
Kalman filter application to track reconstruction
The Kalman filter (KF) [8,9] addresses the general problem of estimating, at different points (1 ≤ k ≤ n), the state x_k of a discrete process that is governed by the linear stochastic difference equation
x_k = F_{k-1} x_{k-1} + w_{k-1}    (2)
with a measurement m_k = H_k x_k + ε_k. In the system of equations (2), stochastic processes such as multiple scattering and bremsstrahlung are taken into account by the process noise w_k, while the measurement noise is represented by ε_k. Q_k and V_k are the process noise and measurement noise covariances, respectively. In the absence of the noise term, Eq. (2) is the standard equation of motion with propagator F_{k-1} (the transport matrix). The matrix F propagates the state vector on one measurement plane to the state vector on the next plane, combining position information with directional information. For a particle moving in a uniform magnetic field (see Fig. 1) one has to choose the state vector parameters, define the initial state vector, and calculate the transport matrix F, the projection matrix H, and the noise matrix Q. The state vector can be chosen in the form x_k = (x, y, t_x, t_y, 1/p_L), where x, y are the track coordinates in the tracker system and t_x = p_x/p_L, t_y = p_y/p_L define the track direction. The projection matrix is given by H = (cos α, sin α, 0, 0, 0), where α is the tracker view angle.
Due to multiple scattering the absolute value of the electron momentum remains unaffected, while its direction changes. This deflection can be described using two orthogonal scattering angles, which are also orthogonal to the particle momentum [10]. In terms of these variables the noise matrix is given by
Q_k = <Θ²> (t_x² + t_y² + 1) ×

    | 0   0      0           0           0                                  |
    | 0   0      0           0           0                                  |
    | 0   0   t_x² + 1     t_x t_y     t_x/p_L                              |
    | 0   0   t_x t_y      t_y² + 1    t_y/p_L                              |
    | 0   0   t_x/p_L      t_y/p_L     (t_x² + t_y²) / (p_L² (t_x² + t_y² + 1)) |
For the variance of the multiple scattering angle the well-known expression <Θ²> = (p_0/p)² [1 + 0.038 ln(t/X_R)] t/X_R is used, where p_0 = 13.6 MeV, X_R is the radiation length, and t is the distance traveled by the particle inside the scatterer. Energy losses are taken into account by p' = p − <dE/dx> t.
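In code, the scattering variance and the energy-loss correction read as follows. This is a direct transcription of the expressions above with momenta in MeV; the numerical arguments in the usage line are illustrative.

```python
import math

def ms_variance(p, t_over_XR, p0=13.6):
    """<Theta^2> = (p0/p)^2 [1 + 0.038 ln(t/X_R)] t/X_R, as written in the
    text, with p and p0 in MeV and t/X_R the thickness in radiation lengths."""
    return (p0 / p) ** 2 * (1.0 + 0.038 * math.log(t_over_XR)) * t_over_XR

def after_energy_loss(p, mean_dEdx, t):
    """Mean momentum after traversing thickness t: p' = p - <dE/dx> t."""
    return p - mean_dEdx * t

# Example: a ~105 MeV electron crossing 1% of a radiation length.
theta2 = ms_variance(105.0, 0.01)
```

Note the 1/p² scaling: doubling the momentum reduces the scattering variance by a factor of four, which is why the noise matrix above carries explicit momentum factors.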
There are three types of operations to be performed in the analysis of a track. Prediction x_k^{k-1} is the estimation of the "future" state vector at position k using all "past" measurements up to and including k−1. Filtering x_k^k is the estimation of the state vector at position k based upon all "past" and "present" measurements up to and including k. At each step the filtered χ² is calculated; the total χ² of the track is given by the sum of the χ²_k contributions from each plane. Smoothing x_k^n is the estimation of the "past" state vector at position k based on all n measurements taken up to the present time: the filter runs backward in time, updating all filtered state vectors on the basis of information from all n planes. The mathematical equations describing these operations are given in [8,9].
The prediction and filtering steps are applied consecutively to all points. When the last point is reached, smoothing is applied to all previous points. The result of smoothing is a reconstruction of the state vector x_k^n, which defines the particle coordinates and momentum in each plane. The KF approach described above works for a single point in each plane; background hits, however, create multiple competing points in each plane. In this case a different approach based on the KF is applied, as described below.
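The predict/filter cycle is easiest to see on a reduced example. The sketch below runs a linear Kalman filter for a toy straight-line track with a two-component state (position, slope) rather than the five-component helix state above; all numbers are illustrative, and the backward smoothing pass is omitted for brevity.

```python
# Minimal linear Kalman filter for a toy straight-line track: state = (position u,
# slope t), planes spaced by dz, measurement m = u + noise (H = [1, 0]).
dz = 10.0            # plane spacing (arbitrary units)
R = 0.5 ** 2         # measurement variance
q = 1e-6             # slope process noise per step (mimics multiple scattering)
F = [[1.0, dz], [0.0, 1.0]]
Q = [[0.0, 0.0], [0.0, q]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def predict(x, P):
    """Propagate state and covariance to the next plane: x' = F x, P' = F P F^T + Q."""
    xp = [x[0] + dz * x[1], x[1]]
    Pp = mat_add(mat_mul(mat_mul(F, P), transpose(F)), Q)
    return xp, Pp

def update(xp, Pp, m):
    """Filter step: blend the prediction with the measurement m of u."""
    S = Pp[0][0] + R                    # innovation variance
    K = [Pp[0][0] / S, Pp[1][0] / S]    # Kalman gain
    r = m - xp[0]                       # residual
    x = [xp[0] + K[0] * r, xp[1] + K[1] * r]
    P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
         [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return x, P, r * r / S              # per-plane chi-square contribution

# Deterministic toy data: true slope 0.10, alternating +/-0.3 measurement errors.
meas = [0.10 * dz * k + (0.3 if k % 2 == 0 else -0.3) for k in range(12)]

x, P = [meas[0], 0.0], [[R, 0.0], [0.0, 1.0]]   # loose initial state
total_chi2 = 0.0
for m in meas[1:]:
    x, P = predict(x, P)
    x, P, c2 = update(x, P, m)
    total_chi2 += c2
```

The filtered slope converges close to the true value 0.10; a smoothing pass would then propagate the final-plane information back to earlier planes, as described in the text.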
Pattern recognition with drift time
The second stage of the pattern recognition procedure uses the hits selected and the helix fitted in the first stage, and adds drift time information for the selected hits. The pattern recognition can be improved by taking into account the measured drift time t_i^meas, which is related to the radial distance r at the point of closest approach to the straw wire. The errors (σ) of the radius measurements are taken to be 0.2 mm. The radius r carries an ambiguity as to whether the track passed to the left or to the right of the wire. The left and right points are obtained as the intersections of the normal to the helix through the straw center with the circle of radius r. A deterministic annealing filter (DAF) [5,6] is applied to suppress background hits further. The DAF is a Kalman filter with re-weighted observations; its propagation part is identical to that of the standard Kalman filter. In addition to background hits, the left-right ambiguity of each hit creates multiple competing points at each KF step. At the DAF step all competing points of a layer are assigned weights, and the filtered estimate (measurement update) x_k^k at layer k is calculated as a weighted mean of the prediction x_k^{k-1} and the observations m_k^i, i = 1, 2, ..., n_k.
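The left/right candidate construction described above reduces to simple 2-D geometry: the two points lie where the normal to the local track direction through the straw center crosses the drift circle of radius r. A minimal sketch with invented coordinates:

```python
import math

def left_right_points(wire, r, direction):
    """Candidate points on either side of the wire: intersections of the
    normal (through the wire center) to the local track direction with the
    drift circle of radius r. 2-D toy geometry, coordinates in mm."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm      # unit normal to the track direction
    return ((wire[0] + r * nx, wire[1] + r * ny),
            (wire[0] - r * nx, wire[1] - r * ny))

# A track running along x, passing a wire at (4.0, 3.0) with drift radius 0.15 mm:
left, right = left_right_points(wire=(4.0, 3.0), r=0.15, direction=(1.0, 0.0))
```

The two candidates are separated by 2r and are mirror images about the wire; the DAF described next decides between them by weighting.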
By taking a weighted mean of the filtered states at every layer, a prediction for the state vector x_k^{n*} along with its covariance matrix C_k^{n*} is obtained using all hits except the ones at layer k. Initially all assignment probabilities for the hits in each layer are set equal; based on the estimated state vector x_k^{n*} and its covariance matrix, the assignment probabilities of all competing hits are then recalculated in the following way:
p_k^i ∝ φ(m_k^i ; H_k x_k^{n*}, V_k + H_k C_k^{n*} H_k^T)
where φ is a multivariate Gaussian probability density and V_k is the variance of the observations. If the probability falls below a certain threshold, the hit is considered background and is excluded from the list of hits assigned to the track. At the initial step the calculated probabilities are unreliable because the filter has insufficient information. This problem is overcome by adopting a simulated annealing iterative procedure [5,6], which allows avoiding local minima and finding the global one corresponding to the minimum chi-square for the track. The annealing schedule for iteration n is chosen in the form V_n = V(1 + C f^(-n)), where the annealing factor f > 1 and the constant C ≫ 1. This ensures that the initial variance is well above the nominal value V of the observation error, while the final one tends to V. After each iteration the assignment probabilities exceeding the threshold are normalized to 1 and used again as weights in the next iteration, and so on. The iterations are generally stopped when the relative change in chi-square is less than a corresponding control parameter (typically of order 0.01). Since we are dealing with a stochastic process, the best result is reached by repeating the DAF procedure for a few different annealing factors f (1.4 and 2) and then choosing the result corresponding to the minimum chi-square.
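A sketch of the weight update and the annealing schedule, using scalar measurements and invented numbers. We read the schedule as V_n = V(1 + C f^(−n)), so that the effective variance starts well above the nominal V and relaxes toward it; the threshold value is illustrative.

```python
import math

def assignment_probs(prediction, variance, hits, threshold=0.05):
    """Normalized assignment weights for competing hits in one layer;
    None marks a hit rejected as background. The Gaussian normalization
    constant cancels in the ratio and is omitted."""
    dens = [math.exp(-0.5 * (m - prediction) ** 2 / variance) for m in hits]
    total = sum(dens)
    probs = [d / total for d in dens]
    return [p if p >= threshold else None for p in probs]

# One layer: a left point, a right point, and a far-away background hit
# (sigma = 0.2 mm as in the text, so variance = 0.04 mm^2):
probs = assignment_probs(prediction=10.0, variance=0.04, hits=[9.9, 10.3, 14.0])

# Annealing schedule: inflated effective variance over iterations n.
V, C, f = 0.04, 100.0, 2.0
schedule = [V * (1.0 + C * f ** (-n)) for n in range(12)]
```

With the inflated early variances, even poorly predicted hits keep non-negligible weights in the first iterations, which is what lets the procedure escape local minima.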
The application of the DAF procedure mainly allows resolving the left-right ambiguity. The DAF chooses the wrong left-right point for 6% of the tracker hits for which the radial distance r at the closest approach to the straw wire is greater than 0.25 mm; this corresponds on average to two tracker hits with the wrong left-right assignment per event. The number of background hits remaining after the second stage averages 0.38 per event, compared with 300 hits on average initially. The final background suppression factor is thus 300/0.38 ≈ 800. Some signal hits are lost in the selection (0.8 hits, or 2.7%, per event on average). The momentum resolution of the tracker after application of the DAF procedure is 0.25 MeV for the µ → e conversion process. The pattern recognition procedure described above is thus effective in rejecting background hits and also provides a good starting point for the track reconstruction. However, further improvement in the left-right assignment is necessary to achieve better resolution; for this we use the procedure described in the following section.
Momentum reconstruction for the µ → e conversion process
The momentum reconstruction is based on the hits selected by the two-stage pattern recognition procedure described above. For straw drift tubes there is a pair of left and right points for each straw hit due to the left-right ambiguity. One extreme possibility would be to apply the Kalman filter to all 2^N possible combinations for a track with N hits and to choose the combination with the best χ². This purely combinatorial approach may be optimal, but it is computationally infeasible. The other extreme would be to apply the Kalman filter at each step to the left and right points and to select the point providing the best χ²; however, this algorithm often selects wrong points. It has been proposed in the literature [5,6] to use a Gaussian sum filter (GSF) for solving the assignment problem in a track detector with ambiguities [11]. In the GSF approach [12], both the process noise and the observation errors are modeled by mixtures of Gaussian densities, and in the resulting algorithm several Kalman filters run in parallel. However, the application of the GSF as a momentum reconstruction procedure to this tracker does not provide sufficiently good results because of large gaps in the measured trajectory. In the tracker, hits are distributed approximately uniformly inside the lobes in the z-direction, but there are large gaps of about 70 cm between lobes due to the motion of the electron outside the sensitive area of the tracker. A Kalman filter prediction step across such a gap often leads to significant deviations from the true hit. As a result, in the GSF procedure the true component is often suppressed so strongly that it cannot be recovered by subsequent Kalman filter steps. Therefore a different approach is needed to treat the left-right ambiguity in the tracker.
A combinatorial drop filter (CDF) has been developed to improve the momentum reconstruction and efficiency for the tracker. The CDF algorithm forces a reduction of the number of combinations whenever it reaches some maximum, so that the number of combinations always oscillates between N_min and N_max. That is, the Kalman filter runs over paths whose number grows combinatorially, but each time the number of combinations reaches N_max, only the N_min combinations with the best χ² are retained. The CDF approach is illustrated by the graph N_min → N_max (2→8) in Fig. 4. Each vertex represents either a left or a right point; components surviving after the drop are indicated by small (magenta) circles. In the actual strategy the number of retained components was chosen to be 8 and the maximum number of components to be 32.
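The drop strategy itself is independent of the detector details and can be sketched as a beam search over left/right assignments. The toy below uses the smaller limits N_min = 2 and N_max = 8 (as in the graph of Fig. 4) and a made-up score counting deviations from a known true assignment, in place of the Kalman-filter χ².

```python
N_MIN, N_MAX = 2, 8   # toy limits, matching the 2->8 graph of Fig. 4

def cdf_search(n_hits, score):
    """Grow left/right assignments hit by hit; whenever the number of
    candidate paths reaches N_MAX, keep only the N_MIN best-scoring ones."""
    paths = [[]]
    for _ in range(n_hits):
        paths = [p + [lr] for p in paths for lr in ("L", "R")]
        if len(paths) >= N_MAX:
            paths = sorted(paths, key=score)[:N_MIN]
    return min(paths, key=score)

# Made-up truth and a toy score counting wrong left/right choices
# (a real implementation would score each partial path with the KF chi-square):
truth = ["L", "R", "R", "L", "L", "R", "L", "R", "R", "L"]
toy_score = lambda p: sum(a != b for a, b in zip(p, truth))
best = cdf_search(len(truth), toy_score)
```

Because the drop keeps several candidates rather than one, a locally poor assignment can survive a few steps and be vindicated later, which is exactly what the single-best-point strategy criticized above cannot do.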
The procedure begins with the first eight straw hits and builds the 2^8 (256) possible hit combinations corresponding to the left-right points. The Kalman filter forward and backward procedures are applied to these combinations, and only the few combinations that satisfy a rather loose χ² cut are retained. This step provides a certain precision for the state-vector components, and each retained combination provides a starting point for the CDF.

The Kalman filter reconstructs the trajectory of a particle in three dimensions. The trajectory is bent each time it crosses a tracker plane due to multiple scattering; therefore, the reconstructed track is a set of helices that intersect at the planes. Fig. 5 shows the 3D trajectory reconstructed for a sample event. On this scale the trajectory looks like a single helix, but it consists of many helical segments. Fig. 6 (left) shows the transverse projection of the trajectory for the sample event. In this projection the trajectory looks approximately like a circle; however, if a specific region of Fig. 6 (left) is magnified, one can see in Fig. 6 (right) that the shape of the circle is distorted due to multiple scattering and energy loss. Three arcs of the trajectory are clearly seen in the figure.

There are two main tracker reconstruction characteristics: the momentum and angular resolutions. The momentum resolution is determined from the distribution of the difference between the simulated and reconstructed momentum; the angular resolution is determined from the distribution of the angle between the simulated and reconstructed momentum.
In this study 10^5 events were simulated and reconstructed. The CDF procedure chooses the wrong left-right point for 4% of the tracker hits (to be compared with 6% for the DAF) for which the radial distance r at the closest approach to the straw wire is greater than 0.25 mm. The distribution of the difference between the initial momentum reconstructed by the Kalman filter and the simulated initial momentum is shown in Fig. 7 (left) for the µ → e conversion process. Fitting this distribution by a Gaussian gives an intrinsic tracker resolution of σ = 0.17 MeV. Note that this resolution is about 30% better than that provided by the DAF procedure.
The distribution of the angle between the reconstructed and simulated momenta is shown in Fig. 7 (right). The most probable value of the angle distribution is 3 mrad. A calculation gives an average multiple-scattering angle in a straw of 1.8 mrad for the µ → e conversion process. This is consistent with the expectation that the angular resolution is determined by multiple scattering in a single tracker straw.
Momentum reconstruction for the µ + → e + +γ process
As mentioned in the introduction, this study and the one in the next section are less complete than that carried out for the µ → e conversion process: no background hits were added to the simulated events. The second tracker layout and field configuration were used for the momentum reconstruction study of the µ+ → e+ + γ process without background. The tracker is placed in a magnetic field of 0.5 T, half that used for the µ → e conversion setup. The µ → e conversion and µ+ → e+ + γ processes have monochromatic charged leptons in the final state whose energies differ by a factor of 2; due to the twofold change in the magnetic field, the charged lepton trajectories for the two processes are similar. The pattern recognition and momentum reconstruction procedures described above were applied to positrons from the µ+ → e+ + γ process. The distribution of the difference between the initial momentum reconstructed by the Kalman filter and the simulated initial momentum is shown in Fig. 8 (left); fitting this distribution to a Gaussian gives an intrinsic tracker resolution of σ = 0.26 MeV. The distribution of the angle between the reconstructed and simulated momentum is shown in Fig. 8 (right). The most probable value of the angle distribution is 5 mrad; a calculation gives an average multiple-scattering angle in a straw of 3.6 mrad for a positron in the µ+ → e+ + γ process.

Figure 8: Reconstruction for the µ+ → e+ + γ process: the distribution of the difference between the reconstructed and simulated momentum (left) and the distribution of the angle between the reconstructed and simulated momentum (right).
Momentum reconstruction for the µ → 3e process
The same setup as for the µ+ → e+ + γ process was used for the momentum reconstruction study of the µ+ → e+ + e+ + e− process. This process has in the final state an electron and two positrons with energies from 0 to m_µ/2 and an average energy equal to one third of the muon mass. The pattern recognition and momentum reconstruction procedure applied to the µ → e conversion and µ+ → e+ + γ processes is based on a single-track search algorithm; the same procedure was applied separately to each lepton track of the µ+ → e+ + e+ + e− process. The observable quantity of the process, the total scalar momentum of the charged leptons p_tot = Σ_i p_i, was reconstructed. The distribution of the difference between the reconstructed total scalar momentum and the muon mass is shown in Fig. 9 (left); fitting this distribution by a Gaussian gives a tracker resolution for the total scalar momentum of σ = 0.33 MeV. The distribution of the angle between the reconstructed and simulated lepton momentum is shown in Fig. 9 (right). The most probable value of the angle distribution is 7 mrad; a calculation gives an average multiple-scattering angle in a straw of 5.4 mrad for a charged lepton momentum equal to m_µ/3.
Conclusion
A pattern recognition and momentum reconstruction procedure based on a Kalman filter technique for a straw tube tracker was developed to suppress background and reconstruct track momenta for the lepton flavor violating processes µ → e conversion, µ+ → e+ + γ and µ → 3e. The simple modular construction of the tracker allows meeting the requirements for lepton momentum measurements for all lepton flavor violating processes mentioned above. The pattern recognition procedure suppresses background hits by a factor of 800, while only 3% of the signal hits are lost.

Figure 9: Reconstruction for the µ → 3e process: the distribution of the difference between the reconstructed total scalar momentum and the muon mass (left) and the distribution of the angle between the reconstructed and simulated momentum (right).
The momentum resolution of the tracker for monochromatic leptons was found to be σ_p = 0.17 MeV for the µ → e conversion process with background and 0.26 MeV for the µ+ → e+ + γ process without background. The tracker resolution for the total scalar lepton momentum is σ_p = 0.33 MeV for the µ → 3e process without background. For the µ → e conversion, µ+ → e+ + γ and µ → 3e processes the most probable values of the angle between the simulated and reconstructed momenta (3, 5 and 7 mrad) are in good agreement with the multiple-scattering angles in a single tracker straw. The obtained tracker resolutions allow an increase in sensitivity to the branching ratios of these processes by a few orders of magnitude over current experimental limits.
We wish to thank A. Mincer and P. Nemethy for fruitful discussions and helpful remarks. This work has been supported by the National Science Foundation under grants PHY 0428662, PHY 0514425 and PHY 0629419.
Figure 1: Schematic drawing of the tracker design.
Figure 2: Lobes in the tracker view. Points show straw hit centers. The sinusoidal projection of a helix is shown in red. The starting point of the track is not shown in the figure.
Figure 3: Plot of signal (blue dots) and background (red dots) tracker hits before (left) and after (right) the pattern recognition procedure without drift time.
Figure 4: Graph 2→8 for the CDF procedure. Each retained combination containing 8 points provides a starting point for the CDF.
Figure 5: The 3D trajectory reconstructed for a sample event.
Figure 6: The transverse projection of the 3D trajectory reconstructed for a sample event (left). Enlargement of the bottom part of the trajectory (right).
Figure 7: Reconstruction for the µ → e conversion process. The distribution of the difference between the reconstructed and simulated momentum (left). The distribution of the angle between the reconstructed and simulated momentum (right).
References

[1] J. D. Vergados, Phys. Rep. 133, 1 (1986).
[2] Y. Kuno and Y. Okada, Rev. Mod. Phys. 73, 151 (2001).
[3] R. M. Djilkibaev and V. M. Lobashev, Sov. J. Nucl. Phys. 49, 384 (1989).
[4] GEANT3, CERN Program Library W5013, CERN (1984).
[5] R. Frühwirth and A. Strandlie, Comp. Phys. Comm. 120, 197 (1999).
[6] A. Strandlie and R. Frühwirth, Nucl. Instr. Meth. A 566, 157 (2006).
[7] R. Djilkibaev and R. Konoplich, arXiv:hep-ex/0312022v1 (2003).
[8] R. E. Kalman, Transactions of the ASME: J. Basic Engineering D82, 35 (1960).
[9] P. Billoir, Nucl. Instr. Meth. 225, 352 (1984); R. Frühwirth, ibid. A262, 444 (1987); P. Billoir and S. Qian, ibid. A294, 219 (1990); E. J. Wolin and L. L. Ho, ibid. A329, 493 (1993).
[10] R. Mankel, Nucl. Instr. Meth. A 395, 169 (1997).
[11] R. Frühwirth, Comp. Phys. Comm. 100, 1 (1997).
[12] G. Kitagawa, Ann. Inst. Statist. Math. 46, 605 (1994).
| [] |
arXiv:2201.02956 | DOI: 10.1021/acs.jpcb.1c09287
How Grain Boundaries and Interfacial Electrostatic Interactions Modulate Water Desalination Via Nanoporous Hexagonal Boron Nitride

Bharat Bhushan Sharma and Ananth Govind Rajan*
Department of Chemical Engineering, Indian Institute of Science, Bengaluru 560012, Karnataka, India
*Corresponding Author: [email protected]
To fulfil the increasing demand for drinking water, researchers are currently exploring nanoporous two-dimensional materials, such as hexagonal boron nitride (hBN), as potential desalination membranes. A prominent, yet unsolved challenge is to understand how such membranes will perform in the presence of defects or surface charge in the membrane material. In this work, we study the effect of grain boundaries (GBs) and interfacial electrostatic interactions on the desalination performance of bicrystalline nanoporous hBN, using classical molecular dynamics simulations supported by quantum-mechanical density functional theory (DFT) calculations. We investigate three different nanoporous bicrystalline hBN configurations, with symmetric tilt GBs having misorientation angles of 13.2°, 21.8°, and 32.2°. Using lattice dynamics calculations, we find that grain boundaries alter the areas and shapes of nanopores in bicrystalline hBN, as compared to the nanopores in monocrystalline hBN. We observe that, although bicrystalline nanoporous hBN with a misorientation angle of 13.2° shows an improved water flow rate by ~30%, it demonstrates reduced Na+ ion rejection by ~6%, as compared to monocrystalline hBN. We also uncover the role of the nanopore shape in water desalination, finding that more elongated pores with smaller sizes (in 21.8°- and 32.2°-misoriented bicrystalline hBN) can match the water permeation through less elongated pores of slightly larger sizes, with a concomitant ~3-4% drop in Na+ rejection. Simulations also predict that the water flow rate is significantly affected by interfacial electrostatic interactions. Indeed, the water flow rate is the highest when altered partial charges on B and N atoms determined using DFT calculations are used, as compared to when no partial charges or bulk partial charges (i.e., charged hBN) are considered. Overall, our work on water/ion transport through nanopores in bicrystalline hBN indicates that the presence of GBs and surface charge can lead, respectively, to a drop in the ion rejection and water permeation performance of hBN membranes.
INTRODUCTION
Increased population, global warming, and industrialization have all put pressure on the supply of fresh and potable water. [1][2][3] Today, about 15% of the world's population faces challenges in obtaining drinking water. 4,5 Although an abundant amount of water is present on the earth's surface, only 3% of it is available as potable water. 1,6 In contrast, the remaining 97% of water is present as salty water in the oceans and seas. 1,6 Consequently, the desalination of seawater is a promising solution to satisfy the demand for fresh and potable water, in the near and long-term future. Although numerous methods are available for the desalination of seawater, membrane-based reverse osmosis (RO) is extensively used at both commercial and domestic levels due to its higher energy efficiency. 7 The performance of a RO system depends significantly on the filtration membrane employed. Thus, the role played by the membrane material in a RO system is very crucial. 7 RO systems traditionally use ceramic and polymeric membranes, which face limitations such as lower strength, water permeability, and service life, and higher power requirements and maintenance cost. 8 In view of these limitations of conventional membranes, researchers are currently exploring nanoporous two-dimensional (2D) materials such as hexagonal boron nitride (hBN) and graphene as desalination membranes. 6,[9][10][11][12] As opposed to graphene, hBN and molybdenum disulfide (MoS2) membranes provide opportunities for creating anion- and cation-selective membranes, as boron and nitrogen atoms in hBN, and molybdenum and sulfur atoms in MoS2, are positively and negatively charged, respectively, and can be used to terminate the nanopores. 10,13 However, the smaller lateral dimensions of 2D materials obtained using various synthesis techniques pose a challenge in terms of scaling up the process. 6,11,14,15

Indeed, for commercial water desalination, large-area 2D materials are required to obtain adequate production rates of potable water. Typically, extended 2D layers are produced by the chemical vapor deposition (CVD) method, which inevitably introduces grain boundaries (GBs) in the material due to temperature gradients in the reactor and multiple nucleation sites on the growth surface. 16 Although the monocrystalline forms of nanoporous hBN and graphene nanosheets have extraordinary desalination performance, 6,17,18 the effect of GBs on the water permeability and ionic rejection of such membranes is unknown. Nevertheless, it is well-known that inadvertently induced GBs in these 2D nanosheets alter their mechanical performance and properties. 19 Furthermore, quantification of the effect of defects on water/ion transport through single-digit nanopores is a prominent knowledge gap in the field. 20 Over the years, classical molecular dynamics (MD) simulations and quantum-mechanical density functional theory (DFT) calculations have been used to understand and predict the atomic-level phenomena underlying water permeation through and ion rejection by nanoporous 2D materials. 6,10,17,[21][22][23] However, so far, simulation studies have considered idealized models of nanoporous hBN and graphene that do not contain GBs in them.
In 2015, Surwade et al. 9 experimentally analyzed the capabilities of nanoporous graphene as a desalination membrane. In their work, the oxygen plasma etching method was used to create nanopores in single-layer graphene. They concluded that the resultant nanoporous graphene membranes exhibit excellent water permeability and 100% salt rejection. Apart from this experimental study, several modeling studies have been carried out to analyze the desalination performance of nanoporous 2D materials. 6,[21][22][23][24][25][26][27][28][29] In 2012, Cohen-Tanugi et al. 6 carried out classical MD simulations and concluded that nanoporous graphene shows extraordinary water permeability, which is many times larger than that of conventional membranes. They also reported that the water permeability depends on the nanopore's size, edge functionalization, and applied pressure. Subsequently, Konatham et al. 24 performed MD simulations to analyze the water desalination performance of graphene membranes and reported that nanopores of diameter ~7.5 Å exhibit effective ion segregation, whereas ions easily passed through nanopores with diameter between ~10.5 Å and 14.5 Å. These studies uncovered the critical role of the nanopore size in modulating water and ion transport through 2D material membranes. In addition to graphene, other 2D materials have also been examined. Heiranian et al. 23 performed MD simulations to investigate the desalination performance of single-layer nanoporous MoS2 and predicted that nanopores with molybdenum atoms on their edges showed higher water fluxes (~70% greater than graphene nanopores). Recently, a review on MoS2-based membranes in water treatment and purification was published, wherein the authors summarized several aspects of MoS2 membranes including surface modification. 30 In another study, Cao et al. 22 analyzed the desalination performance of 2D metal organic framework (MOF) membranes using MD simulations and concluded that 2D MOF membranes lead to 3-6 times higher water permeation than traditional membranes. It was also reported that few-layered MOF membranes exhibit higher water flux than single-layer graphene or MoS2 membranes without any drilling of nanopores, due to the inherent pore structure of the material.
Besides the above-mentioned 2D nanosheets, hBN nanosheets have also been investigated as separation membranes by various researchers. 4,10,[35][36][37]12,15,17,18,[31][32][33][34] Chen et al. 12 experimentally examined the water transport performance of amino-functionalized nanoporous hBN using a positive pressure setup. They found that the hBN membranes are highly stable in various organic solvents and water and showed high water flux and excellent selectivity in organic and aqueous solvents. In other experimental work, Lei et al. 38 used a thermal treatment method to synthesize nanoporous hBN and demonstrated its potential to purify contaminated water by absorbing various solvents, dyes, and oils. With respect to modeling studies, Garnier et al. 17 carried out MD simulations to predict the surface tension profile of water molecules close to mono and multilayer hBN and graphene nanosheets and to investigate water permeability through nanoporous hBN and graphene. The authors established a correlation between the surface tension profile and the water permeation rate. They further reported that the water surface tension was lower on monolayer hBN as compared to monolayer graphene, which resulted in faster water permeation through monolayer nanoporous hBN due to increased wetting. Gao et al. 18 employed classical MD simulations to determine the desalination performance of hBN nanosheets and investigated how the shape and size of nanopores in hBN affected salt rejection and water permeation. They also showed that triangular nanopores with nitrogen atoms on the edges of the nanopore exhibited high water permeation rates in the range of 0.132 to 2.752 kg m -2 s -1 MPa -1 which is several times higher than the water permeation rates through commercial RO membranes (~ 0.011 kg m -2 s -1 MPa -1 ). Davoy et al. 15 used MD simulations to predict that the water permeability of nanoporous hBN far exceeded that of polymer-based conventional RO membranes. 
In fact, it was even higher than that through nanoporous graphene. Other studies, focused on unravelling the role of doping, 4 nanopore edge functionalization, 10,35,37 and edge type 33 in modulating water permeation, have also been carried out. Nevertheless, in the above-mentioned studies, the electrical potential produced by nanoporous hBN was not validated with quantum-mechanical DFT calculations, although the charges on B and N atoms were derived using some DFT-based charge partitioning scheme. In fact, in some studies, partial charge values from bulk hBN were used, which may lead to underprediction of the water flux. This aspect is crucial because, as we show, electrostatic interactions play a key role in modulating the performance of nanoporous hBN in desalination applications.
From previous studies, it is evident that monocrystalline nanoporous hBN without any geometrical defects exhibits excellent water permeability 17 , adsorption ability 39 , mechanical strength 19 , and structural stability. 12,19,40 Nevertheless, as mentioned before, for water desalination applications, large-area 2D materials are required, which would lead to the presence of grain boundaries (GBs) in the membrane. So far, studies have investigated the water desalination performance of nanoporous monocrystalline hBN only, and no study has considered the effect of GBs on the desalination performance of nanoporous hBN. In this work, we investigate the role of GBs in modulating the desalination performance of bicrystalline hBN using classical MD simulations. Because GBs and nanopores alter the charge distribution within the hBN nanosheet, thereby affecting electrostatic interactions between hBN and water, we use quantum-mechanical DFT to obtain accurate partial charges on the B and N atoms. We also show that the resultant electrostatic interactions significantly affect the desalination performance of nanoporous bicrystalline hBN. Finally, we conclude that the presence of GBs can adversely affect the desalination performance of nanoporous hBN, due to a significant drop in Na⁺ ion rejection through the membrane.
METHODS
Computational modeling and simulation details.
We used classical MD simulations to study the desalination performance of nanoporous bicrystalline hBN using the open-source Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) package. 41 The simulation system for water desalination is illustrated in Figure 1, in which nanoporous hBN was placed parallel to the x-y plane. We used a simulation box measuring 160 Å in the z-direction and periodic in the x-y directions with a cross-section of ~31 × 31 Å. The nanoporous hBN sheet was fixed at z = 0 Å, and rigid graphene pistons were placed at z = -75 and z = 18.5 Å.
The feed side region of the simulation box was filled with saline water having 22 Na + ions, 22
Cl⁻ ions, and 2000 water molecules, leading to a salt concentration of 35.7 g/L, close to the value observed in seawater. The permeate (i.e., pure water) side was filled with 500 water molecules. We verified that the use of a larger number of water molecules on the permeate side did not affect our results. Indeed, the water flow rates through the nanoporous monocrystalline hBN membrane at 50 MPa feed pressure were found to be the same when 500 and 2000 water molecules were present on the permeate side, as shown in Figure S1 in Supporting Information Section S1.
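As a quick arithmetic check, the reported feed salinity follows directly from the ion and water counts, assuming bulk water density of 1 g/cm³:

```python
N_AV = 6.022e23                      # Avogadro's number, 1/mol
n_pairs, n_water = 22, 2000          # NaCl ion pairs and water molecules (feed side)
m_salt = n_pairs * 58.44 / N_AV      # mass of NaCl in g (58.44 g/mol)
v_water = n_water * 18.015 / N_AV    # volume of water in cm^3, assuming 1 g/cm^3
conc = m_salt / (v_water / 1000.0)   # concentration in g per liter
print(round(conc, 1))                # → 35.7
```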
In all the simulations, the atoms in the nanoporous hBN membrane were allowed to vibrate, except for the ones closest to the periodic boundary. The latter atoms were kept fixed to maintain the hBN sheet at z = 0 Å by zeroing their momentum. Periodic boundary conditions were implemented along all three directions. All simulations were carried out at room temperature (298.15 K) and atmospheric pressure (1 atm) in the lateral, i.e., x-y, direction, using the NPxyT ensemble. These conditions were maintained using the Nose-Hoover thermostat and barostat. 42,43

Figure 1. Schematic of the water desalination simulation setup confined by graphene pistons. The saline water reservoir (feed side, left) and pure water reservoir (permeate side, right) are separated by a nanoporous hBN membrane. The pressure on the feed side is P and that on the permeate side is Patm = 1.013 bar. The water molecules, sodium ions, and chloride ions are represented using spheres, with hydrogen in cream, oxygen in blue, Na⁺ in yellow, and Cl⁻ in magenta.
After carrying out energy minimization, the system was equilibrated in the NPxyT ensemble for a total time period of 50 ps. A short equilibration phase was chosen so that the ions and/or water molecules do not move to the permeate side. The total energy and temperature were recorded during equilibration and are plotted in Figure S2 in Section S2. It can be deduced from Figure S2a and S2b that the total energy and temperature are converged after 50 ps, indicating that the short equilibration time chosen is sufficient. In all the simulations, a time step of 1 fs was used.
LJ interactions were cut off at a distance of 1.2 nm for water-water and hBN-water interactions. Intra-hBN short-range LJ/Coulombic interactions were not considered. Long-range electrostatic interactions were included using the particle-particle-particle-mesh (PPPM) method. 44 Post equilibration, a uniform pressure (P) was applied on the feed side piston along the +z direction, i.e., the flow direction, whereas atmospheric pressure was implemented on the permeate side piston in the -z direction, as shown in Figure 1. The LJ parameters employed for hBN reproduce qualitatively the water contact angle on the hBN basal plane. 53 To calculate the interactions between unlike atoms in the system, geometric-mean combining rules, σij = (σiσj)^(1/2) and εij = (εiεj)^(1/2), were used, where σ represents the distance at which the interatomic LJ potential is zero and ε represents the LJ well-depth parameter. In Section S4, we present a more detailed discussion on the choice of force fields for hBN, water molecules, and salt ions (see, e.g., refs. 14 and 54).
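A minimal sketch of these combining rules; the numerical parameters below are illustrative placeholders, not the force-field values actually used in this work:

```python
import math

def combine(sigma_i, eps_i, sigma_j, eps_j):
    """Geometric-mean combining rules: sigma_ij = sqrt(sigma_i*sigma_j),
    eps_ij = sqrt(eps_i*eps_j)."""
    return math.sqrt(sigma_i * sigma_j), math.sqrt(eps_i * eps_j)

# Illustrative (hypothetical) like-atom parameters in Angstrom and kcal/mol:
sigma_O, eps_O = 3.16, 0.185   # water oxygen (placeholder values)
sigma_N, eps_N = 3.36, 0.145   # hBN nitrogen (placeholder values)

sigma_NO, eps_NO = combine(sigma_N, eps_N, sigma_O, eps_O)
```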
Partial charge calculations.
To calculate the partial charges, the geometry of monocrystalline and bicrystalline nanoporous hBN membranes was optimized using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, leading to a maximum force of 0.0003 hartree/bohr on the atoms. The DFT calculations were performed using the cp2k package 55 and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional 56 with double zeta (short range) valence basis sets for the B and N atoms. 57 The nuclei and core electrons were modeled using the Goedecker-Teter-Hutter pseudopotentials. 58 Subsequently, partial charges on B and N atoms were calculated using the density derived atomic point (DDAP) charge algorithm developed by Blöchl. 59 After calculating the partial charges, the electrostatic potential plots in Figures 5 and S6 were prepared by calculating the electrostatic interaction between a unit test charge and the nanoporous hBN membrane (see Figure S3a in Section S5) using the group/group command in LAMMPS. The same group/group command was also used to plot the contours of the interfacial interactions between a single, tangential water molecule and the nanoporous hBN membrane in Figures 12 and 13. The simulation setup used to calculate these interactions is shown in Figure S3b in Section S5.
RESULTS AND DISCUSSION
GB misorientation angles and nanopore shapes in hBN.
In this work, we modeled water and ion permeation through four different hBN membranes: (i-iii) bicrystalline nanoporous hBN with symmetric tilt GBs having misorientation angles (θ) of (i) 13.2°, (ii) 21.8°, and (iii) 32.2° and (iv) monocrystalline nanoporous hBN (i.e., without any GB), as shown in Figure 2. We chose these three misorientation angles because they cover varying linear dislocation densities (i.e., the number of (5|7) dislocation pairs on the GB per unit length) from low to high (~0.91 nm⁻¹, 1.5 nm⁻¹, and 2.24 nm⁻¹).
Nanopores were considered at the GB, because it is more likely for extended defects, such as nanopores, to form there. We considered roughly triangular nanopores with nitrogen-terminated edges; the presence of the GB also affects the nanopore area and shape. Later in the article, we also discuss the water desalination performances of these nanopores after normalization by the nanopore area. In all the considered cases, the nanopore concentration (i.e., pore density) in the hBN layer was ~0.09-0.10 nm⁻² (see the caption of Figure 2). This is a large value considering that almost 10¹⁷ nanopores would be present in 1 m² area of the membrane. Nevertheless, our estimates of the per-pore permeance/permeability can simply be multiplied by the actual pore density to obtain the membrane permeance/permeability, for membranes with a lower number of pores per unit area. 77 In Table 1, we list the areas of the nanopores considered, their equivalent circular diameters, and their aspect ratios, as quantified by the ratio of the minimum to maximum Feret diameter of the nanopore (see Section S7 for more details). The Feret diameter, sometimes also known as the caliper diameter, is the distance between the jaws of a caliper used to measure the size of an object. Note that the aspect ratio, as defined above, lies between 0.0 and 1.0. The limiting value of 1.0 corresponds to a circle and that of 0.0 to a line segment; the aspect ratio of an equilateral triangle, equaling the ratio of its height to its side length, is √3/2 ≈ 0.87. Thus,
the lower the aspect ratio of a nanopore, the more elongated it is. From Table 1, we see that the extent of elongation of the considered nanopores is in the order: monocrystalline hBN < 13.2°-misoriented hBN < 21.8°-misoriented hBN < 32.2°-misoriented hBN.
Partial charge calculations and validation of the resultant electrostatic potential with DFT.
As discussed above, hBN membranes used for water desalination have intentionally induced geometrical defects such as nanopores and inadvertently induced geometrical defects such as
GBs. These geometrical defects in hBN alter the partial charges on B and N atoms, ultimately affecting the electrostatic interactions between water and hBN. To quantify this effect, we calculated the partial charges on B and N atoms in the hBN membranes considered here using the DFT-based DDAP charge algorithm (see Computational Methods for more details).
Specifically, partial charges were calculated using nine different sets of DDAP parameters, as described in Section S8. Here, we focus on cases 1 (the default DDAP parameters) and 4 (the "best" DDAP parameters, see below). The contour plots of the partial charges calculated using the default and best DDAP parameters are shown in Figure 3a-d and Figure 4a-d, respectively. To validate the DFT-predicted partial charges, we also calculated the electrostatic potential generated by the total DFT charge density (electrons and ions) in the hBN membrane. This was achieved by writing a cube file using the V_HARTREE_CUBE section in cp2k. The electrostatic potential values at points directly above the center of the nanopore were plotted against the perpendicular distance from the surface of the hBN sheet for both monocrystalline and bicrystalline configurations, as shown in Figure 5. To obtain the potential created by the partial charges on B and N atoms, a unit test charge was placed at the center of the nanopore of the hBN membrane, as depicted in Section S5. This unit test charge was moved in the direction perpendicular to the hBN surface, and the electrostatic interactions between the hBN layer and the test charge were obtained using single-point calculations in the LAMMPS package. 41 The resultant electrostatic potential was compared with the DFT potential as shown in Figure 5. It can be inferred from Figure 5a-d that the electrostatic potentials calculated using the partial charges with default DDAP parameters do not agree with the DFT potential. In contrast, the electrostatic potentials calculated using the partial charges with the "best" DDAP parameters for nanoporous monocrystalline and bicrystalline (21.8°- and 32.2°-misoriented) hBN (see Figure 5a,c-d) are in excellent agreement with the DFT potential.
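The single-point test-charge scan can be sketched by direct Coulomb summation over the partial charges (a non-periodic simplification of the actual LAMMPS calculation, which also includes PPPM long-range terms); the charges and positions below are hypothetical:

```python
import math

K_E = 332.0637  # Coulomb constant in kcal*Angstrom/(mol*e^2), LAMMPS 'real' units

def potential_energy(charges, positions, test_pos, q_test=1.0):
    """Electrostatic energy (kcal/mol) between a unit test charge and a set of
    point partial charges, by direct summation (no periodic images)."""
    e = 0.0
    for q, (x, y, z) in zip(charges, positions):
        r = math.dist((x, y, z), test_pos)
        e += K_E * q_test * q / r
    return e

# Hypothetical +/- partial-charge pair straddling the pore center:
charges = [0.4, -0.4]
positions = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
# Scan the test charge along z above the pore center:
profile = [potential_energy(charges, positions, (0.0, 0.0, z)) for z in (1.0, 2.0, 4.0)]
```

For this symmetric pair the contributions cancel at every height, so the scan is identically zero; a real (asymmetric) charge distribution yields the nonzero, symmetric profiles of Figure 5.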
In the case of the 13.2°-misoriented bicrystalline hBN membrane, there is a minor deviation between the DFT potential curve and the electrostatic potential curve calculated using the partial charges with the best DDAP parameters.
We show in Section S8 that it is not possible to eliminate these differences by tuning the partial charges. Nevertheless, later in the article, we show that this minor deviation between the DFT and partial-charge-based potentials does not affect the main conclusion of our work. It is also seen in Figure 5 that the electrostatic potential is the highest at the center of the pore (the hBN layer is placed at z = 0 Å), with a symmetric drop in potential, on either side, as one moves away from the hBN layer.

… considered an elongated nanopore. Since not only the size of the nanopore, but also its shape, affects the permeation of water, 15 it is reasonable that the water flow rate predicted by Garnier et al. 17 is much higher than that predicted by Gao et al. 18 It can also be seen in Figure 7 that … (see Table 1). On the other hand, the water flow rate of the higher-angle misoriented bicrystalline hBN membranes is comparable with that for the monocrystalline configuration. This observation holds despite the fact that the nanopores in the higher-angle misoriented bicrystalline hBN membranes had a lower area (equivalent circular diameter) than the nanopore considered in monocrystalline hBN, thus indicating that GBs intrinsically enhance water permeation through nanoporous hBN. The underlying reason could be attributed to the lower aspect ratio of the nanopores in the 21.8°- and 32.2°-misoriented bicrystalline hBN, as compared to the nanopore in monocrystalline hBN, as seen in Table 1. … considering similar pore areas as in our study (25-30 Å², see Table 1).
Figure 9.
Water permeability at 46.87 MPa effective pressure for monocrystalline and bicrystalline hBN membranes.
Comparison of our results with experimental data and avenues for future work.
The calculated values of the water permeability for monocrystalline and bicrystalline hBN membranes are in the range of 12.0 to 17.9 L cm⁻² day⁻¹ MPa⁻¹, which is several times higher than those of current commercial RO membranes (~0.1 L cm⁻² day⁻¹ MPa⁻¹, see ref. 6) and better than that of a graphene filter (~5.9 L cm⁻² day⁻¹ MPa⁻¹ measured in experiment 9).
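For comparison across studies, permeances quoted in kg m⁻² s⁻¹ MPa⁻¹ (as in ref. 18, cited earlier) can be converted to L cm⁻² day⁻¹ MPa⁻¹, assuming a water density of 1 kg/L:

```python
def kg_m2_s_to_L_cm2_day(p):
    """Convert kg m^-2 s^-1 MPa^-1 to L cm^-2 day^-1 MPa^-1 (water at 1 kg/L)."""
    # 1 kg m^-2 s^-1 = 1 L m^-2 s^-1 = 1e-4 L cm^-2 s^-1 = 8.64 L cm^-2 day^-1
    return p * 1e-4 * 86400.0

# The 0.132-2.752 kg m^-2 s^-1 MPa^-1 range quoted earlier maps to
# ~1.1-23.8 L cm^-2 day^-1 MPa^-1, bracketing the 12.0-17.9 values above.
low, high = kg_m2_s_to_L_cm2_day(0.132), kg_m2_s_to_L_cm2_day(2.752)
```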
However, as mentioned before, the pore density considered in our work is ~10¹⁷ m⁻². This value, although comparable to what is considered in other simulation studies, is about an order of magnitude higher than the value of 10¹⁶ m⁻² achieved experimentally in graphene. 9 This observation indicates that one may reasonably achieve anywhere between 1. …

Quantifying the free energy barriers for water permeation.

When water molecules pass through a nanopore in the hBN membrane, they have to surmount a free energy barrier. To further understand the water permeation mechanism through nanopores, we calculated the free energy profile or potential of mean force (PMF) of water molecules using the Boltzmann sampling method. 18,26,79 According to this method, the free energy or PMF is given as F(z) = −RT ln ρ(z), where R represents the universal gas constant, T represents the temperature, and ρ(z) represents the density at position z in the system. After obtaining the water density profiles from an equilibrium simulation with equal feed and permeate pressures of 1 bar, the free energy profiles were plotted for nanoporous monocrystalline and bicrystalline hBN membranes in Figure 10. It can be inferred from Figure 10 that the free energy profile of the water molecules is approximately constant far away from the membrane, which is placed at z = 0, in all cases. At the position of the membrane (z = 0), a large increase in the free energy is observed. This free energy enhancement at the nanopore represents the energy barrier for water permeation from the feed side to the permeate side. It can be observed from Figure 10 that the lower-angle (13.2°) misoriented bicrystalline hBN membrane shows a lower energy barrier, and hence a higher water flux, than the higher-angle misoriented bicrystalline configurations. In contrast, the 21.8°-misoriented bicrystalline hBN membrane shows a higher energy barrier, such that the water flux is lower.
Overall, the trends presented by the free energy barriers (21.8° > 0° > 32.2° > 13.2°) in Figure 10 match the trends presented by the water fluxes (21.8° < 0° < 32.2° < 13.2°), where a misorientation of 0° indicates nanoporous monocrystalline hBN. Moreover, these results support the observation that, even after accounting for differences in nanopore areas, the 13.2°-misoriented bicrystalline hBN allows increased water flux through the hBN membrane. In this regard, future work could investigate the coupled role of temperature and grain boundaries in modulating water and ion transport through 2D nanopores, by simulating the desalination process at various system temperatures. Such an approach would enable the estimation of activation barriers for water permeation using an alternative method, and the ensuing results could then be compared with the PMF-based approach presented here.
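The Boltzmann-inversion step above can be sketched as follows (with a synthetic density profile; the actual profiles come from the equilibrium MD density histograms):

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pmf(rho, T=298.15):
    """Boltzmann inversion F(z) = -R*T*ln(rho(z)), in J/mol, referenced to the
    first (bulk) bin so that F = 0 far from the membrane."""
    rho = np.asarray(rho, dtype=float)
    return -R * T * np.log(rho / rho[0])

# Synthetic density profile: bulk value far away, depleted at the pore (z = 0):
profile = pmf([1.0, 1.0, 0.5, 0.2, 0.5, 1.0, 1.0])
barrier = profile.max()   # free energy barrier at the most depleted bin
```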
Effect of GBs on ion rejection by nanoporous hBN.
Apart from the water flux, the extent of rejection of Na⁺ and Cl⁻ ions from water is a crucial criterion for any desalination membrane.
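The per-run rejection percentages reported below amount to counting the feed-side ions that never reach the permeate side; a minimal sketch with hypothetical final coordinates:

```python
def rejection_percent(final_z, z_membrane=0.0):
    """Percent of ions (initially on the feed side, z < z_membrane) that did
    not cross the membrane by the end of the run."""
    crossed = sum(1 for z in final_z if z > z_membrane)
    return 100.0 * (1.0 - crossed / len(final_z))

# Example: 22 Na+ ions, one of which permeated (z > 0) during the run:
single_run = rejection_percent([-10.0] * 21 + [5.0])   # ~95.5% for that run
# Averaged over five runs in which only one shows a permeation event:
avg = (4 * 100.0 + single_run) / 5                     # > 99%
```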
We counted the percentage of Na⁺ and Cl⁻ ions that were rejected in each simulation run for various membrane configurations at different effective pressure values, as summarized in Tables 2 and 3, respectively. It can be deduced from Table 2 that only in the monocrystalline membrane configuration is the sodium rejection greater than 99% (corresponding to a maximum of a single Na⁺ ion permeating in one out of five independent simulation runs). For the bicrystalline membranes, the Na⁺ rejection follows the order: 32…

… Figure 11. The results for the water flow rate are also tabulated in Table 4 for the different sets of partial charges considered at 196.87
MPa effective pressure. It can be inferred from Figure 11 and Table 4 that, in all the nanoporous hBN membranes, the water flow rate is significantly affected by the interfacial electrostatic interactions. The values of the water flow rate were highest when the altered partial charges on B and N atoms were calculated using DFT-based simulations and the DDAP method, whereas the values were lowest when bulk charges (charges on monocrystalline hBN nanosheets) on B and N atoms were used. (Note that, in the latter case, the hBN membrane is charged, due to the presence of an unequal number of B and N atoms in it. Accordingly, MD simulation packages add a uniform background charge to the system such that the net charge on the simulation box is zero.) The water flow rates calculated without considering the electrostatic interactions (i.e., considering zero partial charges on B and N atoms) were found to lie in between cases (i) and (ii). We also see in Table 4 that a maximum of 69.1% improvement in water flow rate was observed when compared with the flow rate predicted using zero partial charges, and a 297.4% improvement was observed when compared with the flow rate predicted using bulk charges. Overall, we conclude that the calculated partial charges must be sufficiently accurate, because the water flow rate is significantly affected by the interfacial electrostatic interactions. Moreover, the water flow rate reduces significantly if the hBN sheet is charged. Note that, in our simulations, the monocrystalline, 13.2°-misoriented bicrystalline, and 21.8°-misoriented bicrystalline hBN configurations have 4 fewer B atoms than N atoms, leading to a surface charge density of -0.051 to -0.059 C m⁻². On the other hand, the 32.2°-misoriented hBN membrane has 5 fewer B atoms than N atoms, resulting in -0.066 C m⁻² of surface charge.
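The quoted surface charge densities follow from the B/N imbalance over the membrane cross-section; in the sketch below, the partial-charge magnitude q and the atom counts are hypothetical placeholders (the actual DFT-derived charges vary from atom to atom):

```python
E = 1.602e-19   # elementary charge, C

def surface_charge_density(n_B, n_N, q, area_A2):
    """Net charge per area (C/m^2) for a membrane with n_B boron atoms carrying
    +q e and n_N nitrogen atoms carrying -q e, over area_A2 (in Angstrom^2)."""
    net_e = (n_B - n_N) * q               # net charge in units of e
    return net_e * E / (area_A2 * 1e-20)  # Angstrom^2 -> m^2

# 4 fewer B than N atoms over the ~31 x 31 Angstrom cross-section; q = 0.77 e
# is a placeholder chosen to land in the reported -0.051 to -0.059 C/m^2 range:
sigma = surface_charge_density(96, 100, 0.77, 31.0 * 31.0)
```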
It can also be seen from Table 4 that the water flow rate is highest for the 13.2°-misoriented bicrystalline membrane, compared with the other membrane configurations, even when zero or bulk charges are considered. Therefore, the water flow rate is always highest for the 13.2°-misoriented bicrystalline membrane, irrespective of whether DFT, bulk, or zero charges are considered for the B and N atoms. This finding indicates that our conclusion regarding the water permeability being highest through the 13.2°-misoriented bicrystalline hBN membrane is not affected by the minor difference between the DFT and partial-charge-based potentials seen in Figure 5b. To further investigate the physics behind the effect of interfacial electrostatic interactions, we plotted the contours of interfacial interactions for nanoporous monocrystalline and 32.2°-misoriented bicrystalline hBN membranes using different sets of partial charges, as shown in Figures 12 and 13. To plot the contours of interfacial interactions between the nanoporous hBN membrane and a water molecule, initially, a water molecule was placed at the corner of the nanoporous hBN membrane at a distance of 3 Å above the surface, as depicted in Section S5.
Subsequently, the water molecule was moved in the lateral direction and the interaction energy was calculated on a grid above the hBN surface. The interfacial interaction contours for the 13.2°- and 21.8°-misoriented bicrystalline membranes are not depicted, as they show similar trends to those observed for the 32.2°-misoriented bicrystalline membrane. It can be seen from Figures 12 and 13 that the partial charges cause positive and negative patches of electrostatic potential in the vicinity of the nanopore. Depending on the magnitude of these patches, water permeation is enhanced (using DDAP-based DFT partial charges) or impeded (using bulk partial charges).
Comparing panels b-c in Figure 12 with panels d-e in the same figure, we see that the water-nanoporous monocrystalline hBN interaction energy is lower when using DDAP-based DFT partial charges (Figure 12b-c) than when using bulk partial charges (Figure 12d-e). A similar conclusion is reached for the case of 32.2°-misoriented bicrystalline nanoporous hBN, by examining panels b-c in Figure 13 and comparing them with panels d-e in the same figure.
Thus, the presence of an optimal level of interfacial Coulombic interactions enhances water transport to the surface, and through to the other side of the nanopore, as in the case with DDAP-based partial charges. However, when the interfacial Coulombic interactions are too high, as in the case with bulk partial charges, the water molecules get trapped at the nanopore mouth, leading to a reduced water permeation rate.
Figure 12.
Interfacial interaction energy contours (in kcal/mol) for a tangential water molecule at 3 Å from the membrane, using different sets of partial charges for a nanoporous monocrystalline hBN membrane: (a) LJ interaction energy, (b) electrostatic potential energy calculated using DDAP-based DFT charges, (c) total potential energy calculated using DDAPbased DFT charges, (d) electrostatic potential energy calculated using bulk charges, and (e) total potential energy calculated using bulk charges. Figure 13. Interfacial interaction energy contours (in kcal/mol) for a tangential water molecule at 3 Å from the membrane, using different sets of partial charges for a 32.2°-misoriented bicrystalline nanoporous hBN membrane: (a) LJ interaction energy, (b) electrostatic potential energy calculated using DDAP-based DFT charges, (c) total potential energy calculated using DDAP-based DFT charges, (d) electrostatic potential energy calculated using bulk charges, and (e) total potential energy calculated using bulk charges.
CONCLUSIONS
For water desalination membranes, large-area 2D materials are required, which inevitably will contain grain boundaries (GBs) as geometrical defects. In this work, we investigated the role of GBs and interfacial electrostatic interactions in modulating the desalination performance of bicrystalline hBN membranes using classical MD simulations supported by quantum-mechanical DFT calculations. Before investigating the desalination performance, we calculated the partial charges on each B and N atom in hBN using DFT and validated these partial charges with the help of electrostatic potential plots. The electrostatic potential calculated using DFT and that using partial charges are in good agreement for monocrystalline and bicrystalline hBN configurations. By investigating water and ion permeation using MD simulations, we concluded that, although the lower-angle (13.2°) misoriented bicrystalline hBN membrane showed a ~30% improved water flow rate, the Na⁺ rejection was reduced by ~6% as compared to monocrystalline hBN. The finding regarding increased water flow persisted even after correcting the water flow rates for the nanopore area, i.e., considering the water flux through the investigated nanopores. In fact, the water flow rate and flux through the higher-angle (21.8° and 32.2°) misoriented bicrystalline hBN were found comparable to those through nanoporous monocrystalline hBN, despite their smaller nanopore areas. We showed that these nanopores have a lower aspect ratio, thus pointing to the role of the nanopore shape in modulating water flux, as well as ion rejection. Indeed, the increased water flow rate/flux was accompanied by a significant drop in Na⁺ rejection by the membrane.
We also deduced that the values of the water flow rate were highest when altered partial charges on B and N atoms were calculated using DDAP-based DFT calculations, whereas the values were lowest when bulk partial charges (i.e., the charges in a monocrystalline hBN nanosheet) on B and N atoms were used. This implies that surface charge on the membrane reduces the rate of water permeation. Additionally, the water flow rates calculated without considering interfacial electrostatic interactions, i.e., by taking zero partial charges on B and N atoms, were in between the two cases discussed above. Overall, our investigation of the role of GBs and interfacial electrostatic interactions should inform the design of large-area nanoporous hBN membranes for water desalination.
CONFLICTS OF INTEREST
There are no conflicts of interest to declare.

Figure S1. Water flow rate for the nanoporous monocrystalline hBN membrane at 50 MPa feed pressure, considering 500 and 2000 water molecules on the permeate side.

Figure S2. Convergence plots during equilibration for the (a) total energy and (b) system temperature.
S2. Convergence of energy and temperature during equilibration
S3. Parameters for the LJ and Coulombic interatomic potentials
The parameters for LJ and Coulombic interatomic potentials were adapted from various studies in the literature and are mentioned in Table S1. S-3 Figure S3. Snapshot of a nanoporous hBN membrane (boron in red and nitrogen in blue) depicting (a) a test charge atom (in yellow) at the centre of the nanopore and (b) a tangential water molecule at the corner of the membrane at a distance of 3 Å above the surface of the hBN layer.
S5. Model used to calculate the interfacial interactions between a nanoporous hBN membrane and a unit test charge/tangential water molecule
S6. Nomenclature and repeat length of the grain boundaries (GBs)
The nomenclature of a GB is illustrated in Figure S4. A GB can be described by its misorientation angle (θ = θL + θR), where θL and θR represent the rotation angles of the left and right crystals, respectively. Note that, for symmetric tilt GBs, θL = θR = θ/2. 30

Figure S4. The structure of a symmetric tilt GB in hBN having a misorientation angle of 21.8°. The misorientation angle (θ = θL + θR = 10.9° + 10.9° = 21.8°) describes the misalignment between the two crystals. The vectors (2,1) and (1,2) represent the two periodic translation vectors for the left crystal (nL, mL) and for the right crystal (nR, mR) along the GB. The repeat vector or periodic length (d) of the GB is determined by the periodic translation vectors of the left and right crystals, respectively.
The GBs in hBN have periodically arranged defects (dislocations, i.e., (5|7) pairs), as shown in Figure S4. In the case of periodically arranged defects, an appropriate way to describe the GB is using two periodic translation vectors, one each for the left crystal (nL, mL) and for the right crystal (nR, mR) of the GB along the defect direction, i.e., as (nL, mL)|(nR, mR), which is shown in Figure S4. 30,31 The misorientation angle and the periodic translation vectors are related by the following formula: θ = tan⁻¹[√3 nL/(nL + 2mL)] + tan⁻¹[√3 nR/(nR + 2mR)]. 32 The periodic (i.e., repeat) length for the symmetric tilt GBs, d, is calculated using d = a0√(mL² + mLnL + nL²) = a0√(mR² + mRnR + nR²), where a0 is the hBN lattice constant.
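The repeat-length formula can be checked directly; for the (2,1)|(1,2) GB of Figure S4 it gives d = √7·a0 (the value a0 ≈ 2.50 Å used below is an assumed, approximate hBN lattice constant):

```python
import math

def gb_repeat_length(n, m, a0=2.50):
    """Periodic length d = a0*sqrt(m^2 + m*n + n^2) of a symmetric tilt GB,
    for a periodic translation vector (n, m); a0 in Angstrom (assumed ~2.50)."""
    return a0 * math.sqrt(m * m + m * n + n * n)

d_left = gb_repeat_length(2, 1)   # left crystal, (2,1)
d_right = gb_repeat_length(1, 2)  # right crystal, (1,2): same length, as required
```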
S7. Calculation of the areas, equivalent circular diameters, and aspect ratios of the various nanopores considered
In each case, the nanopore area was estimated by considering the B and N atoms as discs of radius equal to the van der Waals radius of the respective atom 33 (1.92 Å for B and 1.55 Å for N) in the plane of the membrane (Figure S5). To this end, a code written in MATLAB R2021a was used, whereby the number of pixels occupied by the nanopore, after excluding the B and N discs, was used to determine the area, A, of the nanopore using the MATLAB functions bwconncomp and regionprops. A similar approach has been used in other studies to determine the nanopore area. 34,35 Once the area of the nanopore was determined, the equivalent circular diameter, d, of the nanopore was calculated as d = √(4A/π). The maximum (dFeret,max) and minimum (dFeret,min) Feret diameters of each nanopore were also determined using the regionprops function in MATLAB (Figure S5). Subsequently, the aspect ratio, α, was calculated as α = dFeret,min/dFeret,max.
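A Python analogue of this MATLAB procedure (rasterize the membrane plane, mask the atomic discs, then measure area, equivalent diameter, and the Feret-based aspect ratio); the test geometry below is a synthetic open square window, not an actual hBN pore:

```python
import numpy as np

def pore_metrics(atom_xy, radii, xlim, ylim, px=0.05):
    """Area (A), equivalent circular diameter d = sqrt(4A/pi), and aspect ratio
    alpha = d_Feret,min / d_Feret,max of the open (non-disc) region."""
    xs = np.arange(xlim[0], xlim[1], px)
    ys = np.arange(ylim[0], ylim[1], px)
    X, Y = np.meshgrid(xs, ys)
    open_mask = np.ones_like(X, dtype=bool)
    for (x0, y0), r in zip(atom_xy, radii):
        open_mask &= (X - x0) ** 2 + (Y - y0) ** 2 > r * r
    area = open_mask.sum() * px * px
    pts = np.column_stack([X[open_mask], Y[open_mask]])
    widths = []  # Feret widths from projections onto sampled directions
    for a in np.linspace(0.0, np.pi, 180, endpoint=False):
        proj = pts @ np.array([np.cos(a), np.sin(a)])
        widths.append(proj.max() - proj.min())
    return area, np.sqrt(4.0 * area / np.pi), min(widths) / max(widths)

# Synthetic check: an empty 2 x 2 window (no atoms) should give A ~ 4 and the
# aspect ratio of a square, 1/sqrt(2) ~ 0.71 (min Feret = side, max = diagonal):
A, d_eq, alpha = pore_metrics([], [], (0.0, 2.0), (0.0, 2.0))
```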
S8. Tuning of partial charges and validation of the resultant electrostatic potential for 13.2°-misoriented bicrystalline nanoporous hBN
To eliminate the differences between the electrostatic potentials obtained in molecular dynamics (MD) simulations and density functional theory (DFT) calculations, we tuned the partial charges by altering the parameters for the density derived atomic point (DDAP) charges method. 36 The nine sets of parameters that were investigated are tabulated in Table S2. Note that the parameters denoted as "case 1" are the default parameters for the DDAP method in the cp2k package. For each case, the partial charges on B and N atoms were evaluated, and the corresponding electrostatic potentials were determined using single-point calculations in LAMMPS and plotted in Figure S6. Apart from visually comparing the partial-charge-based electrostatic potential with the DFT-based potential in Figure S6, the root-mean square deviation (RMSD) between the two cases was also calculated for each set of parameters and tabulated in Table S2. It can be inferred from the RMSD values that the set of parameters described in case 4 ("best" parameters) shows the minimum deviation, and the calculated partial charges corresponding to these parameters were used for investigating the desalination performance of 13.2°-misoriented bicrystalline nanoporous hBN. The same set of customized parameters, as described in case 4, was also used for obtaining the partial charges for B and N atoms in monocrystalline and bicrystalline (21.8°-and 32.2°-misoriented) nanoporous hBN membranes.
Figure S6. Electrostatic potential (in orange) as a function of the out-of-plane distance from the centre of a 13.2°-misoriented bicrystalline hBN membrane for different sets of parameters using density derived atomic point charges: (a-i) cases 1 to 9, respectively. The DFT potential is shown in purple in each case.
Table S2. Different sets of parameters investigated for obtaining the DDAP charges and the resultant RMSD between the DFT-predicted and MD-predicted electrostatic potentials.
We also considered the monocrystalline hBN membrane to compare our findings with previously reported results and to quantify the effect of GBs on the desalination performance of nanoporous hBN. The nomenclature of the GB structures considered and details regarding their repeat lengths are described in Section S6. The bicrystalline hBN configurations were created using the sewing method, 60 which generates pairs of pentagon-heptagon (5|7) dislocations at the junction of the two misoriented crystals of hBN, as shown in Figure 2. Similar types of GB structures in hBN and graphene have been seen experimentally 61-65 and predicted theoretically 49,66-69 in various studies.
We considered nanopores with nitrogen-terminated edges, because triangular, boron-deficient defects are more likely to form in hBN, as seen experimentally 70-72 and predicted theoretically 73,74 in previous studies. For nanoporous monocrystalline hBN, we considered a nanopore formed by removing 10 B atoms and 6 N atoms from the hBN lattice, leading to a total of 16 removed atoms (Figure 2a,b). Indeed, in previous work, Govind Rajan et al. 73 cataloged the most probable nanopore shapes in hBN using kinetic Monte Carlo simulations, DFT calculations, and graph theory, and predicted triangular nanopores with nitrogen atoms at their edges to be the most prevalent in hBN. Further, Kozawa et al. 74 predicted that triangular nanopores are the most kinetically favorable defects in hBN when a perfect square number of atoms (16 here) is removed. The choice of a triangular nanopore in hBN is also substantiated by the experimental observation of triangular vacancy defects in hBN. 75,76 A triangular nanopore formed by removing 25 atoms (the next perfect square after 16) was not considered, because of the trade-off between an increased water flux and a reduced ion rejection: while the water flux can be improved by considering a larger nanopore, the ion-rejection ability of the membrane would severely deteriorate in such a scenario. Thus, to assess the overall efficiency of a nanoporous membrane for desalination purposes, the trade-off between water flux and ion rejection is crucial. For the 13.2°-misoriented bicrystalline hBN, the same numbers of B and N atoms as in the monocrystalline case (10 and 6, respectively) were removed to form the nanopore (Figure 2c,d); yet the area of the nanopore is slightly larger (27.6 Å² versus 26.4 Å²) due to the presence of the GB. The calculation of the nanopore area is explained in Section S7.
Figure 2. Snapshots of monocrystalline and bicrystalline hBN without (a,c,e,g) and with (b,d,f,h) a roughly triangular, N-terminated nanopore: (a,b) monocrystalline hBN, (c,d) 13.2°-misoriented bicrystalline hBN, (e,f) 21.8°-misoriented bicrystalline hBN, and (g,h) 32.2°-misoriented bicrystalline hBN. Atoms on and inside the black triangular outline are removed to create the nanopore in each case. Red and blue spheres represent boron and nitrogen atoms, respectively. The pore area is indicated in each case. Panels e and g have a smaller width than panels a and c, because a larger GB repeat length (in the vertical, i.e., y direction) necessitated the use of a smaller width (in the horizontal, i.e., x direction) to maintain roughly equal nanopore concentrations of ~0.09-0.10 nm⁻².

Note that our estimate of 26.4 Å² for the area of the nanopore in monocrystalline hBN, formed by removing 10 B and 6 N atoms from the hBN lattice, is similar to the estimate of 29.1 Å² obtained by Davoy et al. 15 For the 21.8°-misoriented bicrystalline hBN, 9 B atoms and 5 N atoms were removed to form a roughly triangular nanopore (Figure 2e,f); the resultant nanopore area is the smallest, at 23.6 Å². Finally, for the 32.2°-misoriented bicrystalline hBN, 10 B atoms and 5 N atoms were removed to form a roughly triangular nanopore (Figure 2g,h), leading to an intermediate nanopore area of 25.8 Å². It is evident that the presence of a GB alters the nanopore geometry even for similar numbers of removed atoms.
The partial charge distributions obtained using the default and best DDAP parameters are plotted in Figures 3a-d and 4a-d, respectively, for the four cases (i)-(iv) discussed above. It can be inferred from Figures 3b-d and 4b-d that, in bicrystalline hBN membranes, B atoms adopt negative charges and N atoms adopt positive charges when the DDAP parameters are altered from their default values to their best values (see Table S2 for the parameter values). In contrast, no such sign reversal in the partial charges of the B and N atoms takes place in the monocrystalline membrane when using the best DDAP parameters, as seen by comparing Figure 3a and Figure 4a. This sign reversal, found essential to reproduce the DFT-predicted electrostatic potential, as discussed below, could be caused by a polarizing electric field at the GB parallel to the hBN surface and perpendicular to the GB, and should be probed more deeply in the future.
Figure 3. Partial charge (in elementary charge units (e)) distributions calculated using the default DDAP parameters for various configurations: (a) monocrystalline, (b) 13.2°-misoriented bicrystalline, (c) 21.8°-misoriented bicrystalline, and (d) 32.2°-misoriented bicrystalline nanoporous hBN membranes. Five-membered rings are colored in magenta and seven-membered rings in green.
Figure 4. Partial charge (in elementary charge units (e)) distributions calculated using the "best" DDAP parameters for various configurations: (a) monocrystalline, (b) 13.2°-misoriented bicrystalline, (c) 21.8°-misoriented bicrystalline, and (d) 32.2°-misoriented bicrystalline nanoporous hBN membranes. Five-membered rings are colored in magenta and seven-membered rings in green.
Figure 5. Electrostatic potential as a function of the perpendicular distance from the surface of the membrane for various configurations: (a) monocrystalline, (b) 13.2°-misoriented bicrystalline, (c) 21.8°-misoriented bicrystalline, and (d) 32.2°-misoriented bicrystalline nanoporous hBN membranes. Values calculated using DFT (purple), partial charges based on the default DDAP parameters (grey), and partial charges based on the best DDAP parameters (orange) are shown. The hBN membrane is placed at z = 0 Å in each case.

Water flow rate through nanoporous monocrystalline and bicrystalline hBN. After calculating and validating the partial charges, MD simulations were carried out to investigate the desalination performance of nanoporous monocrystalline and bicrystalline hBN membranes. The simulation system for water desalination is illustrated in Figure 1 and consists of a nanoporous hBN layer, two rigid graphene pistons, water molecules, and sodium (Na+) and chloride (Cl-) ions. To analyze the desalination performance of these membrane configurations, the number of water molecules that permeated through the membrane (Nw) was counted and plotted as a function of time at different feed pressures ranging from 50 to 200 MPa in Figure 6.

Figure 6. Plots of the number of water molecules that permeated through the nanoporous hBN membrane versus time for various configurations: (a) monocrystalline, (b) 13.2°-misoriented bicrystalline, (c) 21.8°-misoriented bicrystalline, and (d) 32.2°-misoriented bicrystalline hBN. The plots are shown at three different feed pressures: 50 MPa (orange), 100 MPa (gray), and 200 MPa (purple), with dotted lines showing linear fits. The slope and the goodness of fit (R² value) are indicated in each case.

After plotting Nw versus time, we calculated the water flow rate in each case to quantify the desalination performance. The water flow rate is defined as the number of permeated water molecules divided by the time taken for permeation.
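The slope-and-R² extraction behind the linear fits in Figure 6 can be sketched as follows (the time series below is synthetic; in practice t and Nw come from the MD trajectory):

```python
import numpy as np

def flow_rate_fit(t_ns, n_w):
    """Linear fit n_w = slope * t + b; returns (slope in molecules/ns, R^2)."""
    slope, intercept = np.polyfit(t_ns, n_w, 1)
    predicted = slope * np.asarray(t_ns) + intercept
    residual = np.sum((np.asarray(n_w) - predicted) ** 2)
    total = np.sum((np.asarray(n_w) - np.mean(n_w)) ** 2)
    return slope, 1.0 - residual / total

t = np.linspace(0.0, 2.0, 21)          # time in ns
n_w = 100.0 * t                        # perfectly linear permeation, 100 molec./ns
slope, r2 = flow_rate_fit(t, n_w)
print(round(slope, 3), round(r2, 3))   # 100.0 1.0
```

A high R² on the actual trajectories is what justifies reporting a single flow rate per pressure, since it indicates a constant driving force over the sampled window.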
In this work, the water flow rate was calculated using two methods, which are described in Section S9: (a) using the slope of a linear fit of the Nw versus time plot, and (b) using the total number of permeated water molecules divided by the total time taken for their permeation. After comparing the water flow rates obtained using the two methods in Section S9, we adopted method (a) in the rest of the article, because it better accounts for temporal fluctuations in the water flow rate. All the water flow rates and ion selectivities reported in this article were obtained as averages over 5 independent simulation runs using different initial aqueous configurations and velocity distributions. The error bars indicated in each case represent the standard deviations from such collections of 5 runs. To compare the calculated water flow rates with previously reported results, we plotted the obtained water flow rates as a function of the effective pressure drop and superimposed previously published results on the same plot, as illustrated in Figure 7. The effective pressure drop is given as P_eff = ΔP − ΔΠ, where ΔP and ΔΠ represent, respectively, the difference in the applied pressures and the difference in the osmotic pressures between the feed and permeate sides. Note that the ΔΠ term has been neglected in previous simulation work on coupled water and ion transport through nanoporous hBN. This term is calculated using the van't Hoff equation, ΔΠ = iRTΔc, where i is the van't Hoff factor (2, due to the complete dissociation of NaCl), Δc is the difference in the molar concentrations of the solute on the feed and permeate sides (610.863 mol/m³, i.e., 35.7 g/L), R is the universal gas constant (8.314 Pa m³ mol⁻¹ K⁻¹), and T is the temperature of the system (298.15 K), whence the calculated value of the osmotic pressure difference is 3.03 MPa.
The three feed-side pressures, i.e., 50 MPa, 100 MPa, and 200 MPa lead to effective pressure values of 46.87 MPa, 96.87 MPa, and 196.87MPa, respectively. Similar values of pressure drops have been considered in several theoretical studies.17,18 Moreover, such high feed pressures (closer to 50 MPa) are currently also being explored experimentally. 78 Nevertheless, later in the article, we also report water permeabilities, which are normalized by the pressure drop, such that our results are independent of the applied pressure.
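The effective-pressure arithmetic above can be verified directly from the quantities given in the text (note that ΔP is the feed pressure minus the 0.1 MPa permeate pressure):

```python
def effective_pressure(p_feed_mpa, p_permeate_mpa=0.1,
                       delta_c=610.863, temperature=298.15):
    """P_eff = (P_feed - P_permeate) - i*R*T*delta_c, returned in MPa."""
    i, gas_constant = 2, 8.314           # NaCl van't Hoff factor; Pa m^3/(mol K)
    delta_pi_mpa = i * gas_constant * temperature * delta_c / 1e6
    return (p_feed_mpa - p_permeate_mpa) - delta_pi_mpa

for p in (50, 100, 200):
    print(round(effective_pressure(p), 2))   # 46.87, 96.87, 196.87
```

The osmotic term iRTΔc evaluates to ~3.03 MPa, reproducing the three effective pressures quoted in the text.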
Figure 7. Water flow rate versus effective pressure for the nanoporous monocrystalline hBN membrane. The calculations in this work were carried out using two different interatomic potentials to model hBN flexibility: the Tersoff force field proposed by Albe et al., 45 with the results in purple color, and the valence force field proposed by Govind Rajan et al., 51 with the results in yellow color. Results from Garnier et al. 17 and Gao et al. 18 are shown in green and gray color, respectively. Lines are only visual guides and do not indicate any fit.

It can be inferred from Figure 7 that the water flow rates predicted in this work for nanoporous monocrystalline hBN are comparable with previously published work. Note that no previous calculations are available for water and ion permeation through nanoporous bicrystalline hBN. Nevertheless, variations from previously predicted water flow rates of nanoporous monocrystalline hBN were observed, due to the different force-field parameters employed and the shapes and sizes of the nanopores considered in previous work. For example, Gao et al. 18 considered the same nanopore used in this work but used a different water model (TIP3P), different Lennard-Jones (LJ) parameters for water-hBN interactions, different partial charges, and a different force field to model the flexibility of hBN (the universal force field). In contrast, Garnier et al. 17 used the TIP4P/2005 water model with the same LJ parameters as employed by Gao et al., 18 but
the force field used to model the flexibility of nanoporous monocrystalline hBN does not affect water permeation significantly, as seen from our results shown in purple (using the Tersoff force field proposed by Albe et al. 45) and yellow (using the valence force field proposed by Govind Rajan et al. 51). After comparing the water flow rates of nanoporous monocrystalline hBN membranes predicted by the different force fields, the Tersoff force field was used to model the flexibility of the nanoporous monocrystalline and bicrystalline hBN membranes in the rest of this work, because the force field by Govind Rajan et al. 51 cannot be used to model the B-B and N-N bonds present at GBs.

Effect of GBs on the water flow rate, water flux, and water permeability through nanoporous hBN. We next analyzed the effect of GBs on the desalination performance of nanoporous bicrystalline hBN membranes. As seen before, in Figure 6, the number of water molecules passing through the nanoporous monocrystalline and bicrystalline hBN membranes was plotted against the simulation time at different feed pressures. Subsequently, we calculated the water flow rate for the various bicrystalline hBN membrane configurations and plotted the values against the effective pressure, as shown in Figure 8a. One can infer from Figure 8a that the water flow rate follows an almost linear trend with the effective pressure: as the effective pressure increases, the water flow rate also increases for all the hBN membrane configurations. It is evident from Figure 8a that the lower-angle-misoriented bicrystalline hBN membrane (with a 13.2° misorientation angle) shows an improved water flow rate as compared to the monocrystalline hBN configuration, by ~30% at 46.87 MPa effective pressure.
This finding can be attributed to the larger equivalent circular diameter (5.93 Å) and lower aspect ratio (0.84) of the nanopore in 13.2°-misoriented bicrystalline hBN, as compared to the nanopore of diameter 5.80 Å and aspect ratio 0.91 in monocrystalline hBN (see Table 1).
Figure 8. (a) Water flow rate and (b) water flux versus effective pressure for nanoporous monocrystalline and bicrystalline hBN membranes.

As mentioned above, the nanopore shape significantly affects water and ion permeation through nanoporous hBN. Thus, we maintained an approximately triangular nanopore shape in all the bicrystalline configurations, as in the monocrystalline configuration. Nevertheless, due to this constraint, the sizes of the nanopores varied slightly in the bicrystalline configurations as compared to the monocrystalline configuration. To analyze the intrinsic effect of the GB configurations on the desalination performance of nanoporous bicrystalline hBN, the water flux (i.e., the water flow rate normalized by the nanopore area) was calculated and plotted as a function of the effective pressure in Figure 8b. It can be deduced from Figure 8b that the water flux follows a similar trend as the water flow rate. Nevertheless, for the 13.2°-misoriented bicrystalline hBN membrane, the improvement in the water flux at 46.87 MPa effective pressure was slightly smaller (~24%) than the improvement in the water flow rate. On the other hand, the water fluxes through the 21.8°- and 32.2°-misoriented bicrystalline hBN membranes were closer to the monocrystalline hBN case. The water permeability in L cm⁻² day⁻¹ MPa⁻¹, defined as the volume of water permeated per unit time, membrane area, and pressure, was also calculated at 46.87 MPa effective pressure and plotted in Figure 9. It can be seen from Figure 9 that the improvement in the water permeability of the 13.2°-misoriented bicrystalline hBN membrane was even more remarkable (~47%) at 46.87 MPa effective pressure, as compared to the improvement in its water flow rate.
This extra improvement in the permeability of the 13.2°-misoriented bicrystalline hBN membrane is attributed to its lower surface area (1000 Å²), as compared to the higher surface area of the monocrystalline hBN membrane (1136 Å²). On the other hand, the water permeabilities through the 21.8°- and 32.2°-misoriented bicrystalline hBN membranes are almost identical to that of the monocrystalline hBN membrane. Nevertheless, the water permeability estimate of ~12-18 L cm⁻² day⁻¹ MPa⁻¹ for nanoporous hBN is comparable in order of magnitude to estimates obtained for graphene, 6 MoS2, 23 and MOF 22 membranes, i.e., one may expect ~12-18 L cm⁻² day⁻¹ MPa⁻¹ of performance using a hBN membrane. (A better performance would require the generation of more pores per unit area in hBN.) In contrast, the experimentally measured values of water permeance for amino-functionalized nanoporous hBN are in the range of 14.9 to ~48 L cm⁻² day⁻¹ MPa⁻¹ for membranes of thickness 2000 nm and 400 nm, respectively. 12 The variation in the water permeability by a factor of 2-3 between the simulated and experimental results can be attributed to the larger sizes of the nanopores in the experimental samples. Indeed, the sizes of the nanopores in the experimentally synthesized hBN used by Chen et al. 12 are in the range of 8.0 to 18.3 Å, which are larger than the sizes of the nanopores considered in our work (5 to 6 Å). Note that the study of Chen et al. 12 considered lamellar hBN membranes, i.e., membranes synthesized using exfoliated hBN flakes. In addition, work is needed to understand how edge functionalization affects the sizes of nanopores in hBN, and consequently water and ion transport through nanopores in the material. The investigation of edge functional groups would also require the development of force-field parameters to describe functionalized hBN edges.
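The conversion from a flow rate in molecules/ns to a permeability in L cm⁻² day⁻¹ MPa⁻¹ can be sketched as follows. The flow rate used below (~25 molec./ns at 46.87 MPa through the 1136 Å² monocrystalline membrane) is an illustrative round number consistent with the ranges reported here, not the paper's exact input:

```python
AVOGADRO = 6.022e23
WATER_MOLAR_VOLUME = 18.015 / 0.997      # cm^3/mol for liquid water at ~25 C

def permeability(flow_molec_per_ns, area_angstrom2, p_eff_mpa):
    """Water permeability in L cm^-2 day^-1 MPa^-1."""
    vol_cm3_per_s = flow_molec_per_ns * 1e9 * WATER_MOLAR_VOLUME / AVOGADRO
    vol_l_per_day = vol_cm3_per_s * 86400 / 1000.0   # cm^3/s -> L/day
    area_cm2 = area_angstrom2 * 1e-16
    return vol_l_per_day / (area_cm2 * p_eff_mpa)

print(round(permeability(25.0, 1136.0, 46.87), 1))   # -> 12.2, within the ~12-18 range
```

This back-of-the-envelope check shows that flow rates of a few tens of molecules per nanosecond through a ~1000 Å² patch indeed correspond to the ~12-18 L cm⁻² day⁻¹ MPa⁻¹ permeabilities quoted above.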
Figure 10. Free energy profile (PMF) variation along the flow direction (z direction) for nanoporous monocrystalline and bicrystalline hBN membranes of varying misorientation angles.
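One standard way to obtain such a PMF (the paper does not state its exact procedure, so this is a hedged sketch) is from the equilibrium water density profile ρ(z) via F(z) = −kB·T·ln(ρ(z)/ρ_bulk):

```python
import numpy as np

KB_KCAL = 0.0019872   # Boltzmann constant in kcal/(mol K)

def pmf_from_density(rho_z, rho_bulk, temperature=298.15):
    """Free-energy profile in kcal/mol from a density profile (synthetic here)."""
    return -KB_KCAL * temperature * np.log(np.asarray(rho_z) / rho_bulk)

# Illustrative density profile (g/cm^3) with a depletion dip at the nanopore
rho = np.array([1.0, 0.8, 0.3, 0.8, 1.0])
print(np.round(pmf_from_density(rho, 1.0), 2))   # barrier of ~0.7 kcal/mol at the dip
```

In practice ρ(z) would be histogrammed from the MD trajectory, and the barrier height at the pore mouth correlates with the water flow rates compared in Figure 10.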
We analyzed the effect of electrostatic interactions on the desalination performance of monocrystalline and bicrystalline nanoporous hBN. To evaluate the effect of interfacial electrostatic interactions, we calculated the water flow rate with three different sets of partial charges on the B and N atoms in the hBN membrane: (i) partial charges calculated using DFT and the DDAP method, (ii) bulk partial charges on each B (q = +0.907 e) and N (q = -0.907 e) atom (these charges were calculated by Govind Rajan et al. 51 for monocrystalline hBN nanosheets), and (iii) zero charges on each B and N atom, i.e., no interfacial electrostatic interactions. After calculating the water flow rate for each set of partial charges, we plotted the results against the effective pressure for the various nanoporous monocrystalline and bicrystalline hBN membranes in Figure 11.
Figure 11. Water flow rate versus effective pressure for various configurations: (a) monocrystalline, (b) 13.2°-misoriented bicrystalline, (c) 21.8°-misoriented bicrystalline, and (d) 32.2°-misoriented bicrystalline nanoporous hBN membranes.
The demonstrated role of GBs and interfacial electrostatic interactions in modulating the desalination performance of nanoporous bicrystalline hBN informs the use of large-area hBN membranes in seawater desalination applications. Specifically, we find that the presence of GBs and of surface charge may be deleterious for, respectively, ion rejection and water permeation through nanoporous hBN membranes. Thus, monocrystalline nanoporous hBN should be preferred for synthesizing water desalination membranes. To conclude, we hope that future studies consider the effect of GBs, which are prevalent in large-area membranes, on the desalination performance of other nanoporous 2D materials as well, while realistically modeling interfacial electrostatic interactions.

SUPPORTING INFORMATION

Water flow rate for nanoporous monocrystalline hBN membrane at 50 MPa feed pressure when the permeate side has 500 and 2000 water molecules; convergence of energy and temperature during equilibration; parameters for the LJ and Coulombic interatomic potentials; choice of the force fields for hBN, water molecules, salt ions, and interfacial interactions between hBN and water; model used to calculate the interfacial interactions between a nanoporous hBN membrane and a unit test charge/tangential water molecule; nomenclature and repeat length of the grain boundaries; calculation of the areas, equivalent circular diameters, and aspect ratios of the various nanopores considered; tuning of partial charges and validation of the resultant electrostatic potential for 13.2°-misoriented bicrystalline nanoporous hBN; comparison of two different methods for water flow rate calculation; and comparison of the classical electrostatic potentials calculated using DDAP, Hirshfeld, Mulliken, and density derived electrostatic and chemical (DDEC) partial charges with the DFT-derived potential.
(2) Shannon, M. A.; Bohn, P. W.; Elimelech, M.; Georgiadis, J. G.; Mariñas, B. J.; Mayes, A. M. Science and Technology for Water Purification in the Coming Decades. Nature 2008, 452, 301-310.
(3) Xu, G.-R.; Xu, J.-M.; Su, H.-C.; Liu, X.-Y.; Lu-Li; Zhao, H.-L.; Feng, H.-J.; Das, R. Two-Dimensional (2D) Nanoporous Membranes with Sub-Nanopores in Reverse Osmosis Desalination: Latest Developments and Future Directions. Desalination 2019, 451, 18-34.
(4) Loh, G. C. Fast Water Desalination by Carbon-Doped Boron Nitride Monolayer: Transport Assisted by Water Clustering at Pores. Nanotechnology 2019, 30, 055401.
(5) Water Scarcity: Overview. World Wildlife Fund (WWF) Inc. https://worldwildlife.org/threats/water-scarcity (last accessed: February 2022).
(6) Cohen-Tanugi, D.; Grossman, J. C. Water Desalination across Nanoporous Graphene. Nano Lett. 2012, 12, 3602-3608.
(7) Homaeigohar, S.; Elbahri, M. Graphene Membranes for Water Desalination. NPG Asia Mater. 2017, 9, e427.
(8) Cohen-Tanugi, D.; Grossman, J. C. Nanoporous Graphene as a Reverse Osmosis Membrane: Recent Insights from Theory and Simulation. Desalination 2015, 366, 59-70.
where a = |a1| = |a2| = 2.5115 Å is the lattice vector length of hBN. 31 The periodic translation vectors and the repeat lengths corresponding to the three considered misorientation angles, i.e., 13.2°, 21.8°, and 32.2°, are
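For symmetric tilt GBs on a hexagonal lattice, both the misorientation angle and the periodic repeat length follow from an (n,m) matching vector of length a·sqrt(n² + nm + m²). A hedged sketch is given below; the specific (n,m) assignments are our illustrative choices that reproduce the three angles, and are not stated in the text:

```python
import math

A_HBN = 2.5115  # hBN lattice vector length in angstrom (from the text)

def misorientation_angle(n, m):
    """Misorientation of a symmetric (n,m)|(m,n) tilt GB, in degrees."""
    chiral = math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))
    return 2 * (30.0 - chiral)

def repeat_length(n, m):
    """Length of the (n,m) periodic translation vector, in angstrom."""
    return A_HBN * math.sqrt(n * n + n * m + m * m)

for n, m in [(3, 2), (2, 1), (3, 1)]:   # illustrative matching vectors
    print((n, m), round(misorientation_angle(n, m), 1), round(repeat_length(n, m), 2))
```

With these assignments, (3,2), (2,1), and (3,1) yield 13.2°, 21.8°, and 32.2°, respectively, matching the three GB angles considered in the paper.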
Figure S5. Calculation of the nanopore area for (a) monocrystalline hBN, (b) 13.2°-misoriented bicrystalline hBN, (c) 21.8°-misoriented bicrystalline hBN, and (d) 32.2°-misoriented bicrystalline hBN. The dashed red line indicates the minimum Feret diameter and the dashed blue line indicates the maximum Feret diameter.
As explained in the Results and Discussion section, three different feed-side pressures, i.e., 50 MPa, 100 MPa, and 200 MPa, were used, leading to effective pressure values of 46.87 MPa, 96.87 MPa, and 196.87 MPa, respectively, which are similar in magnitude to the values considered in previous theoretical studies. 17,18 The applied pressure was implemented in the form of a uniform force, calculated as f = PA/n, where P denotes the applied pressure, A the piston cross-sectional area, and n the number of atoms in the piston.

Details of the force fields used. In MD simulations, the selection of the force-field parameters is critical, as the accuracy of the simulation depends on them. In this work, a hybrid model (combining the Tersoff potential for hBN, 45 the TIP4P/2005 model for water, 46 and 12-6 LJ plus Coulombic interactions between hBN and water) was used to capture the interactions among the different atoms. The Tersoff model with parameters suggested by Albe et al. 45 was used to model the nanoporous monocrystalline and bicrystalline hBN membranes, as done previously by Loh in the context of desalination 4 and by other researchers. 47-49 The TIP4P/2005 water model 46 was used to represent water molecules, whereas Lennard-Jones (LJ) 50 and Coulombic interatomic potentials were used for all the intermolecular interactions. The parameters for the LJ and Coulombic interatomic potentials were taken from Konatham et al. 24 for carbon atoms, Govind Rajan et al. 51 for B and N atoms (only LJ parameters), and Zeron et al. 52 for the sodium and chloride ions; these are tabulated in Section S3. In previous work, it was shown that the LJ parameters proposed by Govind Rajan et al. for B and N atoms 51 are able to
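The per-atom piston force f = PA/n from the pressure-implementation paragraph above can be evaluated and converted to the kcal/(mol·Å) force unit used in LAMMPS "real" units. The piston area and atom count below are illustrative placeholders, since the actual values are not given in this excerpt:

```python
KCAL_MOL_PER_ANGSTROM_IN_N = 6.9477e-11   # 1 kcal/(mol*angstrom) expressed in newtons

def piston_force(p_mpa, area_angstrom2, n_atoms):
    """Force per piston atom in kcal/(mol*angstrom), from f = P*A/n."""
    force_newton = p_mpa * 1e6 * area_angstrom2 * 1e-20 / n_atoms
    return force_newton / KCAL_MOL_PER_ANGSTROM_IN_N

# e.g., 50 MPa applied over a hypothetical 1136 A^2 piston with 400 atoms
print(round(piston_force(50.0, 1136.0, 400), 4))   # -> 0.0204
```

Such a per-atom force would typically be applied with a constant-force fix on the rigid piston atoms during the non-equilibrium permeation runs.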
Table 1. Geometrical properties of the four nanopores considered in this work.

Configuration      Area (Å²)   Equivalent circular diameter (Å)   Aspect ratio (-)
Monocrystalline    26.4        5.80                               0.91
13.2°              27.6        5.93                               0.84
21.8°              23.6        5.49                               0.78
32.2°              25.9        5.74                               0.62
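The equivalent circular diameters in Table 1 follow directly from the pore areas via d = 2·sqrt(A/π); the aspect ratios, by contrast, use the minimum/maximum Feret diameters shown in Figure S5 and cannot be recovered from the area alone. A quick consistency check:

```python
import math

def equivalent_circular_diameter(area_angstrom2):
    """Diameter of a circle with the same area as the (triangular) nanopore."""
    return 2.0 * math.sqrt(area_angstrom2 / math.pi)

for area in (26.4, 27.6, 23.6, 25.9):
    print(round(equivalent_circular_diameter(area), 2))
```

The computed values reproduce the tabulated diameters (5.80, 5.93, 5.49, 5.74 Å) to within ±0.01 Å, the residual difference coming from rounding of the tabulated areas.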
Table 2. Percentage of Na+ ions rejected by the different membrane configurations at various effective pressures.

                     Na+ ion rejection (%)
Pressure (MPa)       Monocrystalline    Bicrystalline 13.2°   Bicrystalline 21.8°   Bicrystalline 32.2°
46.87                99.09 ± 2.03       92.73 ± 6.90          95.45 ± 0.00          97.27 ± 2.49
96.87                99.09 ± 2.03       92.73 ± 5.18          95.45 ± 0.00          97.27 ± 2.49
196.87               97.27 ± 2.49       92.72 ± 4.06          95.45 ± 0.00          95.45 ± 0.00

Effect of interfacial electrostatic interactions on water desalination performance. Although, intuitively, interfacial electrostatics should play an important role in modulating water and ion permeation, no previous study has investigated the effect of these electrostatic interactions on the desalination performance of nanoporous hBN. In this regard, the method used to assign partial charges to the membrane atoms, based on DFT calculations, is crucial. In previous work, authors have either used Mulliken population analysis 82 (e.g., Davoy et al. 15 and Garnier et al. 17) or the Hirshfeld charge scheme 83 (e.g., Gao et al. 18) to obtain the charges on the B and N atoms. Yet others have used the density derived electrostatic and chemical (DDEC) method 84 and the electrostatic-potential-fitting (ESP) method (e.g., Jafarzadeh et al. 10) to calculate partial charges. In previous work, we showed that ESP-based charges are not able to correctly predict the electrostatic potential above defects in hBN. 85 In Section S10, we show that DDAP (using default parameters), Hirshfeld, Mulliken, and DDEC partial charges are also not suitable for predicting the electrostatic potential above nanopores in hBN. Instead, only the DDAP charges with customized parameters are found to be accurate for this purpose.
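The rejection percentages and standard deviations in Table 2 are consistent with counting how many feed-side ions cross the membrane in each of the 5 independent runs. The feed-side count of 22 ions per species used below is an assumption inferred from the 95.45% (= 100·21/22) steps in the table, as the exact ion count is not stated in this excerpt:

```python
import statistics

def rejection_stats(ions_crossed_per_run, n_feed_ions=22):
    """Mean and sample standard deviation of per-run ion rejection (%)."""
    rejections = [100.0 * (1 - crossed / n_feed_ions)
                  for crossed in ions_crossed_per_run]
    return statistics.mean(rejections), statistics.stdev(rejections)

# e.g., one Na+ ion crossing in one of five runs reproduces 99.09 +/- 2.03
mean, std = rejection_stats([0, 0, 0, 0, 1])
print(round(mean, 2), round(std, 2))   # 99.09 2.03
```

Under this assumption, every entry in Tables 2 and 3 corresponds to 0-2 ions crossing per run, which underlines how few rejection failures occur even at 196.87 MPa.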
Table 3. Percentage of Cl- ions rejected by the different membrane configurations at various effective pressures.

                     Cl- ion rejection (%)
Pressure (MPa)       Monocrystalline    Bicrystalline 13.2°   Bicrystalline 21.8°   Bicrystalline 32.2°
46.87                100 ± 0.00         99.09 ± 2.03          100 ± 0.00            100 ± 0.00
96.87                100 ± 0.00         100 ± 0.00            100 ± 0.00            99.09 ± 2.03
196.87               99.09 ± 2.03       100 ± 0.00            100 ± 0.00            100 ± 0.00
Table 4. Water flow rate through monocrystalline and bicrystalline hBN membranes for different partial charges at 196.87 MPa effective pressure.

                     Water flow rate (molec./ns)       % Improvement in water flow rate using DDAP charges
Configuration        DFT       Zero      Bulk          Compared with zero charges   Compared with bulk charges
Monocrystalline      102.8     67.4      35.7          52.6                         188.3
13.2°                131.8     89.6      36.9          47.2                         257.4
21.8°                86.7      51.7      24.6          67.9                         253.3
32.2°                106.2     62.8      26.7          69.1                         297.4
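The "% improvement" columns of Table 4 are simple relative differences of the flow rates, as can be cross-checked from the tabulated values:

```python
def improvement(ddap, baseline):
    """Percentage improvement of the DDAP/DFT-charge flow rate over a baseline."""
    return 100.0 * (ddap - baseline) / baseline

rows = {                     # configuration: (DFT, zero, bulk) flow rates in molec./ns
    "Monocrystalline": (102.8, 67.4, 35.7),
    "13.2":            (131.8, 89.6, 36.9),
    "21.8":            (86.7, 51.7, 24.6),
    "32.2":            (106.2, 62.8, 26.7),
}
for name, (dft, zero, bulk) in rows.items():
    print(name, round(improvement(dft, zero), 1), round(improvement(dft, bulk), 1))
```

The recomputed percentages agree with the tabulated ones to within ~1%, the small residuals reflecting rounding of the tabulated flow rates.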
... Nanometer Graphene Pores: Comparison of Theory and Simulation. ACS Nano 2017, 11, 7974-7987.
(78) Davenport, D. M.; Deshmukh, A.; Werber, J. R.; Elimelech, M. High-Pressure Reverse Osmosis for Energy-Efficient Hypersaline Brine Desalination: Current Status, Design Considerations, and Research Needs. Environ. Sci. Technol. Lett. 2018, 5, 467-475.
(79) Corry, B. Designing Carbon Nanotube Membranes for Efficient Water Desalination. J. Phys. Chem. B 2008, 112, 1427-1434.
(80) Shannon, R. D. Revised Effective Ionic Radii and Systematic Studies of Interatomic Distances in Halides and Chalcogenides. Acta Crystallogr. Sect. A 1976, 32, 751-767.
(81) Werber, J. R.; Deshmukh, A.; Elimelech, M. The Critical Need for Increased Selectivity, Not Increased Water Permeability, for Desalination Membranes. Environ. Sci. Technol. Lett. 2016, 3, 112-120.
(82) Mulliken, R. S. Electronic Population Analysis on LCAO-MO Molecular Wave Functions. I. J. Chem. Phys. 1955, 23, 1833-1840.
(83) Hirshfeld, F. L. Bonded-Atom Fragments for Describing Molecular Charge Densities. Theor. Chim. Acta 1977, 44, 129-138.
(84) Manz, T. A.; Sholl, D. S. Chemically Meaningful Atomic Charges That Reproduce the Electrostatic Potential in Periodic and Nonperiodic Materials. J. Chem. Theory Comput. 2010, 6, 2455-2468.
(85) Seal, A.; Govind Rajan, A. Modulating Water Slip Using Atomic-Scale Defects: Friction on Realistic Hexagonal Boron Nitride Surfaces. Nano Lett. 2021, 21, 8008-8016.

TOC Graphic

Supporting Information for:
How Grain Boundaries and Interfacial Electrostatic Interactions Affect Water Desalination Via Nanoporous Hexagonal Boron Nitride
Bharat Bhushan Sharma and Ananth Govind Rajan*
Department of Chemical Engineering, Indian Institute of Science, Bengaluru, Karnataka 560012, India
*Corresponding Author: Ananth Govind Rajan (Email: [email protected])
Table of Contents
S1. Water flow rate for nanoporous monocrystalline hBN membrane at 50 MPa feed pressure when the permeate side has 500 and 2000 water molecules
S2. Convergence of energy and temperature during equilibration
S3. Parameters for the LJ and Coulombic interatomic potentials
S4. Choice of the force fields for hBN, water molecules, salt ions, and interfacial interactions between hBN and water
S5. Model used to calculate the interfacial interactions between a nanoporous hBN membrane and a unit test charge/tangential water molecule
S6. Nomenclature and repeat length of the grain boundaries (GBs)
S7. Calculation of the areas, equivalent circular diameters, and aspect ratios of the various nanopores considered
S8. Tuning of partial charges and validation of the resultant electrostatic potential for 13.2°-misoriented bicrystalline nanoporous hBN
S9. Comparing two different methods for water permeability calculation
S10. Comparison of the classical electrostatic potentials calculated using DDAP, Hirshfeld, Mulliken, and density derived electrostatic and chemical (DDEC) partial charges with the DFT-derived potential
S11. References

S1. Water flow rate for nanoporous monocrystalline hBN membrane at 50 MPa feed pressure when the permeate side has 500 and 2000 water molecules
Table S1 .
S1LJ parameters and partial charges on various atoms. Partial charges on B and N atoms are not mentioned here as they were determined separately using density derived atomic point (DDAP) charges based DFT calculations.S4. Choice of the force fields for hBN, water molecules, salt ions, and interfacial interactions between hBN and waterAny molecular dynamics (MD) simulation's accuracy is critically dependent upon the chosen force field (FF). For hBN, the classical FF model developed by Govind Rajan et al.,3 Lennard-Jones (LJ) potential 5 with parameters taken from the DREIDING FF 6 or CHARMM FF, 7 and the Tersoff FF model 8 with parameters suggested by Sevik et al.9 have been used to simulate the interaction between boron and nitrogen atoms.[10][11][12][13][14][15][16] Out of these force fields, the Tersoff type potential is the most suitable force field to simulate grain boundaries in the hBN membrane, due to the presence of B-B/N-N bonds, and has been used by several researchers.[17][18][19] In this work, the Tersoff parameters suggested by Albe et al.20 were used. In addition to the hBN membrane, water molecules need to be modelled accurately. For this purpose, several water models, e.g., SPC, 21 SPC/E,22 SPC/Fw, 23 TIP3P, 24 TIP4P, 25 TIP4P/2005, 4 etc. are available to simulate the water molecules. In 2018, Prasad et al. 26 performed MD simulations to investigate the influence of the water model on water desalination using graphene nanopores and predicted that the calculated S-4 water flux varied up to 84% between the water models. The authors suggested the use of the TIP4P/2005 water model for computational studies on desalination across nanoporous membranes because the calculated diffusion coefficient and shear viscosity of water using this model are very close to the respective experimental values. Thus, in this work, we have used the TIP4P/2005 water model to simulate water molecules. 
Apart from this, the LJ parameters and partial charges used for the Na+ and Cl− ions also affect the water permeability, especially for narrow pores. 27 Although several sets of LJ parameters 1,28,29 have been used to simulate Na+ and Cl− ions in water desalination studies, we used the LJ parameters proposed by Zeron et al. 1 since they reproduce the density and viscosity of salt-water solutions very well across a range of concentrations. 1

Atom          σ (Å)     ε (kcal/mol)   q (e)     Reference
Na+ (Salt)    2.21737   0.35190153      0.85     Zeron et al. 1
Cl− (Salt)    4.69906   0.01838504     -0.85     Zeron et al. 1
C (Piston)    3.4       0.05568834      0        Konatham et al. 2
B (Membrane)  3.3087    0.06924         -        Govind Rajan et al. 3
N (Membrane)  3.2174    0.047299        -        Govind Rajan et al. 3
H (Water)     0         0               0.5564   Abascal et al. 4
O (Water)     3.1589    0.1852         -1.1128   Abascal et al. 4
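The parameters in Table S1 enter the standard 12-6 Lennard-Jones pair potential, u(r) = 4ε[(σ/r)¹² − (σ/r)⁶]. A minimal sketch, using the Na+ values from the table, verifies the two characteristic points of this form (zero crossing at r = σ, minimum of depth ε at r = 2^(1/6)σ):

```python
# Sketch: the 12-6 Lennard-Jones potential used for the non-bonded
# interactions listed in Table S1. Parameters are the Na+ values
# from Zeron et al. (Table S1).
SIGMA_NA = 2.21737   # angstrom
EPS_NA = 0.35190153  # kcal/mol

def lj_energy(r, sigma, eps):
    """Pair energy in kcal/mol for a separation r in angstrom."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# u(sigma) = 0 and the minimum u = -eps occurs at r = 2**(1/6) * sigma.
print(lj_energy(SIGMA_NA, SIGMA_NA, EPS_NA))
print(lj_energy(2 ** (1 / 6) * SIGMA_NA, SIGMA_NA, EPS_NA))
```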
MPa in Figure 6. The permeate pressure is maintained at 0.1 MPa (i.e., 1 bar) in all cases. All water-permeation simulations were run until 200 water molecules (10% of the total water molecules on the feed side) had permeated through the membrane. This was done so that the salinity of the water on the feed side is not significantly affected, thereby preventing large variations in the chemical potential gradient during the water and ion permeation process. The simulation times therefore vary with the applied pressure (as shown in Figure 6), increasing from ~2 ns to ~9 ns as the applied pressure decreases from 200 MPa to 50 MPa. Note that, in some previously published studies, 10,18 simulations were performed for a longer time (~30 ns), thereby significantly affecting the salinity of the water on the feed side. One can observe from the trends plotted in Figure 6 that, in all the membrane configurations, the number of permeating water molecules followed a linear trend with simulation time, which is a signature of a constant pressure and chemical potential driving force. Moreover, the number of permeating water molecules depended significantly on the imposed pressure: as the imposed pressure increased, the value of Nw at a fixed time also increased for all the hBN configurations.
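In piston-driven non-equilibrium MD setups of this kind, the feed pressure is typically imposed by applying a force f = P·A/n to each of the n rigid-piston atoms, and the water flux follows from the slope of the linear Nw(t) curves. A hedged sketch of both conversions (the piston area and atom count below are illustrative placeholders, not values from this work):

```python
# Sketch: (1) converting a target feed pressure into a per-atom piston
# force, and (2) extracting a permeation rate from a linear N_w(t) record.
# The piston geometry used here is a hypothetical example.

def piston_force_per_atom(pressure_mpa, area_nm2, n_atoms):
    """Force (pN) on each piston atom needed for the target pressure."""
    pressure_pa = pressure_mpa * 1e6   # MPa -> Pa
    area_m2 = area_nm2 * 1e-18         # nm^2 -> m^2
    force_n = pressure_pa * area_m2 / n_atoms
    return force_n * 1e12              # N -> pN

def flux_from_counts(times_ns, n_permeated):
    """Least-squares slope of N_w vs t (molecules per ns)."""
    n = len(times_ns)
    mt = sum(times_ns) / n
    mn = sum(n_permeated) / n
    num = sum((t - mt) * (c - mn) for t, c in zip(times_ns, n_permeated))
    den = sum((t - mt) ** 2 for t in times_ns)
    return num / den

# Example: 50 MPa applied over a hypothetical 4 nm x 4 nm, 600-atom piston.
f = piston_force_per_atom(50.0, 16.0, 600)
# A linear permeation record (200 molecules over ~8 ns) gives its slope:
slope = flux_from_counts([0, 2, 4, 6, 8], [0, 50, 100, 150, 200])
print(f, slope)
```

The linearity check above is exactly why a constant slope in Figure 6 signals a steady driving force: any drift in feed salinity would curve the Nw(t) record.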
ACKNOWLEDGEMENTS

We gratefully acknowledge the financial support received from the National Supercomputing Mission (DST/NSM/R&D_HPC_Applications/2021/07), which is coordinated by the Department of Science and Technology (DST) and the Department of Electronics and Information Technology (DeitY). We also thank the Supercomputer Education and Research
Elimelech, M.; Phillip, W. A. The Future of Seawater Desalination: Energy, Technology, and the Environment. Science 2011, 333, 712-717.
Surwade, S. P.; Smirnov, S. N.; Vlassiouk, I. V.; Unocic, R. R.; Veith, G. M.; Dai, S.; Mahurin, S. M. Water Desalination Using Nanoporous Single-Layer Graphene. Nat. Nanotechnol. 2015, 10, 459-464.
Jafarzadeh, R.; Azamat, J.; Erfan-Niya, H.; Hosseini, M. Molecular Insights into Effective Water Desalination through Functionalized Nanoporous Boron Nitride Nanosheet Membranes. Appl. Surf. Sci. 2019, 471, 921-928.
Teow, Y. H.; Mohammad, A. W. New Generation Nanomaterials for Water Desalination: A Review. Desalination 2019, 451, 2-17.
Chen, C.; Wang, J.; Liu, D.; Yang, C.; Liu, Y.; Ruoff, R. S.; Lei, W. Functionalized Boron Nitride Membranes with Ultrafast Solvent Transport Performance for Molecular Separation. Nat. Commun. 2018, 9, 1902.
Köhler, M. H.; Bordin, J. R.; Barbosa, M. C. 2D Nanoporous Membrane for Cation Removal from Water: Effects of Ionic Valence, Membrane Hydrophobicity, and Pore Size. J. Chem. Phys. 2018, 148, 222804.
Prasad K., V. P.; Kannam, S. K.; Hartkamp, R.; Sathian, S. P. Water Desalination Using Graphene Nanopores: Influence of the Water Models Used in Simulations. Phys. Chem. Chem. Phys. 2018, 20, 16005-16011.
Davoy, X.; Gellé, A.; Lebreton, J.-C.; Tabuteau, H.; Soldera, A.; Szymczyk, A.; Ghoufi, A. High Water Flux with Ions Sieving in a Desalination 2D Sub-Nanoporous Boron Nitride Material. ACS Omega 2018, 3, 6305-6310.
Sharma, B. B.; Parashar, A. A Review on Thermo-Mechanical Properties of Bi-Crystalline and Polycrystalline 2D Nanomaterials. Crit. Rev. Solid State Mater. Sci. 2020, 45, 134-170.
Garnier, L.; Szymczyk, A.; Malfreyt, P.; Ghoufi, A. Physics behind Water Transport through Nanoporous Boron Nitride and Graphene. J. Phys. Chem. Lett. 2016, 7, 3371-3376.
Gao, H.; Shi, Q.; Rao, D.; Zhang, Y.; Su, J.; Liu, Y.; Wang, Y.; Deng, K.; Lu, R. Rational Design and Strain Engineering of Nanoporous Boron Nitride Nanosheet Membranes for Water Desalination. J. Phys. Chem. C 2017, 121, 22105-22113.
Sharma, B. B.; Parashar, A. Mechanical Strength of a Nanoporous Bicrystalline h-BN Nanomembrane in a Water Submerged State. Phys. Chem. Chem. Phys. 2020, 22, 20453-20465.
Faucher, S.; Aluru, N.; Bazant, M. Z.; Blankschtein, D.; Brozena, A. H.; Cumings, J.; Pedro de Souza, J.; Elimelech, M.; Epsztein, R.; Fourkas, J. T.; Rajan, A. G.; Kulik, H. J.; Levy, A.; Majumdar, A.; Martin, C.; McEldrew, M.; Misra, R. P.; Noy, A.; Pham, T. A.; Reed, M.; Schwegler, E.; Siwy, Z.; Wang, Y.; Strano, M. Critical Knowledge Gaps in Mass Transport through Single-Digit Nanopores: A Review and Perspective. J. Phys. Chem. C 2019, 123, 21309-21326.
Wang, Y.; He, Z.; Gupta, K. M.; Shi, Q.; Lu, R. Molecular Dynamics Study on Water Desalination through Functionalized Nanoporous Graphene. Carbon N. Y. 2017, 116, 120-127.
Cao, Z.; Liu, V.; Farimani, A. B. Water Desalination with Two-Dimensional Metal-Organic Framework Membranes. Nano Lett. 2019, 19, 8638-8643.
Heiranian, M.; Farimani, A. B.; Aluru, N. R. Water Desalination with a Single-Layer MoS2 Nanopore. Nat. Commun. 2015, 6, 8616.
Konatham, D.; Yu, J.; Ho, T. A.; Striolo, A. Simulation Insights for Graphene-Based Water Desalination Membranes. Langmuir 2013, 29, 11884-11897.
Cohen-Tanugi, D.; Grossman, J. C. Water Permeability of Nanoporous Graphene at Realistic Pressures for Reverse Osmosis Desalination. J. Chem. Phys. 2014, 141, 074704.
Cohen-Tanugi, D.; Lin, L.; Grossman, C. Multilayer Nanoporous Graphene Membranes for Water Desalination. 2016.
Yang, X.; Yang, X.; Liu, S. Molecular Dynamics Simulation of Water Transport through Graphene-Based Nanopores: Flow Behavior and Structure Characteristics. Chinese J. Chem. Eng. 2015, 23, 1587-1592.
Li, Y.; Xu, Z.; Liu, S.; Zhang, J.; Yang, X. Molecular Simulation of Reverse Osmosis for Heavy Metal Ions Using Functionalized Nanoporous Graphenes. Comput. Mater. Sci. 2017, 139, 65-74.
Vishnu Prasad, K.; Sathian, S. P. The Effect of Temperature on Water Desalination through Two-Dimensional Nanopores. J. Chem. Phys. 2020, 152, 164701.
Liu, Y.; Zhao, Y.; Zhang, X.; Huang, X.; Liao, W.; Zhao, Y. MoS2-Based Membranes in Water Treatment and Purification. Chem. Eng. J. 2021, 422, 130082.
Li, H.; Zeng, X. C. Wetting and Interfacial Properties of Water Nanodroplets in Contact with Graphene and Monolayer Boron Nitride Sheets. ACS Nano 2012, 6, 2401-2409.
Wu, Y.; Wagner, L. K.; Aluru, N. R. Hexagonal Boron Nitride and Water Interaction Parameters. J. Chem. Phys. 2016, 144, 164118.
Tsukanov, A. A.; Shilko, E. V. Computer-Aided Design of Boron Nitride-Based Membranes with Armchair and Zigzag Nanopores for Efficient Water Desalination. Materials (Basel) 2020, 13, 1-12.
Liu, L.; Liu, Y.; Qi, Y.; Song, M.; Jiang, L.; Fu, G.; Li, J. Hexagonal Boron Nitride with Nanoslits as a Membrane for Water Desalination: A Molecular Dynamics Investigation. Sep. Purif. Technol. 2020, 251, 117409.
Azamat, J.; Sardroodi, J. J.; Poursoltani, L.; Jahanshahi, D. Functionalized Boron Nitride Nanosheet as a Membrane for Removal of Pb2+ and Cd2+ Ions from Aqueous Solution. J. Mol. Liq. 2021, 321, 114920.
Majidi, S.; Pakdel, S.; Azamat, J.; Erfan-Niya, H. Hexagonal Boron Nitride (h-BN) in Solutes Separation. In Das, R., Ed.; Springer International Publishing: Cham, 2021; pp. 163-191.
Azamat, J.; Khataee, A.; Joo, S. W. Separation of Copper and Mercury as Heavy Metals from Aqueous Solution Using Functionalized Boron Nitride Nanosheets: A Theoretical Study. J. Mol. Struct. 2016, 1108, 144-149.
Lei, W.; Portehault, D.; Liu, D.; Qin, S.; Chen, Y. Porous Boron Nitride Nanosheets for Effective Water Cleaning. Nat. Commun. 2013, 4, 1777.
Srivastava, R.; Kommu, A.; Sinha, N.; Singh, J. K. Removal of Arsenic Ions Using Hexagonal Boron Nitride and Graphene Nanosheets: A Molecular Dynamics Study. Mol. Simul. 2017, 43, 985-996.
Sharma, B. B.; Parashar, A. Atomistic Simulations to Study the Effect of Water Molecules on the Mechanical Behavior of Functionalized and Non-Functionalized Boron Nitride Nanosheets. Comput. Mater. Sci. 2019, 169, 109092.
Plimpton, S. Fast Parallel Algorithms for Short-Range Molecular Dynamics. J. Comput. Phys. 1995, 117, 1-19.
Nosé, S. A Unified Formulation of the Constant Temperature Molecular Dynamics Methods. J. Chem. Phys. 1984, 81, 511-519.
Hoover, W. G. Canonical Dynamics: Equilibrium Phase-Space Distributions. Phys. Rev. A 1985, 31, 1695-1697.
Hockney, R. W.; Eastwood, J. W. Particle-Particle-Particle-Mesh (P3M) Algorithms. In Computer Simulation Using Particles; CRC Press, 1988; pp. 267-304.
Albe, K.; Möller, W.; Heinig, K. Computer Simulation and Boron Nitride. Radiat. Eff. Defects Solids 1997, 141, 85-97.
Abascal, J. L. F.; Vega, C. A General Purpose Model for the Condensed Phases of Water: TIP4P/2005. J. Chem. Phys. 2005, 123, 234505.
Li, Y.; Wei, A.; Ye, H.; Yao, H. Mechanical and Thermal Properties of Grain Boundary in a Planar Heterostructure of Graphene and Hexagonal Boron Nitride. Nanoscale 2018, 10, 3497-3508.
Sharma, B. B.; Parashar, A. Inter-Granular Fracture Behaviour in Bicrystalline Boron Nitride Nanosheets Using Atomistic and Continuum Mechanics-Based Approaches. J. Mater. Sci. 2021, 56, 6235-6250.
Ding, Q.; Ding, N.; Liu, L.; Li, N.; Wu, C.-M. L. Investigation on Mechanical Performances of Grain Boundaries in Hexagonal Boron Nitride Sheets. Int. J. Mech. Sci. 2018, 149, 262-272.
Jones, J. E. On the Determination of Molecular Fields. II. From the Equation of State of a Gas. Proc. R. Soc. London, Ser. A 1924, 106, 463-477.
Govind Rajan, A.; Strano, M. S.; Blankschtein, D. Ab Initio Molecular Dynamics and Lattice Dynamics-Based Force Field for Modeling Hexagonal Boron Nitride in Mechanical and Interfacial Applications. J. Phys. Chem. Lett. 2018, 9, 1584-1591.
Zeron, I. M.; Abascal, J. L. F.; Vega, C. A Force Field of Li+, Na+, K+, Mg2+, Ca2+, Cl−, and SO4 2− in Aqueous Solution Based on the TIP4P/2005 Water Model and Scaled Charges for the Ions. J. Chem. Phys. 2019, 151, 134504.
Govind Rajan, A.; Strano, M. S.; Blankschtein, D. Liquids with Lower Wettability Can Exhibit Higher Friction on Hexagonal Boron Nitride: The Intriguing Role of Solid-Liquid Electrostatic Interactions. Nano Lett. 2019, 19, 1539-1551.
Abal, J. P. K.; Bordin, J. R.; Barbosa, M. C. Salt Parameterization Can Drastically Affect the Results from Classical Atomistic Simulations of Water Desalination by MoS2 Nanopores. Phys. Chem. Chem. Phys. 2020, 22, 11053-11061.
Kühne, T. D.; Iannuzzi, M.; Del Ben, M.; Rybkin, V. V.; Seewald, P.; Stein, F.; Laino, T.; Khaliullin, R. Z.; Schütt, O.; Schiffmann, F.; Golze, D.; Wilhelm, J.; Chulkov, S.; Bani-Hashemian, M. H.; Weber, V.; Borštnik, U.; Taillefumier, M.; Jakobovits, A. S.; Lazzaro, A.; Pabst, H.; Müller, T.; Schade, R.; Guidon, M.; Andermatt, S.; Holmberg, N.; Schenter, G. K.; Hehn, A.; Bussy, A.; Belleflamme, F.; Tabacchi, G.; Glöß, A.; Lass, M.; Bethune, I.; Mundy, C. J.; Plessl, C.; Watkins, M.; VandeVondele, J.; Krack, M.; Hutter, J. CP2K: An Electronic Structure and Molecular Dynamics Software Package - Quickstep: Efficient and Accurate Electronic Structure Calculations. J. Chem. Phys. 2020, 152, 194103.
Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 1996, 77, 3865-3868.
VandeVondele, J.; Hutter, J. Gaussian Basis Sets for Accurate Calculations on Molecular Systems in Gas and Condensed Phases. J. Chem. Phys. 2007, 127, 114105.
Goedecker, S.; Teter, M.; Hutter, J. Separable Dual-Space Gaussian Pseudopotentials. Phys. Rev. B 1996, 54, 1703-1710.
Blöchl, P. E. Electrostatic Decoupling of Periodic Images of Plane-Wave-Expanded Densities and Derived Atomic Point Charges. J. Chem. Phys. 1995, 103, 7422-7428.
Xu, N.; Guo, J. G.; Cui, Z. The Influence of Tilt Grain Boundaries on the Mechanical Properties of Bicrystalline Graphene Nanoribbons. Phys. E Low-Dimensional Syst. Nanostructures 2016, 84, 168-174.
Yazyev, O. V.; Louie, S. G. Electronic Transport in Polycrystalline Graphene. Nat. Mater. 2010, 9, 806-809.
Gibb, A. L.; Alem, N.; Chen, J.-H.; Erickson, K. J.; Ciston, J.; Gautam, A.; Linck, M.; Zettl, A. Atomic Resolution Imaging of Grain Boundary Defects in Monolayer Chemical Vapor Deposition-Grown Hexagonal Boron Nitride. J. Am. Chem. Soc. 2013, 135, 6758-6761.
Ren, X.; Dong, J.; Yang, P.; Li, J.; Lu, G.; Wu, T.; Wang, H.; Guo, W.; Zhang, Z.; Ding, F.; Jin, C. Grain Boundaries in Chemical-Vapor-Deposited Atomically Thin Hexagonal Boron Nitride. Phys. Rev. Mater. 2019, 3, 1-11.
Ren, X.; Jin, C. Grain Boundary Motion in Two-Dimensional Hexagonal Boron Nitride. ACS Nano 2020, 14, 13512-13523.
Li, Q.; Zou, X.; Liu, M.; Sun, J.; Gao, Y.; Qi, Y.; Zhou, X.; Yakobson, B. I.; Zhang, Y.; Liu, Z. Grain Boundary Structures and Electronic Properties of Hexagonal Boron Nitride on Cu(111). Nano Lett. 2015, 15, 5804-5810.
Zhang, J.; Zhao, J. Structures and Electronic Properties of Symmetric and Nonsymmetric Graphene Grain Boundaries. Carbon N. Y. 2013, 55, 151-159.
Sharma, B. B.; Parashar, A. Atomistic Simulations to Study the Effect of Grain Boundaries and Hydrogen Functionalization on the Fracture Toughness of Bi-Crystalline h-BN Nanosheets. Phys. Chem. Chem. Phys. 2019, 21, 13116-13125.
Wei, Y.; Wu, J.; Yin, H.; Shi, X.; Yang, R.; Dresselhaus, M. The Nature of Strength Enhancement and Weakening by Pentagon-Heptagon Defects in Graphene. Nat. Mater. 2012, 11, 759-763.
Liu, Y.; Zou, X.; Yakobson, B. I. Dislocations and Grain Boundaries in Two-Dimensional Boron Nitride. ACS Nano 2012, 6, 7053-7058.
Kotakoski, J.; Jin, C. H.; Lehtinen, O.; Suenaga, K.; Krasheninnikov, A. V. Electron Knock-on Damage in Hexagonal Boron Nitride Monolayers. Phys. Rev. B 2010, 82, 113404.
Ryu, G. H.; Park, H. J.; Ryou, J.; Park, J.; Lee, J.; Kim, G.; Shin, H. S.; Bielawski, C. W.; Ruoff, R. S.; Hong, S.; Lee, Z. Atomic-Scale Dynamics of Triangular Hole Growth in Monolayer Hexagonal Boron Nitride under Electron Irradiation. Nanoscale 2015, 7, 10600-10605.
Gilbert, S. M.; Dunn, G.; Azizi, A.; Pham, T.; Shevitski, B.; Dimitrov, E.; Liu, S.; Aloni, S.; Zettl, A. Fabrication of Subnanometer-Precision Nanopores in Hexagonal Boron Nitride. Sci. Rep. 2017, 7, 15096.
Govind Rajan, A.; Silmore, K. S.; Swett, J.; Robertson, A. W.; Warner, J. H.; Blankschtein, D.; Strano, M. S. Addressing the Isomer Cataloguing Problem for Nanopores in Two-Dimensional Materials. Nat. Mater. 2019, 18, 129-135.
Kozawa, D.; Govind Rajan, A.; Li, S. X.; Ichihara, T.; Koman, V. B.; Zeng, Y.; Kuehne, M.; Iyemperumal, S. K.; Silmore, K. S.; Parviz, D.; Liu, P.; Liu, A. T.; Faucher, S.; Yuan, Z.; Xu, W.; Warner, J. H.; Blankschtein, D.; Strano, M. S. Observation and Spectral Assignment of a Family of Hexagonal Boron Nitride Lattice Defects. 2019.
Jin, C.; Lin, F.; Suenaga, K.; Iijima, S. Fabrication of a Freestanding Boron Nitride Single Layer and Its Defect Assignments. Phys. Rev. Lett. 2009, 102, 3-6.
Meyer, J. C.; Chuvilin, A.; Algara-Siller, G.; Biskupek, J.; Kaiser, U. Selective Sputtering and Atomic Resolution Imaging of Atomically Thin Boron Nitride Membranes. Nano Lett. 2009, 9, 2683-2689.
Yuan, Z.; Govind Rajan, A.; Misra, R. P.; Drahushuk, L. W.; Agrawal, K. V.; Strano, M. S.; Blankschtein, D. Mechanism and Prediction of Gas Permeation through Sub-
Zeron, I. M.; Abascal, J. L. F.; Vega, C. A Force Field of Li+, Na+, K+, Mg2+, Ca2+, Cl−, and SO4 2− in Aqueous Solution Based on the TIP4P/2005 Water Model and Scaled Charges for the Ions. J. Chem. Phys. 2019, 151 (13), 134504. https://doi.org/10.1063/1.5121392.
Konatham, D.; Yu, J.; Ho, T. A.; Striolo, A. Simulation Insights for Graphene-Based Water Desalination Membranes. Langmuir 2013, 29 (38), 11884-11897. https://doi.org/10.1021/la4018695.
Govind Rajan, A.; Strano, M. S.; Blankschtein, D. Ab Initio Molecular Dynamics and Lattice Dynamics-Based Force Field for Modeling Hexagonal Boron Nitride in Mechanical and Interfacial Applications. J. Phys. Chem. Lett. 2018, 9 (7), 1584-1591. https://doi.org/10.1021/acs.jpclett.7b03443.
Abascal, J. L. F.; Vega, C. A General Purpose Model for the Condensed Phases of Water: TIP4P/2005. J. Chem. Phys. 2005, 123 (23), 234505. https://doi.org/10.1063/1.2121687.
Jones, J. E. On the Determination of Molecular Fields. II. From the Equation of State of a Gas. Proc. R. Soc. London, Ser. A 1924, 106 (738), 463-477. https://doi.org/10.1098/rspa.1924.0082.
Mayo, S. L.; Olafson, B. D.; Goddard, W. A. DREIDING: A Generic Force Field for Molecular Simulations. J. Phys. Chem. 1990, 94 (26), 8897-8909. https://doi.org/10.1021/j100389a010.
Brooks, B. R.; Bruccoleri, R. E.; Olafson, B. D.; States, D. J.; Swaminathan, S.; Karplus, M. CHARMM: A Program for Macromolecular Energy, Minimization, and Dynamics Calculations. J. Comput. Chem. 1983, 4 (2), 187-217. https://doi.org/10.1002/jcc.540040211.
Tersoff, J. New Empirical Approach for the Structure and Energy of Covalent Systems. Phys. Rev. B 1988, 37 (12), 6991-7000. https://doi.org/10.1103/PhysRevB.37.6991.
Kınacı, A.; Haskins, J. B.; Sevik, C.; Çağın, T. Thermal Conductivity of BN-C Nanostructures. Phys. Rev. B 2012, 86 (11), 115410. https://doi.org/10.1103/PhysRevB.86.115410.
Tsukanov, A. A.; Shilko, E. V. Computer-Aided Design of Boron Nitride-Based Membranes with Armchair and Zigzag Nanopores for Efficient Water Desalination. Materials (Basel) 2020, 13 (22), 1-12. https://doi.org/10.3390/ma13225256.
Liu, L.; Liu, Y.; Qi, Y.; Song, M.; Jiang, L.; Fu, G.; Li, J. Hexagonal Boron Nitride with Nanoslits as a Membrane for Water Desalination: A Molecular Dynamics Investigation. Sep. Purif. Technol. 2020, 251, 117409. https://doi.org/10.1016/j.seppur.2020.117409.
Srivastava, R.; Kommu, A.; Sinha, N.; Singh, J. K. Removal of Arsenic Ions Using Hexagonal Boron Nitride and Graphene Nanosheets: A Molecular Dynamics Study. Mol. Simul. 2017, 43 (13-16), 985-996. https://doi.org/10.1080/08927022.2017.1321754.
Gao, H.; Shi, Q.; Rao, D.; Zhang, Y.; Su, J.; Liu, Y.; Wang, Y.; Deng, K.; Lu, R. Rational Design and Strain Engineering of Nanoporous Boron Nitride Nanosheet Membranes for Water Desalination. J. Phys. Chem. C 2017, 121 (40), 22105-22113. https://doi.org/10.1021/acs.jpcc.7b06480.
Garnier, L.; Szymczyk, A.; Malfreyt, P.; Ghoufi, A. Physics behind Water Transport through Nanoporous Boron Nitride and Graphene. J. Phys. Chem. Lett. 2016, 7 (17), 3371-3376. https://doi.org/10.1021/acs.jpclett.6b01365.
Tabrizi, N. S.; Vahid, B.; Azamat, J. Functionalized Single-Atom Thickness Boron Nitride Membrane for Separation of Arsenite Ion from Water: A Molecular Dynamics Simulation Study. 2020, 8 (3), 843-856. https://doi.org/10.22036/PCR.2020.222756.1742.
Loh, G. C. Fast Water Desalination by Carbon-Doped Boron Nitride Monolayer: Transport Assisted by Water Clustering at Pores. Nanotechnology 2019, 30 (5), 055401. https://doi.org/10.1088/1361-6528/aaf063.
Li, Y.; Wei, A.; Ye, H.; Yao, H. Mechanical and Thermal Properties of Grain Boundary in a Planar Heterostructure of Graphene and Hexagonal Boron Nitride. Nanoscale 2018, 10 (7), 3497-3508. https://doi.org/10.1039/C7NR07306B.
. S-14, S-14
Inter-Granular Fracture Behaviour in Bicrystalline Boron Nitride Nanosheets Using Atomistic and Continuum Mechanics-Based Approaches. B B Sharma, A Parashar, 10.1007/s10853-020-05697-xJ. Mater. Sci. 202110Sharma, B. B.; Parashar, A. Inter-Granular Fracture Behaviour in Bicrystalline Boron Nitride Nanosheets Using Atomistic and Continuum Mechanics-Based Approaches. J. Mater. Sci. 2021, 56 (10), 6235-6250. https://doi.org/10.1007/s10853-020-05697-x.
Investigation on Mechanical Performances of Grain Boundaries in Hexagonal Boron Nitride Sheets. Q Ding, N Ding, L Liu, N Li, C.-M L Wu, 10.1016/j.ijmecsci.2018.10.003Int. J. Mech. Sci. 149Ding, Q.; Ding, N.; Liu, L.; Li, N.; Wu, C.-M. L. Investigation on Mechanical Performances of Grain Boundaries in Hexagonal Boron Nitride Sheets. Int. J. Mech. Sci. 2018, 149 (October), 262-272. https://doi.org/10.1016/j.ijmecsci.2018.10.003.
Computer Simulation and Boron Nitride. K Albe, W Möller, K Heinig, 10.1080/10420159708211560Radiat. Eff. Defects Solids. 141Albe, K.; Möller, W.; Heinig, K. Computer Simulation and Boron Nitride. Radiat. Eff. Defects Solids 1997, 141, 85-97. https://doi.org/10.1080/10420159708211560.
Interaction Models for Water in Relation to Protein Hydration. H J C Berendsen, J P M Postma, W F Van Gunsteren, J Hermans, 10.1007/978-94-015-7658-1_21Berendsen, H. J. C.; Postma, J. P. M.; van Gunsteren, W. F.; Hermans, J. Interaction Models for Water in Relation to Protein Hydration; 1981; pp 331-342. https://doi.org/10.1007/978-94-015-7658-1_21.
The Missing Term in Effective Pair Potentials. H J C Berendsen, J R Grigera, T P Straatsma, 10.1021/j100308a038J. Phys. Chem. 24Berendsen, H. J. C.; Grigera, J. R.; Straatsma, T. P. The Missing Term in Effective Pair Potentials. J. Phys. Chem. 1987, 91 (24), 6269-6271. https://doi.org/10.1021/j100308a038.
Flexible Simple Point-Charge Water Model with Improved Liquid-State Properties. Y Wu, H L Tepper, G A Voth, 10.1063/1.2136877J. Chem. Phys. 124224503Wu, Y.; Tepper, H. L.; Voth, G. A. Flexible Simple Point-Charge Water Model with Improved Liquid-State Properties. J. Chem. Phys. 2006, 124 (2), 024503. https://doi.org/10.1063/1.2136877.
A Modified TIP3P Water Potential for Simulation with Ewald Summation. D J Price, C L Brooks, 10.1063/1.1808117J. Chem. Phys. 20Price, D. J.; Brooks, C. L. A Modified TIP3P Water Potential for Simulation with Ewald Summation. J. Chem. Phys. 2004, 121 (20), 10096-10103. https://doi.org/10.1063/1.1808117.
Comparison of Simple Potential Functions for Simulating Liquid Water. William L Jorgensen, Jayaraman Chandrasekhar, J D M , 10.1063/1.445869J. Chem. Phys. 19832926William L. Jorgensen, Jayaraman Chandrasekhar, and J. D. M. Comparison of Simple Potential Functions for Simulating Liquid Water. J. Chem. Phys. 1983, 79 (2), 926. https://doi.org/10.1063/1.445869.
Water Desalination Using Graphene Nanopores: Influence of the Water Models Used in Simulations. K Prasad, V Kannam, S K Hartkamp, R Sathian, S P , 10.1039/c8cp00919hPhys. Chem. Chem. Phys. 2023Prasad K, V.; Kannam, S. K.; Hartkamp, R.; Sathian, S. P. Water Desalination Using Graphene Nanopores: Influence of the Water Models Used in Simulations. Phys. Chem. Chem. Phys. 2018, 20 (23), 16005-16011. https://doi.org/10.1039/c8cp00919h.
Salt Parameterization Can Drastically Affect the Results from Classical Atomistic Simulations of Water Desalination by MoS2 Nanopores. J P K Abal, J R Bordin, M C Barbosa, 10.1039/D0CP00484GPhys. Chem. Chem. Phys. 202019Abal, J. P. K.; Bordin, J. R.; Barbosa, M. C. Salt Parameterization Can Drastically Affect the Results from Classical Atomistic Simulations of Water Desalination by MoS2 Nanopores. Phys. Chem. Chem. Phys. 2020, 22 (19), 11053-11061. https://doi.org/10.1039/D0CP00484G.
Sodium Chloride, NaCl/∈: New Force Field. R Fuentes-Azcatl, M C Barbosa, 10.1021/acs.jpcb.5b12584J. Phys. Chem. B. 20169Fuentes-Azcatl, R.; Barbosa, M. C. Sodium Chloride, NaCl/∈: New Force Field. J. Phys. Chem. B 2016, 120 (9), 2460-2470. https://doi.org/10.1021/acs.jpcb.5b12584.
Determination of Alkali and Halide Monovalent Ion Parameters for Use in Explicitly Solvated Biomolecular Simulations. I S Joung, T E Cheatham, 10.1021/jp8001614J. Phys. Chem. B. 30Joung, I. S.; Cheatham, T. E. Determination of Alkali and Halide Monovalent Ion Parameters for Use in Explicitly Solvated Biomolecular Simulations. J. Phys. Chem. B 2008, 112 (30), 9020-9041. https://doi.org/10.1021/jp8001614.
Structures and Electronic Properties of Symmetric and Nonsymmetric S-15. J Zhang, J Zhao, Zhang, J.; Zhao, J. Structures and Electronic Properties of Symmetric and Nonsymmetric S-15
. 10.1016/j.carbon.2012.12.021Graphene Grain Boundaries. Carbon N. Y. 55Graphene Grain Boundaries. Carbon N. Y. 2013, 55, 151-159. https://doi.org/10.1016/j.carbon.2012.12.021.
Electronic Transport in Polycrystalline Graphene. O V Yazyev, S G Louie, 10.1038/nmat2830Nat. Mater. 201010Yazyev, O. V.; Louie, S. G. Electronic Transport in Polycrystalline Graphene. Nat. Mater. 2010, 9 (10), 806-809. https://doi.org/10.1038/nmat2830.
Investigation on Mechanical Performances of Grain Boundaries in Hexagonal Boron Nitride Sheets. Q Ding, N Ding, L Liu, N Li, C.-M L Wu, 10.1016/j.ijmecsci.2018.10.003Int. J. Mech. Sci. 149Ding, Q.; Ding, N.; Liu, L.; Li, N.; Wu, C.-M. L. Investigation on Mechanical Performances of Grain Boundaries in Hexagonal Boron Nitride Sheets. Int. J. Mech. Sci. 2018, 149, 262-272. https://doi.org/10.1016/j.ijmecsci.2018.10.003.
Consistent van Der Waals Radii for the Whole Main Group. M Mantina, A C Chamberlin, R Valero, C J Cramer, D G Truhlar, 10.1021/jp8111556J. Phys. Chem. A. 19Mantina, M.; Chamberlin, A. C.; Valero, R.; Cramer, C. J.; Truhlar, D. G. Consistent van Der Waals Radii for the Whole Main Group. J. Phys. Chem. A 2009, 113 (19), 5806- 5812. https://doi.org/10.1021/jp8111556.
Water Desalination across Nanoporous Graphene. D Cohen-Tanugi, J C Grossman, 10.1021/nl3012853Nano Lett. 20127Cohen-Tanugi, D.; Grossman, J. C. Water Desalination across Nanoporous Graphene. Nano Lett. 2012, 12 (7), 3602-3608. https://doi.org/10.1021/nl3012853.
Water Desalination with a Single-Layer MoS2 Nanopore. M Heiranian, A B Farimani, N R Aluru, 10.1038/ncomms9616Nat. Commun. 618616Heiranian, M.; Farimani, A. B.; Aluru, N. R. Water Desalination with a Single-Layer MoS2 Nanopore. Nat. Commun. 2015, 6 (1), 8616. https://doi.org/10.1038/ncomms9616.
Electrostatic Decoupling of Periodic Images of Plane-Wave-Expanded Densities and Derived Atomic Point Charges. P E Blöchl, 10.1063/1.470314J. Chem. Phys. 17Blöchl, P. E. Electrostatic Decoupling of Periodic Images of Plane-Wave-Expanded Densities and Derived Atomic Point Charges. J. Chem. Phys. 1995, 103 (17), 7422-7428. https://doi.org/10.1063/1.470314.
| [] |
[
"Dynamics of Majority Rule in Two-State Interacting Spins Systems",
"Dynamics of Majority Rule in Two-State Interacting Spins Systems"
] | [
"P L Krapivsky \nCenter for BioDynamics\nCenter for Polymer Studies\nDepartment of Physics\nBoston University\n02215BostonMA\n",
"S Redner \nCenter for BioDynamics\nCenter for Polymer Studies\nDepartment of Physics\nBoston University\n02215BostonMA\n"
] | [
"Center for BioDynamics\nCenter for Polymer Studies\nDepartment of Physics\nBoston University\n02215BostonMA",
"Center for BioDynamics\nCenter for Polymer Studies\nDepartment of Physics\nBoston University\n02215BostonMA"
] | [] | We introduce a 2-state opinion dynamics model where agents evolve by majority rule. In each update, a group of agents is specified whose members then all adopt the local majority state. In the mean-field limit, where a group consists of randomly-selected agents, consensus is reached in a time that scales ln N , where N is the number of agents. On finite-dimensional lattices, where a group is a contiguous cluster, the consensus time fluctuates strongly between realizations and grows as a dimension-dependent power of N . The upper critical dimension appears to be larger than 4. The final opinion always equals that of the initial majority except in one dimension.PACS numbers: 02.50. Ey, In this letter, we introduce an exceedingly simple opinion dynamics model -majority rule (MR) -that exhibits rich dynamical behavior. The model consists of N agents, each of which can assume the opinion (equivalently spin) states +1 or −1, that evolve as follows:• Pick a group of G spins from the system (with G an odd number). This group could be any G spins, in the mean-field limit, or it is contiguous, for finitedimensional systems. | 10.1103/physrevlett.90.238701 | [
"https://export.arxiv.org/pdf/cond-mat/0303182v3.pdf"
] | 10,605,407 | cond-mat/0303182 | 7d4553cde7a7053e1f85fb9557219a58dd4f0958 |
Dynamics of Majority Rule in Two-State Interacting Spins Systems
9 Jun 2003
P L Krapivsky
Center for BioDynamics
Center for Polymer Studies
Department of Physics
Boston University
02215BostonMA
S Redner
Center for BioDynamics
Center for Polymer Studies
Department of Physics
Boston University
02215BostonMA
Dynamics of Majority Rule in Two-State Interacting Spins Systems
9 Jun 2003
We introduce a 2-state opinion dynamics model where agents evolve by majority rule. In each update, a group of agents is specified whose members then all adopt the local majority state. In the mean-field limit, where a group consists of randomly-selected agents, consensus is reached in a time that scales as ln N, where N is the number of agents. On finite-dimensional lattices, where a group is a contiguous cluster, the consensus time fluctuates strongly between realizations and grows as a dimension-dependent power of N. The upper critical dimension appears to be larger than 4. The final opinion always equals that of the initial majority except in one dimension.

PACS numbers: 02.50.Ey

In this letter, we introduce an exceedingly simple opinion dynamics model - majority rule (MR) - that exhibits rich dynamical behavior. The model consists of N agents, each of which can assume the opinion (equivalently spin) states +1 or −1, that evolve as follows:

• Pick a group of G spins from the system (with G an odd number). This group could be any G spins, in the mean-field limit, or it is contiguous, for finite-dimensional systems.
• The spins in the group all adopt the state of the local majority.
These two steps are repeated until the system reaches a final state of consensus. While the MR model ignores psycho-sociological aspects of real opinion formation [1], this simple decision-making process leads to rich collective behavior. We seek to understand two basic issues: (i) What is the time needed to reach consensus as a function of N and of the initial densities of plus and minus spins? (ii) What is the probability of reaching a given final state as a function of the initial spin densities? To set the stage for our results, we recall the corresponding behavior in the classical 2-state voter model (VM) [2], where a spin is selected at random and it adopts the opinion of a randomly-chosen neighbor. For a system of N spins in d dimensions, the time to reach consensus scales as N for d > 2, as N ln N for d = 2 (the critical dimension of the VM), and as N^2 in d = 1 [2,3]. Because the average magnetization is conserved, the probability that the system eventually ends with all plus spins equals the initial density of plus spins in all spatial dimensions.
The MR model has the same degree of simplicity as the VM, but exhibits very different behavior. Part of the reason for this difference is that MR does not conserve the average magnetization. Another distinguishing trait of MR is the many-body nature of the interaction. This feature also arises, for example, in the Sznajd model [4], * Electronic address: paulk,[email protected] where two neighboring agents that agree can influence a larger local neighborhood, or in Galam's rumor formation model [5], where an entire population is partitioned into disjoint groups that each reach their own consensus. The updating of an extended group of spins was also considered by Newman and Stein [6] in the Ising model with zero-temperature Glauber kinetics [7].
We now outline basic features of the MR model. In the mean-field limit we give an exact solution for the approach to consensus, while for finite dimensions we give numerical and qualitative results.
Mean-field limit. Consider the simplest case where arbitrary groups of size G = 3 are selected and updated at each step. To determine the ultimate fate of the system, let $E_n$ denote the "exit probability" that the system ends with all spins plus when starting with n plus spins. Now

$$\binom{3}{j}\binom{N-3}{n-j}\bigg/\binom{N}{n}$$

is the probability that a group of size 3 has j plus and 3 − j minus spins in an N-spin system that contains n plus spins. The group becomes all plus for j = 2, it becomes all minus for j = 1, while for j = 0 or 3 there is no evolution. Thus $E_n$ obeys the master equation [8]

$$\binom{N}{n} E_n = 3\binom{N-3}{n-2} E_{n+1} + 3\binom{N-3}{n-1} E_{n-1} + \left[\binom{N-3}{n-3} + \binom{N-3}{n}\right] E_n, \qquad (1)$$
which simplifies to

$$(n-1)(E_{n+1} - E_n) = (N-n-1)(E_n - E_{n-1}). \qquad (2)$$
Writing $D_n = E_{n+1} - E_n$, Eq. (2) becomes a first-order recursion whose solution is

$$D_n = \frac{B}{\Gamma(n)\,\Gamma(N-n-1)}. \qquad (3)$$
To compute the constant B we use the fact that $\sum_{1 \le n \le N-2} D_n = E_{N-1} - E_1 = 1$, due to the boundary conditions $E_1 = 0$ and $E_{N-1} = 1$. Thus we find

$$E_n = \sum_{j=1}^{n-1} D_j = \frac{1}{2^{N-3}} \sum_{j=1}^{n-1} \frac{\Gamma(N-2)}{\Gamma(j)\,\Gamma(N-j-1)}. \qquad (4)$$
The probability to end with all spins minus is simply $E_{N-n}$. Since consensus is the ultimate fate of the system, $E_n + E_{N-n} = 1$.
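The closed form (4) is simple to check numerically. The sketch below (plain Python; the function name is ours) evaluates $E_n$ with exact binomial coefficients, using $\Gamma(N-2)/[\Gamma(j)\Gamma(N-j-1)] = \binom{N-3}{j-1}$, and verifies the boundary conditions, the consensus property $E_n + E_{N-n} = 1$, and that (4) indeed solves the recursion (2).

```python
from math import comb

def exit_probability(N):
    """E_n from Eq. (4): E_n = sum_{j=1}^{n-1} C(N-3, j-1) / 2^(N-3), the
    probability of reaching all-plus consensus from n plus spins."""
    norm = 2 ** (N - 3)
    E = [0.0] * (N + 1)
    for n in range(1, N):
        E[n] = sum(comb(N - 3, j - 1) for j in range(1, n)) / norm
    E[N] = 1.0
    return E

N = 101
E = exit_probability(N)
assert E[1] == 0.0 and E[N - 1] == 1.0               # boundary conditions
assert all(abs(E[n] + E[N - n] - 1.0) < 1e-12        # consensus is certain
           for n in range(1, N))
for n in range(2, N - 1):                            # recursion (2)
    assert abs((n - 1) * (E[n + 1] - E[n])
               - (N - n - 1) * (E[n] - E[n - 1])) < 1e-9
```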
While the minority may win in a finite system, the probability for this event quickly vanishes as N → ∞. In the continuum limit n, N → ∞ with p = n/N < 1/2 the exit probability is exponentially small: $E_n \propto X^N$, with $X = 1/[2 p^p (1-p)^{1-p}]$. Only near n = N/2 does the exit probability rapidly increase. Employing Stirling's approximation we may recast (4) into
$$E_n \to E(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} dz\, e^{-z^2/2}, \qquad (5)$$

where $y = (2n - N)/\sqrt{N}$. We now study the mean time $T_n$ to reach consensus (either all plus or all minus) when the initial state consists of n plus and N − n minus spins. (The time to reach a specified final state can also be analyzed within this framework.) Similar to the reasoning for the exit probability, the equation for $T_n$ is [8]
$$\binom{N}{n} T_n = 3\binom{N-3}{n-2}(T_{n+1} + \delta T) + 3\binom{N-3}{n-1}(T_{n-1} + \delta T) + \left[\binom{N-3}{n-3} + \binom{N-3}{n}\right](T_n + \delta T), \qquad (6)$$

subject to the boundary conditions $T_0 = T_N = 0$. The natural choice for the time interval between elementary steps is $\delta T = 3/N$, so that each spin is updated once per unit time, on average. The master equation for $U_n = T_{n+1} - T_n$ simplifies to
$$(n-1)\,U_n = (N-n-1)\,U_{n-1} - \frac{(N-1)(N-2)}{n(N-n)}, \qquad (7)$$

with the boundary conditions $U_0 = 1$ and $U_{N-1} = -1$.
Apart from the inhomogeneous term, Eq. (7) is identical to (2). Thus, we seek a solution in a form similar to (3):
$$U_n = \frac{(N-1)(N-2)}{\Gamma(n)\,\Gamma(N-n-1)}\, V_n. \qquad (8)$$
This transforms (7) into the difference equation

$$V_{n-1} = V_n + \frac{\Gamma(n-1)\,\Gamma(N-n-1)}{n(N-n)}. \qquad (9)$$
The symmetry of Eq. (7) and ansatz (8) under the transform n → N − 1 − n, and the antisymmetry of the boundary conditions $U_0 = 1$ and $U_{N-1} = -1$, imply that $U_n = -U_{N-1-n}$ and $V_n = -V_{N-1-n}$. For concreteness, we shall take N to be odd and define k = (N − 1)/2. Then the above boundary conditions on $U_n$ and $V_n$ imply that $V_k = 0$. Starting from this value and using Eq. (9) we recursively obtain, for all j ≤ k
$$V_j = \sum_{i=1}^{k-j} \frac{\Gamma(k-i)\,\Gamma(k+i-1)}{(k-i+1)(k+i)}. \qquad (10)$$
For n ≥ 1, the average time $T_n = T_1 + \sum_{j=1}^{n-1} U_j$ becomes

$$T_n = 1 + 2k(2k-1) \sum_{j=1}^{n-1} \frac{V_j}{\Gamma(j)\,\Gamma(2k-j)} \qquad (11)$$
with $V_j$ given by (10). For the maximal time $T_{\max} = T_k$ (with k = (N−1)/2), we obtain

$$T_{\max} = 1 + 2k(2k-1) \sum_{m=2}^{k} S_{k,m} \qquad (12)$$
with

$$S_{k,m} = \sum_{j=1}^{m-1} \frac{\Gamma(k+j-m)\,\Gamma(k+m-j-1)}{\Gamma(j)\,\Gamma(2k-j)\,(k+j-m+1)(k+m-j)}.$$
A detailed asymptotic analysis [9] shows that

$$T_{\max} \to 2 \ln N \quad \text{as} \quad N \to \infty. \qquad (13)$$
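Equations (8)-(12) can be cross-checked with exact rational arithmetic. The sketch below (our own helper names) builds $V_j$ from the recursion (9) starting at $V_k = 0$, extends it by the antisymmetry $V_j = -V_{N-1-j}$, assembles $T_n$ via (8) and (11), and verifies that the result satisfies the original master equation (6) with $\delta T = 3/N$, together with the symmetry $T_n = T_{N-n}$.

```python
from fractions import Fraction
from math import factorial

def binom(a, b):
    return factorial(a) // (factorial(b) * factorial(a - b)) if 0 <= b <= a else 0

def consensus_times(N):
    """Exact T_n for odd N from Eqs. (9)-(11): start from V_k = 0, iterate (9)
    down to V_1, extend by the antisymmetry V_j = -V_{N-1-j}, then sum up U_j.
    Gamma(m) = (m-1)! for integer m; T_1 = 1, T_0 = T_N = 0."""
    k = (N - 1) // 2
    g = lambda m: factorial(m - 1)
    V = {k: Fraction(0)}
    for n in range(k, 1, -1):                         # Eq. (9)
        V[n - 1] = V[n] + Fraction(g(n - 1) * g(N - n - 1), n * (N - n))
    for j in range(k + 1, N - 1):
        V[j] = -V[N - 1 - j]
    U = {j: (N - 1) * (N - 2) * V[j] / (g(j) * g(N - j - 1))
         for j in range(1, N - 1)}                    # Eq. (8)
    T = {0: Fraction(0), 1: Fraction(1), N: Fraction(0)}
    for n in range(2, N):                             # Eq. (11), term by term
        T[n] = T[n - 1] + U[n - 1]
    return T

N = 21
T = consensus_times(N)
dT = Fraction(3, N)
for n in range(1, N):                                 # master equation (6)
    lhs = binom(N, n) * T[n]
    rhs = (3 * binom(N - 3, n - 2) * (T[n + 1] + dT)
           + 3 * binom(N - 3, n - 1) * (T[n - 1] + dT)
           + (binom(N - 3, n - 3) + binom(N - 3, n)) * (T[n] + dT))
    assert lhs == rhs
assert all(T[n] == T[N - n] for n in range(N + 1))    # T_n = T_{N-n}
assert max(T.values()) == T[(N - 1) // 2]             # maximum at n = k
```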
For biased initial conditions, it is convenient to consider the limits n, N → ∞, but with p = n/N kept fixed and distinct from 1/2. Now the leading behavior of $T_n$ is

$$T_n \to 2k(2k-1) \sum_{j=1}^{n-1} \frac{1}{(2k-j-1)(j+1)(2k-j)} \to \ln N \quad \text{as} \quad N \to \infty. \qquad (14)$$
Thus for biased initial conditions, the consensus time also scales as ln N, but with amplitude equal to one. A detailed analysis [9] indicates that the amplitude sharply changes from 2 to 1 within the layer $N^{-1/2} \ll p - 1/2 \ll 1$ as the system is moved away from the symmetric initial condition p = 1/2 (Fig. 1).

One dimension. To implement the dynamics in one dimension, we define a group to be G consecutive spins. If there is no consensus in the selected group, the opinions of the minority agents are changed so that local consensus obtains. We parameterize the opinions by the spin states S = ±1. For the simplest case of group size G = 3, let S, S′, S′′ be the spins in the group that is being updated. Focusing on spin S, this spin flips with rate
$$W(S \to -S) = (1 + S'S'') \left[ 1 - \frac{S\,(S' + S'')}{2} \right]. \qquad (15)$$
The factor 1 + S′S′′ ensures that spin S can flip only when S′ = S′′, while the quantity within the square brackets ensures consensus after spin S flips. Since S^2 = 1, this rate can be simplified to W = 1 + S′S′′ − S(S′ + S′′). In one dimension, each spin $S_j$ belongs to three groups: $(S_{j-2}, S_{j-1}, S_j)$, $(S_{j-1}, S_j, S_{j+1})$, and $(S_j, S_{j+1}, S_{j+2})$. Therefore the total spin-flip rate is given by
$$W(S_j \to -S_j) = 3 + S_{j-2}S_{j-1} + S_{j-1}S_{j+1} + S_{j+1}S_{j+2} - S_j\,(S_{j-2} + 2S_{j-1} + 2S_{j+1} + S_{j+2}),$$
which depends on the state of the two nearest neighbors and the two next-nearest neighbors of $S_j$. The equation of motion for the mean spin is [7]

$$\frac{d}{dt}\langle S_j \rangle = -2\,\langle S_j\, W(S_j \to -S_j) \rangle. \qquad (16)$$
With the flip rate given above, first-order terms $\langle S_j \rangle$ are coupled to third-order terms $\langle S_{j-1} S_j S_{j+1} \rangle$. For a spatially homogeneous system this gives

$$\frac{dm_1}{dt} = 6\,(m_1 - m_3) \qquad (17)$$

where $m_1 = \langle S_j \rangle$ and $m_3 = \langle S_{j-1} S_j S_{j+1} \rangle$. This coupling of different-order correlators makes analytical progress challenging. In contrast, the different-order correlators decouple in the VM and $dm_1/dt = 0$ [2,7]. In the mean-field limit of MR, $m_3 = m_1^3$ and the resulting equation reproduces the consensus time growing as ln N and the fact that the initial majority determines the final state.
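The mean-field predictions are also easy to probe by direct Monte Carlo simulation. The sketch below (our own naming; groups of three distinct agents are drawn uniformly, and each group update advances time by $\delta T = 3/N$) checks that with a clear initial majority the majority state wins, and that consensus takes only a few sweeps, consistent with $T_n \sim \ln N$ rather than a power of N.

```python
import random

def majority_rule_mf(N, n_plus, rng, max_updates=None):
    """Mean-field MR, group size G = 3: pick 3 distinct agents at random and
    set all of them to the group's majority state. Each group update advances
    time by dT = 3/N, so one time unit is one sweep. Returns (winner, time)."""
    spins = [1] * n_plus + [-1] * (N - n_plus)
    n, t = n_plus, 0.0
    if max_updates is None:
        max_updates = 1000 * N
    for _ in range(max_updates):
        if n == 0 or n == N:
            return (1 if n == N else -1), t
        i, j, k = rng.sample(range(N), 3)
        maj = 1 if spins[i] + spins[j] + spins[k] > 0 else -1
        for idx in (i, j, k):
            if spins[idx] != maj:
                spins[idx] = maj
                n += maj
        t += 3.0 / N
    raise RuntimeError("no consensus within cap")

rng = random.Random(1)
runs = [majority_rule_mf(201, 141, rng) for _ in range(20)]
# With p = 0.7 the minority essentially never wins (E_n ~ X^N is tiny) ...
assert all(winner == 1 for winner, _ in runs)
# ... and consensus takes O(ln N) sweeps, far fewer than O(N).
assert all(t < 100 for _, t in runs)
```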
Despite this distinction with the VM in one dimension, the dynamics of the MR can be usefully reformulated in terms of the domain walls between neighboring opposite spins. As long as walls are separated by at least two sites, each undergoes a symmetric random walk, exactly as in the VM. However, when two domain walls occupy neighboring bonds, then the local spin configuration is . . . − − − + − − − . . . and these two walls are doomed to annihilate. Because of this close correspondence with the VM, we expect, and verified numerically, that the density of domain walls $N(t)$ decays as $t^{-1/2}$. More quantitatively, we study the densities of plus and minus domains of length n, $P_n$ and $Q_n$, respectively [10]. The number densities of plus and minus domains are identical, $N(t) = \sum_n P_n = \sum_n Q_n$, while the fractions of positive and negative spins are given by $L_+(t) = \sum_n n P_n$ and $L_-(t) = \sum_n n Q_n$, respectively (with $L_+ + L_- = 1$). The equations of motion for these moments are

$$\frac{dN}{dt} = -3\,(P_1 + Q_1), \qquad \frac{dL_+}{dt} = 3\,(Q_1 - P_1). \qquad (18)$$

Substituting $N \sim t^{-1/2}$ into (18) gives $P_1 \sim Q_1 \sim t^{-3/2}$. Therefore $L_+(t) - L_+(\infty) \sim t^{-1/2}$. Thus even though the fractions of plus and minus spins vary with time, they ultimately saturate to finite values. Correspondingly, the exit probability has a non-trivial dependence on the initial magnetization in the thermodynamic limit (Fig. 2). This should be compared with the VM result E(p) = p that follows from the conserved VM magnetization.
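The domain-wall picture can be verified in a small simulation. In the sketch below (our own naming; a randomly placed block of three consecutive spins on a periodic chain is set to its local majority), the number of walls never increases (each update leaves it unchanged or removes a pair), and the chain eventually reaches consensus.

```python
import random

def walls(spins):
    """Number of domain walls (bonds with opposite spins) on a periodic chain."""
    N = len(spins)
    return sum(spins[i] != spins[(i + 1) % N] for i in range(N))

def mr_chain_step(spins, rng):
    """One G = 3 update: the triple starting at a random site adopts its
    local majority."""
    N = len(spins)
    i = rng.randrange(N)
    idx = (i, (i + 1) % N, (i + 2) % N)
    maj = 1 if sum(spins[j] for j in idx) > 0 else -1
    for j in idx:
        spins[j] = maj

rng = random.Random(5)
N = 40
spins = [rng.choice((-1, 1)) for _ in range(N)]
w = walls(spins)
for _ in range(200 * N * N):
    mr_chain_step(spins, rng)
    w_new = walls(spins)
    assert w_new in (w, w - 2)          # walls annihilate in pairs, never appear
    w = w_new
    if w == 0:
        break
assert w == 0 and abs(sum(spins)) == N  # consensus reached
```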
Higher Dimensions. There are many natural ways to implement MR in dimension d > 1. One possibility is to update groups of three spins at the corners of elementary plaquettes on the two-dimensional triangular lattice. This gives a spin-flip rate of a similar form to that in Eq. (15) and again leads to the equation of motion for the mean magnetization being coupled to a third-order correlator. On hypercubic lattices, a natural definition for a group is a spin plus its 2d nearest-neighbors in the d coordinate directions (von Neumann neighborhood).
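As an illustration of this von Neumann-neighborhood rule, a minimal sketch on a periodic L × L lattice (our own naming; with a strong initial majority the system should quickly reach the corresponding consensus, as the text argues for d > 1):

```python
import random

def majority_rule_2d(L, p_plus, rng, max_updates=None):
    """MR on a periodic L x L square lattice: a group is a random site plus its
    4 nearest neighbours (von Neumann neighbourhood, 2d + 1 = 5 spins), and the
    whole group adopts its majority. Returns +1/-1 at consensus, 0 on timeout."""
    spins = [[1 if rng.random() < p_plus else -1 for _ in range(L)]
             for _ in range(L)]
    m = sum(sum(row) for row in spins)
    N = L * L
    if max_updates is None:
        max_updates = 2000 * N
    for _ in range(max_updates):
        if abs(m) == N:
            return 1 if m > 0 else -1
        x, y = rng.randrange(L), rng.randrange(L)
        group = [(x, y), ((x + 1) % L, y), ((x - 1) % L, y),
                 (x, (y + 1) % L), (x, (y - 1) % L)]
        maj = 1 if sum(spins[i][j] for i, j in group) > 0 else -1
        for i, j in group:
            if spins[i][j] != maj:
                spins[i][j] = maj
                m += 2 * maj
    return 0

rng = random.Random(7)
# A strong initial majority should determine the final state for d > 1.
assert all(majority_rule_2d(10, 0.8, rng) == 1 for _ in range(5))
```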
On these finite-dimensional lattices, MR differs from both the VM and the Ising model with zero-temperature Glauber kinetics (IG) [7]. On the triangular lattice, for example, an elementary plaquette of three plus spins in a sea of minus spins cannot grow in the IG model, but it can grow in the MR model. Additionally, straight interfaces are not stable in the MR model, but they are stable in the IG model. On the other hand there is a (small) surface tension in the MR dynamics that smooths convex corners, just as in the IG model. Thus both the MR and the IG dynamics lead to relatively compact clusters while the VM naturally gives ramified clusters.
We simulated the time T until consensus is reached in one through four dimensions when the system starts from zero magnetization. In one dimension, there is only a single characteristic time scale, the mean time $\bar{T}$, that grows as $N^2$. The distribution $\rho_N(\tau)$ of the scaled time $\tau = T/\bar{T}$ approaches a well-defined limiting distribution $\rho_\infty(\tau)$ for N → ∞. At late stages of the evolution such that only two domain walls remain, we estimate analytically and verify numerically that this distribution has an exponential long-time tail $\rho_\infty(\tau) \sim e^{-\tau}$ as τ → ∞.
In two dimensions (d = 2), we find evidence of at least two characteristic times, corresponding to two different routes to consensus. In most realizations, one opinion quickly becomes dominant and eventually wins. In the remaining realizations, however, the system reaches a configuration in which opinions segregate into two (or very rarely four or more) nearly straight stripes. For the IG model, the interfaces between such stripes would eventually become perfectly flat and this would be the final state [11]. However, such stripes are ultimately unstable in the MR model so that consensus is eventually reached, albeit very slowly.
In three and higher dimensions, there are apparently numerous characteristic times that all scale differently with N. A similar behavior, associated with a vast number of metastable states, was previously observed in the three-dimensional IG model [11]. The broad distribution of consensus times suggests that the most probable time $T_{mp}$ is an appropriate characteristic scale. This quantity indeed has much more convincing power-law behavior than the mean time $\bar{T}$ for d = 2 and 3. We obtain $T_{mp} \sim N^z$ with z = 1.23, 0.72, and 0.56 in d = 2, 3, and 4, respectively, with large fluctuations occurring in the most probable consensus time in 4 dimensions (Fig. 3).
In summary, majority rule leads to rich dynamics. In the mean-field limit, there is ultimate consensus in the state of the initial majority. The mean consensus time T scales as ln N, where N is the number of agents in the system, and the amplitude of T changes rapidly between 1 and 2 as a function of the initial magnetization. One dimension is the only case where the minority can ultimately win. Here, single-opinion domains grow as $t^{1/2}$ so that $T \propto N^2$. Because the magnetization is not conserved, the probability to reach a given final state has a non-trivial initial state dependence. For d > 1, the initial majority determines the final state. The consensus time grows as a power law in N with a dimension-dependent exponent. Large sample-to-sample fluctuations in T arise whose magnitude increases with dimension. Mean-field behavior is not reproduced in four dimensions, so that the upper critical dimension is at least greater than four.
We thank R. Durrett, T. Liggett, C. Newman, and D. Stein for advice and NSF grants DMR9978902 and DMR0227670 for financial support of this research.
FIG. 1: Consensus time $T_n$ versus p = n/N for N = 81, 401, 2001, 10001, and 50001 (gradual steepening). The curves are symmetric about p = 1/2.
FIG. 2: Probability that a one-dimensional system of length 10^4 ends with all spins plus as a function of the initial density p of plus spins. Each data point is based on 2000 realizations. The dashed line is the corresponding VM result.
FIG. 3: Most probable consensus time $T_{mp}$ versus N for zero initial magnetization in one dimension (•), on the triangular lattice (∆), and hypercubic d-dimensional lattice with a (2d+1)-site neighborhood for d = 2 ( ), d = 3 (*), and d = 4 (+). The lines represent the best power-law fits with respective slopes 2.06, 1.23, 1.24, 0.72, and 0.56.
[1] See e.g., R. D. Friedman and M. Friedman, The Tyranny of the Status Quo (Harcourt Brace Company, San Diego, 1984); W. Weidlich, Sociodynamics: A Systematic Approach to Mathematical Modelling in the Social Sciences (Taylor & Francis, London, 2000).
[2] T. M. Liggett, Interacting Particle Systems (Springer-Verlag, New York, 1985).
[3] P. L. Krapivsky, Phys. Rev. A 45, 1067 (1992).
[4] K. Sznajd-Weron and J. Sznajd, Int. J. Mod. Phys. C 11, 1157 (2000); D. Stauffer, J. Artif. Soc. Soc. Simul. 5, no. 1 (2002).
[5] S. Galam, Physica (Amsterdam) 274, 132 (1999); Eur. Phys. J. B 24, 403 (2002); cond-mat/0211571.
[6] C. M. Newman and D. L. Stein, Phys. Rev. E 60, 5244 (1999).
[7] R. J. Glauber, J. Math. Phys. 4, 294 (1963).
[8] S. Redner, A Guide to First-Passage Processes (Cambridge University Press, 2001).
[9] P. L. Krapivsky and S. Redner, in preparation.
[10] P. L. Krapivsky and E. Ben-Naim, Phys. Rev. E 56, 3788 (1997).
[11] V. Spirin, P. L. Krapivsky, and S. Redner, Phys. Rev. E 65, 016119 (2002).
| [] |
[
"Demographic noise and resilience in a semi-arid ecosystem model",
"Demographic noise and resilience in a semi-arid ecosystem model"
] | [
"John Realpe-Gomez \nTheoretical Physics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUnited Kingdom\n\nGrupo de Ciencia Transdisciplinar\nInformación\n",
"Mara Baudena \nDepartment of Environmental Sciences\nCopernicus Institute\nUtrecht University\nP.O. Box 80155 TCUtrechtThe Netherlands\n\nGrupo Interdisciplinar de Sistemas Complejos (GISC)\nDepartamento de Matemáticas\nUniversidad Carlos III de Madrid\nAvenida de la Universidad 3028911Leganés, MadridSpain\n",
"Tobias Galla \nTheoretical Physics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUnited Kingdom\n",
"Alan J Mckane \nTheoretical Physics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUnited Kingdom\n",
"Max Rietkerk \nDepartment of Environmental Sciences\nCopernicus Institute\nUtrecht University\nP.O. Box 80155 TCUtrechtThe Netherlands\n",
"\nInstituto de Matemáticas Aplicadas\nComplejidad\n",
"\nUniversidad de Cartagena\nBolívarColombia\n"
] | [
"Theoretical Physics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUnited Kingdom",
"Grupo de Ciencia Transdisciplinar\nInformación",
"Department of Environmental Sciences\nCopernicus Institute\nUtrecht University\nP.O. Box 80155 TCUtrechtThe Netherlands",
"Grupo Interdisciplinar de Sistemas Complejos (GISC)\nDepartamento de Matemáticas\nUniversidad Carlos III de Madrid\nAvenida de la Universidad 3028911Leganés, MadridSpain",
Demographic noise and resilience in a semi-arid ecosystem model

John Realpe-Gomez (1,2), Mara Baudena (3,4), Tobias Galla (1), Alan J. McKane (1), Max Rietkerk (3)

(1) Theoretical Physics, School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, United Kingdom
(2) Grupo de Ciencia Transdisciplinar, Información y Complejidad, Instituto de Matemáticas Aplicadas, Universidad de Cartagena, Bolívar, Colombia
(3) Department of Environmental Sciences, Copernicus Institute, Utrecht University, P.O. Box 80155 TC Utrecht, The Netherlands
(4) Grupo Interdisciplinar de Sistemas Complejos (GISC), Departamento de Matemáticas, Universidad Carlos III de Madrid, Avenida de la Universidad 30, 28911 Leganés, Madrid, Spain

18 Apr 2013. arXiv:1209.2588v2 [physics.bio-ph] (https://arxiv.org/pdf/1209.2588v2.pdf). DOI: 10.1016/j.ecocom.2013.04.002

Keywords: Semi-arid ecosystems; Resilience; Vegetation patterns; Extinction; Recovery; Stochastic processes

Abstract

The scarcity of water characterising drylands forces vegetation to adopt appropriate survival strategies. Some of these generate water-vegetation feedback mechanisms that can lead to spatial self-organisation of vegetation, as has been shown with models representing plants by a density of biomass, varying continuously in time and space. However, although plants are usually quite plastic, they also display discrete qualities and stochastic behaviour. These features may give rise to demographic noise, which in certain cases can influence the qualitative dynamics of ecosystem models. In the present work we explore the effects of demographic noise on the resilience of a model semi-arid ecosystem. We introduce a spatial stochastic eco-hydrological hybrid model in which plants are modelled as discrete entities subject to stochastic dynamical rules, while the dynamics of surface and soil water are described by continuous variables. The model has a deterministic approximation very similar to previous continuous models of arid and semi-arid ecosystems. By means of numerical simulations we show that demographic noise can have important effects on the extinction and recovery dynamics of the system. In particular, we find that the stochastic model escapes extinction under a wide range of conditions for which the corresponding deterministic approximation predicts absorption into desert states.
Introduction
In arid and semi-arid ecosystems water constitutes the main limiting resource for vegetation. The harsh environmental conditions, related to frequent droughts, limit plant recruitment and survival and often prevent vegetation from fully covering the ground. In such areas vegetation largely occurs in patches of high density surrounded by bare soil, forming spatial vegetation patterns (e.g. Aguiar and Sala, 1999; Barbier et al., 2006; Deblauwe et al., 2008). These spatial structures emerge from the system's dynamics as a consequence of scale-dependent water-vegetation feedback mechanisms, even in the absence of any underlying spatial heterogeneity (e.g. Klausmeier, 1999; von Hardenberg et al., 2001; Rietkerk et al., 2002; Meron, 2012). For example, in arid areas the vegetation increases the infiltration capacity of the soil as compared to bare ground, because of root penetration (Rietkerk and van de Koppel, 1997; Walker et al., 1981) and because it prevents the formation of biogenic crusts that would form in bare soil (Casenave and Valentin, 1992). While vegetation patches compete for water, vegetation enhances its own growth locally within the patches. This so-called infiltration feedback is known to be one of the most important scale-dependent water-vegetation feedback mechanisms, leading to self-organization and pattern formation phenomena (e.g. Rietkerk et al., 2002).
Scale-dependent resource concentration mechanisms of this type are also connected to the possible occurrence of catastrophic shifts in ecosystems. For example, a vegetated patchy state may turn into a degraded state with mostly bare soil if rainfall decreases below a certain threshold. A subsequent increase in rainfall above the threshold may not be enough to recover the previous vegetation state (e.g. von Hardenberg et al., 2001;Rietkerk et al., 2004;Rietkerk and van de Koppel, 2008;Baudena and Rietkerk, 2012). This is an example of how the resilience of such ecosystems is strongly interrelated with spatial structure. By 'resilience' we mean here the ability of an ecosystem to remain in a given domain of attraction and to return quickly to the same state after disturbances (Rietkerk and van de Koppel, 2008).
In models used to represent dryland ecosystems both water and plants are often represented as density fields, varying continuously in time and space, and their dynamics is typically described by deterministic differential equations (e.g. Gilad et al., 2004;Rietkerk et al., 2002). In other models the plant dynamics is modelled using stochastic differential equations (Manor and Shnerb, 2008), or instead the vegetation is represented by discrete states in a drastically simplified way (e.g. Kéfi et al., 2007a,b). Continuous models tend to be simple and provide powerful insight through the use of analytical techniques. Moreover, in the case of semi-arid ecosystems, they very successfully point out the importance of scaledependent water-vegetation feedbacks in the self-organised pattern formation processes (e.g. von Hardenberg et al., 2001;Rietkerk et al., 2002).
The spatial self-organization of dryland ecosystems has not yet been explored with models representing single plants individually, even though such individual-based models (IBMs) are now increasingly common in other areas of ecology and biology. A reason for this may be that plants are not always conceived as discrete individuals since, unlike animals, they are extremely plastic and can respond quite continuously to environmental changes, for instance by partially dying and recovering later on (Crawley, 1990). However, plants also have discrete features: in their seed stage, they behave as discrete entities that can fall and perhaps germinate in some random position. The birth and establishment of a single plant is also subject to a collection of random events. Furthermore, they can live alone in patches composed essentially of a single plant. Finally, the death of a whole plant is also a possible (unpredictable) event. These features can be readily accommodated in a stochastic individual-based modelling approach. To be more precise, plants in drylands often have a modular structure and are composed of multiple hydraulically independent stems (Schenk et al., 2008). Except when the plant is in a seed stage, such stems could be seen as the relevant individual entities rather than the whole plant. In plant ecology, forest models are a successful example of individual-based modelling (Botkin et al., 1972; Shugart, 1984; Pacala et al., 1993, 1996; Moorcroft et al., 2001; Perry and Enright, 2006). The approach has been extended to grasslands as well (Jørgensen, 2011; Coffin and Lauenroth, 1990; Peters, 2002; Rastetter et al., 2003), and it has also been used as a method to parametrise mean-field differential equations from fine-scale data (Moorcroft et al., 2001). In principle, it is possible to capture the most salient features of an IBM by a suitable continuous model (Black and McKane, 2012).
How to do this in general is an active area of research in the modelling of complex systems (San Miguel et al., 2012). In the case of forest models, good progress has been made in achieving this (Strigul et al., 2008). Formally speaking, an IBM can coincide with a deterministic continuous model only in the limit of an infinite number of individuals. It might be imagined that a system with a relatively large number of individuals would simply lead to a small amount of noise around the deterministic solution. In some situations this is the case; in others, however, the stochastic effects in an IBM can produce qualitatively different results (McKane and Newman, 2005; Butler and Goldenfeld, 2009; Biancalani et al., 2010; Butler and Goldenfeld, 2011; Biancalani et al., 2011; Rogers et al., 2012; Ramaswamy et al., 2012). Still, these effects become negligible with a 'sufficiently' large number of individuals, but how large is 'sufficiently' large? It can actually be unrealistically large, and the stochastic effects may turn out to be necessary to account for the observed behaviour of real systems. It is not clear a priori how large the numbers need to be, and the system of interest needs to be carefully analysed before reaching a conclusion.
Following the previous discussion it is understandable that a deterministic continuous model can be a good approximation to an individual-based model of forests, where the vegetation is rather dense. In drylands, on the other hand, vegetation is composed of a relatively small number of plants coexisting with regions of empty land in which the above-ground biomass density is essentially zero. Under these conditions the intrinsic stochastic behaviour of individual plants may turn out to be relevant for understanding the behaviour of the system (Black and McKane, 2012). Even more so if the drylands are close to extinction, where the number of individuals is small.
In arid areas, vegetation is strictly dependent on water availability, whose dynamics is effectively captured by a deterministic continuous approach. A natural way to represent the dynamics of drylands is in terms of so-called 'hybrid models', in which the dynamics of water is represented by differential equations, and in which the plant dynamics are described by an IBM (Vincenot et al., 2010). Thus, these models combine both continuous and discrete variables.
In our work we use such a hybrid model to study the resilience of semi-arid ecosystems against desertification. The model includes the water-vegetation infiltration feedback, and thus we expect vegetation patterns to emerge as previously observed in other types of models of semi-arid ecosystems (e.g. Rietkerk et al., 2002). The use of individual-based models can be computationally demanding if too many details are included, and this can obstruct the identification of the underlying mechanisms relevant for the behaviour of the system. An example is the continuous range of values characterising each individual, e.g. mass and heterogeneity: every plant having a different mass. Our approach constitutes a compromise between realism, tractability, and insight, concentrating on what we think could be the most relevant vegetation features. We extend an IBM, recently investigated by Realpe-Gomez et al. (2012), to include spatial interactions. In particular, we neglect the growth phase and assume all plants have the same mass. We expect this to be a reasonable approximation if the time it takes for a plant to establish is much smaller than the time scale in which the spatial distribution of biomass changes appreciably, and the growth phase of each individual plant is not very relevant for the system dynamics. We study the resilience and stability of the model dryland ecosystem when the individual nature of plants and its intrinsic stochasticity are considered. Our aim is to investigate the role of demographic noise and to understand whether noise enhances recovery or whether it may drive the system to extinction. To do so, we will compare the outcomes of a stochastic IBM and a deterministic continuous model.
Model definitions
Stochastic model
General considerations
In this section we introduce our stochastic hybrid model for semi-arid ecosystems. The model describes the dynamics of vegetation, soil and surface water in a given area to which the model is applied. The source of surface water is rainfall. Surface water then infiltrates the soil where plants can take it up. Plants are considered as discrete entities following a stochastic dynamics (Realpe-Gomez et al., 2012), while soil and surface water are modelled by continuous variables whose dynamics would be deterministic (Pueyo et al., 2008;Rietkerk et al., 2002;HilleRisLambers et al., 2001) were they not influenced by plants. Figure 1 summarises the dynamics of the model. For computational simplicity we will work mostly in one spatial dimension. Occasionally we will also consider the case of two dimensions; this is to illustrate that the results of the one-dimensional model are still valid in this more realistic case. We will define the model in one spatial dimension only, the modifications required to extend it to two dimensions are straightforward. In the onedimensional model we assume that the land is partitioned into L uniform square cells, each labelled by an index i. Thus the model describes a linear array of square cells. The length of the side of a cell is denoted by h. With every cell i we associate three variables corresponding to the discrete (integer) number of plants, n i , in the cell and the continuous quantities of soil and surface water, ω i and σ i , respectively, on that piece of land. We will assume that the density of biomass per unit area in cell i is given by ρ i = µ n i , where µ represents the mass of one plant individual divided by the area of the cell, i.e. µ = m P /h 2 . We have made here the simplifying assumption that all plants have identical mass m P . Thus, µ characterises the contribution of an individual plant to the biomass density. We consider the cells as homogeneous, i.e. 
we do not take into account any structure within the cell, such as the spatial distribution of plants within it. We also neglect properties of individual plants, such as different plant sizes or stages of growth. The detailed dynamics of the various components of the model are explained below, where we also introduce the relevant model parameters. The units and numerical values of all the parameters in the model are summarised in Table 1.
Water dynamics
For all cells, i, the dynamics of the depths of soil and surface water, ω i and σ i (measured in mm) are described by differential equations that are deterministic when the biomass density ρ i (measured in g m −2 ) is kept constant; namely
dω_i/dt = F_ω(ρ_i, ω_i, σ_i) + D_ω Δω_i,    (1a)
dσ_i/dt = F_σ(ρ_i, ω_i, σ_i) + D_σ Δσ_i.    (1b)
In each of these equations the first term on the right-hand side describes the water dynamics within a cell. These are captured by the functions F_ω and F_σ, to be specified below (see Eqs. (3) and (4)). The second term in each of the two equations describes spatial transport processes, which we model here as standard diffusion using the discrete diffusion operator
Δω_i = (1/h²) Σ_{j∈N(i)} (ω_j − ω_i),    (2a)
Δσ_i = (1/h²) Σ_{j∈N(i)} (σ_j − σ_i).    (2b)
Here the sum is over all elements j ∈ N(i), i.e. the set of cells j which are nearest neighbours of cell i. The coefficients D_ω and D_σ of the diffusion operator in Eqs. (1) are the diffusion constants corresponding to the transport of soil and surface water, respectively. This is usually a good approximation (see e.g. Rietkerk et al., 2002) to the more accurate description derived from shallow-water theory (Gilad et al., 2004; Meron, 2011). Indeed, recent work (van der Stelt et al., 2013) suggests that there is no qualitative difference between these two descriptions.
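On a one-dimensional lattice with periodic boundaries, the discrete diffusion operator of Eq. (2) can be sketched in a few lines. This is an illustrative implementation (the function name `laplacian` is ours, not the paper's):

```python
import numpy as np

def laplacian(field, h):
    """Discrete diffusion operator of Eq. (2) on a 1-D periodic lattice:
    for each cell i, sum the differences to the two nearest neighbours
    and divide by the cell area h^2."""
    return (np.roll(field, 1) + np.roll(field, -1) - 2.0 * field) / h**2

# A uniform field has zero Laplacian; an isolated peak loses to its
# neighbours exactly what they gain, so the operator conserves the total
# amount of water on the lattice.
w = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
lap = laplacian(w, h=1.0)
```

Because the neighbour sum is symmetric, the operator only redistributes water; it neither creates nor destroys it.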
Following HilleRisLambers et al. (2001) we define the functions F ω and F σ , describing the on-site water dynamics, as
F_ω(ρ, ω, σ) = α(ρ) σ − β(ω) ρ − r ω,    (3a)
F_σ(ρ, ω, σ) = R − α(ρ) σ.               (3b)
Here,
α(ρ) = a (ρ + k_2 W_0)/(ρ + k_2),    β(ω) = b ω/(ω + k_1),    (4)
describe the infiltration rate of surface water into the soil, and the soil water uptake due to plants, respectively. These rates saturate for large values of ρ and ω, respectively. The model parameters k 1 and k 2 determine the exact shape of the saturation curves and are known as 'half-saturation' constants (HilleRisLambers et al., 2001). The dependence of α on the biomass density, ρ, introduces the infiltration-water-vegetation feedback, known to generate spatial vegetation patterns in existing deterministic models (e.g. Rietkerk et al., 2002). The parameter r characterises the loss of soil water due to evaporation (Eq. (3a)), while the parameter R in Eq. (3b) is the rainfall rate. The dynamics of water are illustrated in Fig. 1, see the left-most pair of cells (A) and the pair of cells in the middle (B).
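A minimal sketch of the on-site water dynamics of Eqs. (3)-(4). The parameter values below are placeholders chosen for illustration only (Table 1 is not reproduced here), and the function names are ours:

```python
# Illustrative parameter values (assumptions, not the paper's Table 1).
p = {'a': 0.2, 'W0': 0.2, 'k1': 5.0, 'k2': 5.0, 'b': 0.05, 'r': 0.2, 'R': 1.0}

def alpha(rho, p):
    # Infiltration rate of surface water into the soil, Eq. (4):
    # increases with biomass density rho and saturates at a.
    return p['a'] * (rho + p['k2'] * p['W0']) / (rho + p['k2'])

def beta(omega, p):
    # Soil-water uptake rate per unit biomass, Eq. (4): saturates at b.
    return p['b'] * omega / (omega + p['k1'])

def F_omega(rho, omega, sigma, p):
    # Soil-water balance, Eq. (3a): infiltration - uptake - evaporation.
    return alpha(rho, p) * sigma - beta(omega, p) * rho - p['r'] * omega

def F_sigma(rho, omega, sigma, p):
    # Surface-water balance, Eq. (3b): rainfall - infiltration.
    return p['R'] - alpha(rho, p) * sigma
```

For ρ = 0 the infiltration rate reduces to a W_0 < a, which is the crust-covered bare-soil value; this is exactly the infiltration feedback discussed above.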
Plant dynamics
We model individual plants with an IBM. Individual plants I i in cell i are represented as discrete entities whose dynamics are given by the stochastic transition rules
I_i --Γ_b(ω_i)--> 2 I_i,                    (5a)
I_i --Γ_d--> E_i,                           (5b)
I_i --Γ_s(ω_j)--> I_i + I_j,  j ∈ N(i).     (5c)
Here the symbols over the arrows refer to the probabilities per unit time, or rates, of the corresponding transitions. These rates may depend on the amount of available soil water ω (see Eqs. (6) below). The first transition rule, Eq. (5a), corresponds to an individual plant I i in cell i giving birth to another plant in the same cell i at a rate Γ b (ω i ). The second rule, Eq. (5b), refers to the death of a plant in cell i. The quantity E i on the right-hand side of this reaction stands for a vacancy in cell i, so that this transition rule indicates that an individual plant I i in a cell i dies at a rate Γ d and leaves behind an empty place E i in cell i. The third rule, Eq. (5c), describes a process in which an individual plant in cell i gives birth to another plant in a neighbouring cell j at a rate Γ s (ω j ). This captures the dispersal of the seeds of an individual plant to neighbouring cells and includes the probability of germination of the seedling. Notice that the rate of such dispersion processes depends on the amount of soil water, ω j , in the cell in which the new plant is born (see Eq. (6) below).
We define the transition rates in analogy with previous deterministic models, e.g. HilleRisLambers et al. (2001), Rietkerk et al. (2002), and Pueyo et al. (2008). Specifically, we use
Γ_b(ω_i) = c β(ω_i),     (6a)
Γ_d = d,                 (6b)
Γ_s(ω_j) = K c β(ω_j).   (6c)
The plant birth rate Γ_b is proportional to the plant water uptake β (Eq. (4)), and c is a constant that characterises the conversion rate at which water uptake is turned into newly established plants (Eq. (6a)). The death rate Γ_d is assumed to be a constant d (Eq. (6b)). The rate Γ_s with which new plants are established in neighbouring cells is proportional to the water uptake β, to the parameter c and to the constant K, representing the probability that a seed falls into a neighbouring cell and manages to survive (Eq. (6c)). As described above, the water uptake rate β, and thus Γ_s, depend on the soil water concentration in the colonised cell (ω_j). This reflects the fact that the probability of germination of the seedling in the neighbouring cells is a function of the local availability of soil water. In our simple model we assume that plants can colonise only nearest neighbour cells (see e.g. Kéfi et al., 2007b). This can be generalised to take into account longer-range dispersal kernels, which may be appropriate when the characteristic scale of seed dispersal is appreciably greater than h, the lateral extension of a cell (Pueyo et al., 2008). It is worth pointing out that the transition rates do not depend on the whole history of the system, but only on its current state. This type of stochastic hybrid model is usually referred to as a piecewise deterministic Markov process in the literature (Davis, 1984; Faggionato et al., 2009, 2010; Realpe-Gomez et al., 2012).
Coarse-graining: cell dynamics
Given that we neglect the spatial structure within a cell, the relevant transition rules involve the state of cells rather than that of particular individual plants, so our model is of an effective nature; it provides a coarse-grained description of the underlying dynamics. According to the rules for individual plants, Eq. (5), the only possible transitions for a cell i with n_i plants are n_i → n_i ± 1, i.e. the birth or death of a plant in cell i. Hence, the possible transitions that can occur in cell i are fully specified by the expressions (see e.g. Black and McKane (2012) for similar models)
T_b(n_i + 1 | n_i; ω_i) = n_i Γ_b(ω_i),                              (7a)
T_d(n_i − 1 | n_i) = n_i Γ_d,                                        (7b)
T_s^{i→j}(n_i, n_j + 1 | n_i, n_j; ω_j) = n_i Γ_s(ω_j),  j ∈ N(i),   (7c)
for all i. The notation T b (n i +1|n i ; ω i ), for instance, stands for the probability per unit time (or rate) that the number of plants n i in a cell i increases by one, given that the amount of soil water in i is ω i at the time the transition takes place. Since each plant within a cell i has the potential to give birth to a new plant in i with a rate Γ b (ω i ) independently of all other plants, the total rate for a transition n i → n i + 1 in cell i is given by n i Γ b (ω i ). Similar considerations apply for the other two processes described in Eq. (7) above. Notice that the arguments before the vertical bar in the above rate functions, Eqs. (7), refer to the state of the system after the transition, while those to the right of the vertical bar refer to the state before the transition, following the standard notation for conditional probabilities. This stochastic dynamics of cells is illustrated in Fig. 1, see the pair of cells in the middle (B) and the three right-most pairs of cells (C, D, E). The cells at the top (C) correspond to an on-site birth event, those in the middle (D) correspond to a death event and those at the bottom (E) correspond to a birth event induced by a neighbouring cell.
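The cell-level rates of Eq. (7) can be assembled for a whole one-dimensional periodic lattice at once. Again, the parameter values and function name are illustrative assumptions:

```python
import numpy as np

p = {'b': 0.05, 'k1': 5.0, 'c': 0.4, 'd': 0.2, 'K': 0.1}  # illustrative values

def cell_rates(n, omega, p):
    """Total transition rates of Eq. (7) for every cell of a 1-D periodic
    lattice: n[i] plants and soil water omega[i] in cell i."""
    beta_w = p['b'] * omega / (omega + p['k1'])   # uptake rate beta, Eq. (4)
    birth = n * p['c'] * beta_w                   # T_b, Eq. (7a)
    death = n * p['d']                            # T_d, Eq. (7b)
    # T_s, Eq. (7c): a plant in i colonises neighbour j at rate K*c*beta(omega_j).
    seed_left = n * p['K'] * p['c'] * np.roll(beta_w, 1)    # j = i - 1
    seed_right = n * p['K'] * p['c'] * np.roll(beta_w, -1)  # j = i + 1
    return birth, death, seed_left, seed_right

birth, death, seed_left, seed_right = cell_rates(
    np.array([2, 0, 1]), np.full(3, 5.0), p)
```

Note that every rate is proportional to n_i, so an empty cell has all rates equal to zero: the bare state is absorbing unless a neighbour colonises it.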
Deterministic approximation

General considerations
The dynamics of plants in our model are stochastic, so we cannot predict the exact state of the system at a future time. At best, it might be possible to obtain the probability of finding the system in a given state, specified by the number of plants, n i , and the soil and surface water depths ω i and σ i , in all cells i. From such probability distributions it would then be possible to compute the expected average behaviour. Another way to study stochastic systems is to perform a certain number S of independent stochastic simulations. In doing so we could, for each cell i and for a given time t, estimate the average density of biomass or the average depths of soil and surface water. We will denote these quantities by P i , W i and O i , respectively.
The transition rules governing the dynamics of plants (Eqs. (6)- (7)) along with the dynamical laws for the soil and surface water (Eqs. (1)-(4)) induce a corresponding deterministic equation for the dynamics of these averages in the limit of a very large number of simulations, formally S → ∞. At the same time this deterministic approximation also describes the behaviour of the stochastic model for very small µ. The equivalence is formally exact in the limit µ → 0 (cf. Realpe-Gomez et al., 2012). In this sense µ controls the strength of the noise in the stochastic model: if µ → 0 the noise is irrelevant and the dynamics becomes deterministic. Since ρ i = µn i the number of plants tends to be very large (n i → ∞) in this limit in regions with a non-zero density of biomass, ρ i > 0. We also refer to Realpe-Gomez et al. (2012) for more details of a nonspatial version of the model; see also Black and McKane (2012) for a general review of individual-based models and the connection with deterministic limiting descriptions. The deterministic model governing the average behaviour of the stochastic model turns out to be similar to models based on partial differential equations, well known in the literature (HilleRisLambers et al., 2001;Rietkerk et al., 2002;Pueyo et al., 2008;Dakos et al., 2011). The average biomass density, P i , and the average depths of soil and surface water, W i and O i , respectively, follow the equations
dP_i/dt = F_ρ(P_i, W_i, O_i) + D_ρ(W_i) ΔP_i,    (8a)
dW_i/dt = F_ω(P_i, W_i, O_i) + D_ω ΔW_i,         (8b)
dO_i/dt = F_σ(P_i, W_i, O_i) + D_σ ΔO_i.         (8c)
Here F_ω and F_σ are given by Eqs. (3)-(4), and

F_ρ(P, W, O) = c′ β(W) P − d P,    (9)

where we have defined a rescaled conversion rate c′ = (1 + z K) c, with z the number of nearest neighbours of a cell, e.g. z = 2 in one dimension and z = 4 in two dimensions. The diffusion operator is given by Eq. (2) as before, and so Eq. (8a) above is effectively a diffusion equation with a diffusion coefficient

D_ρ(W_i) = h² K c β(W_i).
The biomass diffusion coefficient thus depends on the average amount of soil water in the cell under consideration. The reason for this water dependence is that the 'diffusion' of biomass occurs when a plant in a given cell j produces a newly established plant in a neighbouring cell i ∈ N(j).
The assumption underlying our model is that the establishment of a new plant depends on the amount of water available in the cell in which the new plant is established. Indeed, a water-dependent diffusion coefficient is also implicit in the work by Pueyo et al. (2008). In the special case in which the range of the dispersal kernel is so short that only dispersal to neighbouring cells occurs, Eqs. (8) above coincide with the equations obtained in Pueyo et al. (2008). On the other hand, such a water dependence of the biomass 'diffusion' coefficient constitutes the only difference between the deterministic equations (8), and a discretisation of the set of equations introduced by HilleRisLambers et al. (2001).
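One Euler-forward update of the deterministic approximation, Eq. (8), including the water-dependent biomass diffusion coefficient D_ρ(W_i), can be sketched as follows. All numerical parameter values here are illustrative assumptions, not those of Table 1:

```python
import numpy as np

# Illustrative parameters (assumptions; the paper's Table 1 is not reproduced).
p = {'a': 0.2, 'W0': 0.2, 'k1': 5.0, 'k2': 5.0, 'b': 0.05, 'r': 0.2,
     'R': 1.0, 'c': 10.0, 'd': 0.25, 'K': 0.1, 'z': 2, 'h': 2.0,
     'Dw': 0.1, 'Ds': 100.0}

def euler_step(P, W, O, p, dt):
    """One Euler-forward step of Eqs. (8) on a 1-D periodic lattice."""
    lap = lambda f: (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / p['h']**2
    alpha = p['a'] * (P + p['k2'] * p['W0']) / (P + p['k2'])   # infiltration, Eq. (4)
    beta = p['b'] * W / (W + p['k1'])                          # uptake, Eq. (4)
    cbar = (1.0 + p['z'] * p['K']) * p['c']                    # rescaled conversion rate, Eq. (9)
    D_rho = p['h']**2 * p['K'] * p['c'] * beta                 # water-dependent biomass diffusion
    dP = (cbar * beta - p['d']) * P + D_rho * lap(P)           # Eq. (8a)
    dW = alpha * O - beta * P - p['r'] * W + p['Dw'] * lap(W)  # Eq. (8b)
    dO = p['R'] - alpha * O + p['Ds'] * lap(O)                 # Eq. (8c)
    return P + dt * dP, W + dt * dW, O + dt * dO

# The homogeneous desert state of Eq. (11) is left invariant by the update.
L = 8
Pd = np.zeros(L)
Wd = np.full(L, p['R'] / p['r'])
Od = np.full(L, p['R'] / (p['a'] * p['W0']))
P1, W1, O1 = euler_step(Pd, Wd, Od, p, dt=0.01)
```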
Homogeneous fixed points
The deterministic equations introduced above, Eq. (8), have homogeneous fixed points, i.e. equilibrium states,
such that P_i = P, W_i = W and O_i = O for all i, and

dP/dt = 0,   dW/dt = 0,   dO/dt = 0.    (10)
Such fixed points may or may not be stable under small spatially homogeneous perturbations. Depending on the choice of parameters there can be either one out of two possible stable fixed points (Pueyo et al., 2008) or none (Realpe-Gomez et al., 2012). The two types of stable fixed points correspond to a bare soil state (also referred to as a 'desert state' in the following)
P_d = 0,   W_d = R/r,   O_d = R/(a W_0),    (11)
or to a state with non-zero vegetation
P_v = c′ R/d − r c′ k_1/(c′ b − d),   W_v = d k_1/(c′ b − d),   O_v = R/α(P_v).    (12)

In the region where there are no stable fixed points, a numerical integration of the corresponding non-spatial equations shows the existence of homogeneous limit cycles, as reported recently by Realpe-Gomez et al. (2012). These oscillations become unstable once spatial structure is taken into account, giving way to vegetation patterns as discussed below. In this work we will not focus on this parameter regime, since we are interested in a regime where the deterministic dynamics can lead to extinction (see below).
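The two homogeneous fixed points of Eqs. (11)-(12) can be computed and verified directly. The parameter set below is an illustrative choice (not Table 1), selected so that c′b > d and the vegetated state exists:

```python
def fixed_points(p):
    """Homogeneous fixed points of the deterministic model: the desert
    state of Eq. (11) and the vegetated state of Eq. (12)."""
    cbar = (1.0 + p['z'] * p['K']) * p['c']         # rescaled conversion rate
    desert = (0.0, p['R'] / p['r'], p['R'] / (p['a'] * p['W0']))   # Eq. (11)
    Wv = p['d'] * p['k1'] / (cbar * p['b'] - p['d'])               # Eq. (12)
    Pv = cbar * p['R'] / p['d'] - p['r'] * cbar * p['k1'] / (cbar * p['b'] - p['d'])
    alpha_v = p['a'] * (Pv + p['k2'] * p['W0']) / (Pv + p['k2'])
    Ov = p['R'] / alpha_v
    return desert, (Pv, Wv, Ov)

# Illustrative parameter values (assumptions, not the paper's Table 1).
p = {'a': 0.2, 'W0': 0.2, 'k1': 5.0, 'k2': 5.0, 'b': 0.05, 'r': 0.2,
     'R': 1.0, 'c': 10.0, 'd': 0.25, 'K': 0.1, 'z': 2}
(Pd, Wd, Od), (Pv, Wv, Ov) = fixed_points(p)
```

A quick consistency check is that F_ρ, F_ω and F_σ of Eqs. (3) and (9) all vanish at the vegetated point, since at W_v the uptake satisfies c′β(W_v) = d.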
Spatial patterns and resilience
Besides the homogeneous states given by Eqs. (11)-(12) above, the deterministic system defined by Eqs. (8) can also display stationary spatial patterns (Rietkerk et al., 2002). This is not surprising, given the similarity of Eqs.
(8) to those investigated by HilleRisLambers et al. (2001) and Pueyo et al. (2008), which are known to display spatially patterned solutions. For certain values of the parameters, these patterns can 'emerge' out of the homogeneous vegetated state given by Eq. (12) when a small heterogeneous perturbation is applied (Rietkerk et al., 2002). This results in Turing patterns (Turing, 1952;Cross and Greenside, 2009) and can be studied mathematically within a linear approximation of Eqs. (8) around the fixed point with homogeneous vegetation. However, numerical integration of the full set of nonlinear equations has revealed that spatial patterns exist in a region of parameters much wider than that explained by such a linear analysis (Rietkerk et al., 2002). These non-trivial solutions are inherently due to nonlinearities and so they cannot be captured in a linear approximation. In a parameter range relevant for dryland ecology (i.e. at low rainfall values) a relatively large region of bistability between spatial patterns and the desert state exists. In the region we investigate here the deterministic system can converge either to the desert state given by Eq. (11) or to a stable vegetation pattern, depending on the initial conditions from which the dynamics are started. The basin of attraction of the state with vegetation patterns, i.e. the set of all initial conditions which lead to such a state, can be used to characterise the resilience of the deterministic model: the larger the size of the basin of attraction of a 'desirable' state the more resilient is the system (Holling, 1973;Grimm and Calabrese, 2011;Martin and Calabrese, 2011). Performing such a study quantitatively is not an easy task given that the space of all possible initial conditions is huge. We will therefore restrict our work to an analysis involving only one type of realistic initial conditions. This will be discussed in more detail below.
Analysis
All results discussed in this paper were obtained from spatially explicit numerical simulations, mostly in one dimension, with a few two-dimensional examples for illustrative purposes. We used periodic boundary conditions throughout. In order to obtain sample simulations for the stochastic model we used an adaptation of an algorithm due to Gillespie (1976) (see also Gillespie, 1977); details are discussed in Realpe-Gomez et al. (2012). To obtain solutions of the deterministic approximation, Eq. (8), we integrated these equations numerically using the Euler-forward method with a time step of ∆t = 0.01 days.
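As a rough illustration of the hybrid dynamics, the sketch below combines an Euler step for the water equations with plant transitions fired with probability rate × dt. This fixed-step scheme is a simplified stand-in for the adapted Gillespie algorithm actually used (it allows at most one event per cell and channel per step, and requires dt small enough that all probabilities stay well below one); parameter values are again illustrative:

```python
import numpy as np

# Illustrative parameters (assumptions, not the Table 1 values).
p = {'a': 0.2, 'W0': 0.2, 'k1': 5.0, 'k2': 5.0, 'b': 0.05, 'r': 0.2,
     'R': 1.0, 'c': 10.0, 'd': 0.25, 'K': 0.1, 'h': 2.0, 'mu': 1.0,
     'Dw': 0.1, 'Ds': 100.0}

def hybrid_step(n, omega, sigma, p, dt, rng):
    """One fixed-step update of the hybrid model: Euler integration of the
    water equations (1)-(4), plus stochastic plant transitions of Eq. (7)
    fired with probability rate*dt."""
    lap = lambda f: (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / p['h']**2
    rho = p['mu'] * n
    alpha = p['a'] * (rho + p['k2'] * p['W0']) / (rho + p['k2'])
    beta = p['b'] * omega / (omega + p['k1'])
    # Deterministic water update, Eq. (1).
    omega_new = omega + dt * (alpha * sigma - beta * rho - p['r'] * omega
                              + p['Dw'] * lap(omega))
    sigma_new = sigma + dt * (p['R'] - alpha * sigma + p['Ds'] * lap(sigma))
    # Stochastic plant update, Eq. (7): births, deaths, and colonisation of
    # cell i by its neighbours at total rate (n_{i-1}+n_{i+1}) K c beta(omega_i).
    births = rng.random(n.size) < n * p['c'] * beta * dt
    deaths = rng.random(n.size) < n * p['d'] * dt
    colon = rng.random(n.size) < (np.roll(n, 1) + np.roll(n, -1)) * p['K'] * p['c'] * beta * dt
    n_new = n + births.astype(int) + colon.astype(int) - deaths.astype(int)
    return np.maximum(n_new, 0), omega_new, sigma_new

rng = np.random.default_rng(0)
n0 = np.zeros(8, dtype=int)   # bare lattice: no plants anywhere
n1, w1, s1 = hybrid_step(n0, np.zeros(8), np.full(8, 25.0), p, 0.01, rng)
```

On a completely bare lattice all plant rates vanish, so only the water fields evolve: the bare plant state is absorbing, as in the full model.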
In all cases we used the same initial conditions for both the stochastic model and its deterministic approximation obtained as follows: (a) soil water ω i , and surface water σ i in all cells i were assigned the values corresponding to the homogeneous desert state given by W d and O d in Eq. (11); (b) biomass density was assigned a value ρ 0 > 0 in a fraction f of the cells picked at random, and a value of zero in the remaining fraction (1 − f ) of cells. We emphasise that in each run the random choice of populated cells is the same in both models. Notice that ρ 0 , along with the parameter µ, fixes the initial number of individual plants per initial populated cell n 0 = ρ 0 /µ, which has to be an integer. The possible choices of ρ 0 and µ are therefore constrained.
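The initial conditions described above can be generated as follows (an illustrative sketch; function name and the example parameter dictionary are ours):

```python
import numpy as np

def initial_condition(L, f, rho0, mu, p, rng):
    """Initial state used in the simulations: soil and surface water at the
    desert values of Eq. (11) everywhere, and n0 = rho0/mu plants placed in
    a randomly chosen fraction f of the L cells."""
    n0 = int(round(rho0 / mu))          # plants per vegetated cell (integer)
    n = np.zeros(L, dtype=int)
    occupied = rng.choice(L, size=int(f * L), replace=False)
    n[occupied] = n0
    omega = np.full(L, p['R'] / p['r'])               # W_d of Eq. (11)
    sigma = np.full(L, p['R'] / (p['a'] * p['W0']))   # O_d of Eq. (11)
    return n, omega, sigma

# Example: 30% of 100 cells start with rho0 = 10 g m^-2 at mu = 1 g m^-2.
p = {'a': 0.2, 'W0': 0.2, 'r': 0.2, 'R': 1.0}   # illustrative values
rng = np.random.default_rng(1)
n, omega, sigma = initial_condition(100, 0.3, 10.0, 1.0, p, rng)
```

Passing the same seeded generator to both models reproduces the paper's requirement that the random choice of populated cells be identical in the stochastic and deterministic runs.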
Before continuing we would like to comment on why we have chosen this particular type of initial conditions here. The main reason for this is that we are dealing with spatial models and to define a generic initial condition we would need a very large number of parameters, i.e. initial biomass, soil and surface water in each cell. It is to avoid this that we decided to focus here on a subset of initial conditions that can be easily specified by a few parameters, as we have explained above.
First we analysed the differences between the deterministic and the stochastic approach comparing the behaviour of the system in single runs. Then, in order to study how generic the observed outcomes were, we estimated the probability of extinction, P ext , in both the stochastic and the deterministic models as a function of the different model parameters. This probability was obtained from simulations of the corresponding dynamics for several different initial conditions, picked at random as described above, and counting the fraction of samples in which the systems became extinct. In the case of the deterministic system the only difference between simulation runs is the randomly chosen initial spatial distribution of the vegetated cells. Once the initial condition is fixed, the subsequent dynamics is fully deterministic, with no further random elements. Similarly, the simulation runs of the stochastic system also involved a random element due to the initial conditions, but further stochasticity then entered during the actual run due to the random processes determining the sequence and timing of the transitions of the individual-based model.
We estimated the probability of extinction in terms of what we expect to be the relevant parameters of the set of initial conditions: (a) the initial cover f , i.e. the fraction of cells with ρ 0 > 0 in the initial conditions; (b) the initial amount of biomass ρ 0 introduced in those cells; (c) the mass of a plant per unit area µ; and (d) the rainfall rate R. We chose f and ρ 0 as relevant parameters because they determine the amount of initial biomass, µ because it controls the strength of the stochasticity in the model and R in order to investigate the models under different conditions of water availability. In the limit µ → 0, the stochastic results are expected to show similar behaviour to the deterministic approximation, while for larger values of µ we would expect stochastic effects to be more common.
To estimate the probabilities of extinction, 50 simulations were run for each model and for each set of parameters. Each of these simulations was run for a period of time of up to T = 5000 days. Simulation runs of the stochastic model were identified as becoming extinct if all cells i reached a state with n i = 0 within this time. In the deterministic model we applied a threshold criterion P i < ε, i.e. a given cell is assumed to contain no plants when its plant density is below a set threshold ε. In order to be consistent with the original stochastic model we chose ε = µ, as we assumed that µ was the contribution of a single plant to the biomass density. The introduction of a threshold was necessary because the deterministic approximation works with a genuinely continuous density, and the state with exactly zero biomass is reached only asymptotically at infinite times. We carried out some consistency checks and observed that the results depend little on the exact choice of the threshold value: findings remained essentially the same even for ε much smaller than µ.
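The two extinction criteria and the resulting Monte Carlo estimate of P ext can be summarised as below. This is a sketch, not our actual code: `run_once` is a hypothetical stand-in for a full simulation up to T = 5000 days that reports whether the run died out.

```python
import numpy as np

def extinct_stochastic(n):
    """Stochastic model: extinct once every cell holds zero plants."""
    return bool(np.all(n == 0))

def extinct_deterministic(P, mu):
    """Deterministic model: extinct once the plant density is below the
    single-plant threshold eps = mu in every cell (exactly zero biomass
    is only reached asymptotically)."""
    return bool(np.all(P < mu))

def estimate_p_ext(run_once, n_runs=50, seed=0):
    """Fraction of independent runs (each with a fresh random initial
    cover) that become extinct; run_once(rng) -> True if extinct."""
    rng = np.random.default_rng(seed)
    return sum(run_once(rng) for _ in range(n_runs)) / n_runs

# toy stand-in: a run that goes extinct with probability 0.2
p = estimate_p_ext(lambda rng: rng.random() < 0.2, n_runs=1000)
```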
Results
Dynamics
We compared the dynamics of the stochastic model with the dynamics of the corresponding deterministic approximation. Both models were initialised with homogeneous initial conditions (i.e. f = 1) with ρ 0 = 10 g m −2 . For all cells, soil and surface water took values corresponding to the desert state, given by Eq. (11). We compared the time dynamics of the total biomass density for both models. As expected from the stability analysis of the deterministic non-spatial model, we observed that in this case the deterministic approximation converges asymptotically to the desert state (Fig. 2, red thick line). In contrast, the stochastic model did not reach extinction for the same homogeneous initial conditions and parameter values. The stochastic model initially followed a path close to the deterministic approximation and approached extinction, but then some regions displayed explosive growth and the system finally settled into a quasi-stationary pattern. As expected, the stochastic dynamics deviated substantially from its deterministic approximation especially when the number of plants in the system was small (see Fig. 2). We will refer to this effect of escaping a path to extinction as 'self-recovery'. From a mathematical perspective this phenomenon appears similar to the one investigated in Black and McKane (2011). For clarity the figure shows the results up to time t = 500 days even though we ran the simulations for much longer (up to time T = 5 × 10^4 days). We observed that the total biomass in the stochastic model remains essentially the same up to stochastic deviations, i.e. it reaches a stationary state. Still, there was a nonzero probability that the stochastic model, after having escaped the initial path to extinction (after, say, t ≈ 300 days in Fig. 2), reached extinction. Such events occur because of large stochastic deviations, and their probability can be expected to be rather small (Frey, 2010).
The phenomenon of self-recovery in the stochastic model can also be seen in a comparison of the time-dependent spatial biomass density profile along the line of cells in both models, see Fig. 3. Initial conditions for the stochastic and the deterministic models were chosen as f = 1/2 and ρ 0 = 10 g m −2 . For all cells the soil and surface water densities were initialised from the values of the desert state given by Eq. (11). In Fig. 3 we compare the dynamics of the vegetation profile in the stochastic model with that of the corresponding deterministic approximation. Again, we can see that the deterministic approximation leads to extinction while the stochastic model recovers. If we look at the stochastic simulation for a longer time (5 × 10^4 days) we clearly see that vegetation persists, see Fig. 4 below the horizontal dashed line. We have also used the last configuration reached by the stochastic model (horizontal dashed line in Fig. 4) as initial conditions for the deterministic model; the corresponding time dynamics is shown in Fig. 4, above the horizontal dashed line. The patterns also remain stable under the deterministic dynamics, but it was demographic stochasticity that allowed the system to escape extinction in the first place; running the deterministic dynamics alone leads to extinction. The stochastic system thus generated suitable initial conditions for the deterministic system to converge to a spatial pattern. In Fig. 4 one can also observe that the patches reached by the deterministic model are fully frozen at long times, while those generated by the stochastic model remain dynamic: they can split, merge or become extinct.
Similar observations can also be made in the two-dimensional system, as we show in Fig. 5 and in the enclosed supplementary video animation. As expected, both the stochastic and the deterministic models display spatial vegetation patterns, similar to those observed previously with other models, and in an analogous range of rainfall values (HilleRisLambers et al., 2001; Rietkerk et al., 2002; Pueyo et al., 2008).
Finally we notice that in many simulations the stochastic system follows a regime of 'explosive' local growth soon after it has escaped the path to extinction. During this relatively short period of time some cells can get to carry a relatively large number of individuals with a maximum of about 120 plants observed in simulations. Afterwards plants spread out in space to a certain extent, and eventually the system appears to reach a stochastic stationary state (see supplementary video).
Probability of extinction
Next we present the results of a more systematic investigation of the effect of 'self-recovery'. Figure 6 shows the probability of extinction P ext as a function of f , for two different values of ρ 0 (10 g m −2 and 50 g m −2 ). We observe that the stochastic model almost never reached extinction, while the deterministic approximation did so in almost all samples with f > 0.4. These observations may appear counter-intuitive at first sight: in the deterministic model we find that the more plants there are in the system initially, i.e. the larger the initial cover f , the more likely it is that the system reaches extinction. However, this effect is already seen in the deterministic non-spatial model, which predicts the extinction of homogeneous initial configurations (i.e. f = 1); the spatial model instead predicts stable patterns (HilleRisLambers et al., 2001; Rietkerk et al., 2002; Pueyo et al., 2008). Perhaps even more surprisingly, the larger the initial amount of biomass in each populated cell, ρ 0 , the larger the probability of extinction tends to be. To be more precise, this only happens if the initial cover is not too small (f ≳ 0.1). From the simulations, it seems that if the amount of biomass in a cell is too large, a fast spatial spread of plants is triggered, which favours a more homogeneous distribution of biomass. This amounts to an effective increase of the initial cover f , and thus the previous effect takes over. This behaviour appears to run contrary to the infiltration feedback mechanism for pattern formation: a local biomass increase favours infiltration to the detriment of water availability in the surroundings. However, a similar effect already exists in the deterministic model as well. Indeed, in the region of parameter values that we considered here (see Tab. 1), vegetation patterns are not the only stable stationary solutions of the deterministic system: the homogeneous desert is also one such solution; in other words, there is bistability between vegetation patterns and desert. This suggests that, in this regime, the infiltration feedback mechanism is not always effective in inducing the formation of patterns. To develop some intuition for the kind of processes that could lead to these phenomena, it is useful to imagine an extreme case with a relatively large total reproduction rate and a not so large average transport of water. We can then expect that plants manage to spread quickly before water heterogeneities consolidate. This might lead to a rather homogeneous state with a behaviour similar to what one would see in the non-spatial model in this regime: all plants die together. Although this situation may not be realistic, it indicates that the infiltration feedback mechanism may break down at some point, leading the system to a desert state rather than to pattern formation. This appears to be consistent with our simulations, in which we find that a larger initial cover or a larger value of ρ 0 implies a larger total reproduction rate.
We study the variations of the probability of extinction P ext as a function of µ for three different values of f : 1/8, 1/2, and 7/8 in Fig. 7. We can observe that P ext ≈ 0 for almost all values of µ except for µ = 0.1 g m −2 and µ > 6 g m −2 , which correspond to very small and large plant biomass, respectively.
Next, we study the probability of extinction P ext as a function of the rainfall rate R, for a range corresponding to typical dryland values, using three different values for µ: 1 g m −2 , 5 g m −2 and 10 g m −2 (Fig. 8). In the stochastic model, the effect of 'self-recovery', or escaping the path to extinction, is more appreciable the larger the amount of rainfall. The corresponding probability of extinction for the deterministic system is equal to one in almost the whole regime investigated (not shown).
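Sweeps such as those behind Figs. 7 and 8 amount to tabulating the extinction frequency over a parameter grid. A minimal self-contained skeleton follows; the `simulate` callable and the toy stand-in are hypothetical placeholders for the full stochastic model, not our actual code.

```python
import numpy as np

def sweep_p_ext(simulate, mus, rainfalls, n_runs=50, seed=0):
    """Estimate P_ext on a (mu, R) grid from n_runs simulations each.

    `simulate(mu, R, rng)` must return True if that run went extinct.
    """
    rng = np.random.default_rng(seed)
    table = np.empty((len(mus), len(rainfalls)))
    for i, mu in enumerate(mus):
        for j, R in enumerate(rainfalls):
            table[i, j] = sum(simulate(mu, R, rng)
                              for _ in range(n_runs)) / n_runs
    return table

# trivial stand-in: extinction certain below some rainfall, never above
toy = lambda mu, R, rng: R < 0.35
table = sweep_p_ext(toy, mus=[1.0, 5.0, 10.0],
                    rainfalls=[0.3, 0.45, 0.6], n_runs=10)
```

Each row of the resulting table corresponds to one curve of P ext versus R for a fixed µ, as plotted in Fig. 8.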
Under certain conditions, the opposite effect can happen, with demographic noise promoting extinction in cases in which the deterministic system survives instead. However, this needs a range of the parameter µ which we expect not to be relevant for real-world situations. In particular, we have observed such behaviour only for large values of the parameter µ and for low initial cover. More specifically, we have observed that this can happen if f ≲ 10% and µ ≳ 10 g m −2 , for R ≲ 0.5 mm d −1 , or µ ≳ 5 g m −2 , for 0.4 ≲ R ≲ 0.5 mm d −1 .
Discussion
Two general approaches to modelling ecosystems, and complex systems in general, are continuous models and individual based models. The former tends to be relatively simple and amenable to mathematical analysis, while the latter tends to be more realistic and rely heavily on computational simulations due to its usually higher complexity. In particular, individual based models of ecosystems are mainly stochastic reflecting the random behaviour commonly observed in natural systems. On the other hand, continuous models are mostly deterministic or include a noise term added ad hoc. In some circumstances these can be seen as describing the most relevant features of individual based models. However, it is well known that demographic stochasticity can have important consequences that are usually neglected when modelling ecosystems in terms of continuous densities (see e.g. Black and McKane, 2012). In this work, we represented semi-arid ecosystems with two parallel models: a hybrid stochastic model, including individual-based vegetation dynamics and deterministic water dynamics, and a deterministic model, governing the behaviour of the average stochastic variables. The models represented explicitly the water-vegetation infiltration feedback. Thus, for the parameter values we studied, both models contained solutions corresponding to stable vegetation patterns, analogously to what was observed previously with other models of this type of ecosystem (e.g. Rietkerk et al., 2002).
Continuous models assume that stochastic effects are irrelevant and so neglect them altogether. We, too, made two assumptions to simplify the computational analysis: we assumed that the ecosystem is not very sensitive to the growth phase of each individual plant, so that this phase can be neglected, and we took the same mass for all plants. We expect this to be a reasonable approximation if the time it takes for a plant to establish is much smaller than the time scale on which there is an appreciable change of the spatial distribution of biomass, e.g. the characteristic time for pattern formation.
Conditioned on the validity of these assumptions, our analysis showed that the resilience of the vegetation patterns changed when taking into account the discrete nature of plants and the intrinsic stochasticity of their behaviour. Including such stochasticity, the modelled ecosystems did not turn into deserts in a wide range of cases in which the deterministic representation would predict the ecosystem to become extinct. This is an important observation, given that semi-arid ecosystems are characterised by a rather scarce number of plants, scattered across regions of empty land. In a finite population, the number of individuals varies because of the intrinsic stochastic behaviour of the individuals. Usually, the fewer the individuals, the stronger the effects of the demographic noise (Ramaswamy et al., 2012;Rogers et al., 2012;Biancalani et al., 2011;Butler and Goldenfeld, 2011;Biancalani et al., 2010;Butler and Goldenfeld, 2009;McKane and Newman, 2005). Intuitively, this is expected to promote extinction when the number of plants is small. It is therefore remarkable that we observed a relevant regime of parameters in which including the stochastic individual nature of the plants actually increased the likelihood of vegetation pattern emergence, and of escaping the desert state. The opposite might also occur, i.e. a larger probability of becoming extinct in the stochastic model, especially when considering large individual plants, covering initially only a small fraction of soil. Under these conditions, the probability of a large stochastic deviation that brings the system suddenly to extinction, could be non-negligible. However, we expect that our approach is better justified for intermediate values of µ, i.e. the biomass of a plant (per unit area), where the results were rather robust.
Stochastic behaviour was not observed when µ was too small, in which case our model coincided with a deterministic continuous model, whereas if µ was too large the growth phase and plant heterogeneity became more noticeable. When the individual plant biomass was relatively high (µ ≳ 8 g m −2 ), or very low (µ → 0), the outcomes of the stochastic model tended to have a higher probability of being desert than for intermediate µ values. Demographic noise seemed to be more important in an intermediate range of single-plant biomass, which corresponds to a realistic range for herb or grass biomass (see e.g. Peters, 2002).
In order to study the resilience of the vegetation patterns, we addressed the issue of how and how easily they were reached in time, i.e. for which type of initial conditions the system evolved towards the pattern (e.g. Eppinga, 2009). This gave an indication of how "attractive" the patterns were. In the case of the deterministic approximation, this was equivalent to studying the basin of attraction of the patterned stable states. This notion, however, is not applicable to stochastic systems, where there is no notion of equilibrium point, and no deterministic dynamics uniquely leading from a certain initial condition to a final state. For this reason, we focused on the probability of extinction as a function of the initial conditions and for different parameter values. For a large set of initial conditions and parameters, we observed that the stochastic model almost never led the vegetation to extinction, while the vegetation in the deterministic model almost always became extinct. When varying rainfall in a realistic range for drylands, the effect of 'self-recovery' in the stochastic model was promoted for the highest rainfall values, where the contrast with the deterministic case was sharper.
Given the success of deterministic continuous models to date, demographic noise is expected to play a relevant role only when the number of plants is rather low. Indeed we have observed (see e.g. Fig. 2) that initially the two models follow essentially the same dynamics towards desertification, and only when they are close to extinction the two dynamics diverge: the deterministic dynamics follows the path to extinction, while the stochastic one recovers. This particular difference between the two models was especially important for vegetation initially covering more than 30-40%, where the deterministic case would practically always become extinct. We should be careful, however, not to give too much weight to the importance of the initial conditions, since they were arbitrarily chosen in simulations. The most robust statements of this work had to do with comparing what happened with both the deterministic and stochastic models after they had followed their own dynamics for a while. Nevertheless, such a behaviour of the deterministic model might appear counterintuitive, in the sense that it might be concluded that a semi-arid ecosystem with many plants, e.g. due to planting, risks extinction. The stochastic model, on the other hand, did not show this counter-intuitive behaviour since the probability of extinction was equally small (less than 0.1) for all values of initial cover. In the case of the deterministic model, the more homogeneously distributed the plants were, the more difficult it was for any particular vegetation patch to actively increase its own soil water. The water-vegetation feedback, in this case, needed contrasts between vegetated and bare patches to take place effectively. For the same reasons, in this regime the homogeneous vegetated state was not stable, and full homogeneous vegetation cover would quickly evolve into a desert state. Not so for the stochastic model, though, which could escape extinction even in this case. 
On the other hand, for very low initial vegetation cover (less than 20-30%), the results of the deterministic and the stochastic approaches were very similar. We must underline that these low fractions of vegetation cover are quite realistic in the most arid ecosystems. One may wonder how it is that for large initial cover (e.g. larger than 40%) the deterministic system has a large probability of extinction, while for small cover (e.g. smaller than 20%) it almost always survives; after all the dynamics is continuous and to reach extinction the system first needs to go through states of small cover. However, the spatial structure of the intermediate states thus reached does not necessarily coincide with that of a state with homogeneous distribution of soil and surface water and a random distribution of constant biomass ρ 0 , as were used here as initial conditions. This stresses the importance of being careful about interpreting our particular choice of initial conditions, as mentioned above.
The deterministic model we investigated in this work describes exactly the typical behaviour of the stochastic system when the individual plant biomass was negligible (µ → 0). In this case we could have a very large number of plants per cell. This deterministic model was a good approximation to the kernel model investigated by Pueyo et al. (2008) in the case that the range of the kernel of seed dispersal in the latter was of the order of one or two cells (∼ 2-4 m). Under these conditions, the kernel could be approximated with an effective diffusive term and with a diffusion coefficient depending on the state of the soil water. If, additionally, we could neglect the spatial variation of the soil water, the two models became similar to the model studied by HilleRisLambers et al. (2001). In this sense, we could say that the stochastic model introduced here was close to these well-studied models and, in particular, it was not unexpected to observe spatial patterns in a similar range of parameters. What could change drastically is the transitory dynamics while reaching a stationary state, as we showed with the effect of self-recovery.
In principle, a more complete approach would have to deal with each single plant individually, with all its attributes and ongoing processes, as has been done for instance in the study of forests, see e.g. Perry and Enright (2006). However, this would make the problem far more complex from a computational point of view, and it might become difficult to gain insight. Indeed, as has been discovered in the investigations on forests, a far simpler analytical approximation, called the perfect plastic approximation, may capture most of the relevant details of the dynamics observed in simulations (Strigul et al., 2008). This case, however, corresponds to a regime of high vegetation density, where fluctuations of the average behaviour are expected to be irrelevant. This raises the question of which would be the 'best' way of modelling a plant in the study of semi-arid ecosystems in the sense of, paraphrasing Einstein, 'keeping it simple, but not too simple'.
In our stochastic modelling approach, we did not consider the heterogeneity in plant size. In particular, we did not include the different plant life stages. In a sense, plants were instantaneously created as adult individuals. This might appear as an unrealistic feature that might promote the effect of 'self-recovery' we discussed, because a plant could start increasing the availability of water locally as soon as it was created, and therefore instantly start promoting its own survival. However, this feature would also lead to an overestimation of water uptake. Since in our model plants died suddenly, in detriment of self-recovery, mortality was also over-estimated. Furthermore, in the stochastic model we investigated, a plant could produce another plant only in the neighbouring cells, thus limiting the impact the new birth had on the state of the ecosystem.
A next step in the complexity of the modelling could be to introduce two type of individuals: seedlings and established plants. In this way we would be able to take into account, for instance, the high asymmetry in mortality between these two. The question is then how robust are the results we have discussed in this work under this more realistic scenario. At first sight, one could expect that the stochastic model becomes closer to the deterministic approximation, but experience in this field of research has shown that counter-intuitive effects are not uncommon (Ramaswamy et al., 2012;Rogers et al., 2012;Biancalani et al., 2011;Butler and Goldenfeld, 2011;Biancalani et al., 2010;Butler and Goldenfeld, 2009;McKane and Newman, 2005).
The model we presented did not include environmental heterogeneity and stochasticity, or topography (see e.g. Sheffer et al., 2013). We also discarded rainfall intermittence. Vegetation in arid areas is well adapted to the occasional occurrence of rainfall (Baudena et al., 2007;Kletter et al., 2009;D'Odorico et al., 2007), although the effect of rainfall intermittency may not be too relevant when spatial feedbacks are represented (see Baudena and Provenzale, 2008). Besides water, we did not consider any other limiting resource, such as nutrient or light limitations. Moreover, we did not include another water-vegetation feedback mechanism, which is known to play a role in drylands, namely the effect of root length. Plant water availability increases with the root extent, which in turn increases with the biomass itself, thus favouring plant growth and generating vegetation patterns (Gilad et al., 2004;Lefever, R. and Lejeune, 1997;Barbier et al., 2008).
Another relevant issue is how to validate our findings with observations. For example, we could analyse patch dynamics to see whether it displays the 'wiggly' behaviour observed in the model results. This nevertheless would require time series of spatial patterns, with an appropriate spatial and temporal resolution, which may not be easily obtainable. In principle, it could even be possible to compute the statistics of this patch dynamics, in order to have quantitative predictions.
Despite not being conclusive in any sense, this investigation indicated that, in certain regimes, including demographic noise could lead to a larger estimate of the resilience of semi-arid ecosystems. Our model results suggested that demographic noise may be more important in the less arid ecosystems, with higher rainfall and vegetation cover. Since changes in rainfall regimes are expected, for example as a consequence of climate change, it may be necessary to take into account individual-based dynamics to evaluate the resilience and resistance of these ecosystems to such forcing. In summary, we think the study of semi-arid ecosystems might benefit from the approach taken for instance in the research on forests, where quite detailed IBM's have been extensively used. Indeed, in contrast to forests which are characterised by a rather dense vegetation, the typical number of plants in semi-arid ecosystems is comparatively quite low and so the stochastic effects implicit in such a modelling approach are expected to be more relevant.
Acknowledgements

Engineering and Physical Sciences Research Council EPSRC (grant reference EP/I019200/1). TG acknowledges funding by RCUK (EP/E500048/1). The research of MB and MR is also supported by the project CASCADE (Seventh Framework Programme FP7/2007-2013 grant agreement 283068).
Figure 1: (Colour online) Illustration of the stochastic hybrid model for semi-arid ecosystems. For clarity we show only two neighbouring cells i and j, but the same applies to all the other cells. Suppose that, at time t, the system is in a state where the number of plants and the depths of soil and surface water in cell i are given by n i , ω ′ i and σ ′ i , respectively (A). The analogous quantities in cell j are n j , ω ′ j , σ ′ j . Suppose, furthermore, that the next birth or death of a plant takes place at time t + τ ; these events happen at random, and so τ is itself a stochastic variable. Since there are no transition events between t and t + τ , soil and surface water in all cells evolve deterministically according to Eqs. (1)-(4) in this time interval. Suppose now that their new state at t + τ is given by ω i and σ i , for all cells i (B). At t + τ a stochastic transition happens; there are three possible types of transitions: a plant at a cell i gives birth to a new plant in the same cell (C), it dies (D), or it gives birth to a new plant in a neighbouring cell j (E). These transitions happen with rates n i Γ b (ω i ), n i Γ d , and n i Γ s (ω j ), respectively (see Eqs. (6) and (7)). Immediately after t + τ , and until the next birth or death of a plant takes place, soil and surface water in all cells again evolve deterministically as before.

Figure 2: (Colour online) Comparison of the dynamics of the total biomass density in the stochastic model (continuous green line) with that in the corresponding deterministic approximation (thick red line). While the deterministic approximation leads to extinction, the full stochastic model recovers. Simulations were run on a line with 128 cells, periodic boundary conditions and the same random initial conditions in all cells: soil and surface water depths were given by the values in the desert state, Eq. (11), while the initial biomass was ρ 0 = 10 g m −2 (f = 1). Other key parameter values were: µ = 1 g m −2 , R = 0.6 mm d −1 . See Table 1 for the remaining parameter values.

Figure 3: (Colour online) Comparison of the dynamics of the vegetation profile in the stochastic model (left) with that in the corresponding deterministic approximation (right). While the deterministic approximation leads to extinction, the full stochastic model recovers. Simulations were run on a line with 128 cells, periodic boundary conditions and the same initial conditions for both the stochastic model and its deterministic approximation: soil and surface water depths in all cells were given by the values in the desert state, Eq. (11), while biomass was ρ 0 = 0 g m −2 in half of the cells, chosen randomly, and ρ 0 = 10 g m −2 in the other half (f = 1/2). Key parameter values were: µ = 1 g m −2 , R = 0.6 mm d −1 . See Table 1 for the remaining parameter values.

Figure 4: (Colour online) Dynamics of the vegetation profile presented on the left side of Fig. 3 (stochastic model) for a longer period of time. Below the horizontal dashed line, i.e. from t = 0 days to t = 5 × 10^4 days, we show the dynamical behaviour of the stochastic model, and observe that the system indeed escapes the path to extinction and finally reaches a stationary pattern. Vegetation patches, however, appear to follow dynamics on their own: they can split, merge, and become extinct. In order to compare this with the dynamics of the deterministic model, we use the configuration reached by the stochastic dynamics at t = 5 × 10^4 days (horizontal dashed line) as an initial condition for the deterministic model. The outcome of the corresponding deterministic dynamics is shown above the horizontal dashed line, i.e. from t = 5 × 10^4 days to t = 7 × 10^4 days. In other words, at time t = 5 × 10^4 days we switch the dynamics from the stochastic to the deterministic model. Clearly the patterns remain stable under the deterministic dynamics. Simulations were run on a line with 128 cells, periodic boundary conditions and the same initial conditions for both the stochastic model and its deterministic approximation: soil and surface water depths in all cells were given by the values in the desert state, Eq. (11), while biomass was ρ 0 = 0 g m −2 in half of the cells, chosen randomly, and ρ 0 = 10 g m −2 in the other half (f = 1/2). Key parameter values were: µ = 1 g m −2 , R = 0.6 mm d −1 . See Table 1 for the remaining parameter values.

Figure 5: (Colour online) Comparison of the dynamics of the vegetation profile in the stochastic model in two dimensions (left) with that in the corresponding deterministic approximation (right); the axes correspond to the two spatial dimensions. See also the supplementary video which shows the full dynamics of these same vegetation profiles. While the deterministic approximation leads to extinction, the full stochastic model recovers. The simulations were run on a square grid of 64 × 64 cells and with periodic boundary conditions. Both the stochastic model and its deterministic approximation were started with the same initial conditions: soil and surface water depths in all cells were given by the values in the desert state, Eq. (11), while biomass was ρ 0 = 0 g m −2 in half of the cells, chosen randomly, and ρ 0 = 10 g m −2 in the other half (f = 1/2). Key parameter values were: µ = 1 g m −2 , R = 0.6 mm d −1 . See Table 1 for the remaining parameter values.

Figure 6: (Colour online) Probability of extinction in the stochastic model (blue triangles and magenta rhombi) and in the corresponding approximation to a deterministic model (red squares and green circles). We can observe a very strong contrast between the two: while the deterministic model almost always becomes extinct for cover values f ≳ 0.4, the stochastic system almost never does, for any value of f . For the stochastic case, the probabilities were estimated from 50 numerical simulations for each point. The horizontal axis indicates the initial fraction f of populated cells, i.e. cells with initial biomass ρ 0 > 0. Simulations were run on a line with 128 cells and with periodic boundary conditions. Both the stochastic model and its deterministic approximation were started with the same initial conditions: in all cells soil and surface water depths were given by the values in the desert state, Eq. (11), while biomass was ρ 0 = 10 g m −2 (green circles and magenta rhombi), and ρ 0 = 50 g m −2 (red squares and blue triangles) in a fraction f of randomly chosen cells and zero in the remaining cells. Key parameter values were: µ = 1 g m −2 , R = 0.6 mm d −1 . See Table 1 for the remaining parameter values.

Figure 7: (Colour online) Probability of extinction in the stochastic model as a function of the parameter µ, i.e. the average mass of a plant divided by the area of a cell. The probabilities were estimated from 50 numerical simulations for each point. Simulations were run on a line with 128 cells with periodic boundary conditions and initial conditions chosen as follows: in all cells initial soil and surface water depths were given by the values in the desert state, Eq. (11), while biomass was ρ 0 ≈ 10 g m −2 in a fraction f of randomly chosen cells and zero in the remaining cells. The three curves correspond to three different values of f : 1/8 (red squares), 1/2 (green circles) and 7/8 (blue triangles). See Table 1 for the remaining parameter values. For the same parameter values and initial conditions the probability of extinction in the deterministic model is essentially one throughout the whole regime investigated (not shown).

Figure 8: (Colour online) Probability of extinction in the stochastic model as a function of the rainfall rate R. Simulations were run on a line with 128 cells with periodic boundary conditions. In all cells the initial depths of soil and surface water were given by the values in the desert state, Eq. (11), while the initial biomass was ρ 0 = 10 g m −2 in half of the cells (f = 1/2) and zero in the remaining half. The parameter µ took three different values for each of the three curves shown: 1 g m −2 (red squares), 5 g m −2 (green circles) and 10 g m −2 (blue triangles). Note that for the same parameters and initial conditions the probability of extinction of the deterministic model is essentially one over the whole regime investigated (not shown). See Table 1 for the remaining parameter values.

Parameter   Description                                           Units              Value
a           maximum infiltration rate                             d −1               0.2
b           maximum specific water uptake                         mm g −1 m 2 d −1   0.05
c           conversion of water uptake to plants                  g mm −1 m −2       10
d           plant mortality rate                                  d −1               0.25
r           water loss due to drainage and evaporation            d −1               0.2
h           length of the side of a cell                          m                  2
k 1         half-saturation constant of water uptake              mm                 5
k 2         half-saturation constant of water infiltration        g m −1             5
µ           mean contribution of a plant to biomass density       g m −2             0.1-10
D ω         diffusion coefficient for soil water                  m 2 d −1           0.1
D σ         diffusion coefficient for surface water               m 2 d −1           100
K           probability of a seed moving to a neighbouring cell   -                  0.02
L           number of cells                                       -                  128, 64 × 64
R           rainfall rate                                         mm d −1            0.35-0.60
W 0         water infiltration rate in the absence of plants      -                  0.1

Table 1: Parameter values for the models studied in this paper.
Figure 8 :
8(Colour online) Probability of extinction in the stochastic model as a function of the rainfall rate R. Extinction probabilities were estimated from 50 numerical simulations for each point. Simulations were run on a line with 128 cells with periodic boundary
for the remaining parameter values.
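The captions above quote extinction probabilities estimated from 50 independent stochastic realizations for each parameter point. A minimal sketch of that estimation protocol, using a toy birth-death process simulated with the Gillespie algorithm as a stand-in for the full vegetation model (the rates `birth` and `death`, the cutoff `n_max` and the horizon `t_max` are illustrative assumptions, not the paper's parameters):

```python
import random

def gillespie_run(n0, birth=1.1, death=1.0, t_max=200.0, n_max=400):
    """One realization of a toy birth-death process (Gillespie algorithm).

    Returns True if the population is absorbed at zero (extinction) before
    t_max; populations reaching n_max are counted as survivors, since for a
    supercritical process escape from there is overwhelmingly likely.
    """
    n, t = n0, 0.0
    while t < t_max and n < n_max:
        if n == 0:
            return True                                  # absorbed: extinction
        t += random.expovariate((birth + death) * n)     # waiting time to next event
        if random.random() < birth / (birth + death):
            n += 1                                       # birth event
        else:
            n -= 1                                       # death event
    return n == 0

def extinction_probability(n0, runs=50, seed=1):
    """Fraction of independent realizations that go extinct,
    mirroring the 50-run estimates quoted in the figure captions."""
    random.seed(seed)
    return sum(gillespie_run(n0) for _ in range(runs)) / runs

p_small = extinction_probability(2)    # small initial population: demographic noise dominates
p_large = extinction_probability(50)   # large initial population: extinction is rare
```

The qualitative contrast mirrors the figures: extinction risk is controlled by demographic noise and is therefore strongly dependent on the initial population size.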
Acknowledgements

The authors thank M. Eppinga for useful discussions. This research is supported by the ERA Complexity Net through the RESINEE project ('Resilience and interaction of networks in ecology and economics'). UK support for this project to JRG, TG and AJM is administered by the
Influence of solvent polarization and non-uniform ion size on electrostatic properties between charged surfaces in an electrolyte solution

Jun-Sik Sin ([email protected])
Department of Physics, Kim Il Sung University, Taesong District, Pyongyang, Democratic People's Republic of Korea

Preprint, 10 Feb 2022. arXiv:2202.04860 (https://arxiv.org/pdf/2202.04860v1.pdf). DOI: 10.1063/1.5002607.

Abstract

In this paper, we study electrostatic properties between two similarly or oppositely charged surfaces immersed in an electrolyte solution using a mean-field approach that accounts for solvent polarization and non-uniform ion size. Applying a free energy formalism that accounts for unequal ion sizes and the orientational ordering of water dipoles, we derive coupled, self-consistent equations for the electrostatic properties between charged surfaces. The electrostatic properties for similarly charged surfaces depend on the counterion size but not on the coion size. Moreover, the electrostatic potential and osmotic pressure between similarly charged surfaces are found to increase with increasing counterion size. On the other hand, the corresponding quantities between oppositely charged surfaces depend on the sizes of both positive and negative ions. For oppositely charged surfaces, the electrostatic potential, number density of solvent molecules and relative permittivity of an electrolyte with unequal ion sizes are not symmetric about the centerline between the charged surfaces. In either case, accounting for solvent polarization decreases the electrostatic potential and the osmotic pressure compared to the case without this effect.
I. INTRODUCTION
The study of electrostatic properties between charged surfaces in an electrolyte is of great significance in materials science and biology. It is well known that the interaction between two charged surfaces arises from two kinds of physical interactions, van der Waals and electrostatic. [1][2][3][4][5] Although the classical Poisson-Boltzmann (PB) theory has long been a fundamental tool for describing the electric double layer, the electrostatic potential and the osmotic pressure between two charged surfaces [6,7], it is not applicable when the distance between the two charged surfaces is short, and many researchers have devoted a great deal of effort to amending it. To describe electric double layer properties properly, mean-field theories have been developed that account for the finite sizes of ions and water molecules and/or water polarization [8][9][10][11]. In the last decade, the consideration of non-uniform ion sizes has attracted considerable attention [12-18, 42, 43]. In particular, the authors of [19,20] simultaneously accounted for the orientational ordering of water dipoles and the finite sizes of ions and water molecules.
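For a z:z electrolyte between two plates held at fixed potentials, the classical PB equation mentioned above reads d²ψ/dx² = (2e₀zn₀/ε₀ε_r) sinh(e₀zψ/k_BT). A numerical sketch using simple Gauss-Seidel relaxation on a finite-difference grid (the discretization, parameter values and iteration count are illustrative choices, not taken from this paper):

```python
import math

def solve_pb(psi_left, psi_right, H, n0, eps_r=78.5, z=1, N=101, iters=5000):
    """Gauss-Seidel relaxation for the 1D Poisson-Boltzmann equation

        d2psi/dx2 = (2 e0 z n0 / (eps0 eps_r)) * sinh(e0 z psi / (kB T))

    for a z:z electrolyte of bulk density n0 (m^-3) between plates at
    x = -H/2 and x = +H/2 held at fixed potentials (Dirichlet conditions).
    Illustrative sketch only; the paper's modified theory adds ion-size
    and solvent-polarization corrections not included here.
    """
    e0, kB, T, eps0 = 1.602e-19, 1.381e-23, 298.0, 8.854e-12
    dx = H / (N - 1)
    pref = 2.0 * e0 * z * n0 / (eps0 * eps_r)   # source-term prefactor
    beta = e0 * z / (kB * T)                    # inverse thermal voltage
    # linear initial guess between the two boundary values
    psi = [psi_left + (psi_right - psi_left) * i / (N - 1) for i in range(N)]
    for _ in range(iters):
        for i in range(1, N - 1):
            src = dx * dx * pref * math.sinh(beta * psi[i])
            psi[i] = 0.5 * (psi[i - 1] + psi[i + 1] - src)  # in-place sweep
    return psi

# Two equally charged plates: 50 mV surfaces, 0.1 M 1:1 salt, H = 5 nm
profile = solve_pb(0.05, 0.05, 5e-9, 0.1 * 6.022e26)
```

For these parameters the Debye length is about 1 nm, so the potential decays from each wall and passes through a small positive minimum at the midplane.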
Recently, it was demonstrated [21-25, 42, 43] that simultaneously accounting for unequal ion sizes and solvent polarization is very important for describing the electrostatic properties of an electrolyte near a single charged surface.
On the other hand, the electrostatic properties between charged surfaces have been studied using mean-field theories that take into account finite ion size, solvent polarization and the composition of solvent mixtures. Using a mean-field theory accounting only for the ion size effect, the authors of [26] demonstrated that the nontrivial interplay between the ion size effect and electric double layer overlap may augment the effective extent of the overlap in narrow fluidic confinements. Treating ions as point-like charges, the authors of [27] demonstrated the effect of solvent polarization on the electric double layer potential distribution and the effective EDL thickness in narrow nanofluidic confinements. Although the authors of [28] addressed the osmotic pressure between charged surfaces by accounting for the dipole moment of water molecules and the ion size, some of their results, such as the behavior of the spatial distribution of the permittivity, were counterintuitive. [29,30] Namely, in [28] an increase of the relative permittivity near the charged surface was predicted for point-like ions. This is a consequence of the predicted accumulation of water dipoles near the charged surface due to an assumed Boltzmann distribution for water molecules, which prevails over the saturation effect in the polarizability, as shown in [44]. The authors of [31] derived a general expression for the osmotic pressure for the case in which the free energy of the system does not depend explicitly on the coordinate. Although the studies [32][33][34][35] investigated electric double layer forces between charged surfaces, they did not consider the effect of solvent polarization.
Recently, the authors of [36,37] presented a more satisfactory treatment of the osmotic pressure by using the Langevin-Bikerman approach. Their model represents electrostatic properties well by simultaneously accounting for size effects and the polarization of water molecules, but it neglects the difference in size between positive and negative ions.
Although the authors of [42,43] described the differential capacitance and the permeation through charged nanotube membranes by accounting for both solvent polarization and the disparity of ion sizes, no study has yet described the osmotic pressure between charged surfaces with both effects included. On the other hand, Monte Carlo simulation [38][39][40] has been used extensively to correct the theoretical predictions of the classical PB method. In fact, the discreteness-of-charge and image effects, which can be treated by Monte Carlo methods, play important roles in the compact part of the electric double layer and may substantially affect the zeta potential of the surface. In the present paper these effects are beyond our scope, and we provide results only for a constant zeta potential because, compared to such effects, ion size asymmetry and solvent polarization crucially affect the electrostatic interaction between charged surfaces at high electrolyte concentration.
In this paper, we study the effect of the difference in size between positive and negative ions, as well as of solvent polarization, on the electrostatic potential, the number densities of ions and water molecules, the permittivity and the osmotic pressure by using a mean-field approach [19,20,23,37,42]. The first result is that evaluating the electrostatic properties between oppositely charged surfaces requires considering the difference in size between positive and negative ions, whereas those between similarly charged surfaces depend only on the counterion size. Next, it is shown that for a constant surface potential, solvent polarization diminishes ion size effects on the electrostatic properties between similarly and oppositely charged surfaces. Finally, we emphasize that our method can consistently explain the experimental results for the interaction force between similarly or oppositely charged surfaces by correcting the large under-prediction made by the corresponding PB model, or the over-prediction made when only the ion size is considered.
II. THEORY
We consider two parallel plates (similarly or oppositely charged) separated by a distance H in an electrolyte. The transverse direction is denoted by x; the left plate is placed at x = −H/2 and the right plate at x = +H/2. The resulting electrostatic properties at the interfaces between the plates and the electrolyte solution are obtained from the free energy of the total thermodynamic system,
F = \int d\mathbf{r}\left[-\frac{\varepsilon_0 E^2}{2} + e_0 z\psi(\mathbf{r})(n_+ - n_-) + \left\langle p_0 E\cos\omega\,\rho(\omega)\right\rangle_\omega - \mu_+ n_+ - \mu_- n_- - \left\langle\mu_w(\omega)\,\rho(\omega)\right\rangle_\omega - Ts\right]. \qquad (1)
The local electrostatic potential is denoted by ψ(r), while the number densities of the ionic species and of the water molecules are written as n_i(r), i = +, −, and n_w(r) = ⟨ρ(ω, r)⟩_ω, respectively.
Here ⟨f(ω)⟩_ω = \int f(\omega)\, 2\pi\sin\omega\, d\omega and ω stands for the angle between the dipole moment vector p and the normal to the charged surface. p and E denote the dipole moment of a water molecule and the electric field strength, respectively, with p_0 = |p| and E = |E|.
In Eq. (1), the first term describes the self energy of the electrostatic field, where ε_0 stands for the vacuum permittivity. The second term is the electrostatic energy of the ions.
It is noticeable that, unlike in [23], the third term, i.e. the electrostatic energy of the water dipoles, is equal to the one of [19,37], where the formula for the osmotic pressure was derived. Coupling the system to a bulk reservoir necessitates the next three terms, where µ_+ and µ_− are the chemical potentials of positive and negative ions, respectively, and µ_w(ω) corresponds to the chemical potential of water dipoles with orientational angle ω. T and s are the temperature and the entropy density, respectively (see [10,19,20,23]).
From this free energy functional, we derive the self-consistent equations determining the electrostatic properties by minimizing the free energy describing the electric double layer, and subsequently find the formula for the osmotic pressure. The
Lagrangian of the total system can be expressed so that the volume conservation is satisfied
L = F - \int \lambda(\mathbf{r})\left(1 - n_+V_+ - n_-V_- - n_wV_w\right)d\mathbf{r}, \qquad (2)
where λ is a local Lagrange parameter. The origin of the electric potential is chosen at x = ∞, so that ψ(x = ∞) = 0; there, n_i(x = ∞) = n_{ib} and λ(x = ∞) = λ_b. The number densities then follow from the Euler-Lagrange equations of Eq. (2) together with these boundary conditions:

n_+ = n_{+b}\exp(-V_+h - e_0z\beta\psi)/D, \qquad (3a)

n_- = n_{-b}\exp(-V_-h + e_0z\beta\psi)/D, \qquad (3b)

n_w = n_{wb}\exp(-V_wh)\,\frac{\sinh(p_0\beta E)}{p_0\beta E}\Big/D, \qquad (3c)

n_{+b}\left(e^{-V_+h - e_0z\beta\psi} - 1\right) + n_{-b}\left(e^{-V_-h + e_0z\beta\psi} - 1\right) + n_{wb}\left(e^{-V_wh}\,\frac{\sinh(p_0\beta E)}{p_0\beta E} - 1\right) = 0, \qquad (3d)

where D = n_{+b}V_+\exp(-V_+h - e_0z\beta\psi) + n_{-b}V_-\exp(-V_-h + e_0z\beta\psi) + n_{wb}V_w\exp(-V_wh)\,\sinh(p_0\beta E)/(p_0\beta E), h = \lambda - \lambda_b, and \left\langle e^{-p_0E\beta\cos\omega}\right\rangle_\omega = \frac{1}{4\pi}\int_0^{2\pi}d\varphi\int d(\cos\omega)\, e^{-p_0E\beta\cos\omega} = \frac{\sinh(p_0E\beta)}{p_0E\beta} [19,20].
In addition to the above equations, we have the following expressions for the chemical potentials of the ions and the water molecules,
\mu_+ = k_BT\ln(n_{+b}/N_b) + V_+\lambda_b, \qquad (4a)

\mu_- = k_BT\ln(n_{-b}/N_b) + V_-\lambda_b, \qquad (4b)

\mu_w(\omega_i) = k_BT\ln\left(\rho_b(\omega_i)\Delta\Omega/N_b\right) + V_w\lambda_b, \qquad (4c)
where ρ(ω_i) stands for the number density of water molecules with orientational angle ω_i and n_w = ⟨ρ(ω)⟩_ω.
The Euler-Lagrangian equation for ψ (r) yields the Poisson equation
\nabla\cdot\left(\varepsilon_0\varepsilon_r\nabla\psi\right) = -e_0 z\,(n_+ - n_-), \qquad (5)
where ε_r ≡ 1 + |P|/(ε_0 E). Due to the symmetry of the present study, the polarization vector P, arising from the net orientation of the point-like water dipoles, is parallel to the normal to the charged surfaces, as in [19,36,44]:
P(x) = n_w(x)\,p_0\,L(p_0E\beta), \qquad (6)
where β = 1/(k_BT), p_0 = 4.8 D, and L(u) = \coth(u) - 1/u is the Langevin function.
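As a numerical illustration of Eqs. (5)-(6) (a minimal sketch, not part of the derivation; the bulk water density is an assumed representative value), the Langevin function and the resulting field-dependent relative permittivity can be evaluated as follows:

```python
import math

EPS0 = 8.854e-12          # vacuum permittivity (F/m)
KB = 1.381e-23            # Boltzmann constant (J/K)
T = 298.0                 # temperature (K)
P0 = 4.8 * 3.336e-30      # water dipole moment, 4.8 D in C*m

def langevin(u):
    """Langevin function L(u) = coth(u) - 1/u, with the u -> 0 limit u/3."""
    if abs(u) < 1e-6:
        return u / 3.0
    return 1.0 / math.tanh(u) - 1.0 / u

def relative_permittivity(E, n_w):
    """eps_r = 1 + |P|/(eps0*E) with P = n_w*p0*L(p0*E/kT), cf. Eqs. (5)-(6)."""
    if E == 0.0:
        # zero-field limit: eps_r -> 1 + n_w p0^2 / (3 eps0 kT)
        return 1.0 + n_w * P0 * P0 / (3.0 * EPS0 * KB * T)
    u = P0 * E / (KB * T)
    return 1.0 + n_w * P0 * langevin(u) / (EPS0 * E)

n_w_bulk = 55.0 * 6.022e23 * 1e3   # ~55 mol/L of water in m^-3 (assumed)
print(relative_permittivity(0.0, n_w_bulk))   # low-field value, near 78
print(relative_permittivity(5e8, n_w_bulk))   # dipole saturation lowers eps_r
```

The low-field limit reproduces a bulk-water-like permittivity, while a strong field saturates the dipole orientation and lowers ε_r, which is the mechanism invoked below in the discussion of Fig. 1(d).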
Let us now derive a new formula for the osmotic pressure accounting for unequal ion sizes and solvent polarization. The authors of [31] proved that when the free energy density of the total system does not depend explicitly on the coordinate, the osmotic pressure between two charged surfaces can be derived from the expression
f - \frac{\partial f}{\partial\psi'}\,\psi' = \mathrm{const} = -P, \qquad (7)
where ψ ′ is the derivative of ψ with respect to x and the constant is the negative of the local pressure P that is defined to be the sum of the osmotic pressure and the bulk pressure, i.e.,
P = P_{osm} + P_{bulk}. \qquad (8)
The free energy density for the present study does not depend explicitly on the coordinate
x, as can be seen in Eq.(1). Using Eqs. (1) and (7), we can therefore get
\frac{\partial f}{\partial\psi'}\,\psi' = -\varepsilon_0\psi'^2 + \left\langle\rho(\omega)\,p_0E\cos\omega\right\rangle_\omega, \qquad (9)

P = -\frac{\varepsilon_0 E^2}{2} - e_0z\psi(n_+ - n_-) + \mu_+n_+ + \mu_-n_- + \left\langle\mu_w(\omega)\,\rho(\omega)\right\rangle_\omega + Ts. \qquad (10)
Substituting Eq. (4) in the above equation, we get the following equation
P = -\frac{\varepsilon_0E^2}{2} - e_0z\psi(n_+ - n_-) + \left[k_BT\ln(n_{+b}/N_b) + V_+\lambda_b\right]n_+ + \left[k_BT\ln(n_{-b}/N_b) + V_-\lambda_b\right]n_- + \left\langle\left[k_BT\ln\left(\rho_b(\omega)\Delta\Omega_i/N_b\right) + V_w\lambda_b\right]\rho(\omega)\right\rangle_\omega + k_BT\left[N\ln N - \sum_{i=+,-}n_i\ln n_i - \lim_{k\to\infty}\sum_{i=1}^{k}\rho(\omega_i)\Delta\Omega_i\ln\left(\rho(\omega_i)\Delta\Omega_i\right)\right]

= -\frac{\varepsilon_0E^2}{2} - e_0z\psi(n_+ - n_-) + k_BT\,n_+\ln\frac{n_{+b}N}{N_b\,n_+} + k_BT\,n_-\ln\frac{n_{-b}N}{N_b\,n_-} + k_BT\left\langle\rho(\omega)\ln\frac{\rho_b(\omega)\Delta\Omega_i\,N}{N_b\,\rho(\omega)\Delta\Omega_i}\right\rangle_\omega + \lambda_b. \qquad (11)
As the distance between the charged surfaces approaches infinity, P = P_bulk. In consequence, we obtain λ_b = P_bulk from Eq. (11).
We can find the formula for osmotic pressure by comparing the above fact, Eq. (11) and
Eq. (8)
P_{osm} = -\frac{\varepsilon_0E^2}{2} - e_0z\psi(n_+ - n_-) + k_BT\,n_+\ln\frac{n_{+b}N}{N_b\,n_+} + k_BT\,n_-\ln\frac{n_{-b}N}{N_b\,n_-} + k_BT\left\langle\rho(\omega)\ln\frac{\rho_b(\omega)\Delta\Omega_i\,N}{N_b\,\rho(\omega)\Delta\Omega_i}\right\rangle_\omega. \qquad (12)
On the other hand, Eq. (3d) can be rewritten as follows.
n_{+b}\,e^{-V_+h - e_0z\beta\psi} + n_{-b}\,e^{-V_-h + e_0z\beta\psi} + n_{wb}\,e^{-V_wh}\,\frac{\sinh(p_0\beta E)}{p_0\beta E} = n_{+b} + n_{-b} + n_{wb} = N_b. \qquad (13)
Rearranging Eq. (13) also results in a new relation
N = n_+ + n_- + n_w = \frac{n_{+b}\exp(-V_+h - e_0z\beta\psi) + n_{-b}\exp(-V_-h + e_0z\beta\psi) + n_{wb}\,e^{-V_wh}\,\sinh(p_0\beta E)/(p_0\beta E)}{D} = \frac{N_b}{D}. \qquad (14)
Substituting the above relation in Eq. (12) and using the condition of volume conservation, we eventually obtain the following relation
P_{osm} = -\frac{\varepsilon_0E^2}{2} - e_0z\psi(n_+ - n_-) + k_BT\,n_+\left(V_+h + e_0z\beta\psi\right) + k_BT\,n_-\left(V_-h - e_0z\beta\psi\right) + k_BT\,n_wV_wh - k_BT\,n_w\left[(p_0\beta E)\coth(p_0\beta E) - 1\right]

= -\frac{\varepsilon_0E^2}{2} + k_BT\left(n_+V_+ + n_-V_- + n_wV_w\right)h - k_BT\,n_w\,(p_0\beta E)\,L(p_0\beta E)

= -\frac{\varepsilon_0E^2}{2} + k_BT\,h - k_BT\,n_w\,(p_0\beta E)\,L(p_0\beta E), \qquad (15)

where Eqs. (3a)-(3c) and (14) were used to evaluate the logarithms, and the volume-conservation condition n_+V_+ + n_-V_- + n_wV_w = 1 was used in the last step.
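Eq. (15) can be coded directly; the following minimal sketch uses illustrative values of E, h and n_w (in practice these would come from the self-consistent solution of Eqs. (3)-(6)):

```python
import math

EPS0 = 8.854e-12                 # vacuum permittivity (F/m)
KB_T = 1.381e-23 * 298.0         # thermal energy (J)
P0 = 4.8 * 3.336e-30             # water dipole moment, 4.8 D (C m)

def langevin(u):
    """Langevin function L(u) = coth(u) - 1/u."""
    return u / 3.0 if abs(u) < 1e-6 else 1.0 / math.tanh(u) - 1.0 / u

def osmotic_pressure(E, h, n_w):
    """Eq. (15): P_osm = -eps0*E^2/2 + kT*h - kT*n_w*(p0*beta*E)*L(p0*beta*E)."""
    u = P0 * E / KB_T
    return -0.5 * EPS0 * E * E + KB_T * h - KB_T * n_w * u * langevin(u)

# At the midplane between similarly charged surfaces E = 0 by symmetry,
# so Eq. (15) reduces to P_osm = kT * h there (Eq. (19) below).
h_mid = 5.0e26                   # illustrative midplane value of h (m^-3)
n_w = 3.3e28                     # illustrative water number density (m^-3)
print(osmotic_pressure(0.0, h_mid, n_w))  # equals KB_T * h_mid
```

Both field-dependent terms are negative, so at fixed h any nonzero field lowers the local pressure relative to the midplane value.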
Eq. (15) gives the osmotic pressure between two charged surfaces. This osmotic pressure is constant across the channel and can therefore be evaluated at any point within it. One can easily verify that Eq. (15) covers, as special cases, the formulas of the Poisson-Boltzmann, Bikerman, Langevin-Poisson-Boltzmann, Modified Langevin-Poisson-Boltzmann and Langevin-Bikerman approaches [7,8,36,37,44,45], as well as of the approach with ion size effects but without solvent polarization [17]. For simplicity of calculation, we express all quantities in dimensionless form as
\bar{x} = x/\lambda,\quad \bar{\psi} = e_0z\beta\psi,\quad \bar{d} = d/\lambda,\quad \bar{\varepsilon}_r = \varepsilon_r/\varepsilon_p,\quad \lambda = \sqrt{\frac{\varepsilon_0\varepsilon_p k_BT}{2n_be_0^2z^2}},\quad \bar{h} = hV_w,\quad \bar{V} = V/V_w,\quad \eta = \frac{1}{n_{wb}V_w},\quad \bar{n}_b = \frac{n_b}{n_{wb}},\quad p_0\beta E = p_0\sqrt{\frac{2n_b}{\varepsilon_0\varepsilon_pk_BT}}\,\frac{d\bar{\psi}}{d\bar{x}} \equiv \chi\bar{E}, \qquad (16)
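As a quick numerical check of the characteristic length λ in Eq. (16) (with the assumed values ε_p ≈ 78.5, z = 1 and T = 298 K; this is the familiar Debye-length expression):

```python
import math

EPS0 = 8.854e-12           # vacuum permittivity (F/m)
KB = 1.381e-23             # Boltzmann constant (J/K)
E0 = 1.602e-19             # elementary charge (C)
NA = 6.022e23              # Avogadro number (1/mol)

def debye_length(c_mol_per_L, eps_p=78.5, z=1, T=298.0):
    """lambda = sqrt(eps0*eps_p*kT / (2*n_b*e0^2*z^2)), cf. Eq. (16)."""
    n_b = c_mol_per_L * NA * 1e3          # bulk number density (m^-3)
    return math.sqrt(EPS0 * eps_p * KB * T / (2.0 * n_b * E0 * E0 * z * z))

print(debye_length(0.01))   # ~3.0e-9 m, i.e. about 3 nm for 0.01 M
```

For the 0.01 M electrolyte used below, λ ≈ 3 nm, so the H = 5 nm gap corresponds to strongly overlapping double layers.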
Based on these dimensionless parameters, Eqs. (3d) and (5) can be rewritten as

\bar{n}_{+b}\left(e^{-\bar\psi-\bar{V}_+\bar{h}} - 1\right) + \bar{n}_{-b}\left(e^{\bar\psi-\bar{V}_-\bar{h}} - 1\right) + \left(e^{-\bar{h}}\,\frac{\sinh(\chi\bar{E})}{\chi\bar{E}} - 1\right) = 0, \qquad (17)

\frac{d}{d\bar{x}}\left(\bar\varepsilon_r\frac{d\bar\psi}{d\bar{x}}\right) = \frac{\eta}{2}\;\frac{\exp\left(\bar\psi-\bar{V}_-\bar{h}\right) - \exp\left(-\bar\psi-\bar{V}_+\bar{h}\right)}{\dfrac{\sinh(\chi\bar{E})}{\chi\bar{E}}\,e^{-\bar{h}} + \bar{n}_b\bar{V}_-\exp\left(\bar\psi-\bar{V}_-\bar{h}\right) + \bar{n}_b\bar{V}_+\exp\left(-\bar\psi-\bar{V}_+\bar{h}\right)}. \qquad (18)
III. RESULTS AND DISCUSSION
All calculations in the present study are performed using the fourth-order Runge-Kutta method combined with a shooting method. For clarity, we choose 0.01 M for the ionic concentration in the bulk electrolyte solution and 298 K for the temperature.
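To make the scheme concrete, here is a minimal sketch of fourth-order Runge-Kutta integration combined with shooting, applied for simplicity to the classical point-ion Poisson-Boltzmann equation in Debye units (ψ'' = sinh ψ) rather than to the full system (17)-(18); all parameter values are illustrative assumptions:

```python
import math

def rk4_step(f, x, y, dx):
    """One classical fourth-order Runge-Kutta step for the system y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + dx / 2, [yi + dx / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(x + dx / 2, [yi + dx / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(x + dx, [yi + dx * ki for yi, ki in zip(y, k3)])
    return [yi + dx / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def pb_rhs(x, y):
    """Point-ion PB equation in Debye units: psi'' = sinh(psi).
    The argument of sinh is clamped to avoid overflow for bad trial slopes."""
    psi, dpsi = y
    return [dpsi, math.sinh(max(-30.0, min(30.0, psi)))]

def shoot(slope, psi_s, H, n=400):
    """Integrate from the left plate with a trial slope; return psi at the right plate."""
    x, y = -H / 2, [psi_s, slope]
    dx = H / n
    for _ in range(n):
        y = rk4_step(pb_rhs, x, y, dx)
        x += dx
    return y[0]

def solve_slope(psi_s, H, lo=-50.0, hi=0.0, iters=80):
    """Bisection on the initial slope so that psi(+H/2) = psi_s is met."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid, psi_s, H) > psi_s:
            hi = mid   # potential overshoots: the trial slope must be steeper
        else:
            lo = mid
    return 0.5 * (lo + hi)

s = solve_slope(psi_s=2.0, H=4.0)   # plates 4 Debye lengths apart, psi_s = 2 kT/e0z
print(s, shoot(s, 2.0, 4.0))        # slope < 0; psi(+H/2) matches psi_s
```

The same loop applies to Eqs. (17)-(18) once the right-hand side is replaced by the full model and h is updated from Eq. (17) at each step.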
A. Similarly Charged Surfaces
Without loss of generality, we assume that the surfaces are positively charged. Fig. 1(a) shows the electrostatic potential profile between similarly charged surfaces for the case where the distance between the charged surfaces is H = 5 nm and the surface potential is ψ(x = H/2) = 0.5 V. The cases (V_− = V_+ = V_w = 0.03 nm³), (V_− = V_+ = 0.3 nm³, V_w = 0.03 nm³) and (V_− = 0.3 nm³, V_+ = V_w = 0.03 nm³) are represented by squares, circles and the solid line, respectively. Fig. 1(a) illustrates that the electrostatic potential has a symmetric distribution, attributable to the geometry of the system and the sign of the surface potentials. We can also see that an increase in counterion size raises the electrostatic potential in the region between the two charged surfaces. This is understood from the fact that the screening by a counterion of large size is weaker than that by a counterion of smaller size. Importantly, we emphasize that the electrostatic potential profile does not depend on the coion size, as in [23,25]. Fig. 1(b) demonstrates the spatial dependence of the number density of counterions.
When the counterion is small (V_− = 0.03 nm³), the number density of counterions in the vicinity of a charged surface is larger than the corresponding one for a counterion of larger size (V_− = 0.3 nm³). This phenomenon is attributed to the excluded volume effect, as explained in [19]. Fig. 1(c) shows the transverse variation of the number density of water molecules. It is clearly seen that the density for a counterion of large size is smaller than the corresponding one for a counterion of smaller size. This is explained by combining the excluded volume effect of the counterions with the fact that, due to the overlap of the electric double layers of similarly charged surfaces, the number density of counterions becomes larger than the bulk value. Fig. 1(d) shows the variation of the permittivity with the position between the two charged surfaces. According to the permittivity formula following from Eq. (6), the permittivity is strongly affected by the number density of water molecules and by the electric field strength. Near the centerline between the charged surfaces, the permittivity is proportional to the number density of water molecules, since, owing to the geometrical symmetry of the present system, the electric field strength vanishes at the centerline. In the vicinity of a charged surface, however, the permittivity for a counterion of large size is larger than that for a counterion of smaller size, even though the number density of water molecules is smaller in the former case. The reason is that a high electric field strength lowers the permittivity [41]. Fig. 2(a) shows the electrostatic potential at the centerline between the two charged surfaces as a function of the separation distance between them. In Figs. 2, 3, 6 and 7 we use an identical convention for the studied cases. Cases 1, 3 and 5 denote
(V_− = V_+ = V_w = 0.03 nm³), (V_− = V_+ = 0.3 nm³, V_w = 0.03 nm³) and (V_− = 0.3 nm³, V_+ = V_w = 0.03 nm³) with both solvent polarization and ion sizes taken into account, respectively. Cases 2, 4 and 6 represent (V_− = V_+ = V_w = 0.03 nm³), (V_− = V_+ = 0.3 nm³, V_w = 0.03 nm³) and (V_− = 0.3 nm³, V_+ = V_w = 0.03 nm³) with only ion sizes taken into account, respectively.
For similarly charged surfaces, the electrostatic potential at the centerline between them is the main characteristic illustrating the overlap of the electric double layers. It is noticeable that, under the boundary condition of a given surface potential, solvent polarization lowers the potential at the centerline. In fact, as pointed out in [26], the ion size effect enhances the electrostatic potential at the centerline by lowering the screening ability of the ions. However, solvent polarization lowers the permittivity, as shown in Fig. 1(d). Physically, a lower permittivity yields a stronger electric field in the electrolyte.
Since at any position between the two charged surfaces the permittivity does not exceed its bulk value, the electric field strength is not lower than the corresponding one for the case with only ion size effects. Consequently, with solvent polarization the centerline potential is lower than without it. As expected, in either case an increase in the separation distance between the two charged surfaces decreases the centerline potential, since the longer the distance between the surfaces, the weaker the overlap of their electric double layers.
The inset in Fig. 2(a) shows that ∆ψ, the difference in the centerline potential between the cases with and without solvent polarization, decreases with the distance between the charged surfaces. This is explained by the fact that, as the distance between the charged surfaces grows, the centerline potential for either case tends to zero. Fig. 2(b) shows the dependence of the centerline potential on the surface potential at the charged
surfaces. It is clearly demonstrated that an increase in the surface potential raises the centerline potential. This is attributed to the fact that, for a given distance between the two charged surfaces, an increase in the surface potential enhances the overlap of their electric double layers and therefore raises the centerline potential. The inset in Fig. 2(b) shows that the difference in the centerline potential between the cases with and without solvent polarization increases with the surface potential. This is explained by the fact that an increase in the surface potential enhances the solvent polarization. Fig. 3(a) shows the variation of the osmotic pressure with the separation distance between the two charged surfaces. As one can see, a larger ion size yields a higher repulsive osmotic pressure than a smaller one, and solvent polarization decreases the osmotic pressure.
For all the cases, an increase in the distance between the charged surfaces diminishes the osmotic pressure. These facts are explained as follows. As shown in [23], the value of h increases with increasing electric potential. On the other hand, as mentioned above, the ion size effect increases the centerline potential; hence h increases with ion size.
At the centerline, the electric field strength is zero due to the geometrical symmetry of this problem. As a result, the formula for osmotic pressure between the similarly charged surfaces is rewritten as follows:
P_{osm} = k_BT\,h(x = 0). \qquad (19)
Consequently, the osmotic pressure increases with counterion size and decreases when solvent polarization is considered. As the distance between the charged surfaces approaches infinity, the osmotic pressure tends to zero. The inset of Fig. 3(a) shows that an increase in the distance between the surfaces decreases ∆P_osm, the difference in osmotic pressure between the cases with and without solvent polarization; combining Eq. (19) with the explanation for Fig. 2(a) provides the reason for this behavior. Fig. 3(b) shows the surface potential dependence of the osmotic pressure between the two charged surfaces, demonstrating that an increase in the surface potential raises the osmotic pressure. This is elucidated by combining Eq. (19) with the fact that the centerline potential increases with the surface potential. Although we have treated positively charged surfaces, the same holds for negatively charged ones. Finally, it should be pointed out that the Modified Langevin-Poisson-Boltzmann approach predicts slightly lower values of the osmotic pressure than the Langevin-Poisson-Boltzmann approach. The difference is attributed to a stronger decrease in the permittivity due to the consideration of the cavity field, as pointed out in [20]. However, since the difference in osmotic pressure between the two approaches is small, we confirm that the present approach without the cavity field is quite reliable for computing the osmotic pressure. As mentioned above, this is easily understood from the fact that the electric double layer near a charged surface is determined mainly by counterions. Due to the symmetry, for the cases of equal ion sizes the centerline potential is zero at any distance and any magnitude of the surface potential. It is clearly seen that, for the cases with and without solvent polarization, the centerline potential decreases with the distance between the two charged surfaces.
The value for the case with solvent polarization is lower than that for the case without it, because the low permittivity induces a high electric field strength. Fig. 6(b) shows the centerline potential as a function of the surface potential. For the same reason as in Fig. 2(b), the centerline potential increases with increasing magnitude of the surface potential. It is noticeable that the difference in the centerline potential between the cases with and without solvent polarization is enhanced with the surface potential. Fig. 7(a) is a graph of the attractive osmotic pressure as a function of the distance between the charged surfaces. As one can see, an increase in ion size enhances the attractive osmotic pressure, for the same reason as in Fig. 3(a). Unlike the case of similarly charged surfaces, the sizes of both the positive and the negative ion are important for determining the pressure. This is understood by considering that the counterions of the two surfaces carry electric charge of opposite sign. The inset of Fig. 7(a) shows that, as the distance between the oppositely charged surfaces increases, the attractive osmotic pressure vanishes, for the same reason as in the inset of Fig. 3(a). Fig. 7(b) shows the variation of the osmotic pressure with the surface potential: as the surface potential increases, the attractive osmotic pressure increases, for the same reason as in Fig. 3(b). The inset of Fig. 7(b) shows that, as the surface potential increases, the difference between the attractive osmotic pressures for the cases with and without solvent polarization increases, for the same reason as in the inset of Fig. 3(b).
B. Oppositely Charged Surfaces
Summarizing Figs. 2, 3 and 7, it is elucidated that for the case of constant surface potential, solvent polarization reduces the ion size effect. Namely, the centerline potential and the osmotic pressure between two charged surfaces are smaller for the case with solvent polarization than for the case without it. This result distinguishes our theory from that of [36]. In fact, the authors of [36] considered a uniform size effect as well as solvent polarization, yet asserted that the experimental results are understood through the increase due to the consideration of solvent polarization alone. We believe the reason is not that simple. On one hand, the ion size effect, as shown in Figs. 2, 3, 6 and 7, enhances the centerline potential and the osmotic pressure for similarly or oppositely charged surfaces. On the other hand, the consideration of solvent polarization lowers these properties. As a result, solvent polarization reasonably tempers the excessive increase in the centerline potential and the osmotic pressure caused by the ion size effect. We conclude that the simultaneous consideration of solvent polarization and the ion size effect is mandatory for elucidating the experimental results.
Also, the present theory can take into account the difference in size not only between ions but also between ions and a water molecule. This demonstrates a clear advantage: unlike the previous theory [36], where ions and water molecules have the same size, the present method can treat more realistic situations in which the sizes of ions and water molecules are not equal to each other.
Our results could be compared with Monte Carlo simulations describing the orientational ordering of solvent dipoles and ion size effects; such simulations would, however, require a vast computational cost due to the large number of degrees of freedom in the present problem.
IV. CONCLUSIONS
Using a mean-field theory accounting for solvent polarization and unequal ion size effects, we have studied the electrostatic properties between two charged surfaces. We have shown that the electrostatic properties are always symmetric about the centerline between similarly charged surfaces, but not between oppositely charged surfaces.
We have demonstrated that, for similarly charged surfaces, the electrostatic properties are determined mainly by the counterions and not by the coions. In contrast, the properties for oppositely charged surfaces are determined by both the positive and the negative ions. Moreover, the centerline potential, being the quantity that represents the overlap of the electric double layers of similarly charged surfaces, and the osmotic pressure between the surfaces increase with the counterion size. Most importantly, we have found that under the condition of constant surface potential, the consideration of solvent polarization reduces the centerline potential and the osmotic pressure augmented by the ion size effect.
n_{ib} and λ_b represent the bulk ionic concentration and the Lagrange parameter at x = ∞, respectively. The number densities of ions and water molecules can be obtained by applying the boundary conditions and by writing the Euler-Lagrange equations of Eq. (2) in terms of the number densities of the particles.
FIG. 1. (Color online) For similarly charged surfaces, variation of the electrostatic potential (a), the number density of counterions (b), the number density of water molecules (c) and the permittivity (d) with the position, for different sets of ion sizes. The separation distance between the charged surfaces is H = 5 nm and the surface potential is ψ(x = H/2) = ψ(x = −H/2) = +0.5 V.
FIG. 2. (Color online) For similarly charged surfaces, (a) variation of the centerline potential with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = −H/2) = +0.5 V; (b) variation of the centerline potential with the surface potential for different sets of ion sizes. The separation distance between the charged surfaces is H = 5 nm.
FIG. 3. (Color online) For similarly charged surfaces, (a) variation of the osmotic pressure with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = −H/2) = +0.5 V; (b) variation of the osmotic pressure as a function of the surface potential for different sets of ion sizes and a separation distance between the charged surfaces of H = 5 nm.
FIG. 4. (Color online) For similarly charged surfaces, variation of the osmotic pressure with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = −H/2) = +0.5 V for different types of electric double layer model. Circles, crosses, plus signs, asterisks, squares and diamonds stand for the Poisson-Boltzmann (PB), Langevin-Poisson-Boltzmann (LPB), Modified Langevin-Poisson-Boltzmann (MLPB), Bikerman, Langevin-Bikerman (LB) and present (V_− = V_+ = 0.1 nm³) approaches, respectively.

Fig. 4 shows the dependence of the osmotic pressure on the separation distance between similarly charged surfaces (ψ(x = H/2) = ψ(x = −H/2) = +0.5 V) for the Poisson-Boltzmann (PB), Langevin-Poisson-Boltzmann (LPB), Modified Langevin-Poisson-Boltzmann (MLPB), Bikerman, Langevin-Bikerman (LB) and present (V_− = V_+ = 0.1 nm³) approaches. Comparison of the osmotic pressures calculated using the different types of electric double layer model shows the following facts. First, we confirm again that the models with solvent polarization (Langevin-Poisson-Boltzmann and Langevin-Bikerman) predict lower values of the osmotic pressure than the corresponding models without solvent polarization (Poisson-Boltzmann and Bikerman), respectively. It is also seen that the models with larger ion volumes result in larger values of the osmotic pressure. In other words, the Poisson-Boltzmann approach (point-like ions) predicts lower values of the osmotic pressure than the Bikerman approach (V_− = V_+ = V_w = 0.03 nm³), and the Langevin-Poisson-Boltzmann approach (point-like ions) yields lower values than the Langevin-Bikerman approach (V_− = V_+ = V_w = 0.03 nm³), which in turn gives lower osmotic pressures than the present approach (V_− = V_+ = 0.1 nm³).
Fig. 5(a) shows the electrostatic potential profile for oppositely charged surfaces. The symbols in Fig. 5 have the same meanings as in Fig. 1. As can be seen, when the negative and positive ions are the same size, the profile has point symmetry in the position, which is attributed to the geometrical and electrical symmetry of the system. For the case of unequal ion sizes (V_− = 0.3 nm³, V_+ = V_w = 0.03 nm³), the profile does not have point symmetry: near the surface with negative potential it is equal to the one for the case V_− = V_+ = V_w = 0.03 nm³, while near the surface with positive potential it is equivalent to the one for the case V_− = V_+ = 0.3 nm³, V_w = 0.03 nm³.
Fig. 5(b) and Fig. 5(c) show the number density of water molecules and the permittivity between the oppositely charged surfaces, respectively. These quantities exhibit behavior analogous to that in Fig. 5(a), for the same reason.
Fig. 6(a) shows the centerline potential as a function of the distance between the charged surfaces for the case V_− = 0.3 nm³, V_+ = V_w = 0.03 nm³.
FIG. 5. (Color online) For oppositely charged surfaces, variation of the electrostatic potential (a), the number density of water molecules (b) and the permittivity (c) with the position, for different sets of ion sizes. The separation distance between the charged surfaces is H = 5 nm and the surface potential is ψ(x = H/2) = −ψ(x = −H/2) = +0.5 V.
FIG. 6. (Color online) For oppositely charged surfaces, (a) variation of the centerline potential with the separation distance between the charged surfaces for ψ(x = H/2) = −ψ(x = −H/2) = +0.5 V; (b) variation of the centerline potential with the surface potential for the case of unequal ion sizes (V_− = 0.3 nm³, V_+ = V_w = 0.03 nm³) and a separation distance between the charged surfaces of H = 5 nm.
FIG. 7. (Color online) For oppositely charged surfaces, (a) variation of the osmotic pressure with the separation distance between the charged surfaces for ψ(x = H/2) = −ψ(x = −H/2) = +0.5 V; (b) variation of the osmotic pressure with the surface potential for different sets of ion sizes and a separation distance between the charged surfaces of H = 5 nm.
[1] J. N. Israelachvili, Intermolecular and Surface Forces (Academic, London, 1985).
[2] J. Lyklema, Fundamentals of Interface and Colloid Science, Volume II: Solid-Liquid Interfaces (Academic, London, 2005).
[3] R. J. Hunter, Zeta Potential in Colloid Science (Academic, London, 1981).
[4] J.-P. Hansen and H. Lowen, Ann. Rev. Phys. Chem. 51, 209 (2000).
[5] N. Kampf, D. Ben-Yaakov, D. Andelman, S. A. Safran, and J. Klein, Phys. Rev. Lett. 103, 118304 (2009).
[6] M. G. Gouy, J. Phys. (France) 9, 457 (1910).
[7] D. L. Chapman, Philos. Mag. 25, 475 (1913).
[8] J. J. Bikerman, Philos. Mag. 33, 384 (1942).
[9] E. Wicke and M. Eigen, Z. Elektrochem. 56, 551 (1952).
[10] V. Kralj-Iglič and A. Iglič, J. Phys. France 6, 477 (1996).
[11] I. Borukhov, D. Andelman, and H. Orland, Phys. Rev. Lett. 79, 435 (1997).
[12] V. B. Chu, Y. Bai, J. Lipfert, D. Herschlag, and S. Doniach, Biophys. J. 93, 3202 (2007).
[13] A. A. Kornyshev, J. Phys. Chem. B 111, 5545 (2007).
[14] P. M. Biesheuvel and M. van Soestbergen, J. Colloid Interface Sci. 316, 490 (2007).
[15] S. Zhou, Z. Wang, and B. Li, Phys. Rev. E 84, 021901 (2011).
[16] J. Wen, S. Zhou, Z. Xu, and B. Li, Phys. Rev. E 85, 041406 (2012).
[17] A. H. Boschitsch and P. V. Danilov, J. Comput. Chem. 33, 1152 (2012).
[18] M. Popovic and A. Siber, Phys. Rev. E 88, 022302 (2013).
[19] A. Iglič, E. Gongadze, and K. Bohinc, Bioelectrochemistry 79, 223 (2010).
[20] E. Gongadze and A. Iglič, Bioelectrochemistry 87, 199 (2012).
[21] J.-S. Sin, S.-J. Im, and K.-I. Kim, Electrochim. Acta 153, 531 (2015).
[22] J.-S. Sin, H.-C. Pak, K.-I. Kim, K.-C. Ri, D.-Y. Ju, N.-H. Kim, and C.-S. Sin, Phys. Chem. Chem. Phys. 18, 234 (2016).
[23] J.-S. Sin, K.-I. Kim, H.-C. Pak, and C.-S. Sin, Electrochim. Acta 207, 237 (2016).
[24] J.-S. Sin, H.-C. Pak, and C.-S. Sin, Phys. Chem. Chem. Phys. 18, 26509 (2016).
[25] J.-S. Sin, N.-H. Kim, and C.-S. Sin, Colloids Surf. A 529, 972 (2017).
[26] S. Das and S. Chakraborty, Phys. Rev. E 84, 012501 (2011).
[27] S. Das, S. Chakraborty, and S. K. Mitra, Phys. Rev. E 85, 051508 (2012).
[28] A. Abrashkin, D. Andelman, and H. Orland, Phys. Rev. Lett. 99, 077801 (2007).
[29] O. Teschke, G. Ceotto, and E. F. de Souza, Chem. Phys. Lett. 326, 328 (2000).
[30] E. F. de Souza, G. Ceotto, and O. Teschke, J. Mol. Catal. A: Chem. 167, 235 (2001).
[31] D. Ben-Yaakov, D. Andelman, D. Harries, and R. Podgornik, J. Phys. Chem. B 113, 6001 (2009).
[32] J. Urbanija, K. Bohinc, A. Bellen, S. Maset, A. Iglic, V. Kralj-Iglic, and P. B. S. Kumar, J.
[33] D. Ben-Yaakov, D. Andelman, and H. Diamant, Phys. Rev. E 87, 022402 (2013).
[34] K. Bohinc, J. Reščič, and L. Lue, Soft Matter 12, 4397 (2016).
[35] R. M. Adar, D. Andelman, and H. Diamant, Phys. Rev. E 94, 022803 (2016).
[36] R. P. Misra, S. Das, and S. K. Mitra, J. Chem. Phys. 138, 114703 (2013).
[37] E. Gongadze, A. Velijonja, Š. Perutkova, P. Kramar, A. Maček-Lebar, V. Kralj-Iglič, and A. Iglič, Electrochim. Acta 126, 42 (2014).
[38] J. Zelko, A. Iglič, V. Kralj-Iglič, and P. B. S. Kumar, J. Chem. Phys. 133, 204901 (2010).
[39] G. I. Guerrero-García, P. González-Mozuelos, and M. O. de la Cruz, J. Chem. Phys. 135, 054701 (2011).
[40] Z.-Y. Wang and Y.-Q. Ma, J. Chem. Phys. 136, 234701 (2012).
[41] F. Booth, J. Chem. Phys. 23, 453 (1955).
[42] E. Gongadze and A. Iglič, Electrochim. Acta 178, 541 (2015).
[43] S. Mohajernia, A. Mazare, E. Gongadze, V. Kralj-Iglič, A. Iglič, and P. Schmuki, Electrochim. Acta 245, 25 (2017).
. E Gongadze, U Van Rienen, V Kralj-Iglič, A Iglič, Gen. Physiol. Biophys. 30130E. Gongadze, U. van Rienen, V. Kralj-Iglič, A. Iglič, Gen. Physiol. Biophys. 30, 130 (2011).
. A Velikonja, P B Santhosh, E Gongadze, M Kulkarni, K Eleršič, Š Perutkova, V Kralj-Iglič, N P Ulrih, A Iglič, Int. J. Mol. Sci. 1415312A. Velikonja, P.B. Santhosh, E. Gongadze, M. Kulkarni, K. Eleršič,Š. Perutkova, V. Kralj- Iglič, N.P. Ulrih, A. Iglič, Int. J. Mol. Sci. 14, 15312 (2013).
Biosorption of Cr(VI)- and Cr(III)-Arthrobacter species

E. Gelagutashvili, E. Ginturi
Iv. Javakhishvili Tbilisi State University, E. Andronikashvili Institute of Physics, 6 Tamarashvili St., 0177

D. Pataraia, M. Gurielidze
Durmishidze Institute of Biochemistry and Biotechnology, D. Agmashenebeli Kheivani, 10 km, 0159 Tbilisi, Georgia

arXiv:1106.2918

Abstract. The biosorption of Cr(VI) and Cr(III) by Arthrobacter species (Arthrobacter globiformis and Arthrobacter oxidas) was studied by simultaneous application of dialysis and atomic absorption analysis. Biosorption of Cr(VI) in the presence of Zn(II) during growth of Arthrobacter species, and of Cr(III) in the presence of Mn(II), was also examined. Comparison of the Cr(VI)- and Cr(III)-Arthrobacter systems shows that Cr(III) is adsorbed more effectively than Cr(VI) by both bacteria, while the adsorption capacity is the same for both chromium-Arthrobacter systems. The biosorption constant for Cr(III) is 5.7-5.9-fold higher than for Cr(VI) for both species.
Introduction
Chromium exists in the environment mainly as Cr(III) and Cr(VI) species. The interest in Cr is governed by the fact that its toxicity depends critically on its oxidation state: while Cr(III) is considered essential for lipid and protein metabolism, Cr(VI) is known to be toxic to humans [1]. Gram-positive bacteria of the Arthrobacter species can reduce Cr(VI) to Cr(III) under aerobic growth, and there is large interest in Cr-reducing bacteria. The exact mechanism by which microorganisms take up the metal remains relatively unclear.
Metal sorption performance depends on external factors such as pH and other ions in solution, which may compete with the metal of interest. Several aerobic and anaerobic Cr(VI) reducers are known, some being able to use organic contaminants as electron donors for Cr(VI) reduction [2]. FeS has an inhibitive effect on Cr(III) oxidation by biogenic Mn-oxides produced in cultures of a known species of Mn(II) oxidizers, Pseudomonas putida: in soils containing manganese oxides, the immobilized form of chromium, Cr(III), could potentially be reoxidized [3,4]. Arthrobacter species strain FR-3, isolated from sediments of a swamp, produced a novel serine-type oxidase; the purified free sulfide oxidase activity was completely inhibited by Co(II) and Zn(II) [5]. It was shown in [6] that the chromium reductase activity of the Arthrobacter rhombi-RE strain was associated with the cell-free extract, and that the contribution of extracellular enzymes to Cr(VI) reduction was negligible. Ca(II) enhanced the enzyme activity, while Hg(II), Cd(II) and Zn(II) inhibited it.
Environmental systems are always dynamic and often far from equilibrium. In spite of this, the Biotic Ligand Model assumes that the metal of interest and its complexes are in chemical equilibrium with each other and with sensitive sites on the biological surface [7]. Constants for the interaction of the metal with the biological surface have been estimated by measuring metal internalization fluxes, metal loading, and metal toxicity [1]. To develop an efficient biosorbent and enable its reuse through subsequent desorption, knowledge of the mechanism of metal binding is thus very important.
In this study the biosorption of Cr(VI) and Cr(III) on Arthrobacter species was examined by simultaneous application of dialysis and atomic absorption analysis. Biosorption of Cr(VI) in the presence of Zn(II) during growth of Arthrobacter species, and of Cr(III) in the presence of Mn(II), was also studied.
Materials and Methods
The following reagents were used: K2CrO4, CrCl3, ZnSO4, MnCl2·4H2O (analytical grade). Arthrobacter bacteria were cultivated in the nutrient medium without co-cations and loaded with Zn at a concentration of 50 mg/l (in the case of Cr(VI)) or with Mn at 50 mg/l (in the case of Cr(III)). Cells were centrifuged at 12000 rpm for 10 min and washed three times with phosphate buffer (pH 7.1). The centrifuged cells were dried without the supernatant solution until constant weight. After dehydration of the cells, solutions for dialysis were prepared from the dry weight by dissolving it in phosphate buffer; this buffer was used in all experiments. A known quantity of dried bacterium suspension was contacted with a solution containing a known concentration of metal ion. For the biosorption isotherm studies, the dry cell weight was kept constant (1 mg/ml), while the initial chromium concentration in each sample was varied in the interval 10^-3-10^-6 M. All experiments were carried out at ambient temperature. Metal was separated from the biomass with a Visking (Serva) membrane (thickness 30 g) and analyzed by an atomic absorption spectrophotometer. Dialysis was carried out for 72 h. Data analysis: the isotherm data were characterized by the Freundlich [8] equation
C_b = K C_t^{1/n}
where C_b is the metal concentration adsorbed on either live or dried cells of Arthrobacter species in mg g^-1 dry weight, and C_t is the equilibrium concentration of metal (mg l^-1) in the solution. K is an empirical constant that provides an indication of the adsorption capacity of either live or dry cells, and 1/n is an empirical constant that provides an indication of the intensity of adsorption. The adsorption isotherms were obtained by plotting log C_b as a function of log C_t.
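As a hypothetical numeric sketch (not the paper's data; all names below are ours), the linearization log C_b = log K + (1/n) log C_t lets K and n be recovered from the slope and intercept of a straight-line fit in log-log coordinates:

```python
import numpy as np

# Generate equilibrium concentrations C_t and bound concentrations C_b that
# obey the Freundlich law C_b = K * C_t**(1/n), then recover K and 1/n from
# a linear fit in log-log coordinates, as done when linearizing the isotherm.
K_true, n_true = 4.6e-4, 1.25          # values of the order reported in Table 1
C_t = np.logspace(-3, 1, 20)           # equilibrium metal concentrations (mg/l)
C_b = K_true * C_t ** (1.0 / n_true)   # bound metal (mg/g), noise-free here

# log C_b = log K + (1/n) log C_t  ->  degree-1 least-squares fit
slope, intercept = np.polyfit(np.log10(C_t), np.log10(C_b), 1)
K_fit, n_fit = 10 ** intercept, 1.0 / slope
print(K_fit, n_fit)
```

With experimental data the fit is the same, only with scatter; the quality of the linear fit (R^2) is what the paper reports in table 1.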
Results and Discussions
Metal removal by living or dry cells of the bacterium species (Arthrobacter globiformis and Arthrobacter oxidas) was studied as a function of metal concentration. The linearized adsorption isotherms of the Cr ion in anion and cation forms for the two kinds of Arthrobacter at room temperature, obtained by fitting the experimental points, are shown in Figs. 1-4. The Freundlich parameters evaluated from the isotherms, together with the correlation coefficients, are given in table 1. In fig. 1 the biosorption isotherms for Cr(VI)-Arthrobacter globiformis and Cr(VI)-Arthrobacter oxidas are presented for dry (A, B) and living cells (C, D). As shown in fig. 1, the adsorption of Cr(VI) to living and dry cells of both Arthrobacter species was dependent on the Cr(VI) concentration and thus fitted the Freundlich adsorption isotherm. The Freundlich adsorption model was used for the mathematical description of the biosorption of Cr by Arthrobacter species. The correlations between the experimental data and the theoretical equation were extremely good, with R^2 above 0.90 (table 1) in all cases. The high correlation coefficients show that the Freundlich model is very suitable for describing the biosorption equilibrium of chromium by the Arthrobacter species in the studied concentration range. (The constants determined in a given concentration range will not necessarily be the same as constants determined in another concentration range, because each determination has its own detection window [9].)
The adsorption yields determined for Arthrobacter globiformis were compared with those for Arthrobacter oxidas. This is in good agreement with literature data, according to which there is a large difference in the efficiency of adsorption between species of microorganisms, since sorption depends on the nature and composition of the cell wall [10]. Metal concentrations sorbed by the bacterium and those in solution at equilibrium obeyed the Freundlich equation, suggesting the presence of heterogeneous sorption sites on the bacterium surfaces. On the other hand, Gram-positive bacteria have a greater sorptive capacity due to their thicker layer of peptidoglycan, which contains numerous sorptive sites [11]. Biosorption involves a combination of active and passive transport mechanisms, starting with the diffusion of the metal ion to the surface of the microbial cell. Once the metal ion has diffused to the cell surface, it binds to sites on the cell surface which exhibit some chemical affinity for the metal. It is known that the plasma membrane is the primary site of interaction of trace metals with living organisms. All biological surfaces contain multiple sites, including biotic ligands, transport sites or specific sites, and non-specific active sites that are unlikely to participate in the internalization process, including cell wall polysaccharides, as well as proteins and lipids, which act as basic binding sites of heavy metals. Typical biosorption includes two phases: the first is associated with the external cell surface, as discussed above, and the second is intracellular biosorption, depending on the cellular metabolism [12]. Functional groups within the wall provide the amino, carboxylic, sulfhydryl, phosphate, and thiol groups that can bind metals [13]. Once inside the cell, chromium is reduced by cellular components such as glutathione, cysteine and ascorbate.
The bacterial ability for Cr(VI) reduction requires neither high energy input nor toxic chemical reagents, and it allows the use of native, non-hazardous strains [14]. It was shown that the carboxyl groups are the main binding site in the cell wall of Gram-positive bacteria [15]. Such bond formation could be accompanied by displacement of protons and is dependent in part on the extent of protonation, which is determined by the pH [16].
Proceeding from this assumption, one may speculate that the first binding sites for Cr(VI) on the surface of Arthrobacter species are carboxyl groups. Our results indicate that Cr(VI) sorption depends on the species of Arthrobacter. Differences between Arthrobacter species in metal ion binding may be due to the properties of the metal sorbates and the properties of the bacterium (functional groups, structure and surface area, depending on the species). Functional groups, such as amino, carboxylic, sulfhydryl, phosphate and thiol groups, differ in their affinity and specificity for metal binding. The n values, which reflect the intensity of sorption, present the same trend: as seen from table 1, for both Arthrobacter species the n values are not significantly different, and the sorption intensity indicators are generally small (1.08-1.47). Comparison of the Freundlich biosorption characteristics of Cr(VI)-Arthrobacter species for living and dry cells shows (table 1) that the n values are the same in both cases (1.25, 1.35). Dry cells have a larger biosorption constant K for both species (4.6x10^-4, 3.4x10^-4) than living cells (1.0x10^-4, 1.36x10^-4). This may confirm the hypothesis that metal sorption by this bacterium is independent of the metabolic state of the organism [17].

The data in table 1 show a significant difference between the binding constants for Cr(VI)-A. oxidas and Cr(VI)-A. globiformis (the biosorption constants for Arthrobacter oxidas and Arthrobacter globiformis are 4.6x10^-4 and 3.4x10^-4, respectively). Decrease in bioavailability has been observed experimentally for Cr(VI)-

Comparison of the Cr(VI)- and Cr(III)-Arthrobacter systems (fig. 1 and fig. 3, table 1) shows that Cr(III) was adsorbed more effectively than Cr(VI) by both bacteria. The adsorption capacity is the same for both chromium-Arthrobacter systems. The biosorption constant for Cr(III) is 5.65-5.88-fold higher than for Cr(VI) for both species (table 1). Cr(VI) is one of the more stable oxidation states, the others being chromium(II) and chromium(III). Cr(VI) can be reduced to Cr(III) by the biomass through two different mechanisms [18]. In the first mechanism, Cr(VI) is directly reduced to Cr(III) in the aqueous phase by contact with the electron-donor groups of the biomass. The second mechanism consists of three steps: the binding of anionic Cr(VI) species to the positively charged groups present on the biomass surface, the reduction of Cr(VI) to Cr(III) by adjacent electron-donor groups, and the release of the Cr(III) ions into the aqueous phase due to electronic repulsion between the positively charged groups and the Cr(III) ions. The "uptake-reduction" model for chromium(VI) carcinogenicity holds that tetrahedral chromate is actively transported across the cell membrane via mechanisms in place for analogous anions such as sulfate, SO4^2-. Chromium(III) is not actively transported across the cell membrane, owing to the lack of transport mechanisms for these octahedral complexes. Thus, Cr(VI) may be adsorbed by the bacterium to a much lower degree than Cr(III), which is what our results show.

It is seen from fig. 2 that bioavailability increases in the presence of Zn ions in both cases (for Cr(VI)-Arthrobacter globiformis and for Cr(VI)-Arthrobacter oxidas). The biosorption characteristics K and n for A. oxidas in the absence and in the presence of Zn(II) are 4.6x10^-4 (K), 1.25 (n) and 6.6x10^-4 (K), 1.08 (n), respectively; for A. globiformis without Zn(II) and in the presence of Zn(II) they are 3.4x10^-4 (K), 1.35 (n) and 8.1x10^-4 (K), 1.19 (n), respectively. For Cr(VI)-Arthrobacter globiformis the increase is more significant. The binding data are in good agreement with literature data, according to which biological ligands are generally polyfunctional and polyelectrolytic, with an average pK value between 4.0 and 6.0 [2,19]. The presence of another cation, Zn(II), increased the uptake of the target cation by the bacterium. Such an effect suggests that ion exchange is at least one of the mechanisms responsible for metal uptake by these Arthrobacter species. This has implications for the selection of Arthrobacter species for industrial applications.

The biosorption characteristics K and n for A. oxidas without Mn(II) and in the presence of Mn(II) are 2.6x10^-4 (K), 1.37 (n) and 2.4x10^-4 (K), 1.41 (n), respectively; for A. globiformis without Mn(II) and in the presence of Mn(II) they are 2.0x10^-4 (K), 1.23 (n) and 1.9x10^-4 (K), 1.47 (n), respectively. Thus, the biosorption characteristics did not change in the presence of Mn(II) ions. This means that Mn(II) did not significantly affect the biosorption of Cr(III) by Arthrobacter species, i.e. Mn(II) essentially did not displace Cr(III) from the bacteria. This fact leads us to speculate that the primary binding site for Cr(III) is different from the binding site for Mn(II). The distorted octahedral coordination sphere proposed for Cr(III) and its strong tendency to coordinate donor atoms equatorially may be responsible for the specific interaction with Arthrobacter species.

Different species of bacteria displayed different sorptive relationships. Biosorption is often followed by a slower metal binding process in which additional metal ion is bound, often irreversibly. This slow phase of metal uptake can be due to a number of mechanisms, including covalent bonding, crystallization on the cell surface or, most often, diffusion into the cell interior and binding to proteins and other intracellular sites [20]. Biosorption may be associated not only with physico-chemical interactions between the metal and the cell wall, but also with other mechanisms, such as microprecipitation of the metal [21] or metal penetration through the cell wall [22].

Fig. 1. The linearized Freundlich adsorption isotherms of Cr(VI) ion-Arthrobacter globiformis and Arthrobacter oxidas (C_b is the bound metal concentration (mg/g) and C_total is the initial Cr concentration (mg/l); A and B dry cells, C and D living cells).

Fig. 2. The linearized Freundlich adsorption isotherms of Cr(VI) ion-Arthrobacter globiformis+Zn and Arthrobacter oxidas+Zn (the parameters are the same as in fig. 1).

Fig. 3. The linearized Freundlich adsorption isotherms of Cr(III) ion-Arthrobacter globiformis and Arthrobacter oxidas (the parameters are the same as in fig. 1).

Fig. 4. The linearized Freundlich adsorption isotherms of Cr(III) ion-Arthrobacter globiformis+Mn and Arthrobacter oxidas+Mn (the parameters are the same as in fig. 1).

Table 1. Biosorption characteristics for Cr(VI)- and Cr(III)-Arthrobacter oxidas and Cr(VI)- and Cr(III)-Arthrobacter globiformis at 23 °C (columns: Cr(VI), Cr(III)).
References

[1] M. Ghaedi, E. Asadpour, A. Vafaie, Sensitized spectrophotometric determination of Cr(III) ion for speciation of chromium ion in surfactant media using benzoin oxime. Spectrochim. Acta Part A, 2006, 63, 182-188.
[2] D.R. Lovley, J.D. Coates, Bioremediation of metal contamination. Curr. Opin. Biotechnol., 1997, 8, 285-289.
[3] Y. Wu, B. Deng, Inhibition of FeS on chromium(III) oxidation by biogenic manganese oxides. Environ. Eng. Sci., 2006, 23, 552-560.
[4] Y. Wu, B. Deng, H. Xu, H. Kornishi, Chromium(III) oxidation coupled with microbially mediated Mn(II) oxidation. Geomicrobiol. J., 2005, 22, 161-170.
[5] W.D. Mohapatra, O. Gould, S. Dinardo, Papavinasam, R.W. Revie, Optimization of culture conditions and properties of immobilized sulfide oxidase from Arthrobacter species. J. Biotechnol.
[6] R. Elangovar, L. Philip, K. Chandraraj, Hexavalent chromium reduction by free and immobilized cell-free extract of Arthrobacter rhombi-RE. Appl. Biochem. Biotechnol., 2010, 160, 81-97.
[7] V.I. Slaveykova, K.J. Wilkinson, Predicting the bioavailability of metals and metal complexes: critical review of the biotic ligand model. Environ. Chem., 2005, 2, 9-24.
[8] H. Freundlich, Adsorption in solutions. Phys. Chem., 1906, 57, 384-410.
[9] R.M. Town, M. Filella, Limnol. Oceanogr., 2000, 45, 1341.
[10] O. Hammouda, A. Gaber, N. Raouf-Abdel, Microalgae and wastewater treatment. Ecotoxicol. Environ. Saf., 1995, 31, 205-210.
[11] E.D. van Hullebusch, M.H. Zandvoort, P.N.L. Lens, Metal immobilization by biofilms: mechanisms and analytical tools. Rev. Environ. Sci. Bio/Technol., 2003, 2, 9-33.
[12] M.T. Tavares, C. Martins, P. Neto, Biotreatment of Cr(VI) effluents. In: A.K. Sengupta (Ed.), Hazardous and Industrial Wastes. Tecnomics Publishing Co, Lancaster, 1995, pp. 223-232.
[13] Y.P. Ting, F. Lawson, I.G. Prince, Uptake of cadmium and zinc by Chlorella vulgaris. Part II: multi-ion situation. Biotechnol. Bioeng., 1991, 37, 445-455.
[14] C. Cervantes, J. Campus-Garcia, S. Davars, F. Gutierrez-Corona, H. Loza-Tavera, J.C. Torres-Guzman, R. Moreno-Sanches, Interaction of chromium with microorganisms and plants. FEMS Microbiol. Rev., 2001, 25, 335-347.
[15] M.G. Gadd, C. White, Microbial treatment of metal pollution - a working biotechnology. Trends Biotechnol., 1993, 11, 353-359.
[16] T.R. Muraleedharan, L. Iyengar, C. Venkobacher, Curr. Sci., 1991, 61, 379-385.
[17] D.L. Parker, L.C. Rai, N. Mallick, P.K. Rai, H.D. Kumar, Effects of cellular metabolism and viability on metal ion accumulation by cultured biomass from a bloom of Microcystis aeruginosa. Appl. Environ. Microbiol., 1998, 64, 1545-1547.
[18] D. Park, Y.-S. Yun, J.M. Park, Studies on hexavalent chromium biosorption by chemically-treated biomass of Ecklonia sp. Chemosphere, 2005, 60, 1356-1364.
[19] K.J. Wilkinson, J. Buffle, in: H.P. van Leeuwen, W. Koester (Eds.), Physicochemical Kinetics and Transport at Chemical-Biological Interphases, John Wiley, Chichester, 2004, p. 445.
[20] H.B. Xue, L. Sigg, The binding of heavy metals to algae surfaces. Water Res., 1990, 22, 917-926.
[21] J. Scott, S.J. Palmer, Sites of cadmium uptake in bacteria used for biosorption. Appl. Microbiol. Biotechnol., 1990, 33, 221-225.
[22] T.Y. Peng, T.W. Koon, Biosorption of cadmium and copper by Saccharomyces cerevisiae. Microbial Utilisation of Renewable Resources, 1993, 8, 494-504.
ON STUDY OF A METRIC ON C(S^1, S^1)

RB Yadav and Srikanth KV

arXiv:1810.02686, 27 Sep 2018

Abstract. In this article we define a metric on C(S^1, S^1). Also, we give some density results in C(S^1, S^1).

1991 Mathematics Subject Classification. Primary 54C35, Secondary 54C10.
Introduction
M. H. Stone gave the Stone-Weierstrass Theorem [3], which is a density result with respect to the uniform topology. This result generalizes the Weierstrass Approximation Theorem by lightening the restrictions imposed on the domain over which the given functions are defined. By taking the co-domain of the given functions to be the complex plane in place of the real line, he went further and gave a complex version of the latter result, the complex Stone-Weierstrass Theorem [4]. In this paper our aim is to seek density results for C(S^1, S^1), the class of continuous functions from the unit circle S^1, given by S^1 = {(x, y) ∈ R^2 : x^2 + y^2 = 1}, to itself.
Preliminaries
In this section we recall some basic notions. Let X be a compact Hausdorff space. Define C(X, R) = {f : X → R : f is continuous}.
Definition 2.1. (1) An algebra is a bimodule together with a bilinear product. (2) If A and A_1 are algebras such that A_1 ⊂ A and the module and product structures on A_1 are the ones induced from A, then A_1 is called a subalgebra of A.
Example 2.0.1.
(1) R and C are algebras.
(2) C(X, R) and C(X, C) are algebras.
(3) The set of polynomials is a subalgebra of C([a, b], R).
Definition 2.2. (1) A subalgebra A ⊂ C(X, R) is said to be unital if 1 ∈ A. (2) A ⊂ C(X, R) is said to separate points of X if for all a, b ∈ X with a ≠ b there exists f ∈ A such that f(a) ≠ f(b).
Theorem 2.1 (Stone-Weierstrass theorem). If A is a unital sub-algebra of C(X, R) which separates points then A is dense in C(X, R) in the topology induced by sup metric.
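A classical constructive witness for this theorem in the special case X = [0, 1] with A the polynomials is the Bernstein operator. The following sketch is our own illustration, not part of the paper: it approximates a continuous but non-smooth function and shows the sup-norm error shrinking as the degree grows.

```python
import math

def bernstein_approx(f, n):
    """Return B_n(f), the degree-n Bernstein polynomial of f on [0, 1]."""
    def Bn(x):
        return sum(
            f(k / n) * math.comb(n, k) * x**k * (1 - x) ** (n - k)
            for k in range(n + 1)
        )
    return Bn

# The polynomials form a unital subalgebra of C([0,1], R) separating points,
# and B_n(f) -> f uniformly, as the theorem guarantees for dense subalgebras.
f = lambda x: abs(x - 0.5)             # continuous but not smooth
grid = [i / 200 for i in range(201)]
err = lambda n: max(abs(bernstein_approx(f, n)(x) - f(x)) for x in grid)
print(err(10), err(100))
```

The error at degree 100 is well below the error at degree 10, illustrating uniform convergence of B_n(f) to f.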
Next we recall some basic notions related to covering spaces. Let E and X be topological spaces, and let q : E → X be a continuous map.
Definition 2.3. An open set U ⊂ X is said to be evenly covered by q if q^{-1}(U) is a disjoint union of connected open subsets of E (called the sheets of the covering over U), each of which is mapped homeomorphically onto U by q.
Definition 2.4. A covering map or projection map is a continuous surjective map q : E → X such that E is connected and locally path-connected, and every point of X has an evenly covered neighborhood. If q : E → X is a covering map, we call E a covering space of X and X the base of the covering.
Example 2.0.2. The exponential quotient map p : R → S 1 given by p(x) = exp(2πıx) is a covering map.
Example 2.0.3. The n th power map p n : S 1 → S 1 given by p n (z) = z n is also a covering map.
Example 2.0.4. Let T^n = S^1 × S^1 × ... × S^1 (n times). Define p_n : R^n → T^n by p_n(x_1, ..., x_n) = (p(x_1), ..., p(x_n)), where p is the exponential map of Example 2.0.2. It can be verified that p_n is a covering map.
Definition 2.5. If q : E → X is a covering map and φ : Y → X is any continuous map, a lift of φ is a continuous map φ̃ : Y → E such that q ∘ φ̃ = φ.
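As a hypothetical numeric illustration of lifts through the exponential covering map p(x) = exp(2πıx) of Example 2.0.2 (the code and all names below are ours), a discretized path in S^1 can be lifted step by step, each step taken on the unique nearby sheet; for a loop, the winding number then appears as the increment of the lift between its endpoints.

```python
import cmath, math

def lift(path_points):
    """Numerically lift a discretized path in S^1 (unit-modulus complex
    numbers) through p(x) = exp(2*pi*i*x), choosing the lift value in [0, 1)
    at t = 0.  Works when consecutive samples are close: each step is lifted
    to the nearby sheet, a discrete analogue of unique path lifting."""
    lifted = [(cmath.phase(path_points[0]) / (2 * math.pi)) % 1.0]
    for z_prev, z in zip(path_points, path_points[1:]):
        step = cmath.phase(z / z_prev) / (2 * math.pi)  # in (-1/2, 1/2]
        lifted.append(lifted[-1] + step)
    return lifted

# Sample the loop t -> alpha(t)^2, which winds around S^1 twice.
N = 1000
loop = [cmath.exp(2j * math.pi * 2 * t / N) for t in range(N + 1)]
tilde = lift(loop)
print(tilde[0], tilde[-1] - tilde[0])   # starts in [0, 1); increment close to 2
```

Composing the lift with p reproduces the original path, and the increment tilde[-1] − tilde[0] is an integer for any loop.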
From [2] we have the following result, called the Path Lifting Property:

Theorem 2.2 (Path Lifting Property). Let q : E → X be a covering map. Then for every path f : I → X and every e ∈ q^{-1}(f(0)) there exists a unique lift f̃ : I → E of f with f̃(0) = e.

3. A metric on C(S^1, S^1)

Define C̃(I, R) = {f : I → R | f is continuous with f(0) ∈ [0, 1) and f(1) − f(0) ∈ Z}.

Consider the function α : [0, 1] → S^1 given by α(t) = (cos 2πt, sin 2πt) and the covering space R of S^1 with projection map p given by p(t) = (cos 2πt, sin 2πt).

For f ∈ C(S^1, S^1), let f̃_α be the unique lift (Theorem 2.2) of f ∘ α such that f̃_α(0) ∈ [0, 1). Clearly f̃_α ∈ C̃(I, R). For f, g ∈ C(S^1, S^1), define

d_0(f, g) = 2π sup_{x ∈ S^1, x ≠ (1,0)} |f̃_α(α^{-1}(x)) − g̃_α(α^{-1}(x))|.

Let C̃(I, R) be equipped with the metric d_1 induced by the sup norm on it.

Proposition 3.1. d_0 is a metric on C(S^1, S^1) and there exists a homeomorphism φ : C(S^1, S^1) → C̃(I, R) such that d_0(f, g) = 2π d_1(φ(f), φ(g)), for all f and g in C(S^1, S^1).

Proof. Define φ : C(S^1, S^1) → C̃(I, R) by φ(f)(t) = f̃_α(t), for all t ∈ I.

Now, φ(f) = φ(g) ⇒ f̃_α(t) = g̃_α(t) for all t ∈ I ⇒ p ∘ f̃_α(t) = p ∘ g̃_α(t) for all t ∈ I ⇒ f ∘ α(t) = g ∘ α(t) for all t ∈ I ⇒ f(x) = g(x) for all x ∈ S^1 ⇒ f = g. So φ is injective.

Given any g ∈ C̃(I, R), define f by

f(x) = p ∘ g ∘ α^{-1}(x) if x ∈ S^1 − {(1, 0)}, and f(x) = p ∘ g(0) = p ∘ g(1) otherwise.

Clearly f ∈ C(S^1, S^1) and φ(f) = g. So φ is onto. Finally,

d_0(f, g) = 2π sup_{x ∈ S^1, x ≠ (1,0)} |f̃_α(α^{-1}(x)) − g̃_α(α^{-1}(x))|
= 2π sup_{x ∈ S^1, x ≠ (1,0)} |φ(f)(α^{-1}(x)) − φ(g)(α^{-1}(x))|
= 2π sup_{t ∈ (0,1)} |φ(f)(t) − φ(g)(t)|
= 2π sup_{t ∈ [0,1]} |φ(f)(t) − φ(g)(t)|
= 2π d_1(φ(f), φ(g)).
So d_0 is a metric and φ is a homeomorphism.

Proposition 3.2. For a ∈ R, the set P_{0,a}(I, R) of polynomials p with p(0) = 0 and p(1) = a is dense, with respect to the sup metric, in C_{0,a}(I, R) = {f ∈ C(I, R) | f(0) = 0, f(1) = a}.

Proof. Let f ∈ C_{0,a}(I, R). By the Weierstrass approximation theorem there is a sequence of polynomials {p_n} such that p_n → f uniformly. Now define p̃_n by p̃_n(x) = p_n(x) − p_n(0) + x(f(1) − p_n(1) + p_n(0)). Clearly p̃_n → f uniformly, and p̃_n(0) = 0, p̃_n(1) = a.
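The endpoint correction used in the proof above can be checked mechanically. In this sketch of ours, corrected(p, f1) implements p̃(x) = p(x) − p(0) + x(f(1) − p(1) + p(0)); it pins p̃(0) = 0 and p̃(1) = f(1), and the perturbation |p̃ − p| on [0, 1] is bounded by |p(0)| + |f(1) − p(1)|, so uniform convergence is preserved.

```python
# Given a polynomial approximation p of f, build the corrected polynomial
#   p~(x) = p(x) - p(0) + x * (f(1) - p(1) + p(0)),
# which has p~(0) = 0 and p~(1) = f(1) while staying uniformly close to p.
def corrected(p, f1):
    return lambda x: p(x) - p(0) + x * (f1 - p(1) + p(0))

f = lambda x: x                        # f(0) = 0, f(1) = 1, so a = 1
p = lambda x: 0.25 + 0.5 * x           # crude approximation with endpoint errors
pt = corrected(p, f(1))
print(pt(0), pt(1))                    # prints 0.0 1.0
```

Both endpoint values are now exact, while |p̃(x) − p(x)| = |−p(0)(1 − x) + x(f(1) − p(1))| never exceeds the two endpoint errors combined.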
Density results using PL functions and polynomials. Recall that a function f : [a, b] → R is said to be piecewise linear (PL) if there exist sets of points {p_i}_{i=1}^k in [a, b] and {q_i}_{i=1}^k in R such that p_1 = a, p_k = b and, for every 1 ≤ i ≤ k − 1 and all t ∈ [p_i, p_{i+1}],

f(t) = q_i + (q_{i+1} − q_i)(t − p_i)/(p_{i+1} − p_i).

For a, b ∈ R define the sets PL_{a,b}(I, R), C_{a,b}(I, R) and P_{a,b}(I, R) as displayed below.

Theorem 3.1. Given a, b ∈ R, the set P_{a,b}(I, R) is dense in C_{a,b}(I, R) with the sup metric.

Proof. Define ψ : C_{a,b}(I, R) → C_{0,b−a}(I, R) via ψ(f)(x) = f(x) − a.
Clearly ψ is a homeomorphism and ψ(P_{a,b}(I, R)) = P_{0,b−a}(I, R). So, since P_{0,b−a}(I, R) is dense in C_{0,b−a}(I, R), P_{a,b}(I, R) is dense in C_{a,b}(I, R). For q ∈ S^1 and m ∈ Z define

C^q_m(S^1, S^1) = {f ∈ C(S^1, S^1) | f((1, 0)) = q and W(f) = m},
PL^q_m(S^1, S^1) = {f ∈ C(S^1, S^1) | f̃_α ∈ PL_{q̃, q̃+m}(I, R), p ∘ f̃_α(0) = q},
P^q_m(S^1, S^1) = {f ∈ C(S^1, S^1) | f̃_α ∈ P_{q̃, q̃+m}(I, R), p ∘ f̃_α(0) = q}.
Theorem 3.2. The set PL^q_m(S^1, S^1) is dense in C^q_m(S^1, S^1).

Proof. If f̃_α(0) = q̃, then clearly φ^{−1}(C_{q̃, q̃+m}(I, R)) = C^q_m(S^1, S^1) and φ^{−1}(PL_{q̃, q̃+m}(I, R)) = PL^q_m(S^1, S^1). So, since PL_{q̃, q̃+m}(I, R) is dense in C_{q̃, q̃+m}(I, R) and φ is a homeomorphism, PL^q_m(S^1, S^1) is dense in C^q_m(S^1, S^1).
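Theorem 3.2 can be illustrated numerically: replacing a lift by its piecewise-linear interpolant on k equally spaced nodes keeps the endpoints (hence the base point and the winding number) while the sup distance shrinks as k grows. The script below is an added illustration with a hand-picked lift q̃ + mt + 0.3 sin(2πt); none of its names come from the source.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 5000)
m, q_lift = 2, 0.25                 # winding number m and lift base point q~ in [0, 1)
f_lift = q_lift + m * t + 0.3 * np.sin(2 * np.pi * t)   # a lift with f~(0)=q~, f~(1)=q~+m

def pl_approx(y, k):
    """Piecewise-linear interpolant of the lift y on k equally spaced nodes."""
    nodes = np.linspace(0.0, 1.0, k)
    return np.interp(t, nodes, np.interp(nodes, t, y))

pl_lift = pl_approx(f_lift, 40)
endpoint_error = max(abs(pl_lift[0] - q_lift), abs(pl_lift[-1] - (q_lift + m)))
sup_error = np.max(np.abs(pl_lift - f_lift))   # O(1/k^2) for a C^2 lift
```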
Theorem 3.3. The set P^q_m(S^1, S^1) is dense in C^q_m(S^1, S^1).

Proof. The proof is similar to that of Theorem 3.2.
4. Stone-Weierstrass type theorem for C(S^1, S^1)

Let X be a compact metric space. Fix u, v ∈ X with u ≠ v, a, b ∈ R and A ⊂ C(X, R). Define

C^{a,b}_{u,v}(X, R) = {f ∈ C(X, R) | f(u) = a, f(v) = b},
A^{a,b}_{u,v}(X, R) = {f ∈ A | f(u) = a, f(v) = b}.

Lemma 4.1. If A is a unital sub-algebra of C(X, R) which separates points and a, b are any two distinct real numbers, then A^{a,b}_{u,v}(X, R) is dense in C^{a,b}_{u,v}(X, R).

Proof. Since A is a unital sub-algebra of C(X, R) which separates points, by the Stone-Weierstrass theorem [3] A is dense in C(X, R). So for each f ∈ C(X, R) there exists a sequence f_n in A such that f_n → f uniformly. Since a ≠ b we can choose f_n in such a way that f_n(v) − f_n(u) ≠ 0 for all n. Now define f̃_n by

f̃_n(x) = a + (b − a)(f_n(x) − f_n(u))/(f_n(v) − f_n(u)).

Clearly f̃_n(u) = a and f̃_n(v) = b for all n, and f̃_n → f uniformly.
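The endpoint-pinning trick of Lemma 4.1 is easy to test numerically. The sketch below is an added illustration (the target f = exp on [0, 1] and its degree-6 Taylor approximant are arbitrary choices, with u = 0, v = 1): the renormalized approximants hit the prescribed endpoint values exactly while staying uniformly close to f.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
f = np.exp(x)                       # target with f(0) = 1 = a, f(1) = e = b
a, b = 1.0, np.e

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp at 0 (an element of the polynomial algebra)."""
    out = np.zeros_like(x)
    term = np.ones_like(x)
    for k in range(n + 1):
        out += term
        term = term * x / (k + 1)
    return out

def renormalize(fn):
    """Lemma 4.1: f~_n = a + (b − a)(f_n − f_n(u)) / (f_n(v) − f_n(u))."""
    fu, fv = fn[0], fn[-1]
    return a + (b - a) * (fn - fu) / (fv - fu)

fn_tilde = renormalize(taylor_exp(x, 6))
pin_error = max(abs(fn_tilde[0] - a), abs(fn_tilde[-1] - b))   # endpoints pinned exactly
sup_error = np.max(np.abs(fn_tilde - f))                        # still uniformly close
```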
Theorem 4.1. If A is a unital sub-algebra of C(X, R) which separates points and a, b are any two real numbers, then A^{a,b}_{u,v}(X, R) is dense in C^{a,b}_{u,v}(X, R).

Proof. When a ≠ b the result follows from the above lemma. So let f ∈ C^{a,a}_{u,v}(X, R). First let a ≠ 0. By the Urysohn Lemma there exists g ∈ C(X, R) such that g(u) = −a/4 and g(v) = a/4. Now define f_1 and f_2 by f_1 = (1/2)f + g and f_2 = (1/2)f − g. Clearly f_1 ∈ C^{a/4, 3a/4}_{u,v}(X, R) and f_2 ∈ C^{3a/4, a/4}_{u,v}(X, R).

Theorem 4.2 (Stone-Weierstrass with finitely many interpolatory constraints). For a natural number k, let S = {x_1, x_2, ..., x_k} ⊂ X (with x_i ≠ x_j for distinct i and j) and let V = {v_1, v_2, ..., v_k} ⊂ R. If A is a unital sub-algebra of C(X, R) which separates points, then A^V_S(X, R) is dense in C^V_S(X, R).

For q ∈ S^1, m ∈ Z and a unital sub-algebra A of C(I, R) which separates points, define

P^A_{m,q}(S^1, S^1) = {f ∈ C(S^1, S^1) | f̃_α ∈ A^{q̃, q̃+m}_{0,1}(I, R), p ∘ f̃_α(0) = q}.
Theorem 4.3. For m ∈ Z and q ∈ S^1, P^A_{m,q}(S^1, S^1) is dense in C^q_m(S^1, S^1). Proof. The proof is similar to that of Theorem 3.2.
Lemma 2.1. Let q : E → X be a covering map. Suppose f : [0, 1] → X is any path, and e ∈ E is any point in the fiber of q over f(0). Then there exists a unique lift f̃ : I → E of f such that f̃(0) = e.
3. A metric on C(S^1, S^1)

Let C(S^1, S^1) denote the collection of all continuous functions from S^1 to S^1. Further let I = [0, 1] and C(I, R) = {f | f : I → R is continuous with f(0) ∈ [0, 1) and f(1) − f(0) ∈ Z}.
PL_{a,b}(I, R) = {f | f : I → R is PL and f(0) = a, f(1) = b},
C_{a,b}(I, R) = {f ∈ C(I, R) | f(0) = a, f(1) = b},
P_{a,b}(I, R) = {p(x) | p(x) is a polynomial on R with p(0) = a, p(1) = b}.
Note that PL_{a,b}(I, R) is dense in C_{a,b}(I, R).
Lemma 3.1. Given a ∈ R, the set P_{0,a}(I, R) is dense in C_{0,a}(I, R) with the sup metric.
Definition 3.1. Let f : S^1 → S^1 be a continuous function. Then the winding number W(f) of f is the integer f̃_α(1) − f̃_α(0).
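Assuming the lift is computed by continuous angle unwrapping, the winding number of explicit maps can be evaluated directly; the helper below is an added numerical illustration, not part of the source.

```python
import numpy as np

def winding_number(f, n=4096):
    """W(f) = f~_alpha(1) − f~_alpha(0): the total angle swept by f along alpha,
    recovered by unwrapping the principal angle on a fine grid."""
    t = np.linspace(0.0, 1.0, n)
    z = np.exp(2j * np.pi * t)          # alpha(t) as a complex number
    ang = np.unwrap(np.angle(f(z)))
    return int(round((ang[-1] - ang[0]) / (2 * np.pi)))

w3 = winding_number(lambda z: z**3)                     # degree-3 covering of S^1
w_neg2 = winding_number(lambda z: np.conj(z)**2)        # orientation-reversing map
w0 = winding_number(lambda z: np.exp(1j * np.real(z)))  # null-homotopic map
```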
Moreover f_1 + f_2 = f. So by the above lemma there exist sequences f_{1,n} ∈ A^{a/4, 3a/4}_{u,v}(X, R) and f_{2,n} ∈ A^{3a/4, a/4}_{u,v}(X, R) such that f_{1,n} → f_1 and f_{2,n} → f_2 uniformly. Now define f_n = f_{1,n} + f_{2,n}. Clearly f_n ∈ A^{a,a}_{u,v}(X, R) and f_n → f uniformly. The proof for the case a = 0 can be given in the same way as in Theorem 3.1.

Let X be a compact Hausdorff space. Let F denote either the field of real numbers R or the field of complex numbers C, let C(X, F) denote the collection of F-valued continuous functions on X with the sup norm, and let A denote a subset of C(X, F). Let k be any natural number, let S = {x_1, x_2, ..., x_k} ⊂ X (with x_i ≠ x_j for distinct i and j), let V = {v_1, v_2, ..., v_k} ⊂ F, and set C^V_S(X, F) = {f ∈ C(X, F) | f(x_1) = v_1, ..., f(x_k) = v_k}. Theorem 4.1 is a particular case of the following theorem from [1].
Srikanth V Kuppam and Raj Bhawan Yadav, On an extension of the Stone-Weierstrass theorem, Mathematical Communications, Vol. 19 (2014), pp. 391-396.
John M. Lee, Introduction to Topological Manifolds, Springer-Verlag, New York, 2000.
Marshall H. Stone, Generalized Weierstrass Approximation Theorem, Mathematics Magazine, Vol. 21, No. 4 (Mar.-Apr., 1948), pp. 167-184.
Marshall H. Stone, Generalized Weierstrass Approximation Theorem, Mathematics Magazine, Vol. 21, No. 5 (May-Jun., 1948), pp. 237-254.
| [] |
[
"Variational approach to second species periodic solutions of Poincaré of the 3 body problem"
] | [
"Sergey Bolotin \nDepartment of Mathematics\nDepartment of Matematics Sapienza\nUniversity of Wisconsin-Madison and Moscow Steklov Mathematical Institute\nUniversity of Rome\n\n",
"Piero Negrini \nDepartment of Mathematics\nDepartment of Matematics Sapienza\nUniversity of Wisconsin-Madison and Moscow Steklov Mathematical Institute\nUniversity of Rome\n\n"
] | [
"Department of Mathematics\nDepartment of Matematics Sapienza\nUniversity of Wisconsin-Madison and Moscow Steklov Mathematical Institute\nUniversity of Rome\n",
"Department of Mathematics\nDepartment of Matematics Sapienza\nUniversity of Wisconsin-Madison and Moscow Steklov Mathematical Institute\nUniversity of Rome\n"
] | [] | We consider the plane 3 body problem with 2 of the masses small. Periodic solutions with near collisions of small bodies were named by Poincaré second species periodic solutions. Such solutions shadow chains of collision orbits of 2 uncoupled Kepler problems. Poincaré only sketched the proof of the existence of second species solutions. Rigorous proofs appeared much later and only for the restricted 3 body problem. We develop a variational approach to the existence of second species periodic solutions for the nonrestricted 3 body problem. As an application, we give a rigorous proof of the existence of a class of second species solutions. | 10.3934/dcds.2013.33.1009 | [
"https://arxiv.org/pdf/1104.2288v1.pdf"
] | 119,679,649 | 1104.2288 | 28a62504eb088d6da34887b1c03276631553331b |
Variational approach to second species periodic solutions of Poincaré of the 3 body problem
12 Apr 2011 January 19, 2013
Sergey Bolotin
Department of Mathematics
Department of Mathematics, Sapienza
University of Wisconsin-Madison and Moscow Steklov Mathematical Institute
University of Rome
Piero Negrini
Department of Mathematics
Department of Mathematics, Sapienza
University of Wisconsin-Madison and Moscow Steklov Mathematical Institute
University of Rome
Dedicated to Ernesto Lacomba on the occasion of his 65th birthday
We consider the plane 3 body problem with 2 of the masses small. Periodic solutions with near collisions of small bodies were named by Poincaré second species periodic solutions. Such solutions shadow chains of collision orbits of 2 uncoupled Kepler problems. Poincaré only sketched the proof of the existence of second species solutions. Rigorous proofs appeared much later and only for the restricted 3 body problem. We develop a variational approach to the existence of second species periodic solutions for the nonrestricted 3 body problem. As an application, we give a rigorous proof of the existence of a class of second species solutions.
Introduction
Consider the plane 3-body problem with masses m 1 , m 2 , m 3 . Suppose that m 3 is much larger than m 1 , m 2 , i.e. µ = (m 1 + m 2 )/m 3 is a small parameter. Then m 1 /m 3 = µα 1 , m 2 /m 3 = µα 2 , α 1 + α 2 = 1.
Let q 1 , q 2 ∈ R 2 be positions of m 1 , m 2 with respect to m 3 (Poincaré's heliocentric coordinates) and p 1 , p 2 ∈ R 2 their scaled momenta. The motion of m 1 , m 2 with respect to m 3 is described by a Hamiltonian system (H µ ) with Hamiltonian
H_µ(q, p) = H_0(q, p) + µ(|p_1 + p_2|^2/2 − α_1α_2/|q_1 − q_2|),   (1.1)

where

H_0(q, p) = |p_1|^2/(2α_1) + |p_2|^2/(2α_2) − α_1/|q_1| − α_2/|q_2|.
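As a quick numerical sanity check (an added illustration, using complex numbers for plane vectors), at µ = 0 the Hamiltonian splits into two Kepler energies H_1 + H_2, and for any µ it is invariant under the simultaneous rotation (q, p) → (e^{iθ}q, e^{iθ}p), which is the symmetry behind the angular momentum integral:

```python
import numpy as np

def H_mu(q1, p1, q2, p2, mu, a1, a2):
    """Heliocentric Hamiltonian (1.1); plane vectors encoded as complex numbers."""
    H0 = (abs(p1)**2 / (2*a1) + abs(p2)**2 / (2*a2)
          - a1 / abs(q1) - a2 / abs(q2))
    return H0 + mu * (abs(p1 + p2)**2 / 2 - a1*a2 / abs(q1 - q2))

a1, a2 = 0.3, 0.7                       # alpha_1 + alpha_2 = 1
q1, p1, q2, p2 = 1.0 + 0.2j, 0.1 - 0.5j, -0.8 + 1.1j, 0.4 + 0.3j

# mu = 0: two uncoupled Kepler problems, H_0 = H_1 + H_2
H0_val = H_mu(q1, p1, q2, p2, 0.0, a1, a2)
H1 = abs(p1)**2 / (2*a1) - a1 / abs(q1)
H2 = abs(p2)**2 / (2*a2) - a2 / abs(q2)

# rotational symmetry: H_mu is unchanged under (q, p) -> (e^{i th} q, e^{i th} p)
th = np.exp(0.77j)
H_orig = H_mu(q1, p1, q2, p2, 0.05, a1, a2)
H_rot = H_mu(th*q1, th*p1, th*q2, th*p2, 0.05, a1, a2)
```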
The Hamiltonian H µ is a small perturbation of the Hamiltonian H 0 = H 1 + H 2 describing 2 uncoupled Kepler problems. The configuration space of system
(H 0 ) is U 2 = U × U , where U = R 2 \ {0}.
The configuration space of the perturbed system (H µ ) is U 2 \ ∆, where ∆ = {q = (q 1 , q 2 ) ∈ U 2 : q 1 = q 2 } represents collisions of m 1 , m 2 .
System (H_µ) has the energy integral and the integral of angular momentum G(q, p) = G_1 + G_2 = iq_1 · p_1 + iq_2 · p_2 corresponding to the rotational symmetry e^{iθ} : R^2 → R^2. System (H_0) has additional first integrals H_1, H_2 and G_1, G_2, the energies and angular momenta of m_1, m_2. In the domain P = {(q, p) ∈ U^2 × R^4 : H_1, H_2 < 0, G_1, G_2 ≠ 0}, orbits of m_1, m_2 are Kepler ellipses, and solutions of system (H_0) are quasiperiodic with 2 frequencies.
Let R ⊂ P be the regular domain -the set of points in P such that the corresponding Kepler ellipses do not cross. For small µ > 0 solutions of system (H µ ) in R are O(µ)-approximated by solutions of system (H 0 ) on finite time intervals independent of µ. By the classical perturbation theory, away from resonances the same is true on longer time intervals. Moreover, as proved by Arnold [3], for small µ > 0, system (H µ ) has a large measure of invariant 2-dimensional KAM tori on which solutions are quasiperiodic, and thus well approximated (modulo rotation) by solutions of system (H 0 ) on infinite time intervals.
In the singular domain S ⊂ P , where the corresponding Kepler ellipses cross, the classical perturbation theory does not work. Indeed, for almost any initial condition in S, solution of system (H 0 ) is quasiperiodic with incommensurable frequencies, and so eventually m 1 , m 2 simultaneously approach an intersection point of Kepler ellipses. Then the perturbation in (1.1) becomes large, and so it can not be ignored even in the first approximation in µ.
For small µ > 0 solutions of the 3 body problem (H µ ) in S can be described as follows. The bodies m 1 , m 2 move along nearly Kepler ellipses and after many revolutions they almost collide. Then they start moving along a new pair of nearly Kepler orbits. If the new energies H 1 , H 2 are both negative, so the new Kepler orbits are ellipses, then m 1 , m 2 will again nearly collide after many revolutions, and the process repeats itself. Thus almost collision solutions of the 3-body problem (H µ ) shadow chains of collision orbits of system (H 0 ).
Almost collision periodic solutions of system (H µ ) were first studied by Poincaré in New Methods of Celestial Mechanics. Poincaré named them second species periodic solutions. However, he did not provide a rigorous existence proof. Rigorous proofs appeared much later (see e.g. [16,10,7]) and only for the restricted 3 body problem, circular and elliptic. In [10,8] also chaotic second species solutions of the circular and elliptic restricted problem were studied.
The goal of this paper is to develop a variational approach to almost collision periodic orbits of the nonrestricted 3 body problem. As an application, we will give a rigorous proof of the existence of a class of almost collision periodic orbits. Chaotic almost collision orbits will be studied in another paper. Remark 1.1. It is possible to fix the value of angular momentum G and reduce rotational symmetry. Then we obtain a Hamiltonian system with 3 degrees of freedom. However, since reduction of the rotational symmetry considerably complicates the Hamiltonian, it is simpler to work with the original Hamiltonian system (H µ ) with 4 degrees of freedom. Remark 1.2. We consider only near collisions of small masses m 1 and m 2 and exclude near collisions of m 1 , m 2 with m 3 . In particular, triple collisions are excluded. It is well known that double collisions can be regularized, but the Levi-Civita regularization becomes singular as µ → 0. Understanding this singularity is the base for our methods. Levi-Civita regularization was previously used to study second species solutions for the restricted 3 body problem, see e.g. [16,10,14]. Remark 1.3. Main results of this paper hold for more general Hamiltonians with singularity, for example
H_µ(q, p) = |p_1|^2/(2a_1(q)) + |p_2|^2/(2a_2(q)) + g(q, µ) − µf(q, µ)/|q_1 − q_2|,   a_1, a_2, f > 0,
where all functions are smooth without singularity at q 1 = q 2 .
Main results
A solution of the Hamiltonian system (H µ ) is determined by its projection to the configuration space U 2 \ ∆ which will be called a trajectory. Let L µ be the Lagrangian corresponding to H µ . A T -periodic trajectory γ : R → U 2 \ ∆ is a critical point of the Hamilton action functional
A_µ(T, γ) = ∫_0^T L_µ(γ(t), γ̇(t)) dt   (2.1)
on the space of T -periodic W 1,2 loc curves in U 2 \ ∆. We write T explicitly in A µ (T, γ) because later it will become a variable. Any trajectory of system (H 0 ) has the form γ = (γ 1 , γ 2 ), where γ j is a trajectory of the Kepler problem. For µ = 0 the action functional A = A 0 splits:
A(T, γ) = α_1 B(T, γ_1) + α_2 B(T, γ_2),   γ = (γ_1, γ_2),   (2.2)

where

B(T, σ) = ∫_0^T (|σ̇(t)|^2/2 + 1/|σ(t)|) dt   (2.3)
is the action functional of the Kepler problem on the space of T -periodic W 1,2 loc curves σ : R → U . We have to consider also trajectories of system (H 0 ) with collisions. A Tperiodic curve γ = (γ 1 , γ 2 ) : R → U 2 is called a periodic n-collision chain if there exist time moments
t = (t 1 , . . . , t n ), t 1 < . . . < t n < t n+1 = t 1 + T, (2.4)
such that:
• γ has collisions at t = t j , so that
γ(t j ) = (x j , x j ) ∈ ∆, γ 1 (t j ) = γ 2 (t j ) = x j .
• γ| [tj ,tj+1] is a trajectory of system (H 0 ) which will be called a collision orbit.
• Momentum p(t) = (p_1(t), p_2(t)) = (α_1 γ̇_1(t), α_2 γ̇_2(t)) changes at collisions so that the total momentum y = p_1 + p_2 is continuous:
y(t j + 0) = y(t j − 0) = y j . (2.5)
Equivalently, the jump of momentum p(t j + 0) − p(t j − 0) is orthogonal to ∆ at γ(t j ).
• The total energy H 0 does not change at collision:
H 0 (γ(t j ), p(t j + 0)) = H 0 (γ(t j ), p(t j − 0)). (2.6)
By (2.5), the total angular momentum G = G 1 +G 2 is preserved at collisions:
G(γ(t j ), p(t j ± 0)) = ix j · y j .
Hence G is constant along γ. By (2.6), the total energy H 0 = H 1 + H 2 = E is also constant along γ, but not the energies H 1 , H 2 of m 1 , m 2 , or their angular momenta G 1 , G 2 . A collision chain γ is a broken trajectory of system (H 0 ) -a concatenation of collision orbits with reflections from ∆. However, unlike for ordinary billiard systems, ∆ has codimension 2 in the configuration space U 2 , so the change of the normal component of the momentum at collision is not uniquely determined. Thus there is no direct interpretation of collision chains as trajectories of a dynamical system. Such an interpretation is given later on.
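The collision conditions (2.5)-(2.6) can be realized concretely: keep the total momentum y, rotate the scaled relative velocity α_1 p_2 − α_2 p_1 by an arbitrary angle (so its length is preserved), and rebuild the individual momenta. The sketch below is an added numerical check with arbitrary sample data; it confirms that y, H_0 and G are conserved by such an update, while the individual energy H_1 is not.

```python
import numpy as np

a1, a2 = 0.4, 0.6          # alpha_1 + alpha_2 = 1; plane vectors as complex numbers

def H0(x, p1, p2):
    """Energy H_0 = H_1 + H_2 at a collision configuration q1 = q2 = x."""
    return abs(p1)**2/(2*a1) + abs(p2)**2/(2*a2) - 1.0/abs(x)

def reflect(p1, p2, theta):
    """Collision update: keep y = p1 + p2, rotate v = a1*p2 - a2*p1 by theta,
    and rebuild the momenta from p1 = a1*y - v, p2 = a2*y + v
    (valid because a1 + a2 = 1)."""
    y = p1 + p2
    v_plus = np.exp(1j*theta) * (a1*p2 - a2*p1)
    return a1*y - v_plus, a2*y + v_plus

x = 1.0 + 0.5j
p1m, p2m = 0.3 - 0.2j, -0.1 + 0.4j
p1p, p2p = reflect(p1m, p2m, 1.3)

y_jump = abs((p1p + p2p) - (p1m + p2m))                    # condition (2.5)
h_jump = abs(H0(x, p1p, p2p) - H0(x, p1m, p2m))            # condition (2.6)
g_jump = abs(np.imag(np.conj(x)*(p1p + p2p)) - np.imag(np.conj(x)*(p1m + p2m)))
h1_change = abs(abs(p1p)**2/(2*a1) - abs(p1m)**2/(2*a1))   # H_1 alone jumps
```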
Collision chains are limits of trajectories of system (H µ ) which approach collisions as µ → 0. Indeed, we have: Proposition 2.1. Let γ µ be a T µ -periodic trajectory of system (H µ ) which uniformly converges, as µ → 0, to a T -periodic curve γ. If γ([0, T ]) ∩ ∆ is a finite set, then γ is a periodic collision chain.
We say that γ µ is an almost collision orbit shadowing the collision chain γ. A similar statement holds for nonperiodic collision chains.
Intuitively, Proposition 2.1 is almost evident: a near collision of m 1 , m 2 lasts a short time during which the influence of the non-colliding body m 3 is negligible. Then m 1 , m 2 form a 2 body problem, so their total momentum y = p 1 + p 2 and total energy H 0 = H 1 + H 2 are almost preserved. This yields (2.5)-(2.6). This can be made into a rigorous proof, see e.g. [1]. A better way to prove Proposition 2.1 is by using the Levi-Civita regularization, see section 7.
Collision chains can be characterized as extremals of Hamilton's action functional (2.2). Let Ω^T_n be the set of ω = (t, T, γ), where t = (t_1, ..., t_n) satisfies (2.4) and γ : R → U^2 is a T-periodic W^{1,2}_{loc} curve such that γ(t_j) = (x_j, x_j) ∈ ∆. The collision times t_j and collision points x_j are not fixed. Then Ω^T_n can be identified with an open set in a Hilbert space (see (2.8)), and the collision times t_j and collision points x_j are smooth functions on Ω^T_n.
Remark 2.1. If γ(t) / ∈ ∆ for t = t j , then time moments t j are determined by the curve γ, i.e. the projection Ω T n → W 1,2 (R/T Z, U 2 ), (t, T, γ) → γ, is injective. Then ω = (t, T, γ) is determined by γ. But t j are not continuous functions of γ, so we have to include the variables t in the definition of Ω T n .
Remark 2.2. In the one-dimensional calculus of variations Hilbert spaces are unnecessary: at least locally function spaces can be replaced by finite dimensional subspaces of broken extremals. We will use this approximation later on. However, in this section we use conventional W 1,2 setting.
The action functional A(ω) = A(T, γ) is a smooth function on Ω T n .
Proposition 2.2. γ is a T -periodic n-collision chain iff ω = (t, T, γ) is a critical point of the Hamilton action A on Ω T n .
Proof. If ω = (t, T, γ) is a critical point of A on Ω^T_n, then each segment γ|_{[t_{j−1}, t_j]} is a trajectory of system (H_0). Then by the first variation formula [2],

dA(ω) = Σ_{j=1}^n (∆h_j dt_j − ∆y_j · dx_j),

where ∆h_j and ∆y_j are the jumps of energy and total momentum:

∆y_j = y_j^+ − y_j^−,   y_j^± = y(t_j ± 0),
∆h_j = h_j^+ − h_j^−,   h_j^± = H_0(γ(t_j), p(t_j ± 0)).
Since the differentials dt j , dx j are independent, critical points of A satisfy ∆h j = 0 and ∆y j = 0. This implies (2.5)-(2.6). Converse is also evident.
For collision chains with fixed energy H 0 = E we use Maupertuis's variational principle [2]. Let Ω n = ∪ T >0 Ω T n . The period T > 0 is now a smooth function on Ω n . Hamilton's action is replaced by the Maupertuis action functional
A E (ω) = A E (T, γ) = A(T, γ) + ET, ω = (t, T, γ). (2.7)
If γ is a collision chain with energy E, then
A^E(T, γ) = ∫_γ p · dq
is the classical Maupertuis action. The Maupertuis principle for collision chains is as follows:
Proposition 2.3. γ is a T -periodic n-collision chain of energy E iff ω = (t, T, γ) is an extremal of the Maupertuis functional A E on Ω n .
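For the pure Kepler problem the Maupertuis principle can be tested on the family of circular orbits: B(T, σ_a) = 3π√a with period T(a) = 2πa^{3/2}, and the critical point of A^E = B + ET over this family sits exactly at the orbit whose energy is E, i.e. a = (−2E)^{−1}. The script below is an added numerical check of both facts (it is an illustration restricted to the circular family, not a computation from the paper).

```python
import numpy as np

def kepler_action(a, n=100_000):
    """Hamilton action B(T, σ) over one period of the circular Kepler orbit of
    radius a (analytically B = 3π√a), via finite differences and trapezoids."""
    T = 2*np.pi*a**1.5                       # Kepler's third law for the -1/|q| potential
    t = np.linspace(0.0, T, n)
    sigma = a*np.exp(1j*t*a**-1.5)           # circular orbit as a complex curve
    vel = np.gradient(sigma, t[1] - t[0])    # finite-difference velocity
    f = np.abs(vel)**2/2 + 1.0/np.abs(sigma)
    return float(np.sum(0.5*(f[:-1] + f[1:]))*(t[1] - t[0]))

B_num = kepler_action(2.0)
B_exact = 3*np.pi*np.sqrt(2.0)

# Maupertuis action A^E = B + E*T over the family of circular orbits:
E = -0.25
a_grid = np.linspace(0.5, 4.0, 40_001)
A_grid = 3*np.pi*np.sqrt(a_grid) + E*2*np.pi*a_grid**1.5
a_star = a_grid[np.argmax(A_grid)]           # the critical point of the family
```

The critical radius a_star recovers the orbit of energy E, since −1/(2·a_star) = E.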
Due to time and rotation invariance, critical points of the action functional are degenerate. To eliminate time degeneracy, we identify collision chains which differ by time translation
γ(t) → γ(t − τ ), t j → t j + τ, τ ∈ R.
This defines a group action of R on Ω_n. The coordinates on the quotient space Ω̄_n = Ω_n/R can be defined as follows. Let

s_j = t_{j+1} − t_j > 0,   σ_j(τ) = γ(t_j + τ s_j),   0 ≤ τ ≤ 1.

Denote s = (s_1, ..., s_n) ∈ R^n_+ and σ = (σ_1, ..., σ_n) ∈ W_n, where

W_n = {σ = (σ_1, ..., σ_n) : σ_j ∈ W^{1,2}([0, 1], U^2), σ_j(1) = σ_{j+1}(0) ∈ ∆}.

The map Ω̄_n → R^n_+ × W_n, (t, T, γ) → (s, σ), makes it possible to identify Ω̄_n with R^n_+ × W_n. We can represent σ_j by

σ̂_j ∈ W^{1,2}_0([0, 1], U^2),   σ̂_j(τ) = σ_j(τ) − (1 − τ)x_j − τ x_{j+1}.

Then σ is determined by (x, σ̂), where x = (x_1, ..., x_n) ∈ U^n and σ̂ = (σ̂_1, ..., σ̂_n). Hence W_n ≅ U^n × W^{1,2}_0([0, 1], U^{2n}). Finally

Ω̄_n ≅ R^n_+ × W_n ≅ R^n_+ × U^n × W^{1,2}_0([0, 1], U^{2n}),   Ω_n ≅ Ω̄_n × R.   (2.8)

The action functional gives a smooth function A(s, σ) on Ω̄_n, invariant under rotations: A(s, e^{iθ}σ) = A(s, σ). We deal with the rotation degeneracy later on.
Often it is convenient to use the parametrization-independent Jacobi form of the Maupertuis action functional, the length of γ in the Jacobi metric ds_E:

J_E(γ) = ∫_γ ds_E,   ds_E = max_p {p · dq : H_0(q, p) = E} = √(2(E + α_1/|q_1| + α_2/|q_2|)(α_1|dq_1|^2 + α_2|dq_2|^2)).
Then
J E (γ) ≤ A E (T, γ) and if γ is parametrized so that H 0 ≡ E, then A E (T, γ) = J E (γ)
. Thus up to parametrization, extremals of J_E and A^E are the same. For a collision chain γ corresponding to (s, σ) ∈ R^n_+ × W_n,

J_E(γ) = J_E(σ) = Σ_{j=1}^n J_E(σ_j)
is a function on W_n. We obtain:

Proposition 2.4. If γ is a periodic n-collision chain of energy E, then the corresponding σ ∈ W_n is an extremal of the Jacobi action J_E. Conversely, if σ is an extremal of J_E and each σ_j is reparametrized so that H_0 ≡ E, then the corresponding γ is a periodic n-collision chain.
We will not use Jacobi's variational principle since J E is not a smooth function. A discrete version of Jacobi's action is smooth and we will use it later on.
Similar variational principles hold for collision chains periodic in a rotating coordinate frame: γ(t + T ) = e iΦ γ(t) for some quasiperiod T and phase Φ. We call γ periodic modulo rotation. If Φ / ∈ 2πQ, then γ is quasiperiodic in a fixed coordinate frame.
The corresponding function space is defined as follows. Let Ω̃_n be the set of all (t, T, γ, Φ), where t, T are as before, Φ ∈ R and the W^{1,2}_{loc} curve γ : R → U^2 satisfies γ(t + T) = e^{iΦ}γ(t) and γ(t_j) ∈ ∆. Then Ω̃_n can be identified with an open set in a Hilbert space, and t_j, x_j, T, Φ are smooth functions on Ω̃_n. In fact Ω̃_n ≅ Ω_n × R. Indeed, γ̄(t) = e^{−itΦ/T}γ(t) is a T-periodic curve, so (t, T, γ̄) ∈ Ω_n.
Define the Maupertuis-Routh action functional on Ω̃_n by
A EG (t, T, γ, Φ) = A(T, γ) + ET − GΦ. (2.9)
This is a smooth function onΩ n and we have:
Proposition 2.5. γ is a periodic modulo rotation collision chain with energy E and angular momentum G iff (t, T, γ, Φ) is a critical point of the functional A^{EG} on Ω̃_n.
Remark 2.3. It seems natural to take Φ ∈ T = R/2πZ since Φ + 2π gives the same collision chain. But then the functional A EG will be multivalued: defined modulo 2πG.
Remark 2.4. The name of the functional A EG is motivated as follows. One can perform Routh's reduction [2] of the rotational symmetry for fixed G replacing the configuration space U 2 by U 2 = U 2 /T ∼ = R 2 + × T and the Lagrangian by the so called Routh function. Then the functional A EG becomes the Maupertuis functional for the reduced Routh system. Probably this observation is due to Birkhoff [5]. However, Routh's reduction makes the Lagrangian more complicated, so we do not use it in this paper.
There are several other possible variational principles for collision chains (for example, we may fix the phase Φ), but in the present paper we will use only the ones given above.
Sufficient condition for the existence of a periodic orbit of system (H µ ) shadowing a given collision chain γ requires that the chain is nontrivial in the following sense. Let
u(t) = γ 2 (t) − γ 1 (t) and let v(t) = αu(t) = α 1 p 2 (t) − α 2 p 1 (t), α = α 1 α 2 , (2.10) be the scaled relative velocity of m 1 , m 2 . Let v ± j = v(t j ± 0) be relative collision velocities. Since h ± j = |y j | 2 /2 + |v ± j | 2 /2α − |x j | −1 = E, (2.11) equations (2.5)-(2.6) imply that |v + j | = |v − j |:
relative speed is preserved at collision. We impose two essentially equivalent conditions:
Direction change condition. Relative collision velocity changes direction at collision: ∆v_j = v_j^+ − v_j^− ≠ 0, j = 1, ..., n. In particular, v_j^± ≠ 0.

No early collisions condition. γ(t) ∉ ∆ for t ≠ t_j.
If the direction change condition is not satisfied at some t_j, then γ̇(t_j − 0) = γ̇(t_j + 0), and so γ|_{[t_{j−1}, t_{j+1}]} is a smooth trajectory of system (H_0). Deleting the collision time moment t_j we obtain an (n − 1)-collision chain violating the no early collisions condition.
Conversely, if γ is a n-collision chain violating no early collisions condition, then adding an extra collision time moment, we obtain a (n + 1)-collision chain violating the changing direction condition. From now on we add these two equivalent conditions to the definition of a collision chain.
Remark 2.5. The changing direction condition implies that almost collision orbits γ µ shadowing the collision chain γ come O(µ)-close to collision. Often almost collision orbits discussed in Astronomy come close to collision, but not too close, for example O(µ ν )-close with ν ∈ (0, 1), see e.g. [15,18,14]. Such orbits change direction at near collision, but this change is small as µ → 0. Then the corresponding collision chains do not satisfy the changing direction condition:
∆v j = v + j − v − j = 0.
Our methods do not work for such almost collision orbits.
The changing direction condition makes it possible to construct a shadowing orbit γ µ of system (H µ ), but it does not prevent γ µ from having regularizable double collisions of m 1 , m 2 . To exclude such collisions we need to impose an extra condition:
No return condition. v_j^+ + v_j^− ≠ 0, j = 1, ..., n. But this condition is not as crucial as the changing direction condition, so we do not include it in the definition of a collision chain.
To construct shadowing orbits we also need some nondegeneracy assumptions. We say that a T -periodic n-collision chain γ with energy E is nondegenerate if ω = (t, T, γ) ∈ Ω n is a nondegenerate modulo symmetry critical point of the Maupertuis action A E on Ω n .
Due to time translation and rotation invariance, critical points of A^E are all degenerate: the group action γ(t) → e^{iθ}γ(t − τ) of R × T preserves A^E. We say that ω = (t, T, γ) ∈ Ω_n is nondegenerate modulo symmetry if the nullity of the quadratic form d^2A^E(ω) on T_ωΩ_n is 2, the lowest possible. Equivalently, the manifold M ⊂ Ω_n obtained from ω by the action of the group R × T is a nondegenerate critical manifold. Nondegeneracy modulo symmetry is equivalent to nondegeneracy of the corresponding critical point on Ω_n/(R × T).
As usual in the classical calculus of variations, the Hessian operator corresponding to d 2 A E (ω) is a sum of invertible and compact operators on the Hilbert space T ω Ω n , so nondegeneracy modulo symmetry implies that the Hessian has bounded inverse on T ω Ω n /T ω M . In fact, at least locally, A E can be reduced to a finite dimensional discrete action functional (see section 5), so all Hilbert spaces involved are essentially finite dimensional. Now two main results will be formulated.
Theorem 2.1. Let γ be a nondegenerate T -periodic collision chain with energy E. Then for small µ > 0 there exists a T µ -periodic orbit γ µ of system (H µ ) with energy E which O(µ) shadows γ:
T µ = T + O(µ), γ µ (t) = γ(t) + O(µ), t ∈ [0, T ].
If γ satisfies the no return condition, then γ µ has no collisions and there exist 0 < a < b independent of µ such that
µa ≤ d(γ µ (t j ), ∆) ≤ µb. (2.12)
Due to time translation and rotation symmetry, 4 multipliers of γ_µ (eigenvalues of the linear symplectic Poincaré map of R^8) are equal to 1. The nontrivial multipliers are λ_1, λ_1^{−1}, λ_2, λ_2^{−1}, where λ_1(µ) is real and large, of order |ln µ|, and λ_2(µ) has a limit λ_2(0) ≠ 1 as µ → 0 which is either complex with |λ_2(0)| = 1 or real.
Next we consider collision chains with fixed energy E and angular momentum G. Again we say that a periodic modulo rotation collision chain γ is nondegenerate if the corresponding (t, T, γ, Φ) ∈Ω n is a nondegenerate modulo symmetry critical point of the functional A EG onΩ n . Thus it has only degeneracy coming from rotation and time translation invariance. Theorem 2.2. Let γ be a T -periodic modulo rotation nondegenerate collision chain with energy E and angular momentum G. Then for small µ > 0 there exists a periodic modulo rotation orbit γ µ of system (H µ ) with energy E and angular momentum G which O(µ)-shadows γ:
γ µ (t + T µ ) = e iΦµ γ µ (t), γ µ (t) = γ(t) + O(µ), t ∈ [0, T ],
where
T µ = T + O(µ), Φ µ = Φ + O(µ).
The estimate (2.12) also holds here. Even if γ is periodic (Φ ∈ 2πQ), in general the shadowing orbit γ_µ will be periodic only modulo rotation, and thus quasiperiodic in a fixed coordinate frame.
To use Theorems 2.1-2.2, we need to find nondegenerate modulo symmetry collision chains. In general this is not easy. A simple application of Theorem 2.2, based on a perturbative approach, is given in section 3. More complex applications will be given in a future publication.
In section 4 a description of nondegenerate collision orbits is given. Using this description, in section 5 we reduce the action functionals to their discrete versions. Then in section 6 we formulate a local connection result -Theorem 6.1 -and use it to prove Theorem 2.1. The proof of Theorem 2.2 is similar. In section 7 we use Levi-Civita regularization to reduce Theorem 6.1 to Theorem 7.2 which is a generalization of the Shilnikov Lemma [21] to Hamiltonian systems with a normally hyperbolic critical manifold.
Remark 2.6. In this paper we do not attempt to use global variational methods. The reason is that although one can use global methods to find critical points of the action functionals, in general it is hard to check that the critical points satisfy the changing direction condition.
Restricted elliptic limit
Suppose that one of the small masses m 1 , m 2 is much smaller than the other: α 1 ≪ α 2 . In the formal limit α 1 → 0 we obtain the restricted elliptic 3 body problem for which many second species periodic solutions were obtained in [7]. These results do not immediately extend to the case of small α 1 > 0. However, we will show that they can be used to obtain many second species periodic solutions for the nonrestricted 3 body problem.
Let us fix energy E and angular momentum G. For α 1 ≪ α 2 , the Maupertuis-Routh action functional
A EG (t, T, γ, Φ) = α 1 B(T, γ 1 ) + α 2 B(T, γ 2 ) + ET − GΦ, γ = (γ 1 , γ 2 ), (3.1)
on Ω̃_n is a small perturbation of the Maupertuis-Routh action functional for the Kepler problem. Indeed, the functional A^{EG}_0 = A^{EG}|_{α_1=0} does not depend on γ_1:
A EG 0 (t, T, γ, Φ) = B EG (T, γ 2 , Φ) = B(T, γ 2 ) + T E − GΦ. (3.2)
The condition γ(t j ) ∈ ∆ imposes no restrictions on γ 2 . Thus the functional B EG is defined on the set Π of (T, σ, Φ), where σ :
R → U is a W^{1,2}_{loc} curve such that σ(t + T) = e^{iΦ}σ(t). We have a submersion π : Ω̃_n → Π, (t, T, γ, Φ) → (T, γ_2, Φ), and A^{EG}_0 = B^{EG} ∘ π.
The functional B^{EG} is very degenerate, because all orbits of the Kepler problem with energy E < 0 are periodic with the same period τ = 2π(−2E)^{−3/2}. Suppose E, G are such that there exists an elliptic orbit Γ of Kepler's problem with energy E and angular momentum G. For definiteness let G > 0. Then 0 < (−2E)G^2 < 1. The major semiaxis and eccentricity of Γ are

a = (−2E)^{−1},   e = √(1 + 2EG^2).
The Maupertuis action is

J_E(Γ) = ∫_Γ y · dx = 2π(−2E)^{−1/2}.
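These orbital-element and action formulas can be cross-checked numerically from perihelion data, using the standard unit-mass Kepler relations (vis-viva v^2 = 2/r − 1/a, angular momentum G = r_p v_p at perihelion, and e^2 = 1 + 2EG^2). The sketch below is an added illustration, not a computation from the paper:

```python
import numpy as np

def ellipse_elements(E, G):
    """Semimajor axis and eccentricity of a Kepler orbit (potential -1/|q|,
    unit mass) with energy E < 0: a = (-2E)^{-1}, e = sqrt(1 + 2*E*G^2)."""
    return 1.0/(-2*E), np.sqrt(1 + 2*E*G**2)

# build an orbit from perihelion data and recover E, G, then a, e and J_E
a_true, e_true = 2.0, 0.5
r_p = a_true * (1 - e_true)              # perihelion distance
v_p = np.sqrt(2/r_p - 1/a_true)          # vis-viva speed at perihelion
E = v_p**2/2 - 1/r_p                     # equals -1/(2*a_true)
G = r_p * v_p                            # velocity is perpendicular to the radius here
a, e = ellipse_elements(E, G)

J = 2*np.pi*(-2*E)**-0.5                 # Maupertuis action J_E of the orbit
```

Note that J = 2π√a depends only on the energy, which is exactly the degeneracy of B^{EG} discussed above.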
The counterclockwise elliptic orbit Γ : R → U is defined uniquely modulo rotation and time translation.
Proposition 3.1. Let E < 0, G > 0 and (−2E)G^2 < 1.
Then all critical points ω = (T, σ, Φ) of the functional B EG on Π belong to one of the nondegenerate critical manifolds M m ⊂ Π, m ∈ N, obtained from (mτ, Γ, 0) by rotation and time translation of Γ. We have
B EG | Mm = B(mτ, Γ) + mτ E = mJ E (Γ) = 2πm(−2E) −1/2 .
Proof. Let (T, σ, Φ) ∈ Π be a critical point of B EG . Then σ is a solution of the Kepler problem with energy E and angular momentum G and hence σ is a time translation and rotation of Γ. Since Γ is a non-circular orbit, quasiperiodicity condition σ(t + T ) = e iΦ σ(t) implies that Φ = 0 mod 2πZ and T = mτ for some m ∈ N.
Next we need to check that M m is a nondegenerate critical manifold of B EG . Essentially this is the same statement, but now we need to consider the linearized Kepler problem.
The second variation d 2 B EG (ω) at ω = (mτ, Γ, 0) is a bilinear form on the tangent space T ω Π which is the set of η = (θ, ξ, φ), where θ, φ ∈ R and ξ : R → R 2 is a vector field such that
ξ(t + mτ ) = ξ(t) +Γ(t)θ + iΓ(t)φ. (3.3)
The standard calculus of variations implies that if η ∈ T ω Π belongs to the kernel of d 2 B EG (ω), then ξ is a solution of the variational equation for Γ which lies on the zero levels of the linear first integrals corresponding to the integrals of angular momentum and energy. The linear approximations at Γ to the integrals of energy and angular momentum are
Γ̇(t) · ξ̇(t) − Γ̈(t) · ξ(t) ≡ 0, iΓ(t) · ξ̇(t) − iΓ̇(t) · ξ(t) ≡ 0.
Condition (3.3) gives
iΓ(t) · Γ̇(t) θ ≡ 0, iΓ(t) · Γ̇(t) φ ≡ 0.
Since Γ is noncircular, θ = φ = 0 and so ξ(t + mτ ) = ξ(t). It follows that η = (0, ξ, 0) is tangent to M m , i.e. the variation ξ(t) is obtained by time translation and rotation of Γ(t).
The critical manifold N m = π −1 (M m ) ⊂Ω n of A EG 0 corresponding to M m is (up to time translation and rotation)
N m = {(t, mτ, σ, Γ, 0) ∈Ω n : σ(t + mτ ) = σ(t), σ(t j ) = Γ(t j )}.
This is an infinite dimensional nondegenerate critical manifold of A EG 0 . For nonzero α 1 , by (3.1),
A EG | Nm = α 1 (B(T, σ) + Emτ − 2πm(−2E) −1/2 ) + 2πm(−2E) −1/2 .
By a standard property of nondegenerate critical manifolds [17], any nondegenerate modulo symmetry critical point ω ∈ N m of A EG | Nm for small α 1 > 0 gives a nondegenerate modulo symmetry critical point of A EG , and hence a nondegenerate modulo symmetry collision chain with energy E and angular momentum G.
Up to an additive constant and a constant multiple, A EG | Nm is Hamilton's action B(mτ, σ) for the Kepler problem. It is defined on the set Π Γ,m of (t, σ), where σ : R → U is an mτ -periodic curve such that σ(t j ) = Γ(t j ). Thus B(mτ, σ) = B Γ,m (t, σ) is precisely the action functional whose critical points are collision chains of the elliptic restricted 3 body problem. This functional was studied in [7], where many of its nondegenerate critical points were found for small eccentricity (almost circular Γ), i.e. for (−2E)G 2 close to 1. The changing direction and no early collisions conditions were also verified in [7], and this carries over to small α 1 > 0. We obtain Theorem 3.1. Let 0 < (−2E)G 2 < 1 be close to 1. Then for sufficiently small α 1 > 0 there exist many collision chains γ such that for sufficiently small µ > 0, γ is O(µ)-shadowed by a second species periodic modulo rotation solution γ µ of the nonrestricted 3 body problem with given E, G.
This result can be improved by using a more quantitative statement from [7]. The obtained second species solutions are periodic in a rotating coordinate frame and quasiperiodic in a fixed coordinate frame. Proper periodic orbits will be obtained in a future publication; for them reduction to the restricted elliptic problem is impossible.
Collision action function
Collision chains can be represented as critical points of a function of a finite number of variables -discrete action functional. This is needed for the proof of Theorems 2.1-2.2 and in subsequent publications. Since collision chains are concatenations of collision orbits, we need to describe collision orbits first.
A collision orbit γ = (γ 1 , γ 2 ) of system (H 0 ) is a pair of Kepler orbits joining the points x − , x + ∈ U . Thus description of collision orbits is reduced to the classical Lambert's problem [22] of joining the points x − , x + by a Kepler orbit.
First we join the points x − , x + by a Kepler orbit Γ : [0, τ ] → U with fixed energy E < 0 or, equivalently, fixed major semiaxis a = (−2E) −1 . Due to the scaling invariance of Kepler's problem, without loss of generality set a = 1. Then a Kepler ellipse passing through x − , x + is determined by the second focus F such that
|x − | + |x − − F | = 2, |x + | + |x + − F | = 2. (4.1)
The solution F = F (x − , x + ) of these equations exists and smoothly depends on x ± if the corresponding circles intersect transversely, i.e. (x − , x + ) lie in the set
X = {(x − , x + ) ∈ U 2 : ||x + | − |x − || < |x + − x − | < 4 − |x − | − |x + |}.
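Equations (4.1) say that F lies on the intersection of two explicit circles, so it can be computed directly. The following sketch (ours; the function names are illustrative) finds both intersection points and checks them against a sample ellipse with a = 1 whose second focus is known.

```python
# Sketch (ours, not from the paper): solve (4.1) for the second focus F by
# intersecting the circles |F - x_minus| = 2 - |x_minus|, |F - x_plus| = 2 - |x_plus|.
import numpy as np

def second_foci(xm, xp):
    xm, xp = np.asarray(xm, float), np.asarray(xp, float)
    r1, r2 = 2 - np.linalg.norm(xm), 2 - np.linalg.norm(xp)
    d = np.linalg.norm(xp - xm)
    a = (d**2 + r1**2 - r2**2) / (2 * d)   # offset along the chord x_minus -> x_plus
    h = np.sqrt(r1**2 - a**2)              # transversality of the circles <=> h > 0
    e1 = (xp - xm) / d
    e2 = np.array([-e1[1], e1[0]])         # left normal of the segment
    mid = xm + a * e1
    return mid + h * e2, mid - h * e2      # the two solutions of (4.1)

# Sample points on the ellipse with a = 1, e = 0.3, one focus at the origin:
# x(u) = (cos u - e, sqrt(1 - e^2) sin u); its second focus is (-2e, 0) = (-0.6, 0).
e = 0.3
x = lambda u: np.array([np.cos(u) - e, np.sqrt(1 - e**2) * np.sin(u)])
F1, F2 = second_foci(x(0.2), x(1.5))
for F in (F1, F2):                         # both satisfy |x| + |x - F| = 2
    for pt in (x(0.2), x(1.5)):
        assert abs(np.linalg.norm(pt) + np.linalg.norm(pt - F) - 2) < 1e-9
assert any(np.allclose(F, [-0.6, 0.0], atol=1e-8) for F in (F1, F2))
```

Picking the solution on the left side of the segment x − x + , as in the text, corresponds to a fixed choice between the two returned points.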
For (x − , x + ) ∈ X there exist two solutions F of equations (4.1), and we take one of them, for definiteness the one on the left side of the segment x − x + . Let Γ(x − , x + ) be the counterclockwise simple arc of the constructed Kepler ellipse joining the points x − and x + . Let
f (x − , x + ) = ∫ Γ y · dx = ∫ Γ (2|x| −1 − 1) 1/2 |dx|
be the Maupertuis action of Γ. This is a smooth rotation invariant function on X:
f (e iθ x − , e iθ x + ) = f (x − , x + ).
Remark 4.1. By Lambert's Theorem [22], f is a function of s ± = |x − |+ |x + |± |x − − x + | only. An explicit formula is
f (x − , x + ) = W (s + ) ± W (s − ), (4.2)
where W (s) = (1/2)√((4 − s)s) + 2 arctan √(s/(4 − s)).
Plus is taken if x + = e iθ x − with θ ∈ [π, 2π) and minus if θ ∈ (0, π]. One can check that f is smooth at θ = π.
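Formula (4.2) can be verified numerically against the defining integral f = ∫ Γ (2/|x| − 1) 1/2 |dx|. The sketch below (ours, not from the paper) takes an arc with transfer angle less than π, where the minus sign applies, reading W(s) as (1/2)√((4 − s)s) + 2 arctan √(s/(4 − s)).

```python
# Numerical check (ours) of Lambert's formula for the Maupertuis action
# f(x-, x+) = integral over Gamma of sqrt(2/|x| - 1) |dx|, with a = 1 (E = -1/2).
import numpy as np

def W(s):
    return 0.5 * np.sqrt((4 - s) * s) + 2 * np.arctan(np.sqrt(s / (4 - s)))

# Counterclockwise arc of the ellipse x(u) = (cos u - e, sqrt(1-e^2) sin u)
# (focus at the origin) between eccentric anomalies u1 < u2.
e, u1, u2 = 0.3, 0.2, 1.5
u = np.linspace(u1, u2, 200_001)
x1, x2 = np.cos(u) - e, np.sqrt(1 - e**2) * np.sin(u)
r = 1 - e * np.cos(u)                                     # |x(u)| on the ellipse
speed = np.sqrt(np.sin(u)**2 + (1 - e**2) * np.cos(u)**2) # |x'(u)|
y = np.sqrt(2 / r - 1) * speed
f_num = np.sum((y[1:] + y[:-1]) * np.diff(u)) / 2         # trapezoid rule

xm, xp = np.array([x1[0], x2[0]]), np.array([x1[-1], x2[-1]])
c = np.linalg.norm(xp - xm)
sp, sm = r[0] + r[-1] + c, r[0] + r[-1] - c
f_lam = W(sp) - W(sm)   # transfer angle here is < pi, so the minus sign applies
assert abs(f_num - f_lam) < 1e-8
```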
Due to scaling invariance of the Kepler problem, for arbitrary negative energy E < 0, the Maupertuis action of a simple counterclockwise arc Γ = Γ(E, x − , x + ) connecting the points (x − , x + ) ∈ X E = (−2E) −1 X is
f (E, x − , x + ) = ∫ Γ y · dx = ∫ Γ (2(|x| −1 + E)) 1/2 |dx| = (−2E) −1/2 f ((−2E)x − , (−2E)x + ).
Proposition 4.1. For any n ∈ Z, E < 0 and (x − , x + ) ∈ X E there exists a connecting orbit Γ = Γ n (E, x − , x + ) of the Kepler problem with energy E joining x − to x + and making, in addition to a simple arc, |n| complete revolutions, such that:
• Γ smoothly depends on E, x − , x + .
• The Maupertuis action of Γ is
J n (E, x − , x + ) = ∫ Γ y · dx = (−2E) −1/2 (2π|n| + (sgn n)f ((−2E)x − , (−2E)x + )). (4.3)
• τ = τ n (E, x − , x + ) = ∂J n (E, x − , x + )/∂E. (4.4)
• ∂ 2 J n (E, x − , x + )/∂E 2 > 0, (x − , x + ) ∈ X E . (4.5)
The orbits Γ n (E, x − , x + ) are nondegenerate (have non-conjugate end points) and any nondegenerate connecting orbit with E < 0 is obtained in this way.
For the classical Lambert's problem [22], when Γ is a simple elliptic arc, n = 0 or n = −1 depending on whether Γ is counterclockwise or clockwise. We set sgn 0 = 1.
The first term in (4.3) is the Maupertuis action for n complete revolutions around the Kepler ellipse, and the second is the action of a simple elliptic arc. Equation (4.4) follows from the first variation formula; it is essentially Kepler's time equation. So only inequality (4.5) is non-evident. It is enough to check it for the classical Lambert's problem with n = 0, −1. Then (4.5) can be deduced from the explicit formula (4.2), although the computation is not trivial. An equivalent statement was proved in [19].
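Relation (4.4) can likewise be tested numerically for n = 0: differencing J 0 in E reproduces the Kepler transfer time u − e sin u between the eccentric anomalies of the endpoints. This sketch is ours, with W(s) read as (1/2)√((4 − s)s) + 2 arctan √(s/(4 − s)).

```python
# Numerical check (ours) of (4.4) for n = 0: the transfer time equals dJ_0/dE,
# where J_0(E, x-, x+) = (-2E)^{-1/2} f((-2E)x-, (-2E)x+) and f is the action
# of a simple counterclockwise (short-way) arc.
import numpy as np

def W(s):
    return 0.5 * np.sqrt((4 - s) * s) + 2 * np.arctan(np.sqrt(s / (4 - s)))

def f(xm, xp):                       # short-way arc (transfer angle < pi)
    c = np.linalg.norm(xp - xm)
    rm, rp = np.linalg.norm(xm), np.linalg.norm(xp)
    return W(rm + rp + c) - W(rm + rp - c)

def J0(E, xm, xp):
    k = -2 * E
    return k ** -0.5 * f(k * xm, k * xp)

# Points on the ellipse a = 1, e = 0.3 (E = -1/2) at eccentric anomalies u1, u2.
e, u1, u2 = 0.3, 0.2, 1.5
x = lambda u: np.array([np.cos(u) - e, np.sqrt(1 - e**2) * np.sin(u)])
t_exact = (u2 - e * np.sin(u2)) - (u1 - e * np.sin(u1))  # Kepler's time equation
h = 1e-6
t_fd = (J0(-0.5 + h, x(u1), x(u2)) - J0(-0.5 - h, x(u1), x(u2))) / (2 * h)
assert abs(t_fd - t_exact) < 1e-7
```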
Next we consider Lambert's problem for fixed time τ > 0. This problem involves solving the transcendental Kepler's equation, so there is no explicit formula for the solution. Let D n ⊂ R + × U 2 be the open set which is the image of the diffeomorphism
(E, x − , x + ) → (τ n (E, x − , x + ), x − , x + ), E < 0, (x − , x + ) ∈ X E .
Proposition 4.2. For any n ∈ Z and (τ, x − , x + ) ∈ D n there exists a connecting orbit Γ = Γ n (τ, x − , x + ) with transfer time τ such that:
• Γ smoothly depends on τ, x − , x + .
• Hamilton's action
B(τ, Γ) = F n (τ, x − , x + )
is a smooth function on D n and
∂ 2 F n (τ, x − , x + )/∂τ 2 < 0, (τ, x − , x + ) ∈ D n . (4.6)
• All nondegenerate connecting orbits with E < 0 are Γ n (τ, x − , x + ) for some n ∈ Z and (τ, x − , x + ) ∈ D n .
Proof. We need to find the energy E < 0 such that the connecting orbit Γ = Γ n (E, x − , x + ) in Proposition 4.1 has given time τ = τ n (E, x − , x + ). Then
(F n (τ, x − , x + ) + τ E) τ =τn(E,x−,x+) = J n (E, x − , x + ).
Hence J n and −F n are Legendre transforms of each other:
J n (E, x − , x + ) = max τ (F n (τ, x − , x + ) + τ E), F n (τ, x − , x + ) = min E (J n (E, x − , x + ) − τ E). Since J n (E, x − , x + ) is convex with respect to E, its Legendre transform −F n (τ, x − , x + )
is convex in τ and smooth.
The initial and final total momenta of γ are given by the first variation formula
y + = ∂ ∂x + F n (τ, x − , x + ), y − = − ∂ ∂x − F n (τ, x − , x + ). (4.7)
Now it is easy to describe nondegenerate collision orbits γ = (γ 1 , γ 2 ) of system (H 0 ). Denote by k = [γ] = (k 1 , k 2 ) ∈ Z 2 , k j = [γ j ], the rotation vector of γ. We obtain Proposition 4.3. For any k = (k 1 , k 2 ) ∈ Z 2 and any (τ, x − , x + ) ∈ V k = D k1 ∩ D k2 :
• There exists a nondegenerate collision orbit γ :
[0, τ ] → U 2 , γ = γ(k, τ, x − , x + ), with collision points γ(0) = (x − , x − ), γ(τ ) = (x + , x + ). • γ smoothly depends on (τ, x − , x + ) ∈ V k .
• Hamilton's action of γ is
S k (τ, x − , x + ) = A(τ, γ) = α 1 F k1 (τ, x − , x + ) + α 2 F k2 (τ, x − , x + ). (4.8) • ∂ 2 ∂τ 2 S k (τ, x − , x + ) < 0, (τ, x − , x + ) ∈ V k . (4.9)
By the first variation formula [2],
dS k (τ, x − , x + ) = y + · dx + − y − · dx − − E dτ,
where E is the total energy of the collision orbit γ, and y ± = y(t ± ) are total momenta at collisions. Thus
y + = ∂S k ∂x + (τ, x − , x + ), y − = − ∂S k ∂x − (τ, x − , x + ), E = − ∂S k ∂τ (τ, x − , x + ).
(4.10)
Remark 4.2. We will not need this in the present paper, but for almost all (τ, x − , x + ) ∈ V k the collision action satisfies the twist condition
det ∂ 2 S k ∂x − ∂x + (τ, x − , x + ) = 0. (4.11)
Thus S k is the generating function of a symplectic collision map (τ, x − , y − ) → (τ, x + , y + ). This will be important for the study of chaotic collision chains.
Remark 4.3.
It can happen that the collision orbit γ has early collisions: γ(t) ∈ ∆ for some t ∈ (0, τ ). To avoid this, we may need to delete from V k a zero measure set, see [7].
Let us now fix energy E < 0 and look for collision orbits of system (H 0 ) with energy E. The map
(τ, x − , x + ) → (− ∂S k ∂τ (τ, x − , x + ), x − , x + ) is a diffeomorphism of V k onto an open set W k ⊂ R × U 2 .
Let L E k be the Legendre transform of −S k with respect to τ :
L E k (x − , x + ) = max τ (S k (τ, x − , x + ) + τ E) = (S k (τ, x − , x + ) + τ E)| τ =τ k (E,x−,x+) ,
where τ is obtained by solving the last equation (4.10). Then L E k is a smooth function on
W E k = {(x − , x + ) : (E, x − , x + ) ∈ W k }. We obtain Proposition 4.4. Let E < 0. For any (x − , x + ) ∈ W E k there exists a unique collision orbit γ : [0, τ ] → U 2 of energy E such that γ(0) = (x − , x − ), γ(τ ) = (x + , x + ). Its Maupertuis action
L E k (x − , x + ) = ∫ γ p · dq = ∫ γ ds E
is a smooth function on W E k . The total momenta at collision are
y + = ∂L E k ∂x + (x − , x + ), y − = − ∂L E k ∂x − (x − , x + ). (4.12)
In terms of actions functions (4.3) for the Kepler problem,
L E k (x − , x + ) = min E1+E2=E (α 1 J k1 (E 1 , x − , x + ) + α 2 J k2 (E 2 , x − , x + )).
Remark 4.4. Due to homogeneity of the Kepler problem,
L E k (x − , x + ) = (−2E) −1/2 L k ((−2E)x − , (−2E)x + ),
where L k corresponds to energy E = −1/2.
Remark 4.5. The action functions S k and L E k can not be expressed in elementary functions. However they admit simple asymptotic representation for large k. This will be done in a subsequent publication.
In the next section we use the action functions S k and L E k to represent collision chains as critical points of discrete action functionals.
Discrete variational principles
For a given sequence k = (k 1 , . . . , k n ) ∈ Z 2n , k j ∈ Z 2 , define a discrete Hamilton's action by
A k (s, x) = n j=1 S k j (s j , x j , x j+1 ),
where s = (s 1 , . . . , s n ) ∈ R n + , x = (x 1 , . . . , x n ) ∈ U n and S k j is the action function on V k j defined by (4.8). The domain of A k is
V k = {(s, x) ∈ R n + × U n : (s j , x j , x j+1 ) ∈ V k j , x n+1 = x 1 }.
Any (s, x) ∈ V k defines (t, T, γ) ∈ Ω n as follows. Take t = (t 1 , . . . , t n ) so that s j = t j+1 − t j and set
γ(t) = γ(k j , s j , x j , x j+1 )(t − t j ), t j ≤ t ≤ t j+1 ,
where γ(k j , s j , x j , x j+1 ) : [0, s j ] → U 2 is the collision orbit in Proposition 4.3. Then γ = γ k (s, x) is a broken trajectory of system (H 0 ) with period T = n j=1 s j and Hamilton's action A k (s, x) = A(T, γ). Of course (t, γ) is defined modulo time translation, so we identify curves which differ by time translation. Thus we defined an embedding ι : V k → Ω n = Ω n /R and A k = A • ι.
For collision chains with fixed period T , we restrict A k to
V T k = {(s, x) ∈ V k : n j=1 s j = T }. Proposition 5.1. Any critical point (s, x) of A k on V T k defines a T -periodic collision chain γ = γ k (s, x).
Indeed, critical points of
A k satisfy
∂S k j /∂x j (s j , x j , x j+1 ) + ∂S k j−1 /∂x j (s j−1 , x j−1 , x j ) = 0, (5.1)
∂S k j /∂s j (s j , x j , x j+1 ) = −E, (5.2)
where −E is the Lagrange multiplier. By (4.10), E is the energy of the corresponding collision chain γ. The total momentum at collision is
y j = − ∂S k j ∂x j (s j , x j , x j+1 ) = ∂S k j−1 ∂x j (s j−1 , x j−1 , x j ).
Thus γ satisfies (2.5)-(2.6).
Proposition 5.1 follows also from Proposition 2.2. Indeed, the functional A k is the restriction of Hamilton's action A to the set ι(V k ) ⊂ Ω n of broken extremals. This set is obtained by equating to zero the differential of A for fixed t, T, x.
In Proposition 5.1 the period T is fixed. For collision chains with fixed energy E < 0 we consider a discrete Maupertuis action functional on V k :
A E k (s, x) = A k (s, x) + ET, T = n j=1 s j .
Now T is a function on V k , and A E k (s, x) is the Maupertuis action (2.7) of the broken trajectory γ = γ k (s, x). We obtain Proposition 5.2. To any critical point (s, x) of A E k on V k there corresponds a periodic collision chain γ with energy E. All nondegenerate collision chains with energy E are obtained in this way from nondegenerate modulo rotation critical points of some A E k . Remark 5.1. Hamilton's action is invariant under rotations: A k (s, e iθ x) = A k (s, x). Thus every critical point of the functional A E k is degenerate. To obtain nondegenerate critical points we should consider the quotient functional A E k on the quotient space
V k = V k /T ⊂ R n + × U n , U n = U n /T ∼ = R n + × T n−1 .
Let us now fix energy E < 0 and angular momentum G and consider periodic modulo rotation collision chains γ with given E, G. We obtain the discrete Maupertuis-Routh action functional
A EG k (s, x, Φ) = n j=1 S k j (s j , x j , x j+1 ) + ET − GΦ, x n+1 = e iΦ x 1 , T = n j=1 s j .
The independent variables are s = (s 1 , . . . , s n ), x = (x 1 , . . . , x n ) and Φ, so the domain of A EG k is
V̂ k = {(s, x, Φ) ∈ R n + × U n × R : (s j , x j , x j+1 ) ∈ V k j , x n+1 = e iΦ x 1 }.
We obtain Proposition 5.3. To any critical point (s, x, Φ) of A EG k on V̂ k there corresponds a periodic modulo rotation collision chain γ = γ k (s, x, Φ) with energy E and angular momentum G. Any nondegenerate periodic modulo rotation collision chain with energy E and angular momentum G is obtained from a nondegenerate modulo rotation critical point of some A EG k .
To construct orbits of system (H µ ) shadowing the collision chain γ corresponding to a critical point (s, x), we need to verify the changing direction condition. For k = (k 1 , k 2 ) ∈ Z 2 denote
R k (τ, x − , x + ) = F k1 (τ, x − , x + ) − F k2 (τ, x − , x + ).
By (4.7) the relative collision velocities (2.10) of a collision orbit γ = γ(k, τ, x − , x + ) : [0, τ ] → U 2 are given by
u̇(0) = −∂R k /∂x − (τ, x − , x + ), u̇(τ ) = ∂R k /∂x + (τ, x − , x + ).
Thus the changing direction condition for the collision chain corresponding to (s, x) can be expressed as follows:
∂R k j ∂x j (s j , x j , x j+1 ) + ∂R k j−1 ∂x j (s j−1 , x j−1 , x j ) = 0. (5.3)
We have
A k = α 1 B k1 + α 2 B k2 ,(5.4)
where k = (k 1 , k 2 ) with k j = (k 1 j , . . . , k n j ) ∈ Z n and
B kj (s, x) = n i=1 F k i j (s i , x i , x i+1 ) (5.5)
is the discrete action functional for the Kepler problem. If (s, x) is a critical point of A k with respect to x, then by (5.1), the changing direction condition (5.3) is equivalent to
∂ ∂x j B k1 (s, x) = 0, j = 1, . . . , n. (5.6)
Next we reformulate the shadowing Theorems 2.1-2.2.
Theorem 5.1. Let (s, x) ∈ V k be a nondegenerate modulo rotation critical point of A E k satisfying the changing direction condition (5.6). Then for sufficiently small µ > 0 the corresponding T -periodic collision chain γ is O(µ)-shadowed modulo time translation by an almost collision T µ -periodic orbit γ µ of the 3 body problem with period T µ = T + O(µ).
Theorem 5.2. Let (s, x, Φ) ∈V k be a nondegenerate modulo rotation critical point of A EG k satisfying the changing direction condition (5.6). Then for sufficiently small µ > 0, the corresponding collision chain γ is O(µ)-shadowed modulo rotation and time translation by an almost collision periodic modulo rotation orbit γ µ of the 3 body problem with energy E and angular momentum G.
These discrete versions of Theorems 2.1-2.2 are the most suitable for applications. We will use them in the future publication [9] to find many nontrivial second species solutions.
For a dynamical systems reformulation, it is convenient to introduce Jacobi's discrete action functional
J E k (x) = n j=1 L E kj (x j , x j+1 ).
It is defined on
W E k = {x = (x 1 , . . . , x n ) : (x j , x j+1 ) ∈ W E k j , x n+1 = x 1 }.
Equating to 0 the derivatives of A E k (s, x) with respect to s, we obtain: Proposition 5.4. Any nondegenerate modulo symmetry periodic collision chain with energy E corresponds to a nondegenerate modulo rotation critical point x of some J E k . A critical point x = (x 1 , . . . , x n ) of J E k is an n-periodic trajectory of a discrete Lagrangian system (L E ) with a multivalued discrete Lagrangian
L E = {L E k } k∈Z 2 : ∂ ∂x j (L E k j−1 (x j−1 , x j ) + L E k j (x j , x j+1 )) = 0, j = 1, . . . , n.
Thus description of second species solutions is reduced to the dynamics of a discrete Lagrangian system (L E ). Under a twist condition, a periodic trajectory of system (L E ) corresponds to a periodic trajectory (x j , y j ), y j = −∂L E k j (x j , x j+1 )/∂x j , of a sequence of symplectic twist maps (x j , y j ) → (x j+1 , y j+1 ) with generating functions L E k j . We postpone this reformulation to a future paper, where we deal with chaotic almost collision orbits.
The degeneracy of any critical point of J E k is at least 1 due to the rotational symmetry J E k (e iθ x) = J E k (x). This implies that the discrete Lagrangian system (or the corresponding symplectic map) has an integral of angular momentum G = ix j · y j . One can perform Routh's reduction in this discrete Lagrangian system, reducing it to one degree of freedom [12], but this complicates the discrete Lagrangian.
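The mechanism behind this reduction can be illustrated on a toy one-dimensional model (ours; the generating function below is a standard-map-type example, not the actual multivalued Lagrangian L E of the paper): periodic orbits of a symplectic twist map arise as critical points of a discrete action sum.

```python
# Toy illustration (ours, not the paper's L^E_k): a periodic orbit of a
# symplectic twist map as a critical point of the discrete action
# sum_j L(x_j, x_{j+1}) with generating function L(x, x') = (x'-x)^2/2 - V(x),
# V(x) = -(K/4 pi^2) cos(2 pi x) (a standard-map-type example).
import numpy as np
from scipy.optimize import fsolve

K = 0.5
dV = lambda x: (K / (2 * np.pi)) * np.sin(2 * np.pi * x)  # V'(x)

q, p = 3, 1        # rotation number p/q: seek x_{j+q} = x_j + p

def grad(x):       # discrete Euler-Lagrange equations x_{j+1}-2x_j+x_{j-1} = -V'(x_j)
    xe = np.concatenate([[x[-1] - p], x, [x[0] + p]])
    return xe[2:] - 2 * xe[1:-1] + xe[:-2] + dV(xe[1:-1])

x = fsolve(grad, np.arange(q) / q, xtol=1e-13)  # start from the unperturbed orbit
assert np.max(np.abs(grad(x))) < 1e-9

# The critical point is an orbit of the twist map generated by L:
#   y = -dL/dx(x, x'), y' = dL/dx'(x, x'), i.e. y' = y - V'(x), x' = x + y'.
xc, yc = x[0], x[1] - x[0] + dV(x[0])
for _ in range(q):
    yc = yc - dV(xc)
    xc = xc + yc
assert abs(xc - (x[0] + p)) < 1e-8 and abs(yc - (x[1] - x[0] + dV(x[0]))) < 1e-8
```

Here the generating function is single-valued and globally defined; in the paper the same scheme runs with the family {L E k } indexed by the rotation vector k.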
For periodic modulo rotation collision chains with fixed E, G we have:
Proposition 5.5. Any nondegenerate periodic modulo rotation collision chain with energy E corresponds to a nondegenerate modulo rotation critical point (x, Φ) of the discrete Jacobi-Routh action functional
J EG k (x, Φ) = J E k (x) − GΦ, x n+1 = e iΦ x 1 .
The proofs of Theorems 5.1 and 5.2 are modifications of the proof of Theorem 2.1 in [6]. They are based on the Levi-Civita regularization and shadowing. The proof of Theorem 5.1 will be given in the next section. The proof of Theorem 5.2 is similar and will be omitted.
Proof of Theorem 5.1
For µ > 0, the action functional (2.1) of system (H µ ) is singular when γ approaches ∆. We will formulate a variational problem for almost collision orbits of system (H µ ) with given energy E which has no singularity at ∆.
Let us fix energy E < 0. Trajectories of system (H µ ) with energy E are extremals of the Jacobi action functional
J E µ (γ) = ∫ γ ds E µ , ds E µ = max p {p · dq : H µ (q, p) = E}.
Away from ∆, the functional J E µ is a regular perturbation of the Jacobi functional J E for system (H 0 ). Regularizing J E µ near ∆ requires some preparation. First we describe local behavior of trajectories of system (H 0 ) colliding with ∆. We will use the variables (this is a version of Jacobi's variables)
x = α 1 q 1 + α 2 q 2 , y = p 1 + p 2 , u = q 2 − q 1 , v = α 1 p 2 − α 2 p 1 . (6.1)
Thus x is the center of mass of m 1 , m 2 , y is their total momentum, u is their relative position, and v is the scaled relative velocity. The change is symplectic:
p · dq = p 1 · dq 1 + p 2 · dq 2 = y · dx + v · du.
The inverse change is
q 1 = x − α 2 u, q 2 = x + α 1 u, p 1 = α 1 y − v, p 2 = α 2 y + v.
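That the change (6.1) is symplectic can be verified by a short computation; the sketch below (ours) checks M T ΩM = Ω for the corresponding linear map, assuming the normalization α 1 + α 2 = 1 implicit in the inverse formulas.

```python
# Numerical check (ours) that the change (6.1) is symplectic when
# alpha_1 + alpha_2 = 1: the linear map (q1, q2, p1, p2) -> (x, u, y, v)
# satisfies M^T Omega M = Omega for the canonical form in both charts.
import numpy as np

a1, a2 = 0.3, 0.7          # alpha_1 + alpha_2 = 1 (assumed normalization)
I2, Z = np.eye(2), np.zeros((2, 2))

# Rows: x = a1 q1 + a2 q2, u = q2 - q1, y = p1 + p2, v = a1 p2 - a2 p1,
# in block coordinates ordered as (q1, q2, p1, p2) -> (x, u, y, v).
M = np.block([[a1 * I2, a2 * I2, Z, Z],
              [-I2, I2, Z, Z],
              [Z, Z, I2, I2],
              [Z, Z, -a2 * I2, a1 * I2]])

# Canonical symplectic matrix for the ordering (positions, momenta).
Omega = np.block([[np.zeros((4, 4)), np.eye(4)],
                  [-np.eye(4), np.zeros((4, 4))]])
assert np.allclose(M.T @ Omega @ M, Omega)
```

Equivalently, writing (x, u) = A(q1, q2) and (y, v) = D(p1, p2), symplecticity amounts to D T A = I, which holds exactly when α 1 + α 2 = 1.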
For solutions of system (H 0 ), y = ẋ and v = αu̇, where α = α 1 α 2 . Let γ be a trajectory of system (H 0 ) with energy E. We denote by (x(t), y(t), u(t), v(t)) its representation in Jacobi's variables. If γ has a collision at t = 0, i.e. u(0) = 0,
x(0) = x 0 , then H 0 = |y 0 | 2 /2 + |v 0 | 2 /2α − |x 0 | −1 = E.
We assume that the collision occurs with nonzero relative speed v 0 ≠ 0. Then there exists δ > 0 such that (x 0 , y 0 ) lies in a compact set
M = M E δ = {(x 0 , y 0 ) : λ(x 0 , y 0 ) = E − |y 0 | 2 /2 + |x 0 | −1 ≥ δ, |x 0 | ≥ δ}. (6.2)
We fix δ > 0. Eventually it will be taken sufficiently small. Denote B ρ = {u ∈ R 2 : |u| ≤ ρ} and S ρ = ∂B ρ . We have Lemma 6.1. Take any δ > 0 and let ρ > 0 be sufficiently small. Then for any (x 0 , y 0 ) ∈ M and any u + ∈ S ρ there exists a trajectory γ + : [0, τ + ] → U 2 of system (H 0 ) with energy E such that:
• u(t) ∈ B ρ for 0 ≤ t ≤ τ + and x(0) = x 0 , y(0) = y 0 , u(0) = 0, u(τ + ) = u + .
• γ + smoothly depends on (x 0 , y 0 , u + ) ∈ M × S ρ and
τ + = τ + (x 0 , y 0 , u + ) = ρ √(α/2λ(x 0 , y 0 )) + O(ρ 2 ),
x(τ + ) = ξ + (x 0 , y 0 , u + ) = x 0 + ρ √(α/2λ(x 0 , y 0 )) y 0 + O(ρ 2 ). (6.3)
• The Maupertuis action of γ + has the form
J E (γ + ) = ∫ γ+ p · dq = a + (x 0 , y 0 , u + ) (6.4)
= ρ √(2α/λ(x 0 , y 0 )) (E + |x 0 | −1 ) + O(ρ 2 ).
Remark 6.1. On S ρ we use the polar coordinate θ, where u = ρe iθ . Thus O(ρ 2 ) means a function of x 0 , y 0 , θ whose C 2 norm is bounded by cρ 2 with c independent of ρ.
The proof is obtained by a simple shooting argument, because H 0 has no singularity at ∆:
x(t) = x 0 + ty 0 + O(t 2 ), u(t) = tv 0 /α + O(t 2 ).
It remains to solve the equation u(τ + ) = u + for τ + and v 0 , where |v 0 | = √(2αλ(x 0 , y 0 )).
Similarly, we have a trajectory γ − : [τ − , 0] → U 2 of system (H 0 ) with energy E such that x(0) = x 0 , y(0) = y 0 , u(0) = 0, u(τ − ) = u − . Then
τ − = τ − (x 0 , y 0 , u − ), x(τ − ) = ξ − (x 0 , y 0 , u − ), J E (γ − ) = a − (x 0 , y 0 , u − ).
If (x 0 , y 0 ) ∈ M , then x 0 belongs to
D = D E δ = {x : δ ≤ |x| ≤ (δ − E) −1 }.
Let Σ ρ be the boundary of the tubular neighborhood N ρ of D 2 ⊂ ∆:
Σ ρ = {q : x ∈ D, u ∈ S ρ }, N ρ = {q : x ∈ D, u ∈ B ρ }.
Fix arbitrary large C > 0 and let
K ρ = {(q 0 , q + ) ∈ D 2 × Σ ρ : |q 0 − q + | ≤ Cρ}.
Lemma 6.2. If ρ > 0 is sufficiently small, then for any (q 0 , q + ) ∈ K ρ , there exists a trajectory γ : [0, τ + ] → N ρ of system (H 0 ) with energy E joining q 0 with q + . Moreover γ smoothly depends on (q 0 , q + ) and its Maupertuis action has the form
J E (γ) = d E (q 0 , q + ) = √(2(E + |x 0 | −1 )(|x + − x 0 | 2 + αρ 2 )) + O(ρ 2 ). (6.5)
Here d E (q 0 , q + ) is the distance in the Jacobi metric ds E .
Proof. The condition γ(τ + ) = q + = (x + , u + ) gives u(τ + ) = u + , x(τ + ) = x + . Then (6.3) makes it possible to determine
y 0 = η + (q 0 , q + ) = √(2(E + |x 0 | −1 )/(|x + − x 0 | 2 + αρ 2 )) (x + − x 0 ) + O(ρ). (6.6)
Next we connect points q − , q + ∈ Σ ρ by a reflection trajectory of energy E. Let P ρ = {(q − , q + ) ∈ Σ 2 ρ : |q − − q + | ≤ Cρ}. Proposition 6.1. Let ρ > 0 be sufficiently small. Then for any (q − , q + ) ∈ P ρ :
• There exist τ − < 0 < τ + and a broken trajectory γ :
[τ − , τ + ] → N ρ with energy E such that γ(0) = q 0 = (x 0 , x 0 ) ∈ D 2 , γ| [0,τ+] , γ| [τ−
,0] are trajectories of system (H 0 ), γ(τ ± ) = q ± and there is no jump of total momentum at collision: y(+0) = y(−0) = y 0 .
• γ smoothly depends on (q − , q + ) ∈ P ρ .
• The Maupertuis action of γ has the form
J E (γ) = ∫ γ ds E = g E (q − , q + ) = d E (q 0 , q − ) + d E (q 0 , q + ) (6.7)
= √(2(E + |x 0 | −1 )(|x + − x − | 2 + 4αρ 2 )) + O(ρ 2 ). (6.8)
• x 0 = ξ(q − , q + ) = (x + + x − )/2 + O(ρ 2 ), (6.9)
y 0 = η(q − , q + ) = √(2(E + |x 0 | −1 )/(|x + − x − | 2 + 4αρ 2 )) (x + − x − ) + O(ρ),
τ ± = τ ± (q − , q + ) = ± √((|x + − x − | 2 + 4αρ 2 )/(8(E + |x 0 | −1 ))) + O(ρ 2 ).
Proof. We find x 0 from the equation
∂ ∂x 0 (d E (q 0 , q − ) + d E (q 0 , q + )) = 0 ⇔ η + (q 0 , q + ) = η − (q 0 , q − ),
where η ± is defined in (6.6). Differentiating (6.5), we see that the Hessian matrix
√(8(E + |x 0 | −1 )/(α(|x + − x − | 2 + 4αρ 2 ))) (I − ((x + − x − ) ⊗ (x + − x − ))/(|x + − x − | 2 + 4αρ 2 )) + O(ρ)
is nondegenerate. By the implicit function theorem, the solution x 0 = ξ(q − , q + ) is smooth.
A similar result holds for the perturbed system (H µ ), but it is no longer easy to prove. Fix an arbitrary small constant δ > 0. Lemma 6.3. Let ρ > 0 be sufficiently small. There exists µ 0 > 0 such that for all µ ∈ (0, µ 0 ], any (x 0 , y 0 ) ∈ M and any u ± ∈ S ρ such that |u + + u − | ≥ δρ:
• There exist t − < 0 < t + and a trajectory γ :
[t − , t + ] → N ρ of system (H µ )
with energy E such that u(t ± ) = u ± , x(0) = x 0 , y(0) = y 0 .
• γ smoothly depends on (x 0 , y 0 , u − , u + , µ) ∈ M × S 2 ρ × (0, µ 0 ] and converges to a concatenation of trajectories γ ± in Lemma 6.1 as µ → 0.
• The Maupertuis action of γ has the form
J E µ (γ) = ∫ γ p · dq = f E µ (x 0 , y 0 , u − , u + ) (6.10)
= a + (x 0 , y 0 , u + ) + a − (x 0 , y 0 , u − ) + µ â(x 0 , y 0 , u − , u + , µ), (6.11)
where â is C 2 bounded on M × S 2 ρ × (0, µ 0 ].
• t ± = τ ± (x 0 , y 0 , u ± ) + µ τ̂ ± (x 0 , y 0 , u + , u − , µ),
x(t ± ) = x ± µ (x 0 , y 0 , u − , u + ) = ξ ± (x 0 , y 0 , u ± ) + µ x̂ ± (x 0 , y 0 , u − , u + , µ), (6.12)
where τ̂ ± and x̂ ± are uniformly C 1 bounded on M × S 2 ρ × (0, µ 0 ].
• If |u + − u − | ≥ δρ, then
µa ≤ min t∈[t − ,t + ] d(γ(t), ∆) ≤ µb, 0 < a < b. (6.13)
The proof of Lemma 6.3 is given in section 7. It is based on Levi-Civita regularization and a generalization of Shilnikov's Lemma [20], see also [21], to normally hyperbolic critical manifolds of a Hamiltonian system.
Next we deduce a local connection theorem. Fix arbitrary small δ > 0, arbitrary large C > 0 and let
Q ρ = {(q − , q + ) ∈ Σ 2 ρ : |q − − q + | ≤ Cρ, |u + + u − | ≥ δρ}.
Theorem 6.1. Let ρ > 0 be sufficiently small. There exists µ 0 > 0 such that for all (q − , q + , µ) ∈ Q ρ × (0, µ 0 ]:
• There exist t − < 0 < t + and a trajectory γ :
[t − , t + ] → N ρ of system (H µ )
with energy E such that γ(t ± ) = q ± and the minimum of d(γ(t), ∆) is attained at t = 0.
• γ smoothly depends on (q − , q + , µ) ∈ Q ρ × (0, µ 0 ] and converges to a reflection trajectory in Proposition 6.1 as µ → 0.
• The Maupertuis action of γ has the form
J E µ (γ) = γ p · dq = g E µ (q − , q + ) = g E (q − , q + ) + µĝ(q − , q + , µ),
where ĝ is C 2 bounded on Q ρ × (0, µ 0 ].
• If |u + − u − | ≥ δρ, then (6.13) holds.
Thus the action J E µ (γ) = g E µ (q − , q + ) has a limit g E (q − , q + ) as µ → 0 which is the action of the reflection orbit in Proposition 6.1. The condition that the distance to ∆ is attained at t = 0 is needed only to exclude time translations, so that t ± are uniquely defined.
Proof. We need to find (x 0 , y 0 ) such that x ± µ (x 0 , y 0 , u − , u + ) = x ± . Since the implicit function theorem worked in the proof of Proposition 6.1, by (6.12), for small µ > 0 it will work also here.
Proof of Theorem 5.1. Let γ be a nondegenerate n-collision chain with energy E. Let t = (t 1 , . . . , t n ) be collision times, x = (x 1 , . . . , x n ), γ(t j ) = (x j , x j ), the corresponding collision points and y j the collision total momenta. Take δ > 0 so small that the collision points and the collision speeds v ± j = v(t j ± 0) satisfy
|x j | ≥ δ, |v + j | = |v − j | ≥ δ √(2α).
Then (x j , y j ) ∈ M and x j ∈ D. Take small ρ > 0 and let t ± j = t j ± s ± j be the closest to t j times when q ± j = γ(t ± j ) ∈ Σ ρ . Since γ satisfies the changing direction condition, (q − j , q + j ) ∈ Q ρ if C > 0 is taken sufficiently large and ρ > 0 sufficiently small. Moreover for ξ ± j ∈ Σ ρ close to q ± j , we have (ξ − j , ξ + j ) ∈ Q ρ . Thus by Theorem 6.1 for small µ > 0 the points ξ ± j can be joined in N ρ by a trajectory γ j µ of system (H µ ) with energy E and the Maupertuis action J E µ (γ j µ ) = g E µ (ξ − j , ξ + j ). Since γ| [tj,tj+1] is nondegenerate and, by no early collisions condition, does not come near ∆, for ξ + j close to q + j and ξ − j+1 close to q − j+1 and small µ > 0, the points ξ + j and ξ − j+1 can be joined by a trajectory σ j µ of system (H µ ) with energy E and the Maupertuis action
J E µ (σ j µ ) = h E µ (ξ + j , ξ − j+1 ). This trajectory smoothly depends on µ also for µ = 0, and h E 0 (ξ + j , ξ − j+1 ) is the Maupertuis action of a connecting trajectory of system (H 0 ).
Combine the trajectories γ j µ , σ j µ in a broken trajectory γ µ with energy E and Maupertuis action
f µ (ξ) = J E µ (γ µ ) = n j=1 (J E µ (γ j µ ) + J E µ (σ j µ )) = n j=1 (g E µ (ξ − j , ξ + j ) + h E µ (ξ + j , ξ − j+1 )), ξ = (ξ − 1 , ξ + 1 , . . . , ξ − n , ξ + n ) ∈ Σ 2n ρ .
The function f µ has a limit f 0 as µ → 0 and
f µ (ξ) = f 0 (ξ) + µ f̂ (ξ, µ), where
f 0 (ξ) = n j=1 (d E (ξ j , ξ + j ) + h E 0 (ξ + j , ξ − j+1 ) + d E (ξ − j+1 , ξ j+1 )),
and ξ j = ξ(ξ − j , ξ + j ) is defined by (6.9). The remainder f̂ (ξ, µ) is C 2 bounded on Y × (0, µ 0 ], where the neighborhood Y ⊂ Σ 2n ρ of q = (q − 1 , q + 1 , . . . , q − n , q + n )
is independent of µ.
Looking for critical points with respect to ξ ± j with fixed ξ j we obtain f 0 = J E k (ξ 1 , . . . , ξ n ) for some k ∈ Z 2n , and J E k has a nondegenerate modulo rotation critical point x. Thus f 0 has a nondegenerate critical point q ∈ Σ 2n ρ . Then for small µ > 0 the function f µ (ξ) has a nondegenerate modulo rotation critical point ξ µ close to q. The corresponding broken trajectory γ µ has no break of velocity at intersection points ξ ± j with Σ ρ and hence γ µ is a periodic trajectory of system (H µ ) with energy E.
Levi-Civita regularization
In this section we prove Lemma 6.3. In the Jacobi variables (6.1), the Hamiltonian H µ takes the form
H µ = (1 + µ)|y| 2 /2 + |v| 2 /(2α) − α 1 /|α 2 u − x| − α 2 /|α 1 u + x| − µα/|u|.
Let us perform the Levi-Civita regularization on the fixed energy level H µ = E. We identify u, v ∈ R 2 = C with complex numbers and make a change of variables u = ξ 2 , v = η/(2ξ̄).
Since v · du = Re (v dū) = Re (η dξ̄) = η · dξ, the change is symplectic:
p · dq = y · dx + η · dξ. (7.1)
Finally, we obtain a transformation q 1 = x − α 2 ξ 2 , q 2 = x + α 1 ξ 2 , p 1 = α 1 y − η/(2ξ̄), p 2 = α 2 y + η/(2ξ̄).
The Levi-Civita map g : R 2 × R 2 × U × R 2 → (R 4 \ ∆) × R 4 , g(x, y, ξ, η) = (q 1 , q 2 , p 1 , p 2 ), is a symplectic double covering undefined at ξ = 0 which corresponds to the collision set ∆. Let
H E µ (x, y, ξ, η) = |ξ| 2 (H µ • g − E) + µα = |η| 2 /(8α) − |ξ| 2 (E + α 1 /|α 2 ξ 2 − x| + α 2 /|α 1 ξ 2 + x| − (1 + µ)|y| 2 /2).
Let Σ E µ = {H µ = E} and Γ E µ = {H E µ = µα}. Since g(Γ E µ ) = Σ E µ , the map g takes orbits of system (H E µ ) on Γ E µ to orbits of system (H µ ) on Σ E µ . The time parametrization is changed: the old time is expressed through the new one by dt = |ξ| 2 dτ . In the following we will continue to denote the new time by t.
The singularity at ∆ disappeared after regularization. The regularized Hamiltonian H E µ is smooth on
P = {(x, y, ξ, η) ∈ U × R 6 : x ≠ α 2 ξ 2 , x ≠ −α 1 ξ 2 },
which means that collisions of m 1 and m 2 with m 3 are excluded. The parameter µα may be regarded as a new energy. The rotation group and the integral of angular momentum are now (x, y, ξ, η) → (e iθ x, e iθ y, e iθ/2 ξ, e iθ/2 η), G = ix · y + iξ · η/2.
The Hamiltonian H E µ has a critical manifold ξ = η = 0 which is contained in the level set Γ E 0 of H E 0 . We have
H E 0 (x, y, ξ, η) = |η| 2 /(8α) − |ξ| 2 λ(x, y) + O(|ξ| 4 ).
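This expansion can be checked numerically from the explicit formula for H E 0 (our sketch; the sample point is arbitrary): the error of the quadratic truncation decays at least like |ξ| 4 , in fact faster, since the first correction to the potential term cancels.

```python
# Numerical check (ours) of H^E_0 = |eta|^2/(8 alpha) - |xi|^2 lambda(x, y)
# + O(|xi|^4), using complex numbers for points of R^2.
import numpy as np

a1, a2 = 0.3, 0.7
alpha = a1 * a2
E = -0.5
x, y, eta = 1.2 + 0.4j, 0.3 - 0.2j, 0.5 + 0.1j   # arbitrary sample point

def H0E(xi):   # H^E_0 from the regularized Hamiltonian with mu = 0
    return (abs(eta) ** 2 / (8 * alpha)
            - abs(xi) ** 2 * (E + a1 / abs(a2 * xi**2 - x)
                              + a2 / abs(a1 * xi**2 + x)
                              - abs(y) ** 2 / 2))

lam = E - abs(y) ** 2 / 2 + 1 / abs(x)            # lambda(x, y) as in (6.2)

def err(xi):   # deviation from the quadratic truncation
    return abs(H0E(xi) - (abs(eta) ** 2 / (8 * alpha) - abs(xi) ** 2 * lam))

xi = 0.05 * np.exp(0.7j)
assert err(xi) < abs(xi) ** 4        # remainder is O(|xi|^4)
assert err(xi / 2) < err(xi) / 8     # and shrinks faster than |xi|^3
```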
Collisions of m 1 , m 2 with nonzero relative velocity correspond to the solutions asymptotic to
M = M × {(0, 0)},
where M is as in (6.2). This is a compact normally hyperbolic symplectic critical manifold for H E 0 . We obtain Theorem 7.1. Collision orbits of system (H 0 ) with energy E correspond to orbits of system (H E 0 ) doubly asymptotic to M. Orbits of system (H µ ) with energy E passing O(µ)-close to the singular set ∆ correspond to orbits of system (H E µ ) on the level Γ E µ passing O( √ µ)-close to M.
Next we translate Lemma 6.3 to the new variables. Let r > 0 and let N_r = M × B_r be a tubular neighborhood of M in P. By the stable and unstable manifold theorems for normally hyperbolic invariant manifolds [13], if r > 0 is small enough, for any (x_0, y_0) ∈ M and ξ_− ∈ S_r there exists a unique solution ζ_− : [0, +∞) → N_r, ζ_−(t) = (x(t), y(t), ξ(t), η(t)), of system (H^E_0) such that ξ(0) = ξ_− and ζ(∞) = (x_0, y_0, 0, 0) ∈ M. We denote its action by J(ζ_−) = ∫_{ζ_−} (y · dx + η · dξ) = J_−(x_0, y_0, ξ_−).
Since the stable and unstable manifolds are smooth, J − is a smooth function on M × S r . Similarly we define the function J + on M × S r as the action of a solution ζ + asymptotic to M as t → −∞.
We have an analog of Shilnikov's Lemma [20]. Fix small ε > 0 and denote Q r = {(x 0 , y 0 , ξ − , ξ + ) ∈ M × S 2 r : ξ − · ξ + ≥ ε 2 r 2 }.
Theorem 7.2. There exists r > 0 and µ 0 > 0 such that for any (x 0 , y 0 , ξ − , ξ + , µ) ∈ Q r × (0, µ 0 ]:
• There exists T > 0 and a solution ζ(t) = (x(t), y(t), ξ(t), η(t)) ∈ N r , t ∈ [−T, T ], of system (H E µ ) on Γ E µ such that
x(0) = x 0 , y(0) = y 0 , ξ(−T ) = ξ − , ξ(T ) = ξ + . (7.2)
• ζ smoothly depends on (x 0 , y 0 , ξ − , ξ + , µ) ∈ Q r × (0, µ 0 ].
• The Maupertuis action is a smooth function on Q_r × (0, µ_0] and has the form J(ζ) = ∫_ζ (y · dx + η · dξ) = J_−(x_0, y_0, ξ_−) + J_+(x_0, y_0, ξ_+) + µĴ(x_0, y_0, ξ_−, ξ_+, µ), (7.3)
where Ĵ is C^2-bounded on Q_r × (0, µ_0].
A result very similar to Theorem 7.2 was proved in [6]. A complete proof of Theorem 7.2 will be published in [9].
Proof of Lemma 6.3. We set ρ = r^2 and u = ξ^2. For given u_± ∈ S_ρ take ξ_± ∈ S_r such that ξ_+ · ξ_− ≥ 0. If ξ_+ · ξ_− > 0, then u_+ ≠ −u_−. Moreover, for given δ > 0 there exists ε > 0 such that |u_− + u_+| ≥ δρ implies ξ_− · ξ_+ ≥ ε^2 r^2. If ζ is a trajectory in Theorem 7.2, then the corresponding trajectory g(ζ) of system (H_µ) satisfies the conditions of Lemma 6.3. In particular, by (7.1), J_±(x_0, y_0, ξ_±) = a_±(x_0, y_0, u_±).
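The elementary geometric fact used in this proof — that |u_− + u_+| ≥ δρ forces ξ_− · ξ_+ ≥ ε^2 r^2 once the sign of ξ_+ is chosen so that ξ_+ · ξ_− ≥ 0 — can be made explicit. Writing ξ_± = r e^{iθ_±} gives |u_+ + u_−| = 2ρ|cos(θ_+ − θ_−)| and ξ_+ · ξ_− = r^2 cos(θ_+ − θ_−), so ε = √(δ/2) works. The snippet below (our illustration, not from the paper) checks this on random samples.

```python
import numpy as np

rng = np.random.default_rng(0)
r, delta = 0.5, 0.3
rho = r ** 2
eps = np.sqrt(delta / 2)          # claimed admissible choice of epsilon

ok = True
for _ in range(10_000):
    th_m, th_p = rng.uniform(0.0, 2.0 * np.pi, size=2)
    xi_m, xi_p = r * np.exp(1j * th_m), r * np.exp(1j * th_p)
    if xi_m.real * xi_p.real + xi_m.imag * xi_p.imag < 0:
        xi_p = -xi_p              # double covering: flipping xi leaves u = xi^2 unchanged
    u_m, u_p = xi_m ** 2, xi_p ** 2
    dot = xi_m.real * xi_p.real + xi_m.imag * xi_p.imag
    if abs(u_m + u_p) >= delta * rho:
        ok = ok and dot >= eps ** 2 * r ** 2 - 1e-12
```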
Next we describe Kepler orbits Γ : [0, τ] → U connecting x_−, x_+ while making n = [Γ] full revolutions around 0. To define the number n ∈ Z of revolutions, set n = [(θ(τ) − θ(0))/2π], where Γ(t) = r(t)e^{iθ(t)}. Proposition 4.1. For any (x_−, x_+) ∈ X_E and any n ∈ Z: • There exists a Kepler orbit Γ = Γ_n(E, x_−, x_+) : [0, τ] → U of energy E joining the points x_−, x_+ and making [Γ] = n revolutions.
Proposition 4.2. For any n ∈ Z and any (τ, x_−, x_+) ∈ D_n: • There exists a nondegenerate Kepler orbit Γ = Γ_n(τ, x_−, x_+) : [0, τ] → U with [Γ] = n full revolutions joining the points x_− and x_+.
Proposition 5.3. To any critical point (s, x, Φ) of A EGk
We identify R 2 with C, so multiplication by i is rotation by π/2.
Accuracy of Kepler approximation for fly-by orbits near an attracting center. V M Alexeyev, Y S Osipov, Erg. Th. & Dyn. Syst. 2Alexeyev V. M. and Osipov Y. S., Accuracy of Kepler approximation for fly-by orbits near an attracting center. Erg. Th. & Dyn. Syst., 2 (1982), 263-300.
Mathematical aspects of classical and celestial mechanics. V I Arnold, V V Kozlov, A I Neishtadt, Encyclopedia of Math. Sciences. 3Springer-VerlagArnold V. I., Kozlov V. V., Neishtadt A. I., Mathematical aspects of clas- sical and celestial mechanics, Encyclopedia of Math. Sciences, 3, Springer- Verlag (1989)
Small Denominators and Problems Of Stability of Motion in Classical and Celestial Mechanics. V I Arnold, Usp. Mat. Nauk. 18translation in Russian Math. SurveysArnold V. I., Small Denominators and Problems Of Stability of Motion in Classical and Celestial Mechanics. Usp. Mat. Nauk., 18 (1963),91-193 (translation in Russian Math. Surveys, 18 (1963), 191-257).
Capture dynamics and chaotic motions in celestial mechanics. With applications to the construction of low energy transfers. E Belbruno, Princeton University PressNJBelbruno E., Capture dynamics and chaotic motions in celestial mechanics. With applications to the construction of low energy transfers. Princeton University Press, NJ, (2004).
. G Birkhoff, Dynamical Systems, A.M.S Colloquium PublicationsIXBirkhoff G., Dynamical Systems, A.M.S Colloquium Publications, IX, (1927)
Shadowing chains of collision orbits. S Bolotin, Discr. & Conts. Dynam. Syst. 14Bolotin S., Shadowing chains of collision orbits, Discr. & Conts. Dynam. Syst., 14 (2006), 235-260.
Second species periodic orbits of the elliptic 3 body problem. S Bolotin, Celest. & Mech. Dynam. Astron. 93Bolotin S., Second species periodic orbits of the elliptic 3 body problem. Celest. & Mech. Dynam. Astron., 93 (2006), 345-373.
Symbolic dynamics of almost collision orbits and skew products of symplectic maps. S Bolotin, Nonlinearity. 199Bolotin S., Symbolic dynamics of almost collision orbits and skew products of symplectic maps. Nonlinearity, 19 (2006), no. 9, 2041-2063.
Shilnikov Lemma for a nondegenerate critical manifold of a Hamiltonian system. S Bolotin, P Negrini, In preparationBolotin S. and Negrini P., Shilnikov Lemma for a nondegenerate critical manifold of a Hamiltonian system. In preparation.
Periodic and chaotic trajectories of the second species for the n-centre problem. S V Bolotin, R S Mackay, Celest. Mech. & Dynam. Astron. 77Bolotin S. V. and MacKay R. S., Periodic and chaotic trajectories of the second species for the n-centre problem, Celest. Mech. & Dynam. Astron., 77 (2000), 49-75.
Non-planar second species periodic and chaotic trajectories for the circular restricted three-body problem. S Bolotin, R S Mackay, Celest. Mech. & Dynam. Astron. 944Bolotin S. and MacKay R. S., Non-planar second species periodic and chaotic trajectories for the circular restricted three-body problem. Celest. Mech. & Dynam. Astron. 94 (2006), no. 4, 433-449.
Hill's formula. Uspekhi Mat. Nauk. S Bolotin, D Treschev, Russian Math. Surveys. 652translation inBolotin S. and Treschev D., Hill's formula. Uspekhi Mat. Nauk, 65, no. 2 (2010), 3-70; (translation in Russian Math. Surveys, 65, no. 2 (2010), 191-257.
Asymptotic Stability with Rate Conditions for Dynamical Systems. N Fenichel, Bull. Am. Math.Soc. 802Fenichel N., Asymptotic Stability with Rate Conditions for Dynamical Sys- tems, Bull. Am. Math.Soc., 80, no 2, (1974), 346-349.
Consecutive quasi-collisions in the planar circular RTBP. J Font, A Nunes, C Simo, Nonlinearity. 15Font J., Nunes A., Simo C., Consecutive quasi-collisions in the planar cir- cular RTBP, Nonlinearity, 15 (2002), 115-142.
Second species solutions in the circular and elliptic restricted three body problem, I and II. G Gomez, M Olle, Celest. Mech. & Dynam. Astron. 52Gomez G. and Olle M., Second species solutions in the circular and elliptic restricted three body problem, I and II, Celest. Mech. & Dynam. Astron., 52 (1991), 107-146 and 147-166.
Sur la construction des solutions de seconde espèce dans le problème plan restrient des trois corps. J.-P Marco, L Niederman, Ann. Inst. H. Poincare Phys. Théor. 62Marco J.-P. and Niederman L., Sur la construction des solutions de sec- onde espèce dans le problème plan restrient des trois corps, Ann. Inst. H. Poincare Phys. Théor., 62 (1995), 211-249.
The principle of symmetric criticality. R S Palais, Comm. Math. Phys. 69Palais, R. S. The principle of symmetric criticality. Comm. Math. Phys., 69 (1979), 19-30.
Second species solutions with an O(µ ν ), 0 < ν < 1 near-Moon passage. L M Perko, Celest. Mech. 24Perko L. M., Second species solutions with an O(µ ν ), 0 < ν < 1 near-Moon passage, Celest. Mech., 24 (1981), 155-171.
Solution of Lambert's problem by means of regularization. C Simo, Spanish) Collect. Math. 24Simo, C., Solution of Lambert's problem by means of regularization. (Span- ish) Collect. Math., 24 (1973), 231-247.
On a Poincaré-Birkhoff problem. L P Shilnikov, Math. USSR Sbornik. 3Shilnikov L. P., On a Poincaré-Birkhoff problem. Math. USSR Sbornik 3 (1967), 353-371.
Hamiltonian systems with homoclinic saddle curves. D V Turaev, L P Shilnikov, Dokl. Akad. Nauk SSSR. 3044Soviet Math. Dokl.Turaev, D. V. and Shilnikov, L. P., Hamiltonian systems with homoclinic saddle curves. (Russian) Dokl. Akad. Nauk SSSR 304, no. 4 (1989), 811- 814; (translation in Soviet Math. Dokl., 39, no. 1, (1989), 165-168).
A Treatise On The Analytical Dynamics of Particles and Rigid Bodies. E T Whittaker, Cambridge University PressWhittaker, E. T. A Treatise On The Analytical Dynamics of Particles and Rigid Bodies, Cambridge University Press (1988).
| [] |
[
"Large Deviations of the Smallest Eigenvalue of the Wishart-Laguerre Ensemble",
"Large Deviations of the Smallest Eigenvalue of the Wishart-Laguerre Ensemble"
] | [
"Eytan Katzav \nDepartment of Mathematics\nKing's College London\nWC2R 2LSStrand, LondonUnited Kingdom\n",
"Isaac Pérez Castillo \nDepartment of Mathematics\nKing's College London\nWC2R 2LSStrand, LondonUnited Kingdom\n"
] | [
"Department of Mathematics\nKing's College London\nWC2R 2LSStrand, LondonUnited Kingdom",
"Department of Mathematics\nKing's College London\nWC2R 2LSStrand, LondonUnited Kingdom"
] | [] | We consider the large deviations of the smallest eigenvalue of the Wishart-Laguerre Ensemble. Using the Coulomb gas picture we obtain rate functions for the large fluctuations to the left and the right of the hard edge. Our findings are compared with known exact results for β = 1 finding good agreement. We also consider the case of almost square matrices finding new universal rate functions describing large fluctuations. | 10.1103/physreve.82.040104 | [
"https://arxiv.org/pdf/1005.5058v3.pdf"
] | 12,819,521 | 1005.5058 | aff54346742c3de1141b963897db166f7691a973 |
Large Deviations of the Smallest Eigenvalue of the Wishart-Laguerre Ensemble
13 Aug 2010
Eytan Katzav
Department of Mathematics
King's College London
WC2R 2LSStrand, LondonUnited Kingdom
Isaac Pérez Castillo
Department of Mathematics
King's College London
WC2R 2LSStrand, LondonUnited Kingdom
Large Deviations of the Smallest Eigenvalue of the Wishart-Laguerre Ensemble
13 Aug 2010arXiv:1005.5058v3 [cond-mat.dis-nn]numbers: 0540-a0210Yn0250Sk2460-k
We consider the large deviations of the smallest eigenvalue of the Wishart-Laguerre Ensemble. Using the Coulomb gas picture we obtain rate functions for the large fluctuations to the left and the right of the hard edge. Our findings are compared with known exact results for β = 1 finding good agreement. We also consider the case of almost square matrices finding new universal rate functions describing large fluctuations.
In the early 50s, when not much was known about the intricacies of complex atomic nuclei, Wigner suggested to replace the underlying physics of the problem by its apparent statistical features [1]. Surprisingly, it turned out that such a statistical approach was more useful than anyone could have anticipated, enabling him to work out the nuclei level spacing distribution. Random Matrix Theory (RMT) has played a central role in various branches of science since its first appearance in Statistics by Wishart in 1928 [2], through QCD [3], random graphs [4], wireless communications [5] and computational biology [6], to mention a few. And although, over more than half a century, different, seemingly unrelated, problems have been linked via RMT, certain questions about eigenvalue distributions are still poorly explored. One such question is the distribution of the smallest eigenvalue. In physics, for instance, classical disordered systems offer the ideal context where RMT concepts and tools may be applied. Here, complicated systems are simplified by using a random Hamiltonian, where the smallest eigenvalue is of special interest as it is associated with the ground state. In quantum entanglement, the smallest eigenvalue is a useful measure of entanglement [7]. In mathematics, the minimal eigenvalue appears naturally in many contexts, such as the study of the geometry of random polytopes [8]. It is also extremely relevant for the question of invertibility of random matrices [9], and recently played an important role in the exploding field of compressive sensing [10], where fluctuations of the minimal eigenvalue set the bounds on the number of random measurements needed to fully recover a sparse signal. In statistics, a very important technique used to detect hidden patterns in complex, high-dimensional datasets is called "Principal Components Analysis" (PCA).
The idea is to take a data matrix X, and to transform its covariance matrix W = X†X into a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate. Technically speaking, one identifies eigenvalues and eigenvectors of W, and ignores the components corresponding to the lowest eigenvalues, as these eigenmodes contain the least important information. The smallest eigenvalue of W determines the largest eigenvalue of W^{−1}, and is important in Hotelling's T-square distribution [11], for example. The purpose of this work is to provide a simple physical method, based on the Coulomb gas method in statistical physics [12], that allows us to compute analytically the probability of the smallest eigenvalue in the Wishart-Laguerre ensemble. We consider an ensemble G(M, N) of M × N rectangular matrices X ∈ G(M, N), which are drawn from a Gaussian distribution P(X) ∝ exp(−(β/2) Tr X†X), where β is the Dyson index with classical values β = 1, 2, and 4, corresponding to the real, complex and quaternionic cases respectively. The Wishart ensemble W(M, N) is defined as the set of N × N matrices W = X†X. If λ denotes the N eigenvalues of a Wishart matrix, their joint PDF reads
P(λ) = (1/Z(0)) e^{−βF(λ)/2}  (1)
with Z(0) a normalisation constant and F (λ) defined as:
F(λ) = Σ_{i=1}^N λ_i − u Σ_{i=1}^N ln(λ_i) − Σ_{i≠j} ln|λ_i − λ_j|  (2)
where u = 1 + M − N − 2/β. We will restrict ourselves to the case M ≥ N. It is well known [13] that for large N the density of eigenvalues ρ_N(λ) = (1/N) f(λ/N) is given by the Marčenko-Pastur law:
f(x) = (1/(2πx)) √((x − ζ_−)(ζ_+ − x)) 1_{x∈[ζ_−,ζ_+]}  (3)
with ζ_± = (1 ± √(1 + α))^2, α = (1 − c)/c and c = N/M. The indicator 1_{x∈D} takes the value 1 if x ∈ D and 0 otherwise. The points ζ_− and ζ_+ are usually called the hard edge and the soft edge of the distribution (3), respectively. The case α = 0 (or M = N) corresponds to square matrices, and the case α = O(1/N) (or M − N = O(1)) is referred to here as "almost square matrices". We will focus on the statistical fluctuations around the hard edge, which are captured by the probability distribution of the smallest eigenvalue ρ^(min)_N(λ), from which any other statistical property of the hard edge may be inferred. A classical result [14] shows that λ_min converges almost surely to ζ_− as N → ∞. However, the concentration of the minimum around this value is generally unknown, and characterising it can be a challenging task. The distribution of the smallest eigenvalue ρ^(min)_N(λ) (as well as that of the maximal eigenvalue) has been formally expressed using Multivariate Hypergeometric Functions and Zonal Polynomials. This was first done long ago for real matrices (β = 1) in [15], then generalised to complex matrices (β = 2) in [16] and only recently to any β in [17]. These expressions are rigorous but often not easy to evaluate and manipulate (although a real breakthrough has occurred recently with a new algorithm that can calculate such quantities with complexity that is only linear in the size of the matrix [18], as well as available dedicated packages [19]). Another line of explicit expressions involves a determinantal representation of the distribution of the smallest eigenvalue (as well as other order statistics) for finite N and M (see [20] and references therein). These expressions are not always easy to implement, especially when M − N is large. More explicit expressions have been derived by Edelman in [21,22] for β = 1 (and for β = 2 as well, but only for square matrices). These expressions require the knowledge of certain polynomials (different ones for each M and N). While manageable and useful for small matrices, this turns out to be impractical for large matrices, since no explicit formula exists for these polynomials.
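As a quick numerical illustration of the Marčenko-Pastur law (our own check, not from the paper): sampling a real Wishart matrix W = XᵀX with X an M × N matrix of independent standard Gaussians, the eigenvalues rescaled by 1/N should fill [ζ_−, ζ_+] and have mean M/N = 1 + α. The tolerances below are loose finite-N choices.

```python
import numpy as np

rng = np.random.default_rng(42)
N, M = 120, 240                        # c = N/M = 1/2
alpha = (M - N) / N                    # = (1 - c)/c
zm = (1 - np.sqrt(1 + alpha)) ** 2     # hard edge zeta_-
zp = (1 + np.sqrt(1 + alpha)) ** 2     # soft edge zeta_+

X = rng.standard_normal((M, N))        # beta = 1: P(X) ∝ exp(-Tr X^T X / 2)
lam = np.linalg.eigvalsh(X.T @ X) / N  # rescaled eigenvalues x = lambda/N

mean_x = lam.mean()                    # should be close to 1 + alpha = M/N
edge_lo, edge_hi = lam.min(), lam.max()
```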
It is especially challenging to extract other relevant statistical properties around the hard edge like, for instance, the distribution of the typical (order O(N^p) with 0 < p ≤ 1) or large (order O(N)) fluctuations around their mean values. The typical fluctuations of the smallest eigenvalue have been recently studied rigorously in [23], where it is shown that the smallest eigenvalue λ_min follows the Tracy-Widom (TW) distribution, that is, the typical fluctuations of λ_min can be expressed as
λ_min = ζ_− N − ζ_−^{2/3} c^{1/6} N^{1/3} χ  (4)
where χ follows a TW distribution g β (χ) [23]. This result was proven for the case c < 1 in the large N -limit, and so strictly speaking, it does not apply to either square or almost square matrices. The knowledge about large deviations of the smallest eigenvalue is, as far as we are aware of, mainly unexplored. We would like here to correct the situation. To do so we will use the Coulomb gas approach [12]. Starting with Eq. (1) the cumulative probability of the minimum P (min) N (t) ≡ P (λ min ≥ t) can be written as
P^(min)_N(t) = ∫_t^∞ dλ ρ^(min)_N(λ) = Z(t)/Z(0)  (5)
with
Z(t) = ∫_t^∞ ··· ∫_t^∞ dλ_1 ··· dλ_N e^{−βF(λ)/2}  (6)
In this framework Z(t) is understood as the partition function of a 2D Coulomb gas of charged particles restricted to a 1D line, in an external linear-log potential and with a hard wall at t. The idea, as in [12], is to evaluate Z(t) by using the saddle-point approximation. It is important to notice that the expression (5) is exact and that while the saddle-point method provides the exact density of eigenvalues for large N, it is only able to capture the large deviations to the right of the smallest eigenvalue from ρ^(min)_N(λ). This is not entirely surprising as ρ_N(λ) is a collective quantity while ρ^(min)_N(λ) is not. The calculation goes along similar lines as in [24,25], so we only describe the needed steps shortly. To apply the saddle-point method to (6), one first introduces the function ̺(λ) = (1/N) Σ_{i=1}^N δ(λ − λ_i). This allows us to write Z(t) as a path integral over ̺(λ) and its Lagrange multiplier. Minimising the corresponding functional with respect to ̺(λ) allows us to eliminate this multiplier and to unveil the meaning of ̺(λ) as a constrained spectral density. After rescaling the eigenvalues (λ = xN and ζ = t/N), Z(t) takes the form
Z(t) = ∫ Df e^{−(β/2) N^2 S[f(x)]}  (7)
with the rescaled density ̺(λ) = (1/N) f(λ/N), and the action
S[f(x)] = ∫_ζ^∞ dx f(x) x − (α + (β − 2)/(βN)) ∫_ζ^∞ dx f(x) ln x − ∫_ζ^∞ ∫_ζ^∞ dx dx′ f(x) f(x′) ln|x − x′| + (2/(βN)) ∫_ζ^∞ dx f(x) ln f(x) + C_1 (∫_ζ^∞ dx f(x) − 1)  (8)
As we will apply the saddle-point method, we neglect the terms of order O(N^{−1}) in the preceding expression for S[f(x)]. Note however that while the first term in the subleading correction can easily be kept, the entropic term makes the evaluation of the saddle-point equation a challenging task. The next step is to look for the saddle point of the action. The saddle-point equation is basically δS[f(x)]/δf(x) = 0. It is useful to differentiate the saddle-point equation once with respect to x:
1/2 − α/(2x) = P ∫_ζ^∞ f⋆(x′)/(x − x′) dx′,  x ∈ [ζ, ∞)  (9)
where f ⋆ (x) is the value of f (x) at the saddle-point. This is a Tricomi integral equation, and is solved as in [24], so we report the final result:
For ζ ∈ [0, ζ_−], f⋆(x) is given by Eq. (3), while for ζ ∈ [ζ_−, ∞), f⋆(x) = [√(U − x)/(2π√(x − ζ))] · [(x − α√(ζ/U))/x] · 1_{x∈[ζ,U]}  (10)
with U ≡ U(c, ζ) = w^2(c, ζ) and w(c, ζ) given by
w(c, ζ) = 2ρ^{1/3} cos((θ + 2π)/3),  (11)
with p = −[ζ + 2(α + 2)], q = 2α√ζ, ρ = √(−p^3/27), θ = atan(2√B/q) and B = −p^3/27 − q^2/4. Using these results and after a long and tedious calculation we obtain the following results for the distribution: for t ∈ [0, Nζ_−] we have P^(min)_N(t) = 1, and
P^(min)_N(t) = e^{−βN^2 Φ^(min)_+((t − Nζ_−)/N)},  t ∈ [Nζ_−, ∞)  (12)
with the right rate function Φ^(min)_+(x) being
Φ^(min)_+(x) = (1/2)[S(x + ζ_−) − S(ζ_−)],  x ∈ [0, ∞)  (13)
and the action S(ζ) ≡ S[f⋆(x)] given by
S(ζ) = (ζ + U)/2 − ∆^2/32 − ln(∆/4) + (α/4)(√U − √ζ)^2 + (α^2/4) ln(ζU) − α(α + 2) ln((√U + √ζ)/2),  (14)
where ∆ = U(c, ζ) − ζ. As mentioned above, the saddle-point approximation is only able to capture the large fluctuations to the right of λ_min, as it has a hard wall on the left side (12). Fortunately, the authors of [26] came up with a beautiful physical argument to overcome this shortcoming and to estimate, in their case, the large deviations of the maximal eigenvalue. Applied to the fluctuations to the left of λ_min, this method yields
P^(min)_N(t) ∼ e^{−βN Φ^(min)_−((Nζ_− − t)/N)},  t ∈ [0, Nζ_−]  (15)
with the left rate function Φ^(min)_−(x) given by:
Φ^(min)_−(x) = −(α/2) ln(1 − x/ζ_−) − (1/2)√(x(x + ∆_−)) + 2 ln((√(x + ∆_−) − √x)/√∆_−) + α ln(1 + (2√x/ζ_−)(√(x + ∆_−) − √x)/∆_−),  (16)
with x ∈ [0, ζ_−] and ∆_− = ζ_+ − ζ_− = 4√(α + 1).
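Before moving on, a small consistency check (ours, not from the paper): for ζ ≤ ζ_− the wall is inactive and the Marčenko-Pastur density (3) must solve the singular integral equation (9) on its support. The snippet below evaluates the principal value by subtracting the singularity; the test point x = 2 and the choice α = 1 are arbitrary.

```python
import numpy as np

alpha = 1.0
zm = (1 - np.sqrt(1 + alpha)) ** 2
zp = (1 + np.sqrt(1 + alpha)) ** 2

def f(x):
    """Marcenko-Pastur density, Eq. (3)."""
    return np.sqrt((x - zm) * (zp - x)) / (2.0 * np.pi * x)

x0 = 2.0                                   # point inside the support [zm, zp]
t = np.linspace(zm, zp, 1_000_001)[1:-1]   # fine grid, endpoints excluded
dt = t[1] - t[0]
# PV int f(t)/(x0-t) dt = int (f(t)-f(x0))/(x0-t) dt + f(x0) ln((x0-zm)/(zp-x0))
g = (f(t) - f(x0)) / (x0 - t)
pv = (g.sum() - 0.5 * (g[0] + g[-1])) * dt + f(x0) * np.log((x0 - zm) / (zp - x0))

lhs = 0.5 - alpha / (2.0 * x0)             # left-hand side of Eq. (9): 0.25
```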
We now check that the smallest large fluctuations predicted by our results match the largest typical fluctuations given by the TW distribution [23]. Expanding the rate functions Φ (min) ± (x) around x = 0 gives:
Φ^(min)_−(x) ∼_{x→0} (2/(3ζ_− c^{1/4})) x^{3/2}  (17)
Φ^(min)_+(x) ∼_{x→0} (1/(24ζ_−^2 √c)) x^3  (18)
which yields the following expression of P^(min)_N(t) for the smallest large fluctuations of λ_min:
P^(min)_N(t) ∼ exp(−(2β/3) χ^{3/2}(t)),  t ∈ [0, Nζ_−]
P^(min)_N(t) ∼ exp(−(β/24) |χ(t)|^3),  t ∈ [Nζ_−, ∞)
with χ(t) ≡ (Nζ_− − t)/(N^{1/3} ζ_−^{2/3} c^{1/6}). This result obviously agrees with the TW distribution for large |χ| [23,27].
For larger atypical fluctuations, we have the following asymptotic behaviours of the rate functions:
Φ^(min)_−(x) ∼_{x→ζ_−} −ζ_−/2 − (ζ_− + ln c)/(2c) + α ln α − (α/2) ln(ζ_− − x) + O(ζ_− − x),  (19)
Φ^(min)_+(x) ∼_{x→∞} x/2 − (α/2) ln x + ζ_−/2 + 3/4 − S(ζ_−)/2.  (20)
Interestingly, these results can be compared with the exact asymptotic behaviours predicted in [22] for the case β = 1 and large N. It turns out that for Φ^(min)_−(x) the leading logarithmic behaviour agrees perfectly with [22], while the constant term cannot be rigorously compared, since it is not available for any N in the exact treatment [22] (although we know that a constant term exists). For Φ^(min)_+(x) we again see perfect agreement with the linear and logarithmic terms, but the constant deviates from the exact one given in [22]. A comparison of these results with simulations and with the TW distribution [28] is summarized in Fig. 1. Note that exact results are only available for β = 1 [22], so a comparison to exact results for ensembles other than the real Wishart case is not possible. We now pay special attention to the case of almost square matrices, namely when M = N + a with a an integer of order unity (i.e. α = a/N, so α is of order 1/N). Here it is useful to use the scaling z = Nt = N²ζ to unravel non-trivial results. Using the Coulomb gas approach we find that in this special case the PDF of λ_min does not simply approach a delta function in the large N limit, as in [14]. Instead, the cumulative distribution of z has an N-independent limiting shape, as shown in Fig. 2 for β = 1, whose large fluctuations are described by the Coulomb gas prediction:
P^(min)_N(z) ∼ exp(−βa Ψ^(min)_−(4z/a^2)) for z ∈ [0, a^2/4], and P^(min)_N(z) ∼ exp(−βa^2 Ψ^(min)_+(4z/a^2)) for z ∈ [a^2/4, ∞),  (21)
with Ψ^(min)_+(x) = (x − 4√x + 3 + ln x)/8 and Ψ^(min)_−(x) = ln((1 + √(1 − x))/√x) − √(1 − x).
The functions Ψ^(min)_±(x) are universal, in the sense that they are N-, β- and a-independent. Note that in this particular regime one needs to keep the first 1/N term appearing in (8), as it is of the same order as α = a/N. This can easily be accounted for by replacing a → a(β) = a + (β − 2)/β in the preceding expressions, as long as a(β) is non-negative. To our knowledge, this is the first time that this case is discussed and shown to be universal (even numerically), and obviously no general explicit predictions for its shape have been proposed, apart from the particular case a(β) = 0, for which Eq. (21) yields P^(min)_N(z) = e^{−βz/2}, in agreement with the exact results of [22] (for a = β = 1) and [21] (for a = 0, β = 2). Note also that the results for the TW distribution reported in [23] do not formally apply to almost square matrices. In Fig. 2 we compare our findings with Edelman's exact result for β = 1, N = 200 and M = 205.
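The a(β) = 0 benchmark can be checked by direct sampling. For square complex matrices (β = 2, M = N) the exponent u = 1 + M − N − 2/β in (2) vanishes, so the joint weight e^{−Σᵢλᵢ}|∆(λ)|² is translation-covariant and P(λ_min ≥ t) = e^{−Nt} holds exactly for every N; equivalently z = Nλ_min is a standard exponential variable, consistent with P^(min)_N(z) = e^{−βz/2} at β = 2. A quick Monte Carlo confirmation (our illustration; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 8, 4000
z = np.empty(trials)
for k in range(trials):
    # beta = 2: P(X) ∝ exp(-Tr X†X), i.e. complex entries with E|X_ij|^2 = 1
    X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    z[k] = N * np.linalg.eigvalsh(X.conj().T @ X)[0]   # z = N * lambda_min

mean_z = z.mean()            # Exp(1) predicts mean 1 for every N
tail = (z > 1.0).mean()      # Exp(1) predicts exp(-1) ≈ 0.368
```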
To summarise, in this work we study the large deviation functions of the smallest eigenvalue of the Wishart-Laguerre Ensemble using the Coulomb gas approach. We obtain explicit expressions for both the right and left rate functions for general α. We also highlight the existence of a special regime for almost square matrices, where an interesting limit distribution exists, described by universal rate functions Ψ (min) ± (x). We were able to provide predictions for a(β) ≥ 0, which leaves the question of a(β) < 0 open for further research. Another interesting open question is regarding the typical fluctuations for the smallest eigenvalue for square and almost square matrices, which are not captured by the TW distribution.
FIG. 1. (color online). Results for the smallest eigenvalue distribution −ln P^(min)_N(t) vs. the scaled variable t/N. Here, N = 11 and M = 110 (c = 1/10), and the Wishart matrices are real (β = 1). The large deviation functions (dashed lines) compare very well with the exact result of [22] (solid line), while TW (dotted line) describes only small fluctuations.
FIG. 2. (color online). Results for the smallest eigenvalue distribution −ln P^(min)_N vs. the scaled variable tN. Here, N = 200 and a = 5, and the Wishart matrices are real (β = 1). The large deviation functions derived here (dashed lines) compare well with the exact result of [22] (solid line).
. E P Wigner, Proc. Cambridge Philos. Soc. 47790E. P. Wigner, Proc. Cambridge Philos. Soc., 47, 790 (1951).
. J Wishart, Biometrika. 2032J. Wishart, Biometrika, 20A, 32 (1928).
. G Akemann, E Kanzieper, Phys. Rev. Lett. 851174G. Akemann and E. Kanzieper, Phys. Rev. Lett., 85, 1174 (2000).
. T Rogers, I Castillo, R Kühn, K Takeda, Phys. Rev. E. 78T. Rogers, I. Pérez Castillo, R. Kühn, and K. Takeda, Phys. Rev. E, 78 (2008).
Random matrix theory and wireless communications. A M Tulino, S Verdú, now Publishers IncHanoverA. M. Tulino and S. Verdú, Random matrix theory and wireless communications (now Publishers Inc., Hanover, 2004).
M B Eisen, P T Spellman, P O Brown, D Botstein, Proc. Natl. Acad. Sci. USA. Natl. Acad. Sci. USA9514863M. B. Eisen, P. T. Spellman, P. O. Brown, and D. Bot- stein, Proc. Natl. Acad. Sci. USA, 95, 14863 (1998).
. C N , Arxiv:1006.4091C. N. et al, Arxiv:1006.4091 (2010).
. A E Litvak, A Pajor, M Rudelson, N Tomczak-Jaegermann, Adv. Math. 195491A. E. Litvak, A. Pajor, M. Rudelson, and N. Tomczak- Jaegermann, Adv. Math., 195, 491 (2005).
. M Rudelson, Ann. Math. 168575M. Rudelson, Ann. Math., 168, 575 (2008).
. E J Candes, T Tao, IEEE Trans. Inform. Theory. 525406E. J. Candes and T. Tao, IEEE Trans. Inform. Theory, 52, 5406 (2006).
R J Muirhead, Aspects of multivariate statistical theory. WileyR. J. Muirhead, Aspects of multivariate statistical theory (Wiley, 1982).
. D S Dean, S N Majumdar, Phys. Rev. Lett. 97D. S. Dean and S. N. Majumdar, Phys. Rev. Lett., 97 (2006);
. Phys. Rev. E. 77Phys. Rev. E, 77 (2008).
. V A Marcenko, L A Pastur, Math. USSR-Sb. 1457V. A. Marcenko and L. A. Pastur, Math. USSR-Sb, 1, 457 (1967).
. J W Silverstein, Ann. Probab. 131364J. W. Silverstein, Ann. Probab., 13, 1364 (1985).
. P R Krishnaiah, T C Chang, Ann. Inst. Statist. Math. 23293P. R. Krishnaiah and T. C. Chang, Ann. Inst. Statist. Math., 23, 293 (1971).
. T Ratnarajah, R Vaillancourt, M Alvo, SIAM J. Matrix Anal. Appl. 26441T. Ratnarajah, R. Vaillancourt, and M. Alvo, SIAM J. Matrix Anal. Appl., 26, 441 (2004).
. I Dumitriu, P Koev, SIAM J. Matrix Anal. Appl. 301I. Dumitriu and P. Koev, SIAM J. Matrix Anal. Appl., 30, 1 (2008).
. P Koev, A Edelman, Math. Comp. 75833P. Koev and A. Edelman, Math. Comp., 75, 833 (2006).
. I Dumitriu, A Edelman, G Shuman, J. Symbolic Comput. 42587I. Dumitriu, A. Edelman, and G. Shuman, J. Symbolic Comput., 42, 587 (2007).
. G Akemann, P Vivo, J. Stat. Mech. 9002G. Akemann and P. Vivo, J. Stat. Mech., P09002.
. A Edelman, SIAM J. Matrix Anal. Appl. 9543A. Edelman, SIAM J. Matrix Anal. Appl., 9, 543 (1988).
. A Edelman, Linear Algebra Appl. 15955A. Edelman, Linear Algebra Appl., 159, 55 (1991).
. O N Feldheim, S Sodin, Geom. Funct. Anal. 201O. N. Feldheim and S. Sodin, Geom. Funct. Anal., 20, 1 (2010).
. P Vivo, S N Majumdar, O Bohigas, J. Phys. A. 404317P. Vivo, S. N. Majumdar, and O. Bohigas, J. Phys. A, 40, 4317 (2007).
. Y Chen, S M Manning, J. Phys. A. 273615Y. Chen and S. M. Manning, J. Phys. A, 27, 3615 (1994).
. S N Majumdar, M Vergassola, Phys. Rev. Lett. 10260601S. N. Majumdar and M. Vergassola, Phys. Rev. Lett., 102, 060601 (2009).
. C A Tracy, H Widom, Comm. Math. Phys. 159727C. A. Tracy and H. Widom, Comm. Math. Phys., 159, 151 (1994); 177, 727 (1996).
| [] |
[
"ARTICLE TYPE Probabilistic performance validation of deep learning-based robust NMPC controllers",
"ARTICLE TYPE Probabilistic performance validation of deep learning-based robust NMPC controllers"
] | [
"Benjamin Karg \nLaboratory of Process Automation Systems\nTechnische Universität Dortmund\nEmil-Figge-Str. 7044227DortmundGermany\n",
"Teodoro Alamo \nDepartamento de Ingeniería de Sistemas y Automática\nUniversidad de Sevilla\nEscuela Superior de Ingenieros\nCamino de los Descubrimientos s/n41092SevillaSpain Correspondence\n",
"Sergio Lucia \nLaboratory of Process Automation Systems\nTechnische Universität Dortmund\nEmil-Figge-Str. 7044227DortmundGermany\n"
] | [
"Laboratory of Process Automation Systems\nTechnische Universität Dortmund\nEmil-Figge-Str. 7044227DortmundGermany",
"Departamento de Ingeniería de Sistemas y Automática\nUniversidad de Sevilla\nEscuela Superior de Ingenieros\nCamino de los Descubrimientos s/n41092SevillaSpain Correspondence",
"Laboratory of Process Automation Systems\nTechnische Universität Dortmund\nEmil-Figge-Str. 7044227DortmundGermany"
] | [] | Solving nonlinear model predictive control problems in real time is still an important challenge despite of recent advances in computing hardware, optimization algorithms and tailored implementations. This challenge is even greater when uncertainty is present due to disturbances, unknown parameters or measurement and estimation errors. To enable the application of advanced control schemes to fast systems and on low-cost embedded hardware, we propose to approximate a robust nonlinear model controller using deep learning and to verify its quality using probabilistic validation techniques.We propose a probabilistic validation technique based on finite families, combined with the idea of generalized maximum and constraint backoff to enable statistically valid conclusions related to general performance indicators. The potential of the proposed approach is demonstrated with simulation results of an uncertain nonlinear system. | 10.1002/rnc.5696 | [
"https://arxiv.org/pdf/1910.13906v2.pdf"
] | 204,960,924 | 1910.13906 | 4c610df550a8a2a0de845ecc3eac5c8f809397dc |
ARTICLE TYPE Probabilistic performance validation of deep learning-based robust NMPC controllers
Benjamin Karg
Laboratory of Process Automation Systems
Technische Universität Dortmund
Emil-Figge-Str. 7044227DortmundGermany
Teodoro Alamo
Departamento de Ingeniería de Sistemas y Automática
Universidad de Sevilla
Escuela Superior de Ingenieros
Camino de los Descubrimientos s/n41092SevillaSpain Correspondence
Sergio Lucia
Laboratory of Process Automation Systems
Technische Universität Dortmund
Emil-Figge-Str. 70, 44227 Dortmund, Germany
DOI: 10.1002/rnc.5696
Keywords: model predictive control, robust control, probabilistic validation, machine learning
Solving nonlinear model predictive control problems in real time is still an important challenge despite recent advances in computing hardware, optimization algorithms and tailored implementations. This challenge is even greater when uncertainty is present due to disturbances, unknown parameters or measurement and estimation errors. To enable the application of advanced control schemes to fast systems and on low-cost embedded hardware, we propose to approximate a robust nonlinear model predictive controller using deep learning and to verify its quality using probabilistic validation techniques. We propose a probabilistic validation technique based on finite families, combined with the idea of the generalized maximum and constraint backoff, to enable statistically valid conclusions related to general performance indicators. The potential of the proposed approach is demonstrated with simulation results of an uncertain nonlinear system.
INTRODUCTION
Model predictive control is a popular advanced control technique that can deal with nonlinear systems and constraints while considering general control goals that go beyond conventional set-point tracking tasks. Two of the main obstacles that one faces when designing and implementing a nonlinear model predictive controller are the accuracy of the model and the computational complexity of solving a non-convex optimization problem online, which often renders its implementation too slow for fast systems or impossible to deploy on resource-constrained embedded platforms.
Handling uncertainty in the context of model predictive control is the main goal of robust MPC. Traditional min-max approaches [1] do not explicitly consider the fact that new information will be available in the future, which leads to overconservative solutions. Closed-loop robust MPC avoids the problem of conservativeness by optimizing over control policies instead of optimizing over control inputs [2], leading however to intractable formulations in the general case. Most of the recent robust MPC methods focus on achieving a good trade-off between complexity and performance. Tube-based approaches [3] decompose the robust MPC problem into a nominal MPC and an ancillary controller. The ancillary controller makes sure that the real uncertain system stays close to the trajectory planned by the nominal MPC. By tightening the constraints of the nominal MPC, robust constraint satisfaction can be achieved. In the simplest version, the complexity of tube-based MPC is the same as that of standard MPC. However, if an increased performance is desired, the complexity grows as presented in [4] or [5]. Scenario tree-based [6,7,8] or multi-stage MPC [9] represents the evolution of the uncertainty using a tree of discrete uncertainty realizations. An improved performance can be often seen in practice [10] because the feedback structure is not restricted to be affine, as usually done in tube-based MPC and in other robust approaches [11]. While it is also possible to achieve stability and robust constraint satisfaction guarantees for a multi-stage MPC formulation [7,12,13], its computational complexity grows exponentially with the dimension of the uncertainty space. The presence of uncertainty significantly increases the computational complexity of any NMPC implementation if a non-conservative performance is desired.
The last decade has witnessed an important progress on hardware, algorithms and tailored implementations that enable the efficient solution and implementation of NMPC controllers based, for example, on code generation tools [14,15] that provide efficient implementations of linear and nonlinear MPC on embedded hardware, including low-cost microcontrollers [16] and high-performance FPGAs [17].
A different possibility to achieve embedded nonlinear model predictive control is the use of approximate explicit nonlinear model predictive control [18,19] based on approximating the multi-parametric nonlinear program using similar ideas as for explicit MPC of linear systems [20]. We propose in this work to use deep neural networks to approximate a robust multistage NMPC control law. The idea of using a neural network as function approximator for an NMPC feedback law was already proposed by [21] back in 1995, but only very recently [22,23] deep neural networks (neural networks with several hidden layers) have been proposed as function approximators. The use of deep neural networks is motivated by recent theoretical results that suggest the exponentially better approximation capabilities of deep neural networks in comparison to shallow networks [24].
Assessing the closed-loop performance of approximate controllers, or any other controller subject to further random disturbances or estimation errors, is particularly challenging in the case of complex nonlinear systems. The theory of randomized algorithms [25], [26] provides different schemes capable of addressing this issue. For example, statistical learning techniques can be used to design stochastic model predictive controllers with probabilistic guarantees [27], [28], [29]. Also, under a convexity assumption, convex scenario approaches [30] can be used in the context of chance constrained MPC [31], [32], [33]. The main limitation of the aforementioned approaches based on statistical learning results [25], [34] and scenario based ones [30] is that the number of random scenarios that have to be generated (sample complexity) grows with the dimension of the problem.
Probabilistic validation [35], [36] allows one to determine if a given controller satisfies, with a prespecified probability of violation and confidence, the control constraints. The sample complexity in this case does not depend on the dimension of the problem, but only on the required probability of violation and confidence. Examples of probabilistic verification approaches in the context of control of nonlinear uncertain systems can be found, for example, in [26], [37] and [38]. These techniques have also been used for the probabilistic certification of the behaviour of off-line approximations of nonlinear control laws [39], [40].
The main contribution of this paper, which extends the results from [41], is the formulation of general closed-loop performance indicators that are not restricted to binary functions as in [39] and can be computed by simulating the closed-loop system with any given controller. We also provide sample complexity bounds that do not grow with the size of the problem for the case of a finite family of design parameters and general performance indicators. Our approach allows us to discard a finite number of worst-case closed-loop simulations, significantly improving the applicability of the probabilistic validation scheme compared to existing works. The potential of the presented approach is illustrated for a highly nonlinear towing kite system, including a real-time capable embedded implementation of an approximate, but probabilistically safe, robust nonlinear model predictive controller on a low-cost microcontroller.
The paper is organized as follows. The closed-loop performance indicators are introduced in Section 2 which are used in a novel probabilistic validation methodology for arbitrary controllers in Section 3. The mathematical framework for the output feedback robust NMPC problem considered in this work is presented in Section 4 and the use of deep learning to obtain approximate robust NMPC controllers is summarized in Section 5. The case study is detailed in Section 6, the results in Section 7 and the paper is concluded in Section 8.
CLOSED-LOOP PERFORMANCE INDICATORS
System description and constraints
We are interested in optimally controlling the following class of nonlinear discrete-time systems:

x(k + 1) = f(x(k), u(k), w(k)),   (1)
y(k) = h(x(k), u(k), w(k)),   (2)

where x(k) ∈ ℝ^{n_x} is the state vector, u(k) ∈ ℝ^{n_u} is the control input, and w(k) ∈ ℝ^{n_w} is the disturbance vector. In general, not all states can be measured, and a state estimate x̂(k) should be computed based on the past measurements y(k) ∈ ℝ^{n_y}. It is assumed that the disturbances w(k) take values, with high probability, from a known set W.
Assumption 1.
The nonlinear discrete-time system (1) and (2) is observable and controllable.
The closed-loop trajectory should satisfy general nonlinear input and state constraints defined by

g_j(x(k), u(k), w(k)) ≤ 0,   j = 1, …, n_g,   (3)

where n_g is the number of constraints.
Closed-loop behavior
The goal of a controller κ : ℝ^{n_x} → ℝ^{n_u} is that the closed-loop trajectory of the uncertain nonlinear system defined by

x(k + 1) = f(x(k), κ(x̂(k)), w(k)),   (4)

obtains a desired performance level, e.g. does not violate the predefined constraints, despite the presence of uncertainty, for any initial state x(0) in the set X_0 of feasible initial conditions, for any admissible initial estimation error x(0) − x̂(0) and for any sequence of uncertainty realizations {w(0), w(1), …, w(∞)}.
Determining if a given controller provides admissible closed-loop trajectories, under the presence of nonlinearity and uncertainty, is in general an intractable problem [42]. Instead, we focus on the use of finite-time closed-loop performance indicators that can be obtained by simulating the closed-loop system. The underlying assumption is that models which can be run a large number of times are available so that statistical guarantees can be obtained. A closed-loop performance indicator is defined as follows.
Definition 1 (Closed-loop finite-time performance indicator). Let q = {x(0), x̂(0), w(0), …, w(N_sim − 1)} denote the variables that uniquely define the closed-loop trajectory {x(0), x̂(0), κ(x̂(0)), w(0), x(1), κ(x̂(1)), w(1), …, x(N_sim)}, given an initial condition x(0), an initial estimate x̂(0), a sequence of uncertainty realizations w(0), …, w(N_sim − 1) that also includes the measurement noise, a controller κ and a finite number of simulation steps N_sim. A closed-loop finite-time performance indicator is a measurable function J(q; N_sim, κ) : Q ≔ ℝ^{n_x} × ℝ^{n_x} × ℝ^{n_w} × ⋯ × ℝ^{n_w} → ℝ that takes as input all variables defining the closed-loop trajectories for a controller κ until time N_sim and gives a scalar as a measure of closed-loop performance:

J(q; N_sim, κ) = J(x(0), x̂(0), w(0), w(1), …, w(N_sim − 1)).   (5)

Assumption 2.
There exists a simulator that is able to compute closed-loop trajectories defined by (4). In addition, there exists a known operator Φ that provides the state estimate x̂(k + 1) from x̂(k), u(k) and y(k). That is,

x̂(k + 1) = Φ(x̂(k), u(k), y(k)).   (6)
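Under Assumption 2, a closed-loop trajectory can be produced by alternating the controller, the system model (4) and the estimator update (6). The following sketch illustrates this loop on a toy scalar system; the function names and the additive measurement model are hypothetical choices for illustration, not code from the paper.

```python
import numpy as np

def simulate_closed_loop(f, phi_e, kappa, x0, xhat0, w_seq):
    """Simulate the closed loop (4) together with the estimator update (6).

    f(x, u, w)      -> next state          (system model, eq. (1))
    phi_e(xh, u, y) -> next state estimate (estimator operator, eq. (6))
    kappa(xh)       -> control input       (the controller under test)
    w_seq           -> sequence of uncertainty realizations; here each
                       realization also enters the (assumed) additive
                       measurement model y = x + w.
    Returns the state, estimate and input trajectories.
    """
    x, xh = np.asarray(x0, float), np.asarray(xhat0, float)
    xs, xhs, us = [x], [xh], []
    for w in w_seq:
        u = kappa(xh)                 # feedback based on the estimate
        x_next = f(x, u, w)           # true system step
        y = x_next + w                # assumed measurement model (sketch only)
        xh = phi_e(xh, u, y)          # estimator update, eq. (6)
        x = x_next
        xs.append(x); xhs.append(xh); us.append(u)
    return np.array(xs), np.array(xhs), np.array(us)

# Toy instance: scalar linear system, proportional controller,
# naive estimator that trusts the measurement.
f = lambda x, u, w: 0.9 * x + u + w
phi_e = lambda xh, u, y: y
kappa = lambda xh: -0.5 * xh
xs, xhs, us = simulate_closed_loop(f, phi_e, kappa, [1.0], [1.0],
                                   [np.zeros(1)] * 20)
```

With zero disturbances the toy closed loop contracts with factor 0.4 per step, so the state decays towards the origin.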
Assumption 2 implies that, given N_sim and the controller κ, the closed-loop trajectories are completely determined by q. Probabilistic validation normally relies on a binary performance indicator that determines if the closed loop is admissible or not. That is, J(q; N_sim, κ) = 0 if the closed-loop trajectory is admissible for q, and 1 otherwise. For this particular setting, one can resort to well-known results to obtain probabilistic guarantees on the probability of the event J(q; N_sim, κ) = 1. For a review of these results, see [36]. See also [39], where Hoeffding's inequality [43] is used to derive probabilistic guarantees in the context of learning an approximate model predictive controller.
In this paper we address a more general setting in which we do not circumscribe the performance indicator to the class of binary functions. For example, we consider the closed-loop finite-time performance indicator given by the largest value of any constraint function along the closed-loop simulation:

J(q; N_sim, κ) = max_{k = 0, …, N_sim − 1; j = 1, …, n_g} g_j(x(k), κ(x̂(k)), w(k)).   (7)
Another possibility is to consider the average constraint violation as a performance indicator. That is,
J(q; N_sim, κ) = (1 / (N_sim n_g)) ∑_{k=0}^{N_sim−1} ∑_{j=1}^{n_g} max{0, g_j(x(k), κ(x̂(k)), w(k))}.   (8)
Moreover, in many applications it is relevant to consider indicators related to the closed-loop cost, such as an average cost:
J(q; N_sim, κ) = (1 / N_sim) ∑_{k=0}^{N_sim−1} ℓ(x(k), κ(x̂(k))),   (9)
or any other combination. In the following section we address how to obtain probabilistic guarantees on the random variable J(q; N_sim, κ).
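Such indicators can be evaluated directly from stored closed-loop trajectories. A minimal sketch follows (helper names and the toy data are hypothetical): the first function computes the worst-case constraint value along a trajectory, the second the average constraint violation in the spirit of (8).

```python
import numpy as np

def max_constraint_indicator(xs, us, ws, g):
    """Worst-case value of any constraint component along a trajectory."""
    return max(np.max(g(x, u, w)) for x, u, w in zip(xs, us, ws))

def avg_violation_indicator(xs, us, ws, g, n_g):
    """Average constraint violation along a trajectory, cf. eq. (8)."""
    total = sum(np.sum(np.maximum(0.0, g(x, u, w)))
                for x, u, w in zip(xs, us, ws))
    return total / (len(xs) * n_g)

# Toy data: a single constraint g(x, u, w) = x - 1 <= 0 (stay below 1).
g = lambda x, u, w: np.array([x - 1.0])
xs = [0.5, 0.8, 1.2, 0.9]   # one violation at x = 1.2
us = ws = [0.0] * 4
J_max = max_constraint_indicator(xs, us, ws, g)
J_avg = avg_violation_indicator(xs, us, ws, g, n_g=1)
```

For the toy trajectory the worst-case indicator equals 0.2 (the excess at x = 1.2) and the average violation equals 0.2/4 = 0.05.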
PROBABILISTIC VALIDATION
The derived closed-loop performance indicators can be used in the framework of probabilistic validation [26,36] to obtain probabilistic guarantees regarding the satisfaction of a given set of control specifications. In this section we present a novel result that allows us to address the probabilistic validation of arbitrary control schemes where the performance is influenced by hyper-parameters, e.g. backoff parameters or the control sampling time.
Probabilistic performance indicator levels
We consider a finite family of controllers κ_i(x̂), i = 1, …, n_c, corresponding to different combinations of hyper-parameter values, e.g. for constraint backoffs or control sampling times. The objective of this section is to provide a probabilistic validation scheme that allows us to choose, from the n_c possible controllers, the one with the best probabilistic certification for any given closed-loop finite-time performance indicator J(q; N_sim, κ). For simplicity in the notation, we denote the closed-loop finite-time performance indicator obtained with the controller κ_i with N_sim simulation steps as J_i(q).

Remark 1. The stochastic variable q that defines the closed-loop trajectories follows a known probability distribution from which it is possible to obtain independent identically distributed (i.i.d.) samples.
Remark 1 only requires knowledge of the probability distributions of the uncertainty.

Definition 2 (Probabilistic performance indicator level). We say that γ ∈ ℝ is a probabilistic performance indicator level with violation probability ε ∈ (0, 1) for a sample q drawn from Q for the measurable function J : Q → ℝ if the probability of violation Pr{⋅} satisfies

Pr{J(q) > γ} ≤ ε.
To obtain probabilistic performance indicator levels for the considered controllers κ_i, i = 1, …, n_c, we generate N i.i.d. scenarios q^(ℓ) = {x^(ℓ)(0), x̂^(ℓ)(0), w^(ℓ)(0), …, w^(ℓ)(N_sim − 1)}, ℓ = 1, …, N. For a given controller κ_i, with 1 ≤ i ≤ n_c, and the uncertain realizations q^(ℓ), ℓ = 1, …, N, one could simulate the closed-loop dynamics and obtain the performance indicator corresponding to each uncertain realization. That is, one could obtain J_i = [J_i(q^(1)), J_i(q^(2)), …, J_i(q^(N))]^⊤ ∈ ℝ^N.
It is clear that the largest value of the components of J_i could serve as an empirical performance level for the controller κ_i provided that N is large enough [35]. Another possibility is to discard the r − 1 largest components of J_i and consider the largest of the remaining components as a (less conservative) empirical performance indicator level (r equal to one corresponds to not discarding any component) [44]. In the following section we show how to choose N such that the obtained empirical performance indicator levels are, with high confidence 1 − δ, probabilistic performance indicator levels with probability of violation ε.
Sample complexity
We first present a generalization of the notion of the maximum of a collection of scalars. This generalization is borrowed from the field of order statistics [45], [46], and will allow us to reduce the conservativeness that follows from the use of the standard notion of max function. See also Section 3 of [44].
Given v = [v_1, …, v_N]^⊤ ∈ ℝ^N and an integer r with 1 ≤ r ≤ N, denote by φ(v, r) the r-th largest component of v, i.e. the r-th element of the sequence obtained by rearranging the components of v in non-increasing order v^+_(1) ≥ v^+_(2) ≥ … ≥ v^+_(N−1) ≥ v^+_(N). Clearly, given v = [v_1, …, v_N]^⊤ ∈ ℝ^N, we have

φ(v, 1) = max_{1 ≤ ℓ ≤ N} v_ℓ,   φ(v, N) = min_{1 ≤ ℓ ≤ N} v_ℓ.

Furthermore, φ(v, 2) denotes the second largest value in v, φ(v, 3) the third largest one, etc. We notice that the notation φ(v, r) does not need to make explicit N, the number of components of v.
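In code, the generalized max is simply an order statistic. A small sketch (the helper name is hypothetical):

```python
import numpy as np

def gen_max(v, r):
    """r-th largest component of v: gen_max(v, 1) is the maximum,
    gen_max(v, len(v)) the minimum (generalized max from order statistics)."""
    v_sorted = np.sort(np.asarray(v, float))[::-1]   # non-increasing order
    return v_sorted[r - 1]

v = [3.0, -1.0, 7.0, 2.0, 7.0]
```

Note that repeated values count individually, so for the vector above the second largest value is again 7.0.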
The next theorem provides a way to compute probabilistic performance levels for a family of controllers. The theorem constitutes a generalization of a similar result, presented in [44] for the particular case n_c = 1. See also the seminal paper [35] for the particularization of the result to the case n_c = 1, r = 1.

Theorem 1. Let ε ∈ (0, 1) and δ ∈ (0, 1), and let the empirical performance indicator levels φ(J_i, r), i = 1, …, n_c, be computed from N i.i.d. samples q^(1), …, q^(N) drawn from Q. Then, with probability no smaller than 1 − δ, we have a probability of violation

Pr{J_i(q) > φ(J_i, r)} ≤ ε,   i = 1, …, n_c,

provided that 1 ≤ r ≤ N and

n_c ∑_{m=0}^{r−1} C(N, m) ε^m (1 − ε)^{N−m} ≤ δ,   (10)

where C(N, m) denotes the binomial coefficient.
In addition, (10) is satisfied if:
N ≥ (1/ε) ( r − 1 + ln(n_c/δ) + √(2(r − 1) ln(n_c/δ)) ).   (11)
Proof. Given the controller κ_i and γ ∈ ℝ, we denote by E_i(γ) the probability of the event J_i(q) > γ. That is,

E_i(γ) ≔ Pr{J_i(q) > γ}.

We denote probability of asymptotic failure the probability of generating N i.i.d. scenarios and obtaining an empirical probabilistic performance level that does not meet the probabilistic specification on the probability of violation. We now make use of Property 3 in [44], which states that, with probability no smaller than

1 − ∑_{m=0}^{r−1} C(N, m) ε^m (1 − ε)^{N−m}, we have E_i(φ(J_i, r)) = Pr{J_i(q) > φ(J_i, r)} ≤ ε.

This means that the probability of asymptotic failure Pr{⋅} for N samples q^(1), …, q^(N) drawn from Q satisfies

Pr{E_i(φ(J_i, r)) > ε} ≤ ∑_{m=0}^{r−1} C(N, m) ε^m (1 − ε)^{N−m} ≔ B(N, ε, r − 1).

Consider now the probability P_fail that, after drawing N i.i.d. samples q^(ℓ), ℓ = 1, …, N, one or more of the empirical performance indicator levels γ_i = φ(J_i, r), i = 1, …, n_c,

are not probabilistic performance indicator levels with violation probability ε. We have

P_fail = Pr{ max_{i=1,…,n_c} E_i(φ(J_i, r)) > ε } ≤ ∑_{i=1}^{n_c} Pr{E_i(φ(J_i, r)) > ε} ≤ n_c ∑_{m=0}^{r−1} C(N, m) ε^m (1 − ε)^{N−m} ≤ δ.
That is, P_fail is smaller than or equal to δ provided that (10) holds. This proves the first claim of the theorem. The second one follows directly from Corollary 1 of [36], which provides an explicit number of samples N that guarantees that a binomial expression B(N, ε, r − 1) is smaller than a given constant.

The major advantage of Theorem 1 is that a family of n_c controllers can be evaluated for the same N samples. This is beneficial when the family of controllers can be evaluated in parallel or when drawing samples is expensive, e.g. in experimental setups. The number of required samples for the same probabilistic statement is significantly smaller than when all controllers are evaluated in a sequential approach as in [47,48,49].
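The sample complexity bound (11) and the binomial condition (10) are easy to evaluate numerically. A sketch (function names are hypothetical):

```python
import math

def sample_bound(eps, delta, r, n_c):
    """Explicit sample size from eq. (11) for violation level eps,
    confidence delta, r - 1 discarded scenarios and n_c controllers."""
    t = math.log(n_c / delta)
    return math.ceil((r - 1 + t + math.sqrt(2 * (r - 1) * t)) / eps)

def binomial_tail(N, eps, r, n_c):
    """Left-hand side of condition (10)."""
    return n_c * sum(math.comb(N, m) * eps**m * (1 - eps)**(N - m)
                     for m in range(r))

# e.g. eps = 0.01, delta = 1e-6, r = 5 and n_c = 10 controllers:
N = sample_bound(eps=0.01, delta=1e-6, r=5, n_c=10)
```

For these hypothetical numbers the bound yields a few thousand scenarios, and one can check that the returned N indeed satisfies (10). Note that doubling the number of controllers only adds ln(2)/ε extra samples, reflecting the logarithmic dependence on n_c.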
Remark 2. Given a family of controllers κ_i, i = 1, …, n_c, one does not need to compute all the empirical performance indicator levels φ(J_i, r), i = 1, …, n_c. It is sufficient to find one that meets the desired performance indicator levels. For example, if the performance indicator J_i(q) is defined as the average constraint violation along the trajectory (see (8)), then the controller κ_i provides an admissible closed-loop trajectory for q if and only if J_i(q) = 0. In this case, the empirical performance indicator φ(J_i, r) corresponding to N i.i.d. scenarios is equal to 0 if no more than r − 1 trajectories are non-admissible when applying the controller κ_i to the N scenarios. If N is chosen according to (10) then Theorem 1 implies that, with probability no smaller than 1 − δ, all the controllers κ_i, i = 1, …, n_c, providing φ(J_i, r) = 0 are such that

Pr{J_i(q) > 0} ≤ ε.
It is also important to remark that the cardinality n_c of the family of proposed controllers has little effect on the sample complexity because it appears inside a logarithm. See also Subsection 4.2 in [36] for other randomized approaches based on a design space of finite cardinality.
Remark 3. Theorem 1 can also be applied in the case when the performance indicators only take binary values. This has been presented in a similar form in [50] and was used for control design problems. See, for example, [37], [38].
ROBUST OUTPUT-FEEDBACK NONLINEAR MODEL PREDICTIVE CONTROL
In this work our goal is to design an NMPC scheme that is able to control the uncertain nonlinear system (1) in an output-feedback setting where not all the states can be measured, as described by the output equation (2). While the novel probabilistic validation scheme described in Section 3 can be applied to any controller, we believe that, because of the complexity of the general robust output-feedback problem, it is a promising idea to use an approximate robust NMPC scheme that is validated a posteriori using probabilistic validation.
There exist many different robust model predictive control schemes, but there are four important characteristics that differentiate one approach from the other: the choice of cost function, the propagation of the uncertainty, robust constraint satisfaction and the characterization of feedback information.
The cost function can be chosen following a min-max approach, where the worst-case realization of the uncertainty w(k) at each step in the prediction is chosen [1]. Tube-based methods usually choose the cost incurred by the closed-loop system driven by the nominal realization of the uncertainty [51]. Scenario-tree based methods use a weighted sum of a set of discrete scenarios [9] and stochastic MPC schemes [7] make use of, e.g., the expectation operator. In this work, we consider a general cost function J(X(0), W; N_p, μ) that depends on an initial state set X(0), the uncertainty set W, the prediction horizon N_p and the control policy μ.
The propagation of the uncertainty is one of the key elements of any robust NMPC scheme. A general framework, which is used in this work, is the definition of reachable sets at each sampling time in the prediction based on a current initial condition, the system model, the applied input and the uncertainty set W. The reachable set at sampling time k + 1 can thus be denoted as:

X(k + 1) = {f(x(k), u(k), w(k)) : x(k) ∈ X(k), w(k) ∈ W}.   (12)
There are several methods to compute such reachable sets. In the linear case, the consideration of the vertices of the uncertainty set and their propagation along the prediction horizon is enough to compute an exact reachable set. In the nonlinear case, linearization techniques [52] or ODE bounding techniques [53] can be used to obtain guaranteed over-approximations, which can then be used in robust optimal control schemes. To keep the notation independent of the method used to obtain an (over-)approximation of the reachable sets at each sampling time, the bounding operator denoted as f^⋄(⋅) is used, which is defined as:

X(k + 1) = f^⋄(X(k), u(k), W).   (13)
Another possibility for the propagation of uncertainty is to resort to probabilistic reachable sets as done in [54].
Robust constraint satisfaction is often one of the main motivations for the use of a robust NMPC approach. It means that the requirements of the closed-loop system in form of input and state constraints should be satisfied for all possible outcomes of the uncertainty and it is usually enforced by embedding the reachable sets (13) into the constraints of an optimization problem.
The characterization of feedback that is employed is another key property of any robust MPC scheme. It is well known that considering a sequence of optimal control inputs in the prediction under uncertainty can result in very conservative closed-loop performance, because it ignores that new information about the future will be available in the form of measurements and that future actions can thus be adapted accordingly. To avoid this conservatism, closed-loop approaches can be used, in which one optimizes over a sequence of control policies; this can be formulated as the receding horizon solution of the following optimization problem:
minimize_{μ(⋅)}   J(X(0), W; N_p, μ),   (14a)

ℙ_ideal:  subject to   X(k + 1) = f^⋄(X(k), μ(X(k)), W),   for k = 0, …, N_p − 1,   (14b)

g_j(X(k), μ(X(k)), W) ≤ 0,   j = 1, …, n_g,   for k = 0, …, N_p − 1,   (14c)

X(N_p) ⊆ X_f,   (14d)

X(0) = {x̂(0)} ⊕ E_est,   (14e)

where the constraints (14c) denote that g_j(x, μ(x), w) ≤ 0 should be satisfied for all x ∈ X(k) and for all w ∈ W. Solving the ideal robust NMPC problem ℙ_ideal defined in (14), one obtains a receding horizon policy κ_ideal(x̂(0)), which is a function of the initial state estimate x̂(0) that has been obtained with a certain estimation error bounded by E_est.
Obtaining an exact solution of ℙ_ideal is usually intractable, mainly because of the bounding operator f^⋄(⋅) and the general feedback law μ(⋅). There are different alternative solutions to obtain approximations of this problem. A common simplifying assumption is to restrict the search to affine policies on the state or on the disturbances [11]. A different alternative is the use of a scenario tree [6], [7], [9] in a so-called multi-stage NMPC approach. A multi-stage NMPC scheme is based on the representation of the uncertainty via a scenario tree (see Figure 1), which branches at each sampling time. This means that the uncertainty set W is approximated by a discrete number of uncertainty realizations:
W ≈ W̃ = {w_1, …, w_s},   (15)
where s is the number of possible realizations of the uncertainty that are considered in the tree. The considered realizations mean that each node branches s times, which results in s^k nodes at stage k. Using a scenario tree formulation, an approximation
FIGURE 1 Scenario tree representation for s = 2: X̃(0) = {x_1(0)}; X̃(1) = {x_1(1), x_2(1)}; X̃(2) = {x_1(2), …, x_4(2)}.
of the reachable set can be obtained as the convex hull of the set of all the nodes at a given stage, i.e.:

X(k) ≈ Conv(X̃(k)) = Conv( ⋃_{i=1}^{s^k} x_i(k) ),   (16)
where Conv(⋅) denotes the convex hull of a set and x_i(k) denotes the i-th node of the tree at stage k, as depicted in Figure 1. In the linear case with polytopic uncertainty, including the extreme values of the uncertainty in W̃ guarantees an exact representation of the actual reachable set. In the nonlinear case considered in this paper it is only an approximation, and therefore we focus on the point-wise approximation. Following the same notation, the bounding operator used to propagate the point-wise uncertainty description can be denoted as:

X̃(k + 1) = f̃^⋄(X̃(k), μ(X̃(k)), W̃) ≈ ⋃_{i=1}^{s^k} ⋃_{l=1}^{s} f(x_i(k), μ(x_i(k)), w_l).   (17)
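The point-wise propagation of the scenario-tree nodes in (17) amounts to mapping every node through the dynamics once per discrete uncertainty realization. A sketch with a toy scalar system (all names and numbers hypothetical):

```python
def propagate_nodes(f, nodes, inputs, w_set):
    """Point-wise propagation of scenario-tree nodes, cf. eq. (17):
    every node branches once per discrete uncertainty realization."""
    return [f(x, u, w) for x, u in zip(nodes, inputs) for w in w_set]

# Toy scalar system with s = 2 discrete uncertainty values.
f = lambda x, u, w: x + u + w
w_set = [-0.1, 0.1]
stage0 = [0.0]                                   # root node x_1(0)
stage1 = propagate_nodes(f, stage0, [0.5], w_set)
stage2 = propagate_nodes(f, stage1, [0.0] * len(stage1), w_set)
```

With s = 2 realizations, one node at the root grows to 2 nodes after one stage and 4 nodes after two stages, matching the s^k growth discussed above.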
The cost function is usually chosen as a weighted sum of the stage cost for each node in the scenario tree:

J(x_1(0), W̃; N_p, μ) = ∑_{k=0}^{N_p−1} ∑_{i=1}^{s^k} ω_i ℓ(x_i(k), μ(x_i(k))) + ∑_{i=1}^{s^{N_p}} ω_i V_f(x_i(N_p)),   (18)

where ω_i are the node weights and ℓ(⋅, ⋅) and V_f(⋅) denote the stage and terminal costs.
Introducing (18) and (17) in the ideal formulation of robust NMPC ℙ_ideal, we obtain the optimization problem that should be solved at each sampling time:

minimize_{μ(⋅)}   ∑_{k=0}^{N_p−1} ∑_{i=1}^{s^k} ω_i ℓ(x_i(k), μ(x_i(k))) + ∑_{i=1}^{s^{N_p}} ω_i V_f(x_i(N_p)),   (19a)

ℙ_ms:  subject to   X̃(k + 1) = ⋃_{i=1}^{s^k} ⋃_{l=1}^{s} f(x_i(k), μ(x_i(k)), w_l),   for k = 0, …, N_p − 1,   (19b)

g_j(X̃(k), μ(X̃(k)), W̃) ≤ 0,   j = 1, …, n_g,   for k = 0, …, N_p − 1,   (19c)

X̃(N_p) ⊆ X_f,   (19d)

X̃(0) = {x̂(0)},   (19e)

where the constraints (19c) denote that g_j(x, μ(x), w) ≤ 0 should be satisfied for all x ∈ X̃(k) and for all w ∈ W̃. The optimal solution of (19) is denoted as the multi-stage NMPC feedback policy κ_ms.
To avoid the exponential growth of the tree with the prediction horizon, a usual additional simplifying assumption is to consider that the tree branches only up to a given stage (usually called the robust horizon). While this simplification introduces further errors in the approximation of the reachable sets at each stage, it achieves good results in practice [10]. The current estimation error as well as the presence of future estimation errors should also be included in the problem formulation to achieve stability and recursive feasibility guarantees. This can be done in a multi-stage framework as shown in [55], but additional uncertainties should be included in the scenario tree. To mitigate the exponential growth of the scenario tree with the number of considered uncertainties, we do not consider the estimation error directly in the formulation of the tree. Following ideas from tube-based MPC, these additional errors will be taken into account by means of constraint tightening, as explained in Section 5.
DEEP LEARNING-BASED APPROXIMATE ROBUST NMPC
Despite recent advances in algorithms and hardware, solving the simplified output-feedback robust NMPC problem defined in (19) in real time can be challenging. To avoid the need for the real-time solution of non-convex optimization problems, this work considers the data-based approximation of the implicit feedback law defined by (19), following the same ideas as explicit model predictive control. Approximating an NMPC controller with a neural network was already proposed by [21] back in 1995, where the use of shallow networks (with only one hidden layer) was suggested. This suggestion is based on the universal approximation theorem, which shows that a neural network with only one hidden layer can approximate any function with any desired accuracy under mild conditions [56].
Deep neural networks
The function approximators chosen for this work are deep neural networks (DNNs). This is motivated by recent theoretical results that support the increased representation power of neural networks with several hidden layers as opposed to classical shallow networks [24]. For the approximation of MPC laws via deep neural networks, good results were obtained in [23,40,39,22] among other recent works. In the case of linear time-invariant systems, it was shown in [22] that a deep neural network with a given size can exactly represent the explicit MPC law. The robust NMPC problem (19) is a parametric optimization problem that depends on the current (estimated) state and on the uncertainty values used to define the scenario tree. To perform a deep learning-based approximation, a finite number N_s of samples x^(i) of the state space are chosen and then N_s different optimization problems are solved to obtain the corresponding optimal inputs κ_ms(x^(i)).
A standard deep feed-forward neural network with fully connected layers is defined as a sequence of layers which determines a function f : ℝ^{n_x} → ℝ^{n_u} of the form

f(x; θ) = β_{L+1} ∘ α_L ∘ β_L ∘ … ∘ α_1 ∘ β_1(x),   for L ≥ 2,
f(x; θ) = β_{L+1} ∘ α_1 ∘ β_1(x),   for L = 1,   (20)
where the input of the network is x ∈ ℝ^{n_x} and the output of the network is u ∈ ℝ^{n_u}. The dimensions of the network are defined by the number of hidden layers L and the number M of neurons per hidden layer, also denoted as the width of the hidden layer, when equal width for all hidden layers is assumed. In contrast to shallow neural networks with L = 1 hidden layer, deep neural networks have L ≥ 2 hidden layers. The complexity of a neural network can be defined either by the number of weights
n_weights = M(n_x + 1) + (L − 1)(M + 1)M + n_u(M + 1),   (21)
or the number of neurons
n_neurons = L ⋅ M,   (22)
that form a given network. The number of weights defines the memory that is needed to store a neural network, while the number of neurons determines the maximum possible number of nonlinear functions present in the approximation. Each hidden layer is composed of a nonlinear activation function α_l applied to a preceding affine function:
β_l(z_{l−1}) = W_l z_{l−1} + b_l,   (23)
where z_{l−1} ∈ ℝ^M is the output of the previous layer and z_0 = x. Common choices for the nonlinear activation function α_l are rectified linear units (ReLU), the sigmoid function or the hyperbolic tangent (tanh):
α_l(z) = (e^z − e^{−z}) / (e^z + e^{−z}),   (24)
which will be used throughout this work. The parameters of all layers are summarized in θ = {θ_1, …, θ_{L+1}} with

θ_l = {W_l, b_l}   ∀ l = 1, …, L + 1,   (25)
where W_l are the weights and b_l are the biases describing the corresponding affine functions. The best data-based approximation of the exact multi-stage NMPC (19) with a neural network for a given training data set D = {(x^(1), κ_ms(x^(1))), …, (x^(N_s), κ_ms(x^(N_s)))} with N_s elements and fixed dimensions L and M is achieved for:

θ* = arg min_θ (1/N_s) ∑_{i=1}^{N_s} ( κ_ms(x^(i)) − f(x^(i); θ) )².   (26)
The resulting deep learning-based controller is denoted as κ_dnn(x) = f(x; θ*).
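A forward pass of the network defined by (20), (23) and (24) is straightforward to implement. The following sketch uses hypothetical names and random weights in place of trained ones:

```python
import numpy as np

def dnn_forward(x, weights, biases):
    """Forward pass of the fully connected network (20): affine layers (23)
    with tanh activations (24) on all hidden layers, linear output layer."""
    z = np.asarray(x, float)
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.tanh(W @ z + b)           # hidden layer: alpha_l(beta_l(z))
    return weights[-1] @ z + biases[-1]  # output layer beta_{L+1}

# Tiny network: n_x = 3 inputs, L = 2 hidden layers of width M = 4, n_u = 1.
rng = np.random.default_rng(0)
dims = [3, 4, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]
u = dnn_forward(np.ones(3), weights, biases)
```

For these dimensions the parameter count matches formula (21): 4·(3+1) + 1·(4+1)·4 + 1·(4+1) = 41 weights and biases in total.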
Constraint tightening
We propose to use a robust NMPC scheme to explicitly take into account the most important uncertainties that affect the system. Still, it is virtually impossible to account for all possible uncertainties and to obtain exact state estimates, which, in comparison to the ideal robust NMPC feedback law κ_ideal, results in two sources of error:

‖κ_ideal(x(k)) − κ_ms(x̂(k))‖ ≤ ε_est + ε_ms,   (27)
where ε_est is the estimation and measurement error and ε_ms is the error caused by the approximation of the reachable set by a set of discrete scenarios. Because solving the multi-stage NMPC problem (19) online is challenging, our goal is to determine a candidate neural network controller by generating input-output data pairs via the solution of the multi-stage NMPC problem (19) and approximating its solution via a deep neural network solving (26). This means that the closed loop will be controlled using the feedback law κ_dnn that approximates the behavior of κ_ms, which introduces an error ε_approx on top of those described in (27):
|| ideal ( ( )) − dnn (̂ ( ))|| = || ideal ( ( )) − ms (̂ ( )) + ms (̂ ( )) − dnn (̂ ( ))|| ≤ || ideal ( ( )) − ms (̂ ( ))|| + || ms (̂ ( )) − dnn (̂ ( ))|| ≤ est + ms + approx .
Finding upper-bounds for each one of the errors to apply traditional robust NMPC schemes is not possible for the general nonlinear case.
To counteract the possible errors e_est, e_ms, and e_approx, and following ideas from tube-based MPC, an additional backoff b is used to tighten the original constraints of the robust NMPC problem that is solved to generate input-output data for training:

ℙ_ms,b :  minimize_{κ(·)}  Σ_{k=0}^{N_p−1} Σ_{x_j ∈ X(k)} ℓ(x_j(k), κ(x_j(k))) + Σ_{x_j ∈ X(N_p)} V_f(x_j(N_p)), (29a)
subject to  X(k+1) = ⋃_{x_j ∈ X(k)} ⋃_{i=1}^{n_d} f(x_j(k), κ(x_j(k)), d_i),  for k = 0, …, N_p − 1, (29b)
g(x_j(k), κ(x_j(k)), d_i) ≤ −b,  i = 1, …, n_d,  for k = 0, …, N_p − 1, (29c)
X(0) = {x̂(0)}. (29d)

Solving (29) online would lead to the feedback controller κ_ms(x̂, b). We are however interested in the proposed approximate robust NMPC κ_dnn(x̂, b), which is obtained by training a deep neural network via (26) on input-output data generated by solving (29) for many different initial conditions. Introducing a backoff does not in general guarantee that the closed loop satisfies the constraints. For this reason, closed-loop constraint satisfaction is also not ensured a priori with a terminal set. The probabilistic design scheme presented in the previous sections is employed to select the backoff parameter b. The proposed methodology provides probabilistic guarantees on the performance indicators of the closed-loop uncertain system.
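The offline data-generation step can be sketched as a simple loop over sampled states. Here `solve_multistage_nmpc` and `sample_state` are hypothetical placeholders for an NLP solver of problem (29) and for the state-sampling strategy; neither is part of any real library API.

```python
import numpy as np

def generate_training_data(solve_multistage_nmpc, sample_state, n_samples, backoff):
    """Collect pairs (x_i, u_ms(x_i)) by solving the backoff-tightened
    problem (29) at sampled states, for use in the regression (26)."""
    states, inputs = [], []
    for _ in range(n_samples):
        x = sample_state()                       # e.g. from closed-loop runs
        u = solve_multistage_nmpc(x, backoff)    # first input of problem (29)
        states.append(x)
        inputs.append(u)
    return np.array(states), np.array(inputs)
```

The choice of `sample_state` (uniform over the feasible set versus states visited by optimal closed-loop trajectories) is discussed in Section 7.1.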
CASE STUDY
We investigate the optimal control of a kite which is used to tow a boat. The stable and safe operation of the kite is challenging due to the highly nonlinear system dynamics, uncertain parameters, strong influence of disturbances like wind speed, and noisy measurements. To develop optimal control schemes for a kite system, typically models of moderate complexity such as [57, 31] are considered because of the required short sampling times. Although a high-fidelity model could also be considered in our proposed strategy, since the majority of the computational load is shifted offline, we consider a popular three-state model as presented in [58] to facilitate the comparison of the results with previous works. We derive an approximate deep learning-based controller from a robust NMPC formulation, which enables a very fast and easy evaluation of the controller even on computationally limited hardware. The idea of learning a controller for a kite has already been exploited in [59], where polynomial basis functions were used to approximate the behavior of a human pilot based on measurements.
Kite model
In the context of NMPC, we focus on the model presented in [58], which consists of three states, one control input and two uncertain parameters. The state evolution is given by the ordinary differential equations of the three angles θ_kite, φ_kite and ψ_kite of the spherical coordinate system describing the position of the kite:

θ̇_kite = (v_a / L_T) (cos ψ_kite − tan θ_kite / E), (30a)
φ̇_kite = −(v_a / (L_T sin θ_kite)) sin ψ_kite, (30b)
ψ̇_kite = (v_a / L_T) ũ + φ̇_kite cos θ_kite, (30c)

where v_a = v_0 E cos θ_kite, (30d)

E = E_0 − c ũ². (30e)

The angle between wind and tether (zenith angle) is described by θ_kite, the angle between the vertical and the tether plane is denoted by φ_kite, and ψ_kite represents the orientation of the kite. The three states can be manipulated via the steering deflection ũ. The area of the kite is denoted by A, and L_T is the length of the tether. The effect of the wind is captured by the apparent wind speed v_a, which is strongly influenced by the wind speed v_0, the first uncertain parameter. The glide ratio E depends on the base glide ratio E_0, the second uncertain parameter, and on the magnitude of the steering deflection ũ [60]. The parameters of the kite model are shown in the upper part of Table 1.
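A minimal simulation sketch of the three-state model, assuming the Erhard-Strauch-style form that the equations (30) correspond to. The tether length `L_T = 400.0` m is a hypothetical value (it is not stated in this excerpt); `c = 0.028` is the glide-ratio coefficient from Table 1.

```python
import numpy as np

def kite_rhs(state, u_tilde, v0, E0, L_T=400.0, c=0.028):
    """Right-hand side of the three-state kite model (30).
    L_T is a hypothetical tether length; c is taken from Table 1."""
    theta, phi, psi = state
    E = E0 - c * u_tilde**2                 # glide ratio, eq. (30e)
    v_a = v0 * E * np.cos(theta)            # apparent wind speed, eq. (30d)
    theta_dot = v_a / L_T * (np.cos(psi) - np.tan(theta) / E)
    phi_dot = -v_a / (L_T * np.sin(theta)) * np.sin(psi)
    psi_dot = v_a / L_T * u_tilde + phi_dot * np.cos(theta)
    return np.array([theta_dot, phi_dot, psi_dot])

def euler_step(state, u_tilde, v0, E0, dt=0.15):
    """One explicit Euler step with the controller sampling time of 0.15 s."""
    return state + dt * kite_rhs(state, u_tilde, v0, E0)
```

A higher-order integrator (e.g. RK4) would normally be preferred; the explicit Euler step only illustrates how the dynamics enter the closed-loop simulations.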
Wind model
The wind speed v_0 is considered as a single uncertainty in (29), but its realizations are computed from a simulation model. The underlying wind model was presented in [61] and is described by:
0 = m +̄ N + , (31a) where = m ,(31b)N = − ∕(2 m ),(31c)F = ∕ m ,(31d)F = √ 1.49 F ∕ ,(31e)= ∕ F ,(31f)= − ∕ F + tb ,(31g)
when the wind shear is neglected. The term v_m gives the current average wind speed, v_tb is a white-noise process introduced to model the short-term turbulence, and v_t(0) = normal(0, 0.25) is the initial state of the turbulence, where normal(μ, σ) denotes a normal distribution with mean μ and standard deviation σ. In a similar manner, unif(a, b) denotes a uniform distribution between a and b. An overview of the parameters of the wind model is given in the lower part of Table 1. For further details on the modeling assumptions and the choice of parameters the reader is referred to [61].
Extended Kalman Filter
We assume that we can measure the two angles θ_kite and φ_kite and the wind speed v_0. An Extended Kalman Filter (EKF) is used to obtain an estimate of the augmented state x_aug = [θ_kite, φ_kite, ψ_kite, E_0, v_0]^T at each control instant from the measurements:

y(x_aug) = [θ_kite + w_θ, φ_kite + w_φ, v_0 + w_v]^T, (32)

with the zero-mean Gaussian noises w_θ = normal(0, 0.01), w_φ = normal(0, 0.01) and w_v = normal(0, 0.05). The augmented state is initialized for all simulations as

x_aug(0) = [θ_kite(0) · γ_θ, φ_kite(0) · γ_φ, ψ_kite(0) · γ_ψ, E_0 · γ_E, v_0(0) · γ_v]^T.
Objective, constraints and control settings
The goal of the control is to maximize the thrust of the tether defined by:
T_F = (1/2) ρ A v_0² cos² θ_kite (E + 1) √(E² + 1) · (cos θ_kite cos β + sin θ_kite sin β sin φ_kite), (33)
while maintaining a smooth control performance and satisfying the constraints. The desired behaviour is enforced in the stage cost:
ℓ(x, u) = −w_F T_F + w_u (ũ − ũ_prev)², (34)

where w_F = 1 × 10⁻⁴ and w_u = 0.5 are weights and ũ_prev is the previous control input. The sampling time of the controller is t_c = 0.15 s with a prediction horizon of N_p = 40 steps.
Throughout the operation of the kite it has to be ensured that the height of the kite:
h(x) = L_T sin θ_kite cos φ_kite, (35)
never falls below h_min = 100 m. The height constraint is a critical constraint of the control task, since the best performance is obtained when the kite is operated close to h_min. Because of the error e_ms caused by the approximation of the reachable sets in the multi-stage NMPC formulation, the error e_approx due to the deep learning-based approximation, as well as the estimation and measurement errors e_est, constraint satisfaction cannot be guaranteed. To cover the effect of these errors, the backoff parameter b > 0 m is introduced and the height constraint:

h(x) > h_min + b, (36)
is formulated as a soft constraint to avoid numerical problems.
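The height map (35) and a soft version of the backoff-tightened constraint (36) can be written directly. The tether length `L_T` and the penalty weight are hypothetical values for illustration; only `h_min = 100 m` is given in the text.

```python
import numpy as np

L_T = 400.0    # tether length in m (hypothetical, not given in the text)
H_MIN = 100.0  # minimum height h_min from the text

def kite_height(theta, phi, tether_length=L_T):
    """Eq. (35): h = L_T * sin(theta) * cos(phi)."""
    return tether_length * np.sin(theta) * np.cos(phi)

def height_slack_penalty(theta, phi, backoff, weight=1e3):
    """Quadratic penalty on violations of the tightened constraint (36),
    h > h_min + b, as used in a soft-constraint formulation."""
    slack = max(0.0, (H_MIN + backoff) - kite_height(theta, phi))
    return weight * slack**2
```

A soft constraint adds this penalty to the objective instead of enforcing (36) as a hard bound, which avoids infeasibility of the NLP when the errors push the kite below h_min + b.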
To build a multi-stage NMPC controller, we consider the combinations of the extreme values of the base glide ratio E_0 ∈ [4, 6] and the wind speed v_0 ∈ [6 m s⁻¹, 10 m s⁻¹] and a robust horizon of one step, resulting in a total of four scenarios. The interval for the wind speed is obtained by summarizing the possible effects of the uncertain wind model parameters v_m, v_t(0) and v_tb into the single uncertain variable v_0.
Simulation
For the simulation of the system, it is assumed that the uncertain parameters E_0 and v_m are constant over a given closed-loop simulation and that v_tb changes every t_c = 0.15 s. The values of the uncertain parameters are drawn from the probability distributions described in Table 1.
RESULTS
The proposed method for the probabilistic verification of controllers is analyzed for the towing kite case study. The baseline controller for our investigations, which is also used for the training data generation for the corresponding approximate neural network controller κ_dnn,b, is the exact multi-stage NMPC controller κ_ms(x̂, b) (29), which derives its initial state estimate x̂ from the EKF based on the current measurement (32). This means that the baseline controller is affected by the estimation error e_est and by the error e_ms caused by the discrete representation of the uncertainties in the scenario tree, and hence no formal guarantees on constraint satisfaction can be given. To avoid numerical problems for the solver in case of violations, the critical height constraint is implemented as a soft constraint.
Learning an approximate output-feedback robust NMPC controller
The training process of a neural network is determined by the quality of the data and by hyperparameters such as the activation function of the hidden layers and the network size (21), (22). In the following, we discuss how the training data can be generated in a way that reduces the number of samples needed to achieve a satisfactory approximation in comparison to random sampling. For the training of the neural networks we used the toolbox Keras [62] with the backend TensorFlow [63] and Adam [64] as the optimization algorithm. The weights were initialized based on the Glorot uniform distribution [65] and the biases were set to zero. All considered networks use the hyperbolic tangent (tanh) as activation function in the hidden layers and a linear output layer. As the focus of this work is the verification of a given approximate controller and not the training process or the choice of the optimal network architecture, we refrained from applying methods such as Bayesian optimization to obtain an optimal structure of the underlying network [66]. We consider two training data sets D_feas and D_opt, and two validation data sets V_feas and V_opt. Each data set contains samples (x^(i), u_ms(x^(i))) corresponding to the numerical solution of the multi-stage problem (29) at state x^(i). The subscript opt indicates
that the data was derived from optimal closed-loop trajectories, e.g. D_opt = {(x^(1), u_ms(x^(1))), …, (x^(N_sim · n_traj), u_ms(x^(N_sim · n_traj)))} is composed of n_traj state-feedback closed-loop simulations of length N_sim using the exact multi-stage NMPC (29) under the dynamics presented in (30), where the uncertain parameters of the model and the initial conditions are drawn according to the distributions given in Table 1 and the first row of Table 2, respectively. The subscript feas means that the data was obtained at randomly sampled states, e.g. D_feas = {(x^(1), u_ms(x^(1))), …, (x^(N_s), u_ms(x^(N_s)))} is obtained by sampling x^(i) uniformly from the feasible state space and solving (29). Since the training data is generated based on simulations, the application of output feedback via the EKF is not necessary and is not used for the data generation. Each trajectory consists of N_sim = 400 simulation steps, which results in a total simulation time of t_sim = N_sim · t_c = 60 s. For D_opt, n_traj = 200 closed-loop runs were simulated, leading to n_traj · N_sim = 80000 data pairs, and for the validation n_traj = 50 simulations were rolled out, resulting in n_traj · N_sim = 20000 samples in V_opt. For the data sets D_feas and V_feas, N_s = 80000 and N_s = 20000 random samples were drawn, respectively.
For the following investigations, we trained five deep networks with L = 6 layers and n = 30 neurons per layer on each training set and evaluated all five obtained networks on each validation set. By averaging the results over the five networks, the impact of the stochastic learning is reduced. Training a deep neural network with the data pairs D_opt leads to a significantly smaller average mean squared error (MSE) than training with D_feas, as Figure 2 shows, because the sampled space of optimal trajectories is smaller than the feasible space. To investigate the impact of the training data set on the actual performance, the networks are tested on the validation sets V_feas and V_opt. The networks trained on D_feas perform better when evaluated on the whole feasible space, with an average MSE of 0.0048 in comparison to the networks trained on D_opt with an average MSE of 0.2105. But when the networks are evaluated on the space of optimal closed-loop trajectories via V_opt, the networks trained on D_opt have a significantly smaller average MSE of 0.0087 than the networks trained on D_feas, with an average MSE of 0.1642. The fact that controllers trained on D_opt do not cover the whole feasible space is not critical, since the learning-based controller will only operate in the neighborhood of optimal trajectories, where a close approximation of the exact multi-stage NMPC is achieved. Additionally, the controllers are probabilistically validated, and this validation is completely independent of the data used for training. Our experience shows that extracting training data from closed-loop trajectories can significantly reduce the necessary number of training samples N_s and the dimensions L and n of the neural network needed to obtain a desired approximation error of the deep learning-based controller in the critical domain.
For all results presented in the remainder of the paper, we use deep neural networks with L = 6 and n = 30 which were trained on the space of optimal trajectories D_opt, due to the observed superior approximation quality in the crucial regions of the state space.
Verification of a deep learning-based embedded output-feedback robust NMPC
Because of the approximation errors, the measurement and estimation errors, as well as the errors derived from the multi-stage formulation, we refrain from a worst-case deterministic analysis and resort to the probabilistic verification scheme based on closed-loop trajectories presented in Section 3. We consider four possible values for the backoff hyper-parameter b, i.e. b ∈ {0 m, 2 m, 4 m, 6 m}. This leads to a family of n_c = 4 deep learning-based approximate controllers κ_dnn,b. Each of these controllers was obtained by training on a data set D_opt,b containing 80000 data pairs. The resulting controllers were analyzed for N i.i.d. scenarios ω^(i), i = 1, …, N, corresponding to closed-loop simulations under the dynamics presented in (30), where the uncertain parameters of the model and the initial conditions are drawn according to the distributions given in Table 1 and the first row of Table 2, respectively. Since the height constraint (36) is the most critical constraint, we define the performance indicator:
π(ω; N_sim, κ_dnn,b) = max_{k=0,…,N_sim} ( h_min − h(x(k, ω)) ), (37)
where x(k, ω) is the state trajectory at sampling time k caused by scenario ω using controller κ_dnn,b. The performance indicator (37) extracts the largest violation of the minimum height h_min if a violation occurs, or the closest value to h_min throughout one scenario. Each scenario has a duration of 60 s, which means N_sim = 400. To consider a controller probabilistically safe, we require that the probabilistic performance indicator satisfies:

Pr_W( π(ω; N_sim, κ_dnn,b) > 0 ) ≤ ε, (38)
with confidence 1 − δ for a randomly sampled scenario according to Pr_W. Following the notation of Theorem 1, the performance indicators corresponding to the backoff parameters {0 m, 2 m, 4 m, 6 m} are collected into the vectors {q_1, q_2, q_3, q_4}, respectively. We consider a value of the discarding parameter r = 4. That is, a controller is probabilistically validated if no more than 3 simulations violate the height constraint. For these specifications (ε = 0.02, δ = 1 × 10⁻⁶, and r = 4), N = 1388 samples are required (see (11)). The family of controllers was evaluated for 1388 i.i.d. scenarios ω^(i) and the results are summarized in Table 3. If no backoff is considered (b = 0 m), the exact multi-stage NMPC often operates at the constraint bound, which leads to small violations of the height constraint as e_ms and e_est are ignored. The corresponding approximate controller κ_dnn,0 is additionally affected by e_approx (28), which leads to violations of the height constraint in more than half of the scenarios when applied. Exemplary trajectories for the exact multi-stage NMPC and the approximate controller for one scenario are visualized in Figure 3a.
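The sample-size condition (11) is not reproduced in this excerpt, but the stated numbers can be checked directly against the exact binomial tail: with N = 1388 i.i.d. scenarios, a true violation probability of ε = 0.02 and a discarding parameter r = 4, the probability of observing at most r − 1 = 3 violating runs is far below δ = 1 × 10⁻⁶.

```python
from math import comb

def binomial_tail(n, eps, r):
    """P(at most r-1 violating scenarios out of n) when each scenario
    violates independently with probability eps."""
    return sum(comb(n, i) * eps**i * (1.0 - eps)**(n - i) for i in range(r))

# Certificate check for the paper's numbers: N = 1388, eps = 0.02, r = 4.
tail = binomial_tail(1388, 0.02, 4)
assert tail < 1e-6  # the (eps, delta) requirement is met
```

This only verifies that N = 1388 is sufficient for the stated (ε, δ, r); the exact expression (11) used to compute N may be a conservative analytic bound.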
By considering b = 2 m, the number of violations can be significantly reduced to 8 scenarios, which shows the importance of the backoff parameter. However, the performance of κ_dnn,2 is not considered probabilistically safe because, after discarding the allowed number of worst-case simulation runs, we get q̄(q_2, 4) = 0.273 m > 0 m. With larger backoffs b = 4 m and b = 6 m, we obtain two probabilistically safe controllers with performance indicator levels q̄(q_3, 4) = −0.316 m and q̄(q_4, 4) = −1.818 m, respectively. For the same scenario as in Figure 3a, the trajectories of the exact multi-stage NMPC with b = 4 m and of κ_dnn,4 are depicted in Figure 3b. Due to the backoff, the kite keeps a safety distance to the constraint bound and the impact of e_ms and e_est does not directly lead to constraint violations. The trajectory of the approximate controller also does not violate the constraint, despite being affected by the additional approximation error e_approx. The preferred deep learning-based controller is κ_dnn,4 due to the higher average tether thrust T_F provided. By introducing a performance indicator level for the average thrust per simulation run:
π_F(ω; N_sim, κ) = (1/N_sim) Σ_{k=0}^{N_sim−1} −T_F(k), (39)
it is possible to obtain probabilistic statements about the performance in the same fashion as for the violation of the height constraint. Using the parameters δ = 1 × 10⁻⁶, ε = 0.02, r = 4 and n_c = 4, we obtain for the controller κ_dnn,4 that, with confidence 1 − δ, the probability that the average thrust for a simulation run of 60 s duration is lower than 111.346 kN is not larger than ε = 0.02. A smaller number of samples is required if the discarding parameter r is set equal to 1. However, this leads to more conservative results, because violations of the height constraint occur throughout the closed-loop simulations used for verification. This is even worse when the performance index is a binary function determining whether the trajectories are admissible or not. In this case, the obtained results are often not informative, because in a binary setting with r = 1 a single violated trajectory out of N determines that the controller does not meet the probabilistic constraints. Larger values of r, along with the consideration of non-binary violation performance indexes, provide more informative results. One more advantage of the proposed probabilistic method is that a family of controllers can be evaluated in parallel in closed loop for the same set of sampled scenarios. This can reduce the verification effort significantly if drawing samples from W is costly or the closed-loop experiments have a long duration.
Robustness of the probabilistic validation scheme
All obtained probabilistic guarantees are only valid if the assumptions about the probability density functions (PDFs) from which the scenarios ω are drawn are correct. For the verification, the closed-loop simulations were generated using the dynamics presented in (30) and the different κ_dnn,b controllers. The uncertain parameters of the model and the initial conditions were drawn according to the distributions given in Table 1 and the first row of Table 2, respectively.
To test the robustness of the probabilistic statements w.r.t. wrong assumptions about the PDFs, the performance of the approximate controllers κ_dnn,b is evaluated using not the distribution of the first row of Table 2, but the second (normal distribution), the third (beta distribution) and the fourth one (Pareto distribution). The first parameter in the description of the beta distribution is the scaling and the second parameter is the offset, e.g. 2.0 · beta(2, 5) + 28.0. The long-tailed Pareto distribution is also described with two parameters, where the first one is the tail index and the second one is the scaling, e.g. pareto(5.0) + 28.0. The possible extreme values of samples from the space of beta distributions W_beta are identical to those obtained when sampling from the space of uniform distributions W, see Figure 4. When sampling from W_normal and W_pareto, which have infinite support, the occurrence of values in ω that exceed the bounds of the scenarios considered in the robust MPC formulation and the verification scenarios is likely, which highlights the importance of including the discarding parameter r. The four different considered PDFs, including the bounds applied in the NMPC formulation, are shown in Figure 4 for the example of the base glide ratio E_0.
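The alternative sampling spaces can be reproduced with NumPy's distribution methods. The scaling and offset follow the example values given in the text; the mean and standard deviation of the normal draws below are hypothetical placeholders, since they are not stated in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1388

# Scaled and offset draws following the text's examples
# ("2.0 * beta(2, 5) + 28.0" and "pareto(5.0) + 28.0").
beta_draws = 2.0 * rng.beta(2.0, 5.0, size=N) + 28.0
pareto_draws = rng.pareto(5.0, size=N) + 28.0
normal_draws = rng.normal(29.0, 0.5, size=N)  # hypothetical mean/std

# beta has bounded support, so its scaled samples stay inside [28, 30],
# matching the extremes of the uniform case; the long-tailed pareto (and
# the normal) can exceed the bounds assumed in the robust NMPC design.
assert beta_draws.min() >= 28.0 and beta_draws.max() <= 30.0
assert pareto_draws.min() >= 28.0
```

This bounded/unbounded distinction is exactly why the discarding parameter r matters: samples from the infinite-support distributions can fall outside the scenario tree considered in (29).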
The results corresponding to drawing 1388 scenarios from each of the distributions W_normal, W_beta and W_pareto, and evaluating the approximate controller κ_dnn,4, are given in Table 2. For W_normal and W_beta one simulation run violates the height constraint, while three simulation runs violate the constraints for W_pareto. This means that the probabilistic requirements for the safety certificate (ε = 0.02, δ = 1 × 10⁻⁶, r = 4, n_c = 4) hold for all alternative choices of distributions. This shows that neither the training of the network nor the verification approach fails catastrophically when the statistical assumptions are not exactly fulfilled.
Embedded implementation
One of the major advantages of learning the complex optimal control law via deep neural networks is the reduction of the computational load and the fast evaluation. The computation of the control input is reduced from solving an optimization problem to one matrix-vector multiplication per layer and the evaluation of the tanh function. This enables the implementation of a probabilistically validated, approximate robust nonlinear model predictive controller on limited hardware such as microcontrollers or field-programmable gate arrays (FPGAs). We deployed the approximate controller on a low-cost microcontroller (ARM Cortex-M3 32-bit) running at a frequency of 89 MHz with 96 kB RAM. The memory footprint of both the EKF and the neural network that describes the approximate robust NMPC is only 67.0 kB of the 512 kB flash memory. The average time needed to evaluate the neural network was 32.1 ms (max. evaluation time: 33.0 ms) and the average evaluation time for one EKF step was 28.3 ms (max. evaluation time: 30.0 ms), which shows that the proposed controller is real-time capable with a worst-case evaluation time of 63.0 ms. We analyzed the impact of the evaluation time on safety by simulating the kite for the same 1388 scenarios considered in Table 3, drawn from the uniform distribution, but applying the computed control inputs with a delay of t_delay = 65.0 ms, emulating a hardware-in-the-loop setting. We rounded the time delay up to 65.0 ms to account for possible time measurement errors. To deal with the additional error e_delay caused by t_delay, we chose b = 6 m. Out of the 1388 simulated scenarios, 1374 were free of violations. This means the controller violates the height constraint in 0.86% of the cases, which is less than the probabilistic guarantee ε = 0.02 chosen in Section 7.2, despite the additional errors induced by the delay.
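The flash footprint of the network can be estimated from its parameter count. The 5-dimensional input and scalar output below are hypothetical placeholders, so the resulting value (roughly 19 kB in float32) only illustrates that a network with L = 6 hidden layers of n = 30 neurons fits comfortably inside the reported 67 kB budget, which also includes the EKF.

```python
def network_memory_kb(n_in, n_neurons, n_layers, n_out, bytes_per_param=4):
    """Float32 storage of a fully connected network (weights and biases)."""
    dims = [n_in] + [n_neurons] * n_layers + [n_out]
    n_params = sum(i * o + o for i, o in zip(dims[:-1], dims[1:]))
    return n_params * bytes_per_param / 1024.0

# L = 6 hidden layers with n = 30 neurons as in Section 7.1;
# input and output dimensions are hypothetical.
footprint = network_memory_kb(5, 30, 6, 1)
```

Because the evaluation consists only of fixed-size matrix-vector products, both the memory footprint and the evaluation time are deterministic and can be computed before deployment.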
If the performance needs to be further improved for the hardware-in-the-loop setting, training data for the controller could be generated in which the deterministic delay is incorporated in the NMPC formulation. This is an additional advantage of the proposed approach, because the evaluation time of a given neural network is deterministic and can be known in advance. Additional measures to counteract the impact of delayed application of the control inputs, such as advanced-step NMPC [67] or the real-time iteration scheme [68], could also be incorporated in the scheme.
CONCLUSIONS AND FUTURE WORK
The computational complexity of output-feedback robust NMPC controllers is prohibitive in most cases. Instead of relying on strong assumptions on error bounds and invariant sets that cannot be verified in practice, we propose a probabilistic performance validation scheme that can be used to obtain probabilistic guarantees about the closed-loop performance of approximate robust NMPC controllers based on a tree of discrete scenarios. To enable the implementation of such controllers in real time even on limited embedded hardware, we use deep learning to approximate the proposed robust NMPC controller.

To deal with the errors related to estimation, to the computation of approximate reachable sets based on scenarios, and to the approximation of the resulting optimization problem with a neural network, we tighten the original constraints of the problem using a backoff parameter. The novel probabilistic validation framework leads to less restrictive results than previous approaches because of the incorporation of a discarding parameter and the consideration of non-binary performance indicators. Moreover, the required sample complexity does not depend on the dimension of the problem. The promising results for the embedded output-feedback robust NMPC of a towing kite show the potential of the proposed approach. Future work includes the definition of robust margins based on probabilistic validation techniques as well as the learning of controllers that are parameterized, for example, with a backoff parameter.
Definition 3 (Generalized max function). Given the vector q = [q^(1), q^(2), …, q^(N)]^T ∈ ℝ^N and the integer r with 1 ≤ r ≤ N, we denote q̄(q, r) = q_+^(r), where the vector q_+ = [q_+^(1), q_+^(2), …, q_+^(N)]^T ∈ ℝ^N is obtained by rearranging the values of the components of q in a nonincreasing order. That is, q_+^(1) ≥ q_+^(2) ≥ … ≥ q_+^(N).
Theorem 1. Given the controllers κ_j, j = 1, …, n_c, and the integer r ≥ 1, suppose that N i.i.d. scenarios ω^(i) = {x^(i)(0), x̂^(i)(0), d^(i)(0), …, d^(i)(N_sim − 1)}, i = 1, …, N, are generated. We denote by q_j, j = 1, …, n_c, the vector of performance indicators corresponding to the controller κ_j. That is, q_j = [π_j(ω^(1)), π_j(ω^(2)), …, π_j(ω^(N))]^T ∈ ℝ^N, j = 1, …, n_c.
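Definition 3 and its use in Theorem 1 amount to taking the r-th largest performance indicator, so that the r − 1 worst scenarios are discarded. A direct sketch:

```python
def generalized_max(q, r):
    """Definition 3: the r-th largest component of q (r = 1 is the plain max)."""
    q_plus = sorted(q, reverse=True)  # nonincreasing rearrangement q_+
    return q_plus[r - 1]

# With discarding parameter r = 4, the three worst indicators are ignored:
q = [0.3, -1.0, 2.5, 0.0, -0.5, 1.1]
assert generalized_max(q, 1) == 2.5  # ordinary maximum
assert generalized_max(q, 4) == 0.0  # fourth-largest value
```

A controller with indicator vector q is then declared probabilistically safe when q̄(q, r) ≤ 0, i.e. when at most r − 1 entries of q are positive.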
All multiplicative initialization noises γ_(·) are drawn from normal(1, 0.05). Neither the estimates of the uncertain parameters nor the measurement of the wind speed are used in the computations of the controller, because their possible values are considered in the robust NMPC approach. The initial covariance matrix is given by P_EKF = diag([1 × 10⁻², 1 × 10⁻², 1 × 10⁻², 1.0, 2 × 10⁻¹]), the estimate of the process noise by Q_EKF = diag([1 × 10⁻⁵, 1 × 10⁻⁵, 1 × 10⁻⁴, 1 × 10⁻⁵, 3 × 10⁻³]), the measurement noise matrix by R_EKF = diag([1 × 10⁻², 1 × 10⁻², 5 × 10⁻²]), and the observer has a sampling time of t_EKF = 0.05 s.
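A generic predict/update cycle of an EKF with finite-difference Jacobians, as a hedged sketch of the estimator described above: the actual filter uses the kite dynamics and the measurement function (32) with the covariances listed in the text, and its exact linearisation is not reproduced here.

```python
import numpy as np

def ekf_step(x_est, P, u, f, h, Q, R, y_meas, eps=1e-6):
    """One EKF predict/update cycle; f is the discrete-time dynamics,
    h the measurement function, Q/R process/measurement covariances."""
    n = x_est.size

    def jacobian(fun, x0):
        # Forward-difference Jacobian of fun at x0.
        f0 = np.atleast_1d(fun(x0))
        J = np.zeros((f0.size, n))
        for i in range(n):
            dx = np.zeros(n)
            dx[i] = eps
            J[:, i] = (np.atleast_1d(fun(x0 + dx)) - f0) / eps
        return J

    # Prediction step
    F = jacobian(lambda x: f(x, u), x_est)
    x_pred = np.atleast_1d(f(x_est, u))
    P_pred = F @ P @ F.T + Q
    # Update step
    H = jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (np.atleast_1d(y_meas) - np.atleast_1d(h(x_pred)))
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new
```

For the kite, `f` would be the discretized augmented dynamics (kite states plus E_0 and v_0) and `h` the measurement map (32).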
FIGURE 2 Mean squared error obtained when training a deep neural network using the space of optimal trajectories opt or the full feasible space feas as training data.
FIGURE 3 Comparison of the exact multi-stage NMPC and its deep learning-based approximation for one sample and two different choices of the backoff parameter b. With no backoff (b = 0 m) the kite is often operated at the bound, which leads to frequent constraint violations due to the estimation and uncertainty discretization errors e_est and e_ms. The constraint violations of the neural network controller κ_dnn,0 are more significant, as it is additionally affected by the approximation error e_approx (a). By introducing a backoff of b = 4 m the impact of the three error sources is mitigated, which enables the probabilistically safe operation of the kite with both controllers (b).
FIGURE 4 Four different distributions considered for the uncertain base glide ratio E_0, together with the bounds considered in the NMPC formulation. The normal and the Pareto distributions exceed the considered bounds.
TABLE 1 Overview of the model states and parameters and as which variable they are considered in (29).

Symbol | Type | Values / Constraints | Units

kite model
θ_kite | State | [0, π/2] | rad
φ_kite | State | [−π/2, π/2] | rad
ψ_kite | State | [0, 2π] | rad
ũ | Control input | [−10, 10] | N
c | Known parameter | 0.028 | -
β | Known parameter | 0 | rad
ρ | Known parameter | 1 | kg m⁻³
h_min | Known parameter | 100 | m
E_0 | Uncertain parameter | unif(4, 6) | -

wind model
v_t | State | - | - (enters (29) via v_0)
– | Known parameter | 0.14 | -
– | Known parameter | 100 | m
– | Known parameter | 0.15 | s
v_m | Uncertain parameter | unif(7, 9) | m s⁻¹
v_tb | Uncertain parameter | normal(0, 0.25) | -
TABLE 2 Overview of the parameter sampling via uniform, normal, beta(2, 5) and pareto(5) distributions, and results of evaluating the approximate controller κ_dnn,4 with b = 4 m for 1388 randomly drawn scenarios. The measurement noise w = [w_θ, w_φ, w_v]^T, the initial state of the turbulence v_t(0) = normal(0, 0.25), the white noise modeling the short-term turbulence v_tb = normal(0, 0.25) and the initialization of the estimation vector x_aug(0) are identical for all scenario spaces.

distribution | θ_kite(0) [°] | φ_kite(0) [°] | ψ_kite(0) [°] | E_0 [-] | v_m [m s⁻¹]
TABLE 3 Comparison of the members of a deep learning-based family of controllers defined by n_c = 4 different choices of the backoff parameter b ∈ {0 m, 2 m, 4 m, 6 m}. The parameters for the probabilistic safety certificate were chosen as ε = 0.02 and δ = 1 × 10⁻⁶. The necessary number of samples for 3 discarded worst-case runs (r = 4) is N = 1388, computed via (11).

controller | κ_dnn,0 | κ_dnn,2 | κ_dnn,4 | κ_dnn,6
feasible trajectories | 660/1388 | 1380/1388 | 1385/1388 | 1387/1388
q̄(q_j, 4) [m] | 1.682 | 0.273 | −0.316 | −1.818
T_F (avg.) [kN] | 227.516 | 225.997 | 224.185 | 222.179
probabilistically safe | No | No | Yes | Yes
ACKNOWLEDGMENTS

Financial disclosure: None reported.

Conflict of interest: The authors declare no potential conflict of interests.
P. J. Campo and M. Morari, "Robust model predictive control," in Proc. of the American Control Conference, 1987, pp. 1021-1026.
J. H. Lee and Z. H. Yu, "Worst-case formulations of model predictive control for systems with bounded parameters," Automatica, vol. 33, no. 5, pp. 763-781, 1997.
D. Q. Mayne, M. M. Seron, and S. V. Rakovic, "Robust model predictive control of constrained linear systems with bounded disturbances," Automatica, vol. 41, pp. 219-224, 2005.
S. Rakovic, B. Kouvaritakis, M. Cannon, C. Panos, and R. Findeisen, "Fully parameterized tube MPC," in Proc. of the 18th IFAC World Congress Milano, 2011, pp. 197-202.
J. Fleming, B. Kouvaritakis, and M. Cannon, "Robust tube MPC for linear systems with multiplicative uncertainty," IEEE Transactions on Automatic Control, vol. 60, no. 4, pp. 1087-1092, 2015.
P. Scokaert and D. Mayne, "Min-max feedback model predictive control for constrained linear systems," IEEE Transactions on Automatic Control, vol. 43, no. 8, pp. 1136-1142, 1998.
D. Muñoz de la Peña, A. Bemporad, and T. Alamo, "Stochastic programming applied to model predictive control," in Proc. of the 44th IEEE Conference on Decision and Control, 2005, pp. 1361-1366.
D. Bernardini and A. Bemporad, "Scenario-based model predictive control of stochastic constrained linear systems," in Proc. of the 48th IEEE Conference on Decision and Control, 2009, pp. 6333-6338.
S. Lucia, T. Finkler, and S. Engell, "Multi-stage nonlinear model predictive control applied to a semi-batch polymerization reactor under uncertainty," Journal of Process Control, vol. 23, pp. 1306-1319, 2013.
S. Lucia, J. Andersson, H. Brandt, M. Diehl, and S. Engell, "Handling uncertainty in economic nonlinear model predictive control: a comparative case-study," Journal of Process Control, vol. 24, pp. 1247-1259, 2014.
P. Goulart, E. C. Kerrigan, and J. M. Maciejowski, "Optimization over state feedback policies for robust control with constraints," Automatica, vol. 42, pp. 523-533, 2006.
S. Lucia, Robust Multi-stage Nonlinear Model Predictive Control. Shaker, 2014.
S. Lucia, S. Subramanian, D. Limon, and S. Engell, "Stability properties of multi-stage nonlinear model predictive control," Systems & Control Letters, vol. 143, p. 104743, 2020.
B. Houska, H. Ferreau, and M. Diehl, "An auto-generated real-time iteration algorithm for nonlinear MPC in the microsecond range," Automatica, vol. 47, pp. 2279-2285, 2011.
J. Mattingley and S. Boyd, "CVXGEN: A code generator for embedded convex optimization," Optimization and Engineering, vol. 13, no. 1, pp. 1-27, 2012.
P. Zometa, M. Kögel, and R. Findeisen, "muAO-MPC: A free code generation tool for embedded real-time linear model predictive control," in Proc. of the American Control Conference, 2013, pp. 5320-5325.
Optimized FPGA implementation of model predictive control using high level synthesis tools. S Lucia, D Navarro, O Lucia, P Zometa, R Findeisen, IEEE Transactions on Industrial Informatics. 141S. Lucia, D. Navarro, O. Lucia, P. Zometa, and R. Findeisen, "Optimized FPGA implementation of model predictive control using high level synthesis tools," IEEE Transactions on Industrial Informatics, vol. 14, no. 1, pp. 137-145, 2018.
Approximate explicit receding horizon control of constrained nonlinear systems. T A Johansen, Automatica. 402T. A. Johansen, "Approximate explicit receding horizon control of constrained nonlinear systems," Automatica, vol. 40, no. 2, pp. 293-300, 2004.
On multi-parametric nonlinear programming and explicit nonlinear model predictive control. IEEE Conference on Decision and Control. 3--, "On multi-parametric nonlinear programming and explicit nonlinear model predictive control," in IEEE Conference on Decision and Control, vol. 3, 2002, pp. 2768-2773.
The explicit linear quadratic regulator for constrained systems. A Bemporad, M Morari, V Dua, E N Pistikopoulos, Automatica. 381A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, no. 1, pp. 3 -20, 2002.
A receding-horizon regulator for nonlinear systems and a neural approximation. T Parisini, R Zoppoli, Automatica. 3110T. Parisini and R. Zoppoli, "A receding-horizon regulator for nonlinear systems and a neural approximation," Automatica, vol. 31, no. 10, pp. 1443-1451, 1995.
Efficient representation and approximation of model predictive control laws via deep learning. B Karg, S Lucia, IEEE Transactions on Cybernetics. 509B. Karg and S. Lucia, "Efficient representation and approximation of model predictive control laws via deep learning," IEEE Transactions on Cybernetics, vol. 50, no. 9, pp. 3866-3878, 2020.
Approximating explicit model predictive control using constrained neural networks. S Chen, K Saulnier, N Atanasov, Proc. of the American Control Conference. of the American Control ConferenceS. Chen, K. Saulnier, and N. Atanasov, "Approximating explicit model predictive control using constrained neural networks," in Proc. of the American Control Conference, 2018, pp. 1520-1527.
Depth-width tradeoffs in approximating natural functions with neural networks. I Safran, O Shamir, Proceedings of the 34th International Conference on Machine Learning. the 34th International Conference on Machine LearningI. Safran and O. Shamir, "Depth-width tradeoffs in approximating natural functions with neural networks," in Proceedings of the 34th International Conference on Machine Learning, 2017, pp. 2979-2987.
A Theory of Learning and Generalization. M Vidyasagar, SpringerLondonM. Vidyasagar, A Theory of Learning and Generalization. London: Springer, 1997.
R Tempo, G Calafiore, F Dabbene, Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications. LondonSpringer-Verlag2nd edR. Tempo, G. Calafiore, and F. Dabbene, Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications, 2nd ed. London: Springer-Verlag, 2013.
A scenario approach for non-convex control design. S Grammatico, X Zhang, K Margellos, P Goulart, J Lygeros, IEEE Transactions on Automatic Control. 612S. Grammatico, X. Zhang, K. Margellos, P. Goulart, and J. Lygeros, "A scenario approach for non-convex control design," IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 334-345, 2016.
Stochastic MPC with offline uncertainty sampling. M Lorenzen, F Dabbene, R Tempo, F Allgöwer, Automatica. 81M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, "Stochastic MPC with offline uncertainty sampling," Automatica, vol. 81, pp. 176-183, 2017.
An offlinesampling SMPC framework with application to autonomous space maneuvers. M Mammarella, M Lorenzen, E Capello, H Park, F Dabbene, G Guglieri, M Romano, F Allgöwer, IEEE Transactions on Control Systems Technology. M. Mammarella, M. Lorenzen, E. Capello, H. Park, F. Dabbene, G. Guglieri, M. Romano, and F. Allgöwer, "An offline- sampling SMPC framework with application to autonomous space maneuvers," IEEE Transactions on Control Systems Technology, pp. 1-15, 2018.
The scenario approach to robust control design. G Calafiore, M Campi, IEEE Transactions on Automatic Control. 515G. Calafiore and M. Campi, "The scenario approach to robust control design," IEEE Transactions on Automatic Control, vol. 51, no. 5, pp. 742-753, 2006.
Control of power kites for naval propulsion. L Fagiano, M Milanese, V Razza, I Gerlero, Proc. of the American Control Conference. of the American Control ConferenceL. Fagiano, M. Milanese, V. Razza, and I. Gerlero, "Control of power kites for naval propulsion," in Proc. of the American Control Conference, 2010, pp. 4325-4330.
Trading performance for state constraint feasibility in stochastic constrained control: A randomized approach. L Deori, S Garatti, M Prandini, Journal of the Franklin Institute. 3541L. Deori, S. Garatti, and M. Prandini, "Trading performance for state constraint feasibility in stochastic constrained control: A randomized approach," Journal of the Franklin Institute, vol. 354, no. 1, pp. 501-529, 2017.
On the road between robust optimization and the scenario approach for chance constrained optimization problems. K Margellos, P Goulart, J Lygeros, IEEE Transactions on Automatic Control. 598K. Margellos, P. Goulart, and J. Lygeros, "On the road between robust optimization and the scenario approach for chance constrained optimization problems," IEEE Transactions on Automatic Control, vol. 59, no. 8, pp. 2258-2263, 2014.
Randomized strategies for probabilistic solutions of uncertain feasibility and optimization problems. T Alamo, R Tempo, E Camacho, IEEE Transactions on Automatic Control. 5411T. Alamo, R. Tempo, and E. Camacho, "Randomized strategies for probabilistic solutions of uncertain feasibility and optimization problems," IEEE Transactions on Automatic Control, vol. 54, no. 11, pp. 2545-2559, 2009.
Probabilistic robustness analysis: explicit bounds for the minimum number of samples. R Tempo, E Bai, F Dabbene, Systems & Control Letters. 30R. Tempo, E. Bai, and F. Dabbene, "Probabilistic robustness analysis: explicit bounds for the minimum number of samples," Systems & Control Letters, vol. 30, pp. 237-242, 1997.
Randomized methods for design of uncertain systems: sample complexity and sequential algorithms. T Alamo, R Tempo, A Luque, D Ramirez, Automatica. 52T. Alamo, R. Tempo, A. Luque, and D. Ramirez, "Randomized methods for design of uncertain systems: sample complexity and sequential algorithms," Automatica, vol. 52, pp. 160-172, 2015.
Feedback law with probabilistic certification for Propofol-based control of BIS during anesthesia. M Alamir, M Fiacchini, I Queinnec, S Tarbouriech, M Mazerolles, International Journal of Robust and Nonlinear Control. 2818M. Alamir, M. Fiacchini, I. Queinnec, S. Tarbouriech, and M. Mazerolles, "Feedback law with probabilistic certification for Propofol-based control of BIS during anesthesia," International Journal of Robust and Nonlinear Control, vol. 28, no. 18, pp. 6254-6266, 2018.
On probabilistic certification of combined cancer therapies using strongly uncertain models. M Alamir, Journal of Theoretical Biology. 384M. Alamir, "On probabilistic certification of combined cancer therapies using strongly uncertain models," Journal of Theoretical Biology, vol. 384, pp. 59-69, 2015.
Learning an approximate model predictive controller with guarantees. M Hertneck, J Köhler, S Trimpe, F Allgöwer, IEEE Control Systems Letters. 23M. Hertneck, J. Köhler, S. Trimpe, and F. Allgöwer, "Learning an approximate model predictive controller with guarantees," IEEE Control Systems Letters, vol. 2, no. 3, pp. 543-548, 2018.
Safe and near-optimal policy learning for model predictive control using primal-dual neural networks. X Zhang, M Bujarbaruah, F Borrelli, arXiv:1906.08257arXiv preprintX. Zhang, M. Bujarbaruah, and F. Borrelli, "Safe and near-optimal policy learning for model predictive control using primal-dual neural networks," arXiv preprint arXiv:1906.08257, 2019.
Learning-based approximation of robust nonlinear predictive control with state estimation applied to a towing kite. B Karg, S Lucia, Proc. of the European Control Conference. of the European Control ConferenceB. Karg and S. Lucia, "Learning-based approximation of robust nonlinear predictive control with state estimation applied to a towing kite," in Proc. of the European Control Conference, 2019, pp. 16-22.
A survey of computational complexity results in systems and control. V Blondel, J Tsitsiklis, Automatica. 369V. Blondel and J. Tsitsiklis, "A survey of computational complexity results in systems and control," Automatica, vol. 36, no. 9, pp. 1249-1274, 2000.
Probability inequalities for sums of bounded random variables. W Hoeffding, Journal of the American Statistical Association. 58301W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, no. 301, pp. 13-30, 1963.
Robust design through probabilistic maximization. T Alamo, J Manzano, E Camacho, Uncertainty in Complex Networked Systems. Honor of Roberto Tempo, T. BasarBirkhäuserT. Alamo, J. Manzano, and E. Camacho, "Robust design through probabilistic maximization," in Uncertainty in Complex Networked Systems. In Honor of Roberto Tempo, T. Basar, Ed. Birkhäuser, 2018, pp. 247-274.
An introduction to Order Statistics. M Ahsanullah, V Nevzorov, M Shakil, Atlantis PressParisM. Ahsanullah, V. Nevzorov, and M. Shakil, An introduction to Order Statistics. Paris: Atlantis Press, 2013.
A First Course in Order Statistics. B Arnold, N Balakrishnan, H Nagaraja, John Wiley and SonsNew YorkB. Arnold, N. Balakrishnan, and H. Nagaraja, A First Course in Order Statistics. New York: John Wiley and Sons, 1992.
Polynomial-time algorithms for probabilistic solutions of parameter-dependent linear matrix inequalities. Y Oishi, Automatica. 433Y. Oishi, "Polynomial-time algorithms for probabilistic solutions of parameter-dependent linear matrix inequalities," Automatica, vol. 43, no. 3, pp. 538-545, 2007.
Research on probabilistic methods for control system design. G C Calafiore, F Dabbene, R Tempo, Automatica. 477G. C. Calafiore, F. Dabbene, and R. Tempo, "Research on probabilistic methods for control system design," Automatica, vol. 47, no. 7, pp. 1279-1293, 2011.
Randomized methods for design of uncertain systems: Sample complexity and sequential algorithms. T Alamo, R Tempo, A Luque, D R Ramirez, Automatica. 52T. Alamo, R. Tempo, A. Luque, and D. R. Ramirez, "Randomized methods for design of uncertain systems: Sample complexity and sequential algorithms," Automatica, vol. 52, pp. 160-172, 2015.
On the sample complexity of randomized approaches to the analysis and design under uncertainty. T Alamo, R Tempo, A Luque, Proceedings of the American Control Conference. the American Control ConferenceBaltimore, USAT. Alamo, R. Tempo, and A. Luque, "On the sample complexity of randomized approaches to the analysis and design under uncertainty," in Proceedings of the American Control Conference, Baltimore, USA, 2010.
Model predictive control: Theory and design. J Rawlings, D Mayne, Nob Hill PubJ. Rawlings and D. Mayne, Model predictive control: Theory and design. Nob Hill Pub., 2009.
Reachability analysis of nonlinear differential-algebraic systems. M Althoff, B Krogh, IEEE Transactions on Automatic Control. 59M. Althoff and B. Krogh, "Reachability analysis of nonlinear differential-algebraic systems," IEEE Transactions on Automatic Control, vol. 59, pp. 371-383, 2014.
Convex/concave relaxations of parametric ODEs using Taylor models. A Sahlodin, B Chachuat, Computers & Chemical Engineering. 355A. Sahlodin and B. Chachuat, "Convex/concave relaxations of parametric ODEs using Taylor models," Computers & Chemical Engineering, vol. 35, no. 5, pp. 844 -857, 2011.
Stochastic model predictive control for linear systems using probabilistic reachable sets. L Hewing, M N Zeilinger, 2018 IEEE Conference on Decision and Control (CDC). L. Hewing and M. N. Zeilinger, "Stochastic model predictive control for linear systems using probabilistic reachable sets," in 2018 IEEE Conference on Decision and Control (CDC), 2018, pp. 5182-5188.
A synergistic approach to robust output feedback control: Tube-based multi-stage NMPC. S Subramanian, S Lucia, S Engell, IFAC-PapersOnLine. 5118S. Subramanian, S. Lucia, and S. Engell, "A synergistic approach to robust output feedback control: Tube-based multi-stage NMPC," IFAC-PapersOnLine, vol. 51, no. 18, pp. 500-505, 2018.
Universal approximation bounds for superpositions of a sigmoidal function. A R Barron, IEEE Transactions on Information theory. 393A. R. Barron, "Universal approximation bounds for superpositions of a sigmoidal function," IEEE Transactions on Information theory, vol. 39, no. 3, pp. 930-945, 1993.
Optimal control of towing kites. B Houska, M Diehl, Proc. of the 45th IEEE Conference on Decision and Control. of the 45th IEEE Conference on Decision and ControlB. Houska and M. Diehl, "Optimal control of towing kites," in Proc. of the 45th IEEE Conference on Decision and Control, 2006, pp. 2693-2697.
Control of towing kites for seagoing vessels. M Erhard, H Strauch, IEEE Transactions on Control Systems Technology. 21M. Erhard and H. Strauch, "Control of towing kites for seagoing vessels," IEEE Transactions on Control Systems Technology, vol. 21, pp. 1629-1640, 2013.
Automatic crosswind flight of tethered wings for airborne wind energy: a direct data-driven approach. L Fagiano, C Novara, IFAC Proceedings Volumes. 47L. Fagiano and C. Novara, "Automatic crosswind flight of tethered wings for airborne wind energy: a direct data-driven approach," IFAC Proceedings Volumes, vol. 47, no. 3, pp. 4927-4932, 2014.
Real-time optimization for kites. S Costello, G François, D Bonvin, Proc of the IFAC International Workshop on Periodic Control Systems (PSYCO). of the IFAC International Workshop on Periodic Control Systems (PSYCO)S. Costello, G. François, and D. Bonvin, "Real-time optimization for kites," in Proc of the IFAC International Workshop on Periodic Control Systems (PSYCO), 2013, pp. 64-69.
Crosswind kite control-a benchmark problem for advanced control and dynamic optimization. S Costello, G François, D Bonvin, European Journal of Control. 35S. Costello, G. François, and D. Bonvin, "Crosswind kite control-a benchmark problem for advanced control and dynamic optimization," European Journal of Control, vol. 35, pp. 1-10, 2017.
Keras. F Chollet, F. Chollet et al., "Keras," https://github.com/fchollet/keras, 2015.
TensorFlow: Large-scale machine learning on heterogeneous systems. M A , 2015, software available from tensorflow.org. M. A. et al., "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015, software available from tensorflow.org. [Online]. Available: http://tensorflow.org/
Adam: A method for stochastic optimization. D P Kingma, J Ba, Proceedings of the 3rd International Conference on Learning Representations (ICLR). the 3rd International Conference on Learning Representations (ICLR)D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.
Understanding the difficulty of training deep feedforward neural networks. X Glorot, Y Bengio, Proceedings of the thirteenth international conference on artificial intelligence and statistics. the thirteenth international conference on artificial intelligence and statisticsX. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the thirteenth international conference on artificial intelligence and statistics, 2010, pp. 249-256.
Neural architecture search with bayesian optimisation and optimal transport. K Kandasamy, W Neiswanger, J Schneider, B Poczos, E P Xing, Advances in neural information processing systems. K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. P. Xing, "Neural architecture search with bayesian optimisation and optimal transport," in Advances in neural information processing systems, 2018, pp. 2016-2025.
The advanced-step nmpc controller: Optimality, stability and robustness. V M Zavala, L T Biegler, Automatica. 451V. M. Zavala and L. T. Biegler, "The advanced-step nmpc controller: Optimality, stability and robustness," Automatica, vol. 45, no. 1, pp. 86-93, 2009.
Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. M Diehl, H G Bock, J P Schlöder, R Findeisen, Z Nagy, F Allgöwer, Journal of Process Control. 124AUTHOR BIOGRAPHYM. Diehl, H. G. Bock, J. P. Schlöder, R. Findeisen, Z. Nagy, and F. Allgöwer, "Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations," Journal of Process Control, vol. 12, no. 4, pp. 577-585, 2002. AUTHOR BIOGRAPHY
Benjamin Karg was born in Burglengenfeld, Germany, in 1992. He received the B.Eng. degree in mechanical engineering from Ostbayerische Technische Hochschule Regensburg, Regensburg, Bavaria, Germany, in 2015, and his M.Sc. degree in systems engineering and engineering cybernetics from Otto-von-Guericke Universität, Magdeburg, Saxony-Anhalt, Germany, in 2017. He is working as a research assistant, formerly at the laboratory "Internet of Things for Smart Buildings", Technische Universität Berlin, Germany, and currently at the chair "Process Automation Systems" at Technische Universität Dortmund, Dortmund, Germany, where he pursues his PhD. He is also a member of the Einstein Center for Digital Future. His research is focused on control engineering, artificial intelligence and edge computing for IoT-enabled cyber-physical systems.
Teodoro Alamo was born in Spain in 1968. He received the M.Eng. degree in telecommunications engineering from the Polytechnic University of Madrid, Spain, in 1993 and the Ph.D. degree in telecommunications engineering from the University of Seville, Spain, in 1998. From 1993 to 2000, he was an Assistant Professor, an Associate Professor from 2001 to 2010, and has been a Full Professor since March 2010 with the Department of System Engineering and Automatic Control, University of Seville. He was at the Ecole Nationale Superiore des Telecommunications (Telecom Paris) from September 1991 to May 1993. Part of his Ph.D. was done at RWTH Aachen, Germany, from June to September 1995. He is the author or coauthor of more than 200 publications including books, book chapters, journal papers, conference proceedings, and educational books (Google Scholar profile available at http://scholar.google.es/citations?user=W3ZDTkIAAAAJ&hl=en). He has co-founded the spin-off company Optimal Performance (University of Seville, Spain). His current research interests include decision making, model predictive control, machine learning, randomized algorithms, and optimization strategies.
Sergio Lucia received the M.Sc. degree in electrical engineering from the University of Zaragoza, Zaragoza, Spain, in 2010, and the Dr.-Ing. degree in optimization and automatic control from the Technical University of Dortmund, Dortmund, Germany, in 2014. He joined the Otto-von-Guericke Universität Magdeburg and visited the Massachusetts Institute of Technology as a Postdoctoral Fellow. In 2017, he was appointed Assistant Professor at the Technische Universität Berlin, Germany. Since October 2020, he has been a Professor and head of the Laboratory of Process Automation Systems at the Technische Universität Dortmund, Germany. His research interests include decision-making under uncertainty, distributed control, as well as the interplay between machine learning techniques and control theory. Dr. Lucia is currently Associate Editor of the Journal of Process Control.
DOI: 10.7153/oam-2020-14-25 | PDF: http://files.ele-math.com/abstracts/oam-14-25-abs.pdf
Corpus ID: 119732198 | arXiv: 1804.03577 | SHA: 655d1cc5e0915e0906933fade7982d2b6e29bdb5
PARSEVAL FRAMES OF PIECEWISE CONSTANT FUNCTIONS
2020
Dorin Ervin
Rajitha Ranasinghe
Operators and Matrices
14, 2 (2020). Keywords and phrases: Cuntz algebras, Parseval frame, dilation.
We present a way to construct Parseval frames of piecewise constant functions for L^2[0,1]. The construction is similar to the generalized Walsh bases. It is based on iteration of operators that satisfy a Cuntz-type relation, but without the isometry property. We also show how the Parseval frame can be dilated to an orthonormal basis and the operators can be dilated to true Cuntz isometries.
Mathematics subject classification (2010): 41A30, 26A99.
"Measurement and simulation of charge diffusion in a small-pixel charge-coupled device",
"Measurement and simulation of charge diffusion in a small-pixel charge-coupled device"
] | [
"Beverly J Lamarr \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Gregory Y Prigozhin \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Eric D Miller \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Carolyn Thayer \nDepartment of Climate and Space Sciences and Engineering\nUniversity of Michigan\n2455 Hayward St48109Ann ArborMIUSA\n",
"Marshall W Bautz \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Richard Foster \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Catherine E Grant \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Andrew Malonis \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n",
"Barry E Burke \nLincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA\n",
"Michael Cooper \nLincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA\n",
"Kevan Donlon \nLincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA\n",
"Christopher Leitz \nLincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA\n"
] | [
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Department of Climate and Space Sciences and Engineering\nUniversity of Michigan\n2455 Hayward St48109Ann ArborMIUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA",
"Lincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA",
"Lincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA",
"Lincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA",
"Lincoln Laboratory\nMassachusetts Institute of Technology\n244 Wood St02421LexingtonMAUSA"
] | [] | Future high-resolution imaging X-ray observatories may require detectors with both fine spatial resolution and high quantum efficiency at relatively high X-ray energies (E ≥ 5 keV ). A silicon imaging detector meeting these requirements will have a ratio of detector thickness to pixel size of six or more, roughly twice that of legacy imaging sensors. The larger aspect ratio of such a sensor's detection volume implies greater diffusion of X-ray-produced charge packets. We investigate consequences of this fact for sensor performance, reporting charge diffusion measurements in a fully-depleted back-illuminated CCD with a thickness of 50 µm and pixel size of 8 µm. We are able to measure the size distributions of charge packets produced by 5.9 keV and 1.25 keV X-rays in this device. We find that individual charge packets exhibit a gaussian spatial distribution, and determine the frequency distribution of event widths for a range of detector bias (and thus internal electric field strength) levels. At the largest bias, we find a standard deviation for the largest charge packets (produced by X-ray interactions closest to the entrance surface of the device) of 3.9 µm. We show that the shape of the event width distribution provides a clear indicator of full depletion, and use a previously developed technique to infer the relationship between event width and interaction depth. We compare measured width distributions to simulations. Although we can obtain good agreement for a given detector bias, with our current simulation we are unable to fit the data for the full range of of bias levels with a single set of simulation parameters. We compare traditional, 'sum-above-threshold' algorithms for individual event amplitude determination to gaussian fitting of individual events and find that better spectroscopic performance is obtained with the former for 5.9 keV events, while the two methods provide comparable results at 1.25 keV. 
The reasons for this difference are discussed. We point out the importance of read-noise driven charge detection thresholds in degrading spectral resolution, and note that the derived read noise requirements for mission concepts such as AXIS and Lynx are probably too lax to assure that spectral resolution requirements can be met. While the measurements reported here were made with a CCD, we note that they have implications for the performance of high aspect-ratio silicon active pixel sensors as well. | 10.1117/1.jatis.8.1.016004 | [
"https://arxiv.org/pdf/2201.07645v1.pdf"
] | 246,035,863 | 2201.07645 | 021163653bf2b8b4d4202bbaaed3bbd55f055db9 |
Measurement and simulation of charge diffusion in a small-pixel charge-coupled device
Beverly J Lamarr
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Gregory Y Prigozhin
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Eric D Miller
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Carolyn Thayer
Department of Climate and Space Sciences and Engineering
University of Michigan
2455 Hayward St48109Ann ArborMIUSA
Marshall W Bautz
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Richard Foster
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Catherine E Grant
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Andrew Malonis
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
77 Massachusetts Ave02139CambridgeMAUSA
Barry E Burke
Lincoln Laboratory
Massachusetts Institute of Technology
244 Wood St02421LexingtonMAUSA
Michael Cooper
Lincoln Laboratory
Massachusetts Institute of Technology
244 Wood St02421LexingtonMAUSA
Kevan Donlon
Lincoln Laboratory
Massachusetts Institute of Technology
244 Wood St02421LexingtonMAUSA
Christopher Leitz
Lincoln Laboratory
Massachusetts Institute of Technology
244 Wood St02421LexingtonMAUSA
Measurement and simulation of charge diffusion in a small-pixel charge-coupled device
*Beverly J. LaMarr, [email protected]
Keywords: image sensors, charge-coupled devices, sensor simulation
Future high-resolution imaging X-ray observatories may require detectors with both fine spatial resolution and high quantum efficiency at relatively high X-ray energies (E ≥ 5 keV). A silicon imaging detector meeting these requirements will have a ratio of detector thickness to pixel size of six or more, roughly twice that of legacy imaging sensors. The larger aspect ratio of such a sensor's detection volume implies greater diffusion of X-ray-produced charge packets. We investigate consequences of this fact for sensor performance, reporting charge diffusion measurements in a fully-depleted back-illuminated CCD with a thickness of 50 µm and pixel size of 8 µm. We are able to measure the size distributions of charge packets produced by 5.9 keV and 1.25 keV X-rays in this device. We find that individual charge packets exhibit a gaussian spatial distribution, and determine the frequency distribution of event widths for a range of detector bias (and thus internal electric field strength) levels. At the largest bias, we find a standard deviation for the largest charge packets (produced by X-ray interactions closest to the entrance surface of the device) of 3.9 µm. We show that the shape of the event width distribution provides a clear indicator of full depletion, and use a previously developed technique to infer the relationship between event width and interaction depth. We compare measured width distributions to simulations. Although we can obtain good agreement for a given detector bias, with our current simulation we are unable to fit the data for the full range of bias levels with a single set of simulation parameters. We compare traditional, 'sum-above-threshold' algorithms for individual event amplitude determination to gaussian fitting of individual events and find that better spectroscopic performance is obtained with the former for 5.9 keV events, while the two methods provide comparable results at 1.25 keV. The reasons for this difference are discussed.
We point out the importance of read-noise driven charge detection thresholds in degrading spectral resolution, and note that the derived read noise requirements for mission concepts such as AXIS and Lynx are probably too lax to assure that spectral resolution requirements can be met. While the measurements reported here were made with a CCD, we note that they have implications for the performance of high aspect-ratio silicon active pixel sensors as well.
Introduction
Large format, megapixel solid-state image sensors have been mainstays of soft (0.1-10 keV) X-ray astronomy for decades. [1][2][3][4][5][6] These devices provide relatively large fields of view, more than adequate spatial resolution, and moderate energy resolution. Their imaging and spectroscopic capabilities also allow discrimination between X-rays from cosmic sources and unwanted background generated by charged particles encountered in the space environment. [1][2][3][4][5][6] Future X-ray observatories are expected to require image sensors that exceed the capabilities of current flight detectors in a number of respects. For example, both the Lynx X-ray observatory, 7 a large mission concept studied by NASA, and the smaller AXIS Probe-class concept, 8 would require imaging detectors with better spatial resolution (pixel size 16 µm) than current systems, as well as excellent spectral resolution and quantum efficiency over a broader energy range (0.2-12 keV) (see Table 1). Together these requirements dictate (for a silicon detector, at least) an unusually 'tall' pixel, with pixel thickness to pixel width ratio ∼ 6:1, compared with current flight systems for which that ratio is roughly ∼ 3:1. Taller pixels entail greater lateral diffusion of the charge packets produced by absorption of X-ray photons, especially for X-rays at the low-energy extreme of the passband. These soft photons are absorbed very close to the detector entrance surface, so the charge packets they produce must drift farthest before collection, and are therefore most likely to be shared amongst two or more adjacent pixels. Given the relatively small total quantity of charge in these packets, good detection efficiency and accurate spectroscopy at these energies require low read noise. Detector noise requirements for these missions must account for this phenomenon, and accurate modelling of detector performance requires knowledge of it.
† Deceased.
This challenge faces all silicon X-ray image sensors, including active pixel sensors of any architecture as well as CCDs.
Charge diffusion affects silicon sensor performance in other wavebands. In the visible and IR, its most important effect is a wavelength-dependent contribution to the system point spread function. 9,10 In X-ray photon counting spectroscopic imaging applications, however, the effect of diffusion on spectral resolution and detection efficiency is of greater significance. We have been motivated to revisit this subject by the requirements of ambitious future X-ray observatories. The Lynx X-ray observatory, 7 and even smaller-scale missions such as the AXIS Probe-class concept, 8 require imaging detectors with moderate spatial resolution (16 µm), but this must be accompanied by good response over a very broad energy range (0.3-10 keV). Together these requirements dictate (for a silicon detector, at least) an unusually 'tall' pixel, with pixel size to thickness ratio ∼ 1:6.
Critically, Lynx studies of the high-redshift universe also require excellent detection capability at the lowest energies. Design of detectors meeting these requirements requires careful optimization of diffusion and readout noise. This challenge faces all silicon image sensors, including active pixel sensors of any architecture as well as CCDs.
We note that other future developments in X-ray astrophysics may also require a better understanding of charge diffusion in solid-state image sensors. These include X-ray polarimetry 11 and very high-resolution X-ray imaging. 12 We have been developing advanced CCDs for use in Lynx. 13 CCDs are one of three types of imaging sensor, along with hybrid 14,15 and monolithic 16 CMOS active pixel detectors, considered for this mission. Our work, which also has potential applications in probe-class, Explorer and small missions, has focused on demonstrating fast, low-noise output amplifiers, low-power charge-transfer clocking, and development of appropriate application specific circuits. 17 One of the devices we have tested features a relatively small pixel (8 µm) and a depletion thickness of approximately 50 µm, and thus also offers the opportunity to evaluate the interplay of pixel size and charge diffusion in a sensor with a pixel thickness-to-width ratio near that required for Lynx and AXIS. In this paper we report measurements of charge packet size distributions as a function of device bias and X-ray energy obtained with this device. We compare these measurements to simulations made with a publicly available code 18 developed to characterize the detectors of the Vera C. Rubin Observatory (VRO). 19 We note that VRO CCDs are also high-aspect ratio devices, and that 5.9 keV X-ray characterization of these devices has been reported. 20 Our results have implications for the soft X-ray response of silicon active pixel sensors as well as CCDs.
2 Data Acquisition and Analysis
Detector and X-ray Sources
Data reported here were obtained from a small-pixel, back-illuminated frame transfer detector developed at the MIT Lincoln Laboratory and designated CCID93. The device features 512 x 512 pixels in both image and frame transfer areas. Pixels are 8-µm wide and the device is 50 µm thick.
Its gate structure is implemented in a single layer of polysilicon with small inter-electrode gaps produced by photolithography. This configuration permits fast charge transfer with modest (< 3 V) clock swings. 13 The device architecture allows the substrate to be biased independently of the gate voltage and channel stop potentials to ensure that it is fully depleted. By varying the substrate bias voltage (hereafter 'V sub ') one can change the electric field strength in the depletion region and thus change the amount of lateral diffusion of X-ray induced charge packets.
This test device was equipped with buried channel 'trough' implants of varying widths, as well as a control region with no troughs. The trough implants provide significantly better charge transfer efficiency, and unless otherwise noted, we excluded data obtained in the trough-free region of the device. Details of the device architecture are described elsewhere. 13 We operated the detector at a temperature of −50 • C with serial and parallel register rates of 2.0 MHz and 500 kHz, respectively and gate voltage swings of 3.0 V peak to peak. Readout noise is typically 4.5 electrons, RMS, under these conditions. An STA Archon commercial controller 21 was used to operate the detector and digitize the data.
Data were collected in X-ray photon-counting mode using either a radioactive 55 Fe source (producing Mn K X-rays at 5.9 and 6.4 keV) or a grating monochromator fed by an electron-impact source at energies below 2 keV. The spectral resolving power of the monochromator is typically λ/∆λ = E/∆E ∼ 60-80, far exceeding that of the detector at the energies of interest.
In this work, we focus on results at two energies, 5.9 keV and 1.25 keV from Mn Kα and Mg Kα photons, respectively.
Data Acquisition
Data were obtained for each energy and substrate bias value of interest by repeatedly reading the detector, storing each full-frame readout as a distinct file. The exposure time for each frame was 0.15 s, and source flux was adjusted to less than ∼ 25 detected photons (events) per frame (≲ 10^−4 events per pixel) to minimize pileup. Approximately 2.3 × 10^5 events were collected for each configuration. Data were obtained at seven values of V sub ranging from −0.2 V to −20 V.
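As a rough cross-check of this flux choice, the probability that a second photon lands inside the 7×7-pixel analysis island of a given event can be estimated from Poisson statistics. The frame size, event rate, and island geometry below are taken from the text; the calculation itself is an illustrative sketch, not part of the authors' pipeline.

```python
import math

# Rough pileup estimate for photon-counting frames: 512x512 image-area
# pixels and <~25 detected events per 0.15 s frame (values from the text).
npix = 512 * 512
events_per_frame = 25.0
density = events_per_frame / npix          # events per pixel per frame

# Treat pileup as a second event center falling inside the 7x7-pixel
# island used for event characterization (48 neighboring pixels).
island_pixels = 7 * 7 - 1
p_pileup = 1.0 - math.exp(-density * island_pixels)
print(f"event density      : {density:.2e} per pixel")
print(f"pileup probability : {p_pileup:.2%} per event")
```

The per-pixel density comes out near the ∼10⁻⁴ quoted in the text, and the chance of overlapping events is well under a percent, consistent with pileup being negligible at this flux.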
Analysis Methods
Data were acquired in groups of 100 consecutive frames. From each acquired frame an average of the overclock region was subtracted to remove drift of the DC level. A bias (zero-signal) frame was then computed for each group using a clipped average algorithm to remove signal from X-rays and cosmic rays. The bias frame was subtracted pixel-by-pixel from each data frame in its group. X-ray events were then identified by searching the bias-subtracted frames for pixels with amplitudes exceeding a fixed threshold (the 'event threshold') which are also local maxima. The event threshold was set to 10 times the RMS noise level for 5.9 keV X-rays, and 7.5 times the noise for 1.25 keV. This level is high enough to reject spurious events due to noise while low enough to accept evenly split X-rays. For each event the values of a 7×7 pixel array centered on the located maximum were recorded in an event list.
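The event-finding step described above can be sketched as follows. This is a minimal illustration assuming a 3×3 local-maximum test and the 5.9 keV thresholds quoted in the text (10× a 4.5 e⁻ RMS read noise); the actual laboratory pipeline may differ in detail.

```python
import numpy as np

# Sketch of event detection: subtract a bias frame, then flag pixels
# that exceed the event threshold AND are local maxima of their 3x3
# neighborhood; record a 7x7 island around each.
READ_NOISE = 4.5                 # electrons, RMS (from the text)
EVENT_THRESH = 10.0 * READ_NOISE # 5.9 keV event threshold

def find_events(frame, bias):
    img = frame - bias
    ny, nx = img.shape
    events = []
    # Skip a 3-pixel border so the 7x7 island always fits.
    for y in range(3, ny - 3):
        for x in range(3, nx - 3):
            v = img[y, x]
            if v < EVENT_THRESH:
                continue
            if v >= img[y - 1:y + 2, x - 1:x + 2].max():  # local maximum
                events.append((y, x, img[y - 3:y + 4, x - 3:x + 4].copy()))
    return events

# Tiny demo: one synthetic split event in a noiseless frame.
frame = np.zeros((32, 32))
frame[16, 16] = 1000.0
frame[16, 17] = 400.0            # charge shared into a neighbor
evts = find_events(frame, np.zeros_like(frame))
print(len(evts), evts[0][:2])    # -> 1 (16, 16)
```

Only the brighter of the two pixels is reported, since the neighbor fails the local-maximum test; the shared charge is still captured inside the 7×7 island.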
We found that in this relatively small pixel device almost all of the events have signal charge spread over multiple pixels. To characterize the signal distribution we fit 22 a two-dimensional Gaussian function to the pixel values of each event. In these fits the Gaussian standard deviation (σ spatial ) was constrained to be the same in both dimensions, and a constant additive term was included to allow for a local offset bias level. In this way we obtained five parameters (two for position, plus one each for amplitude, σ, and bias offset) for each event.
For comparison, we also computed the sum of all the pixel values in a seven by seven pixel island above a second, lower, threshold value, known as the 'split event' threshold. This is the event energy reconstruction method in use for all past and current X-ray CCD missions for astrophysics, but with the number of pixels in each island increased to compensate for the much smaller pixels in our device. We set the split threshold at four times the RMS readout noise per pixel.
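To see why summing only pixels above the split threshold loses little charge for compact events but a noticeable fraction for faint, extended ones, one can integrate a Gaussian charge packet over 8 µm pixels and apply a 4× read-noise split threshold. The event totals (assuming ≈3.65 eV per electron) and widths below are illustrative round numbers centered on a pixel, not measured values.

```python
import math

# Compare 'sum above split threshold' with the true event charge for a
# Gaussian packet integrated over 8 um pixels. Split threshold is
# 4 x 4.5 e- read noise; the event is centered on the middle pixel for
# simplicity (real events land at random intra-pixel positions).
PIX = 8.0

def pixel_fraction(lo, hi, sigma):
    """Fraction of a 1-D unit Gaussian (centered at 0) between lo and hi."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf(hi / s) - math.erf(lo / s))

def pixelized_event(total_e, sigma, n=7):
    """n x n pixel island for an event centered on the middle pixel."""
    half = n // 2
    return [[total_e
             * pixel_fraction((ix - 0.5) * PIX, (ix + 0.5) * PIX, sigma)
             * pixel_fraction((iy - 0.5) * PIX, (iy + 0.5) * PIX, sigma)
             for ix in range(-half, half + 1)]
            for iy in range(-half, half + 1)]

def summed_amplitude(total_e, sigma, split=18.0):
    """Sum of island pixels exceeding the split threshold."""
    return sum(v for row in pixelized_event(total_e, sigma) for v in row
               if v > split)

for label, total_e, sigma in (("5.9 keV, narrow", 1616.0, 2.0),
                              ("5.9 keV, wide", 1616.0, 3.9),
                              ("1.25 keV, wide", 342.0, 3.9)):
    s = summed_amplitude(total_e, sigma)
    print(f"{label}: summed {s:.0f} of {total_e:.0f} e- "
          f"(lost {total_e - s:.0f})")
```

In this toy model the thresholding loss is only a few electrons at 5.9 keV but roughly ten percent of the signal at 1.25 keV, anticipating the energy-dependent behavior of the two amplitude estimators discussed in Section 4.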
Simulations
Simulated event lists were constructed using POISSON CCD. 18 Simulated Gaussian readout noise with RMS of 4.5 electrons was added to each pixel, and event detection and characterization were performed in the same way as for the lab data.
The default charge diffusion parameters in POISSON CCD produced much more diffusion in the simulations than we observe in the lab data, with much larger event sizes. Since the purpose of the simulations was to illuminate the physical processes responsible for the observed data, we tuned the DIFFMULTIPLIER parameter so that the simulated event size distributions closely matched the real data, as described in the following Section. These DIFFMULTIPLIER values ranged from 1.4 to 1.7 for V sub = −0.2 to −20 V, compared to the value of 2.3 determined from charge diffusion measurements in Vera Rubin Observatory CCDs. 18 This parameter is implemented in POISSON CCD as a scale factor for the charge carrier thermal velocity, given by v_th = (8kT/(π m_e))^(1/2) × DiffMultiplier, 18 where m_e is the bare electron mass. In effect it specifies a value for the thermal velocity effective mass of the electron in the Si conduction band, m*_e,tc. Previous estimates suggest m*_e,tc/m_e ≈ 0.27-0.28, 23 which would indicate DIFFMULTIPLIER ≈ 1.9 as an appropriate value.
The modest difference between that and the value DIFFMULTIPLIER = 1.7 required to match our data at highly negative V sub (fully depleted substrate) is likely due to the limitations of the carrier transport model used in POISSON CCD. 18 The larger differences at less negative V sub could arise from incorrect treatment of undepleted bulk in the simulator (C. Lage, private communication).
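The thermal-velocity scaling above can be checked numerically. Since v_th scales as (m*)^(-1/2), DIFFMULTIPLIER is equivalent to sqrt(m_e/m*), so each value implies an effective mass ratio; the −50 °C operating temperature is taken from Section 2.1, and the calculation is a sketch of the scaling, not of the POISSON CCD internals.

```python
import math

# v_th = sqrt(8 k T / (pi m_e)) * DiffMultiplier, so DiffMultiplier
# acts as sqrt(m_e / m*), i.e. m*/m_e = 1 / DiffMultiplier**2.
k_B = 1.380649e-23        # J/K
m_e = 9.1093837e-31       # kg, bare electron mass
T = 223.15                # K, i.e. -50 C operating temperature

def v_th(diff_multiplier):
    return math.sqrt(8 * k_B * T / (math.pi * m_e)) * diff_multiplier

for dm in (1.7, 1.9, 2.3):
    print(f"DiffMultiplier={dm}: v_th={v_th(dm):.3e} m/s, "
          f"implied m*/m_e={1.0 / dm**2:.3f}")
```

DIFFMULTIPLIER = 1.9 indeed reproduces m*/m_e ≈ 0.277, in the 0.27-0.28 range quoted above, while the fitted value of 1.7 corresponds to a somewhat heavier effective mass.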
We saw no difference in the amount of diffusion if electron cloud Coulomb repulsion was turned on (FE55 mode). These issues will be explored in a future paper aimed at further validating the simulations and producing a higher-fidelity simulation methodology for these thick, small-pixel X-ray CCDs. We will also consider implementing techniques used to model drift and diffusion developed recently by other groups. 24,25

3 Measurements
Overview
We measure (two-dimensional) position, amplitude, and width (2D Gaussian σ spatial ) of each event as described in Section 2.3 above. Figure 1 shows scatter plots of summed-pixel event amplitude vs. event width for both X-ray energies and three different values of internal detector bias V sub . At the least negative bias, the widest events show depressed amplitude, implying that events produced in the undepleted region of the detector suffer from incomplete charge collection as well as large lateral diffusion. 26,27 A closer look at the spatial width distributions is provided in Figure 2, which shows distributions for events comprising various numbers of pixels above the split threshold. (Hereafter we denote the latter quantity as 'pixel multiplicity'). The panels in this figure correspond to those in Figure 1.
Comparison of the upper and lower panels of Figure 2 shows the marked energy dependence in the size distributions. Generally, events with a given pixel multiplicity are spatially smaller at the higher energy. In addition, for 5.9 keV events there is a clear correlation between width and number of pixels above threshold, while at 1.25 keV the variation in size with number of pixels is much smaller. Finally, at the stronger fields (V sub ≤ −5V), the maximum size of events is similar at both energies (about 4 µm for V sub = −20V).
These observations are straightforward consequences of the expected energy dependence of X-ray interaction depths in the detector, coupled with the dependence of lateral diffusion on interaction depth. The relatively more penetrating 5.9 keV X-rays (attenuation length in silicon approximately 29 µm) are absorbed throughout the 50 µm thickness of the detector. The 1.25 keV X-rays (attenuation length 5 µm) all interact in the third of the detector closest to the entrance window. As a result, there is a much larger range of lateral diffusion and therefore sizes in the population of 5.9 keV events. An important conclusion from this figure, however, is that when the detector is fully depleted (two left columns), the maximum amount of diffusion is the same at both energies.
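This depth argument follows directly from exponential attenuation. A quick sketch, using the attenuation lengths quoted in the text (29 µm at 5.9 keV, 5 µm at 1.25 keV) and the 50 µm device thickness, confirms that essentially all 1.25 keV events interact in the first third of the device while only about half of the detected 5.9 keV events do:

```python
import math

# Fraction of *detected* events absorbed above depth z for an
# exponential attenuation profile in a device of thickness t.
T = 50.0  # um, device thickness

def frac_absorbed(z, lam, t=T):
    return (1 - math.exp(-z / lam)) / (1 - math.exp(-t / lam))

for e_kev, lam in ((5.9, 29.0), (1.25, 5.0)):
    f = frac_absorbed(T / 3.0, lam)
    print(f"{e_kev} keV: {100 * f:.0f}% of detected events in the "
          f"first third of the device")
```

The normalization by the fraction absorbed in the full thickness matters at 5.9 keV, where a substantial fraction of photons passes through the 50 µm of silicon entirely.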
In all cases the histogram peaks near σ = 2 µm are associated with events for which charge is detected in four or fewer pixels. Although the amplitude of these events can be measured accurately by summing the pixels above split threshold, their sizes cannot be resolved by our detector or reliably measured using the Gaussian fitting algorithm. Since our pixel size is comparable to or greater than the intrinsic ('pre-pixelization') size of the charge packets, our best-fit values of σ spatial are biased high relative to those of the intrinsic distributions. The magnitude of this bias depends on the intrinsic width. Analytic calculations suggest, and simulations confirm, that this bias ranges from more than 30% to less than 10% for 3 µm < σ < 6 µm. At 1.25 keV, all events must be subject to considerable diffusion. In this case, however, the relatively large spatial extent of the charge distribution, coupled with the relatively small total number of electrons, can produce low signal-to-noise ratios in the peripheral pixels. Simulations confirm that fits may not converge in these circumstances.
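The pixelization bias described above can be approximated analytically: binning a Gaussian into pixels of pitch p inflates its measured second moment by roughly p²/12 (Sheppard's correction), so sigma_meas ≈ sqrt(sigma_true² + p²/12). This is only a rough proxy for the bias of the full pixel-integrated fit, but it reproduces the quoted trend of large fractional bias at small widths:

```python
import math

# Sheppard's-correction proxy for the width bias introduced by binning
# a Gaussian charge packet into 8 um pixels. This approximates, but is
# not identical to, the bias of the actual Gaussian fitting procedure.
p = 8.0  # um, pixel pitch
for sigma in (3.0, 4.0, 6.0):
    sigma_meas = math.sqrt(sigma**2 + p**2 / 12.0)
    print(f"sigma={sigma:.0f} um -> measured ~{sigma_meas:.2f} um "
          f"(+{100 * (sigma_meas / sigma - 1):.0f}%)")
```

The fractional bias falls from tens of percent at σ = 3 µm to under 10% at σ = 6 µm, in line with the range stated in the text.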
Measurement and applications of the event charge distribution
The data presented in the previous section illustrate the broad range of event sizes that occur even under monochromatic X-ray illumination. A complete characterization of detector response at even a single energy thus requires knowledge of both the distribution of charge for events of a given width (which we have assumed to be Gaussian in the foregoing), and the distribution of event widths. Previous work by Prigozhin and co-workers 26 has shown how this problem can be addressed by direct measurement.
Briefly, we can exploit the fact that for a monochromatic incident beam, both the number of X-rays absorbed and the magnitude of lateral diffusion are monotonic functions of the depth in the detector at which the X-ray is absorbed. The former relation is simply a consequence of the attenuation of the X-ray intensity with depth, and can be determined reliably from known material properties. The latter results because the time required for the liberated electrons to drift from the interaction point to the buried channel, and thus the amount of lateral diffusion, decreases with depth. The expected width-depth relation is illustrated in Figure 3, which shows simulation results for 5.9 keV X-rays for various values of V sub . Because (for a given field configuration) this relationship is also monotonic in depth, it is possible to measure it using X-ray data. Moreover, it is also possible to obtain a high-quality measurement of the shape of the charge distribution for events interacting at a given depth, and thus test our assumption that this distribution is Gaussian. There is a clearly monotonic trend of event width with depth. It is also clear that the maximum event size depends on V sub .
Average charge distribution at fixed depth
To characterize event charge distributions at fixed depth we first order events by their measured values of σ spatial , and then select 1000 events surrounding each of several percentile points in the cumulative σ spatial distribution, thus selecting a group of events coming from approximately the same depth in the device. To minimize the effects of pixelization, we create a 70×70 subpixel grid over the 7×7 pixel island around each event's central pixel. Each element of the subpixel grid is assigned an amplitude equal to 1/100 of the amplitude of the pixel in which that element lies.
We next align the centroid of each event with the origin of the subpixel grid, and then average all subpixel amplitudes over all events in the percentile group. Since centroids can be determined with precision finer than a single pixel, this procedure produces a mean event charge distribution with subpixel resolution.
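The stacking procedure can be sketched as follows. Here the 1/100 amplitude split is implemented with a Kronecker expansion and the centroid alignment with an integer subpixel shift; array shapes, ordering, and the rounding of centroids to the nearest subpixel are simplifying assumptions for illustration.

```python
import numpy as np

# Sketch of subpixel stacking: expand each 7x7 event onto a 70x70 grid,
# shift its centroid to the grid origin, and average over events.
SUB = 10  # subpixels per pixel

def stack_events(events, centroids):
    """events: iterable of (7,7) pixel arrays; centroids: (cy, cx) in
    pixel units relative to the island center."""
    acc = np.zeros((7 * SUB, 7 * SUB))
    n = 0
    for ev, (cy, cx) in zip(events, centroids):
        # Split each pixel amplitude evenly over its 10x10 subpixels.
        fine = np.kron(ev, np.ones((SUB, SUB))) / SUB**2
        # Shift so the event centroid lands on the grid origin.
        dy = int(round(-cy * SUB))
        dx = int(round(-cx * SUB))
        acc += np.roll(np.roll(fine, dy, axis=0), dx, axis=1)
        n += 1
    return acc / n

# Demo: two identical events with opposite subpixel offsets stack into
# a symmetric, charge-conserving average.
ev = np.zeros((7, 7))
ev[3, 3] = 100.0
mean = stack_events([ev, ev], [(0.2, 0.0), (-0.2, 0.0)])
print(mean.shape, f"{mean.sum():.1f}")   # -> (70, 70) 100.0
```

Because centroids are known to a fraction of a pixel, averaging many shifted events recovers the mean charge distribution at subpixel resolution, as described above.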
The results for 5.9 keV at three different percentiles and three different values of V sub are shown in Figure 4. Each row corresponds to a different value of V sub , with the strongest field at the top. Each column is a different value of percentile in the cumulative width distribution, with the largest percentile width at the left.
Qualitatively, the figure shows as expected that for a given field distribution, event widths become smaller as the interaction depth increases (to the right), and that for a given depth, event widths also become smaller as the field strength increases (bottom to top). Note that at V sub = −1 V (bottom row) the detector is only partially depleted, and the cloud size is quite large. In both the middle and top rows, with more negative V sub the device is fully depleted, and cloud size shrinks accordingly. As noted in Section 3.1, at very small cloud sizes (σ spatial < 4 µm), the effects of pixelization become evident: the inferred shape tends to become non-circular, reflecting that of the pixels rather than that of the intrinsic charge distribution. We shall return to the consequences of this under-sampling in Section 3.3 below.
How accurate is our assumption of a Gaussian charge distribution? Projections (i.e., sums along rows and columns) of the signal distribution are shown to the right and beneath each panel in Figure 4, along with best-fit Gaussian curves. The central portions of the profiles are clearly very well described by a Gaussian function over at least two orders of magnitude, which is consistent with theoretical calculations for clouds formed within the depleted region. 28 There is a slight excess in measured signal compared to the Gaussian wings, but the level of discrepancy is on the order of 0.1% of the total charge packet signal per pixel. We note that the deviation is largest for the data with the smallest σ spatial , which are most affected by the pixelization. We also note that for vertical signal distributions the tail on the top side is noticeably higher than on the bottom side, no doubt as a result of charge transfer inefficiency that causes electrons trapped during parallel transfer to be re-emitted into the pixels behind the event center. A similar distortion of charge-cloud shape has been used previously as a diagnostic of charge transfer inefficiency in electron multiplier CCDs. 29 We conclude that charge packets are Gaussian to a very good approximation, and that the most significant deviations arise at small widths as a result of pixelization.
Inferring interaction depth from the cumulative event width distribution
As noted at the beginning of this section, since both the density of X-ray events and the size of the corresponding charge distributions vary monotonically with interaction depth, it is possible to map the density of the event widths onto the distance from the illuminated surface. This idea was proposed previously by Prigozhin and collaborators. 26 We implement this map as follows. The integral number of photons absorbed between the illuminated detector surface and the plane at depth z below that surface can be written N_z = N_0 (1 − exp(−z/λ)). Here N_0 is the incident photon fluence. The total number of events N_total absorbed in an ideal device with thickness t is N_total = N_0 (1 − exp(−t/λ)), so the cumulative fraction of events absorbed above depth z,
N_z,frac = N_z / N_total, is

N_z,frac = (1 − exp(−z/λ)) / (1 − exp(−t/λ))    (1)
This relationship is monotonic and can therefore be inverted to give the value of z as a function of the fraction of events above that depth:
z = −λ ln(1 − N_z,frac (1 − exp(−t/λ)))    (2)
Both the detector thickness (t = 50 µm) and, for a given photon energy, the absorption length (λ = 29 µm for 5.9 keV X-rays) are known so the relationship between z and N_z,frac is completely specified.
We also sort the observed events in order of width to form their cumulative distribution as a function of that parameter. Invoking our physically motivated assumption that width decreases with depth, we use this empirical width distribution, together with equation 2, to determine the relationship between width and depth. We note that it is important to include all events in this calculation, even the ones with very small measured width, in spite of the fact that the width of such events is not measured accurately, as discussed in Section 3.1.
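The width-to-depth mapping can be sketched end to end: sort measured widths in decreasing order (widest events come from the shallowest depths, which have the longest drift), assign each event its cumulative rank, and invert the attenuation relation to get a depth for that rank. The widths below are synthetic stand-ins, not measured values.

```python
import numpy as np

# Map event widths onto interaction depth via the cumulative width
# distribution, using the 5.9 keV attenuation length and thickness
# quoted in the text.
LAM = 29.0   # um, attenuation length at 5.9 keV
T = 50.0     # um, device thickness

def depth_from_rank(frac):
    """Depth z above which a fraction `frac` of detected events is
    absorbed (inverse of the cumulative attenuation relation)."""
    return -LAM * np.log(1.0 - frac * (1.0 - np.exp(-T / LAM)))

# Synthetic widths; decreasing width <-> increasing depth.
widths = np.array([3.9, 3.1, 3.7, 2.4, 3.5, 2.9, 1.8, 2.1])
order = np.argsort(widths)[::-1]            # widest first
frac = (np.arange(len(widths)) + 0.5) / len(widths)
depth = depth_from_rank(frac)
for w, z in zip(widths[order], depth):
    print(f"sigma={w:.1f} um  ->  z~{z:.1f} um")
```

Note that the mapping is anchored by including every event, even the unresolved narrow ones, since dropping them would shift all cumulative ranks and hence all inferred depths.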
The resulting measurements of event width as a function of depth below the entrance window are shown for different values of V sub in Figure 5. As expected, stronger internal fields (more negative V sub ) reduce the diffusion time and thus event widths. At V sub > −3 V the shape of the curves is different than at lower V sub . This is almost certainly due to formation of an undepleted layer of silicon near the illuminated surface, in which charge collection may be inefficient, excluding some events from our analysis. This would violate our assumption that all events are included, and would introduce errors in our assignment of depth to width. This interpretation is supported by the simulated width-depth curves shown in Figure 3; the simulations do not include such surface effects, and the simulated width-depth curves do not show this shape change.
Another limitation of this algorithm is that it neglects uncertainties in measurements of σ spatial .
We defer investigation of the significance of this effect to future work.
Detector characterization from differential event width distributions
In this section we turn from the cumulative to the differential form of the width distribution and
show how it varies with detector bias. In principle, differential width distributions are an important tool in predicting detector performance. They can be used, for example, to estimate proportions of events with different pixel multiplicities, which in turn are needed to predict the spectral resolution of an image sensor. They also provide powerful observables for use in validating a detector model, especially when measured over a range of internal field conditions. Figure 6 shows these distributions for 5.9 keV X-rays over a range of substrate bias. The distributions shown include only those events with measurable widths, that is, those for which at least four pixels have measurable charge.
The bias dependence of these distributions tells a now familiar story, but also provides new insights. At the largest bias (V sub = −20 V, black histogram) the bulk of the silicon is fully depleted.
As there is a strong electric field throughout the device, drift times are short, so the corresponding width distribution is relatively narrow, and shows a sharp edge at 3.9 µm. This edge can be interpreted as the width of events interacting in the immediate vicinity of the detector entrance surface. This maximum event width should apply to events of all energies, and is thus a very useful parameter for estimating the response of the detector to low energy photons which are necessarily absorbed there. The shape of the left side of the distribution is determined by the width-depth relation, and is thus sensitive to internal field distributions and charge transport properties of the detector. The steep but finite slope of the edge is in principle a measure of the statistical uncertainty of the width measurement. The small peak near σ spatial = 2 µm is caused by very narrow events produced close to the buried channel, for which our width measurement is not reliable, as discussed above in Section 3.1.
As V sub increases (algebraically), the internal electric field strength drops, broadening the distributions. Figure 7 compares the measured distributions with simulations made using POISSON CCD, the code developed for the Vera C. Rubin Observatory CCDs. 18 Our simulations for V sub < −5 V, at which the device is fully depleted,
show good agreement with our data. Agreement is poorer at less negative V sub ; the measured distribution for -0.2 V has a more extended tail to larger σ spatial than the simulations would predict. This is likely due to both a poor representation of the backside passivation layer and an inadequate correction for backside surface charge losses in the simulation. These issues are exacerbated when the region near the backside is not fully depleted, producing a field-free region where electrons can linger and greatly affecting the measured charge diffusion. Both the need for tuned diffusion and possible improvements to undepleted backside characterization will be addressed in a future paper focused on simulations.
Fig 7: The distribution of Gaussian σ spatial from simulations of 5.9 keV X-rays, compared to the lab data (see Figure 6). Only events with at least 4-pixel multiplicity are shown, and for clarity some V sub values are not plotted. A single diffusion factor was tuned for each simulation to recover a similar distribution to the data, as described in the text. The simulated distributions are similar to the data for large negative V sub in which the device is fully depleted. For small V sub and large σ spatial , the distributions differ significantly, likely due to limitations of the simulations.
Event Amplitude Estimation
A key performance metric for an X-ray image sensor is its spectral resolution, which depends on the accuracy with which the total charge associated with an X-ray event (the event 'amplitude') is measured. Traditionally, in large-pixel devices (those with pixels much larger than characteristic event widths) used for Chandra, XMM-Newton, Suzaku and other X-ray instruments, the amplitude is estimated as the summed signal in a few pixels around a local maximum exceeding the split threshold. In detectors with pixel sizes comparable to characteristic event widths, the majority of events have charge spread over multiple pixels. In this case, numerous pixels that fall below the split threshold may in aggregate contain a significant fraction of the total charge, and it is natural to consider whether alternative methods that explicitly account for finite event width could provide a better estimate of event amplitude.
We evaluated direct fitting of the event charge distribution as a method for determining event amplitude. A comparison of this approach with simple summing of pixels exceeding the split threshold is shown in Figure 8 for 5.9 keV and 1.25 keV photons. To investigate the magnitude of charge lost due to the thresholding used in the traditional algorithm, we separately plot spectra for events of various pixel multiplicities. In Figure 9, we show spectral FWHM and peak location as a function of pixel multiplicity (see also Appendix A). These data were obtained with substrate bias V sub = −20V, providing the maximum internal electric field strength. In general, integrating the best-fit spatial Gaussian does produce a slightly higher estimate for the event amplitude, consistent with the idea that a functional form accurately describing the event shape can recover signal lost to the surrounding pixels that fall below split threshold. On the other hand, the spectral distributions derived from the Gaussian fits are noticeably broader and themselves clearly non-Gaussian, especially at 5.9 keV. The 5.9 keV spectra in the left panels of Figure 8 and the corresponding response parameters in the left panels of Figure 9 show that the Gaussian fit performs worst for events for which a relatively small number of pixels (roughly 5 or fewer) exceed the split threshold. We attribute this behavior to the spatial under-sampling of events with intrinsically narrow charge distributions. We also note that at this energy there is only a small change in the spectral peak location with pixel multiplicity for either amplitude determination method, suggesting that relatively little charge is contained in pixels below the split threshold.
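The two amplitude estimators compared here can be sketched in a few lines. The following is a hypothetical illustration, not the authors' analysis code: pixel fractions come from the normal CDF, the traditional estimator sums pixels above the split threshold, and a brute-force grid search stands in for the paper's nonlinear per-event Gaussian fit. All function names, the 10-electron threshold, and the grid values are invented for the demo, and the centroid is assumed known for simplicity.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pixel_fraction(cx, cy, sigma, ix, iy):
    """Fraction of a 2D Gaussian cloud (center cx, cy; width sigma,
    all in pixel-pitch units) collected in pixel (ix, iy)."""
    fx = phi((ix + 1 - cx) / sigma) - phi((ix - cx) / sigma)
    fy = phi((iy + 1 - cy) / sigma) - phi((iy - cy) / sigma)
    return fx * fy

def make_event(amp, cx, cy, sigma, n=7):
    """Noiseless pixelated event on an n x n island."""
    return {(ix, iy): amp * pixel_fraction(cx, cy, sigma, ix, iy)
            for ix in range(n) for iy in range(n)}

def amp_sum(event, split_threshold):
    """Traditional estimator: sum of pixels above the split threshold."""
    return sum(v for v in event.values() if v > split_threshold)

def amp_fit(event, cx, cy, sigmas, amps):
    """Toy stand-in for a per-event Gaussian fit: least-squares grid
    search over (amplitude, sigma)."""
    def resid(a, s):
        return sum((v - a * pixel_fraction(cx, cy, s, ix, iy)) ** 2
                   for (ix, iy), v in event.items())
    return min(((a, s) for s in sigmas for a in amps),
               key=lambda p: resid(*p))[0]

ev = make_event(1616.0, 3.4, 3.6, 0.9)   # roughly a 5.9 keV event, in electrons
print(amp_sum(ev, 10.0))                 # below 1616: sub-threshold tail is lost
print(amp_fit(ev, 3.4, 3.6,
              [0.70 + 0.05 * i for i in range(12)],
              [1500.0 + 10.0 * i for i in range(25)]))
```

Even with no noise, the thresholded sum falls a couple of percent short of the true amplitude because the discarded tail pixels collectively hold real charge, while the fit integral recovers it — mirroring the bias discussed in the text.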
The situation is different at the lower energy, as the right panels of Figures 8 and 9 show. Here the performance of the two amplitude determination methods is quite similar, and spectral widths are in some cases marginally better for Gaussian fits. Remarkably, good results are obtained with this method even for events with as few as two pixels above threshold, suggesting that as expected these events are more extended and suffer less from under-sampling than their counterparts at higher energy. The low pixel multiplicity of these events is due to truncation by the threshold rather than an intrinsically narrow spatial distribution. This interpretation also explains the systematic increase of spectral peak location with pixel multiplicity at this energy. Figure 9 shows that Gaussian fits are indeed less susceptible, though not immune, to spectral broadening due to charge loss at this energy: peak locations change less with pixel multiplicity, and, as a result, the spectral full-width at half maximum for the entire data set is actually slightly better for the fitting method than for pixel summation.
A complication of this simple picture is presented by the spectrum of four-pixel events derived from Gaussian fitting, which shows a high-energy tail. We interpret this tail as another consequence of poor fitting arising from (maximal) under-sampling of those charge clouds with spatial centroids close to the center of a 2x2 pixel array. Examination of the fitted centroids of these tail events confirms this interpretation. We defer a detailed analysis of photon location to future work.
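The under-sampling degeneracy invoked above for four-pixel events can be made concrete with a small numerical check (a hypothetical illustration, not from the paper): for a cloud centered exactly on a pixel corner, the four adjacent pixels carry nearly identical signals for quite different (amplitude, width) pairs, so a fit can trade width against amplitude and overestimate the energy.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def corner_pixel_fraction(sigma):
    """Cloud centered on a pixel corner: fraction collected in each of
    the four adjacent pixels (sigma in pixel-pitch units)."""
    f = phi(1.0 / sigma) - phi(0.0)  # 1D fraction within one pixel on one side
    return f * f

narrow = corner_pixel_fraction(0.25)
wide = corner_pixel_fraction(0.45)
# A ~5% larger amplitude with sigma = 0.45 reproduces the same four central
# pixel values as unit amplitude with sigma = 0.25; only faint neighbors,
# typically below the split threshold, distinguish the two solutions.
print(narrow, wide, narrow / wide)
```

The assumed widths (0.25 and 0.45 pixels) are illustrative only; the point is that the central 2x2 pixel values alone cannot separate amplitude from width, which is consistent with the high-energy tail of four-pixel events.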
These inferences are generally supported by simulations, although the details are subtle, as shown in Figure 10. Here again we separately plot spectra for events with different pixel multiplicities. The results are remarkably similar to the measured data. At both energies, the Gaussian fitting method does indeed improve the amplitude estimate, the true value of which is known from the simulation inputs: the peaks of the spectral distributions are very close to the input energy. At 5.9 keV, the simulated spectral redistribution is much broader using the Gaussian method than the sum of pixels, just as we observe in the measured spectra. These broad features are dominated by events with low pixel multiplicities. At 1.25 keV, the Gaussian fit and pixel summing estimates produce similar core spectral responses, while the Gaussian fit estimate features an extended high-energy tail populated mainly by four-pixel events.
In summary, we find that for our devices and at the energies we probed, although a Gaussian fit to individual events may provide a slightly less biased amplitude estimate on average, the effective spectral resolution is worse at 5.9 keV and comparable at 1.25 keV to that obtained with the traditional sum of pixels above threshold algorithm. At the higher energy, the spatial sampling provided by 8 µm pixels is generally insufficient to support spatial modelling of individual events. At 1.25 keV, the fitting method recovers some of the sub-threshold signal ignored by the pixel summation algorithm. This results in event amplitude estimates with less bias, but with no less dispersion, than those of the summation method. All data were obtained with strong internal electric fields (V sub = −20V). At the higher energy, the sum of pixels method produces a much narrower spectral response, although the performance of the Gaussian method can be improved by eliminating events with low multiplicity. At the lower energy, the Gaussian method produces a similar core spectral response compared to the summed pixel method, but with an extended high-energy tail populated predominantly by 4-pixel events. The broad spectra of low-multiplicity events at both energies are caused by poor performance of the Gaussian fit for undersampled distributions.
Summary and Discussion
Our measurements of X-ray-induced charge packets have yielded a number of results. We confirm that charge packets produced at a given detector depth exhibit (on average) a Gaussian spatial distribution to remarkable accuracy. We show that the distribution of charge packet widths, parameterized by the Gaussian standard deviation σ spatial , provides useful information on detector structure. The cumulative width distribution gives the relationship between width and interaction depth. The differential width distribution provides, for a fully depleted detector, a precise estimate of the size of events produced in the immediate vicinity of the entrance window. Remarkably, this parameter, which is a crucial determinant of detector response to low-energy (< 1 keV) X-rays, is most easily and accurately measured with higher energy X-rays using the techniques we present here. The shape of the differential width distribution also provides a clear indication of the extent to which the detector is fully depleted.
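The quantile-matching idea behind the cumulative width distribution can be sketched as follows. This is a hedged illustration with assumed numbers, not the authors' analysis code: it takes an attenuation length of roughly 29 µm for 5.9 keV X-rays in silicon and the 50 µm device thickness, and assumes the widest events interact shallowest, i.e. closest to the back-illuminated entrance window, so that matching width quantiles to depth quantiles of the (truncated) exponential absorption law yields a width-versus-depth relation.

```python
import math

def depth_quantile(p, atten_len_um, thickness_um):
    """Interaction depth (below the entrance window) at cumulative
    probability p, for exponential absorption truncated at the
    device thickness."""
    norm = 1.0 - math.exp(-thickness_um / atten_len_um)
    return -atten_len_um * math.log(1.0 - p * norm)

def width_vs_depth(widths, atten_len_um=29.0, thickness_um=50.0):
    """Quantile-match measured widths (widest first: shallowest events
    diffuse most in a back-illuminated device) to interaction depths."""
    ordered = sorted(widths, reverse=True)
    n = len(ordered)
    return [(w, depth_quantile((i + 0.5) / n, atten_len_um, thickness_um))
            for i, w in enumerate(ordered)]

pairs = width_vs_depth([4.1, 9.3, 6.2, 11.8])  # sigma_spatial values in microns
```

In practice one would use the full measured width distribution rather than four illustrative values; the attenuation length should be taken from tabulated silicon absorption data for the energy in question.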
We have used these diagnostics to tune and validate an implementation of the POISSON CCD simulation. We find reasonable agreement with measurements by reducing the amount of (lateral) diffusion, relative to values reported for VRO devices, 18 by of order 35% when the detector is fully depleted. The poorer agreement for partially depleted configurations indicates that further development of the simulation is needed. We will report on this in a future contribution.
We investigated use of Gaussian fits to individual events to estimate their amplitudes. We found that at 5.9 keV this estimator is not as accurate as traditional methods which sum pixel values exceeding a threshold. For a significant fraction of events at this energy, our detector provides insufficient spatial resolution to measure event shape. At 1.25 keV, the fitting method clearly recovers signal that is neglected by traditional pixel summation algorithms. While this produces a less biased amplitude estimate, it does not improve spectral resolution at the energies we've investigated. This may be a consequence of the relatively low signal-to-noise ratio of the sub-threshold pixels included in the fits. We speculate that more sophisticated fitting algorithms, for example incorporating priors based on readily measurable event characteristics, may be more successful.
We shall also explore this approach in future work.
Finally, our results highlight an important and perhaps under-appreciated mechanism through which read noise can degrade spectral resolution at lower X-ray energies. It is widely understood that spectral resolution is degraded when charge is shared amongst multiple pixels, since the readout noise associated with each pixel sums in quadrature in the event amplitude calculation. We have shown the importance of a second mechanism by which readout noise degrades spectral resolution: the effective loss of signal in pixels with values below threshold (see the right-hand panels of Figure 8). Since the threshold must be a multiple of readout noise, the magnitude of this lost sub-threshold signal increases as readout noise increases. In fact, the spectral broadening due to lost, sub-threshold charge in our 1.25 keV data is considerably larger than that due to the noise injected by the sense node amplifier itself, as is demonstrated in the righthand panels of Figure 9.
The spectral resolution is much worse than the theoretical (Fano plus read noise) expectation, and the charge lost is largest (peak shift 10% to 20%) for pixel multiplicities n=1 and n=2. As a result, the integrated spectral resolution, summed over all events (∼ 150 eV FWHM), is considerably worse than expected from Fano noise plus the weighted quadrature sum of readout noise alone (< 90 eV FWHM).
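The "Fano plus read noise" expectation quoted here can be reproduced with a back-of-the-envelope calculation. This is a hedged sketch using standard approximate silicon values (w ≈ 3.65 eV per electron-hole pair, Fano factor ≈ 0.115) and the 4.5-electron RMS per-pixel noise quoted in the Figure 9 caption; the paper's "< 90 eV" figure additionally weights over the measured multiplicity distribution, which a fixed 4-pixel sum does not.

```python
import math

W_SI = 3.65    # eV per electron-hole pair in Si (approximate)
FANO = 0.115   # Fano factor for Si (approximate)
GAUSS_FWHM = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.355

def fano_fwhm_ev(energy_ev):
    """Fano-limited spectral FWHM."""
    return GAUSS_FWHM * math.sqrt(FANO * energy_ev * W_SI)

def readnoise_fwhm_ev(noise_e_rms, n_pixels):
    """Read noise of each summed pixel adds in quadrature."""
    return GAUSS_FWHM * W_SI * noise_e_rms * math.sqrt(n_pixels)

def total_fwhm_ev(energy_ev, noise_e_rms, n_pixels):
    """Fano broadening and total read noise combined in quadrature."""
    return math.hypot(fano_fwhm_ev(energy_ev),
                      readnoise_fwhm_ev(noise_e_rms, n_pixels))

# At 1.25 keV, summing 4 pixels with 4.5 e- RMS noise each:
print(round(total_fwhm_ev(1250.0, 4.5, 4)))  # ~94 eV, before threshold losses
```

Since the measured 1.25 keV resolution (∼ 150 eV FWHM) far exceeds this estimate, the excess must come from sub-threshold charge loss rather than from the injected noise alone, which is the point made above.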
Conclusions and Future Work
Mega-pixel X-ray sensors with large ratios of depletion thickness to pixel size are required for future strategic missions such as Lynx and AXIS. We find that direct charge-cloud size measurements in a 50 µm thick, 8 µm-pixel device are useful for validating a basic drift and diffusion simulation of such devices, although more work is required to achieve accurate modeling over a wide range of operating conditions. Our measurements and simulations suggest that read-noise dependent, sub-threshold charge loss may be the most important determinant of low-energy spectral resolution and it is therefore essential that this process is fully understood when establishing sensor noise requirements for these missions. In fact, the detector model described here predicts that sensors capable of meeting the low-energy spectral resolution requirements of AXIS and Lynx require noise considerably below their notional upper limit of 4 electrons RMS.
To test this proposition and to make it quantitative, we are extending this work in several ways.
We are currently acquiring data at lower X-ray energies with lower noise detectors. These data will provide more stringent tests of the current simulation. We are also working to improve the fidelity of the simulation by refining the treatment of lateral diffusion and incorporating a more realistic model of the detector entrance window. We expect this work to lead to more robust detector requirements for future X-ray missions.
Acknowledgements
We are deeply saddened by the passing of our co-author and colleague Barry E. Burke. His work advanced imaging technology and enabled the contributions of generations of astronomers to scientific knowledge. He was a most generous, and in the very highest sense, a good human being.
POISSON CCD is a software package that, given the structural and operational characteristics of the detector, (1) solves Poisson's equation for the electric field in three dimensions at all points within the detector volume, and (2) tracks the drift and diffusion of electrons placed within this volume until they reach the buried channel, where they are collected in pixels defined by the channel stop and barrier gate fields. We set up the simulated detector to reflect as nearly as possible the characteristics and operating conditions of the CCID93 device described in Section 2.1: 50-µm thickness, 8-µm pixels, and a 3-phase gate structure with one collecting gate held at +1.5 V and two barrier gates held at −1.5 V. Implant and doping parameters were provided by MIT Lincoln Laboratory. The simulation volume covered 9×9 pixels, with nonlinear grid spacing in the vertical direction allowing finer grid sampling of 15 nm at the front and back sides of the device to properly capture the electric field structure, and coarser grid sampling (up to 150 nm) throughout the bulk of the device. The electric field was simulated at an operating temperature of −50 °C at each of the V sub settings used for the real device.

For each V sub, we simulated detection of 200,000 5.9 keV and 1.25 keV photons by introducing small clouds of electrons and allowing them to drift and diffuse until collected in pixels. The number of electrons in each cloud was drawn from a Fano noise distribution appropriate for the photon energy in Si (e.g., 1615±14 electrons for 5.9 keV), and the interaction depth was drawn from an exponential distribution with the appropriate attenuation length. Each interaction location was generated from a uniform distribution across the central pixel of the 9×9-pixel simulation volume. The final pixel distribution of electrons was converted into an event island in energy units.
Fig 1: Scatter plots of summed-pixel amplitude vs. width (2D Gaussian σ) for X-ray events at two energies and three values of the substrate bias (V sub). The color indicates the areal density of events in the width/amplitude plane. Histograms of event amplitude and width are projected onto the right and bottom edges, respectively, of each panel.

Here color indicates density of events in the amplitude-width plane, and projections of the data in these two axes show the corresponding distributions in amplitude and width. Approximately 230,000 events are represented in each panel of the figure. The (vertical) amplitude distributions show the detector's spectral response function at energies of 5.9 and 6.4 keV (upper panels) and 1.25 keV (lower panels). Silicon K-escape and fluorescence lines are evident in the upper panels. The (horizontal) width distributions reflect the amount of lateral diffusion experienced by charge packets before collection. Comparison of the three columns in Figure 1 shows how the device response changes with the strength of the electric fields in the detector's photosensitive volume, with the strongest field (V sub = −20V) on the left and the weakest (V sub = −1 V) on the right. Thus in the left column the relatively strong fields provide relatively rapid charge collection, and thus relatively small event widths, as well as relatively good spectral (amplitude) resolution. As field strength decreases in the center and right columns, the figures show the effects of progressively smaller fields (V sub = −5V and −1V, respectively): both width and amplitude distributions become progressively wider. In fact, the double-peaked width distribution in Figure 1c suggests that the detector is no longer fully depleted at this low value of substrate bias. Events with the largest widths at this value of V sub also
For σ < 3 µm, our fitting algorithm fails to converge, and tends to return values of σ ≈ 2 µm. This accounts for the small peaks at this value in the width distributions shown in Figures 1 and 2. Such spatially unresolved charge distributions can arise in different ways. At 5.9 keV, some X-rays can penetrate deep into the detector to interact close to the CCD's buried channel, and thus can produce events that suffer very little diffusion. After pixelization all information about the intrinsic size of these events is lost and the Gaussian model does not describe them well. At 1.25
Fig 2: Histograms of fitted event width (Gaussian σ spatial) by number of pixels above the split threshold (pixel multiplicity) for various values of the substrate bias V sub and for two X-ray energies.
Fig 3: Simulated best-fit Gaussian σ spatial as a function of the photon interaction depth for different values of the substrate bias parameter V sub and a photon energy of 5.9 keV. The device is 50 µm thick. Contours show 5%, 10%, 20%, 40%, and 70% of the maximum density of points for each V sub.
Fig 4: Measured average event profiles produced by 5.9 keV X-rays. The top, middle and bottom rows show results for V sub = −20 V, −5 V and −1 V, respectively. The left, middle and right columns are for events in the 75th, 50th and 25th width percentiles, respectively. Horizontal and vertical projections of the two-dimensional distributions, together with best-fit Gaussians, are shown for each case. Here the units of the ordinates are electrons per pixel.
Fig 5: Measured cumulative distributions of event width (σ spatial) for 5.9 keV X-rays as a function of absorption distance (depth) below the entrance window.
As the magnitude of the substrate bias decreases, the width distribution extends to larger event widths, until a clear transition occurs at V sub = −4V. This is caused by the formation of an undepleted region near the illuminated surface of the device. As internal field strength drops further, the events in the undepleted region form a separate hump in the histogram at much larger widths. In this way the differential width distributions provide a readily measurable indicator of full depletion within the device. A simple 1D calculation of the V sub at which full depletion of a 50 µm-thick slab of silicon with a doping concentration of 2.65×10^12 cm^−3 would occur yields a value of V sub = 5.1 V, in good agreement with both our experimental and simulated full-depletion transitions.
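The one-dimensional depletion estimate quoted here follows from the standard abrupt-junction result for a uniformly doped slab, V = q N_d d^2 / (2 ε). Plugging in the stated doping and thickness with textbook constants reproduces the ~5.1 V figure (a quick numerical check, not from the paper):

```python
Q_E = 1.602e-19            # elementary charge, C
EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon, F/m
N_D = 2.65e12 * 1e6        # doping concentration, cm^-3 -> m^-3
D = 50e-6                  # device thickness, m

# Full depletion of a uniformly doped slab: V = q * N_d * d^2 / (2 * eps)
v_full = Q_E * N_D * D**2 / (2.0 * EPS_SI)
print(round(v_full, 1))    # ~5.1 V, matching the value quoted in the text
```

The quadratic dependence on thickness is why thicker fully depleted devices (or lower doping) require substantially larger substrate bias.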
Figure 7 compares the event width distributions derived from the simulations with those from the measurements. We tuned the simulation's DIFFMULTIPLIER parameter (described in Section 2.4 above) for each V sub so that the peak of the simulated distribution matched the peak of the measured distribution. The tuned values have a small range of 1.4-1.7, with smaller values for smaller (less negative) V sub. The upper end of this range is not too different from the value DIFFMULTIPLIER ≈ 1.9 expected for a canonical value of the thermal velocity electron effective mass,23 although it differs noticeably from the value of 2.3 determined for (fully depleted) Vera Rubin Observatory devices.

Fig 6: The differential width distributions of events with pixel multiplicity greater than 3 produced by 5.9 keV X-rays. Distributions for a range of detector bias (V sub) conditions are shown.
Fig 8: Measured event amplitude spectra. Different colors show results for events of various pixel multiplicities. Left and right panels show results for 5.9 keV and 1.25 keV, respectively. Event amplitudes are calculated by summing all pixels above the split threshold (top panels) or by integrating under the Gaussian fit to the spatial charge distribution of each event (bottom panels).
Fig 9: Spectral response parameters FWHM (top) and peak shift (bottom) for 5.9 keV (left) and 1.25 keV (right). These were measured by fitting a single Gaussian to the spectra shown in Figures 8 and 10. Points with arrows indicate values out of the plot range. The spectral FWHM "theoretical limit" is the Fano spectral width convolved with pixel-based noise of 4.5 electrons RMS. At 5.9 keV, the spatial Gaussian integral summation method performs poorly compared to a simple sum of pixel values, except when the number of pixels above threshold is large. At 1.25 keV, the spatial Gaussian performs similarly to or better than the pixel summation method. See Tables 2 and 3 in Appendix A for a full tabulation of values.
Fig 10: Spectra of simulated events from 5.9 keV photons (left panels) and 1.25 keV photons (right panels), with the event energies calculated by summing all pixels above split threshold (top panels) and by integrating under a Gaussian fit (bottom panels). Spectra are separated by the number of pixels above the split threshold, as in Figure 8. At both energies, the Gaussian integral method performs better at estimating the most likely energy of the ensemble of events, as the peak is closer to the expected energy indicated by the vertical line. The effects of few-pixel events are similar to those shown in Figure 8, and are caused by poor performance of the Gaussian fit for undersampled distributions.
Table 1: Requirements for future X-ray Probe and Flagship missions.

    Parameter                AXIS 8 (Probe)         Lynx 7 (Flagship)
    Angular resolution 1     0.5"                   0.5"
    Energy range             0.2 - 12 keV           0.2 - 10 keV
    Spectral resolution 2    60 eV FWHM @ 1 keV     70 eV FWHM @ 0.3 keV
    Pixel size               16 µm                  16 µm
    Read noise               ≤ 4 electrons RMS      ≤ 4 electrons RMS

Notes: 1. Half-power diameter on axis. 2. Full width at half maximum.
Table 2: Spectral resolution using different event reconstruction methods.
Table 3: Spectral peak shift a using different event reconstruction methods.

                          Measurements                            Simulations
    Energy     # pix   frac. of   Pixel sum    Gauss. int.   frac. of   Pixel sum    Gauss. int.
                       events     peak shift   peak shift    events     peak shift   peak shift
    5.9 keV    any     1          −1.2%        +1.0%         1          −0.9%        +0.6%
               1       0.02       −0.7%        −39.4%        0.04       −0.3%        −44.3%
               2       0.06       −0.8%        −13.1%        0.08       −0.5%        +1.0%
               3       0.06       −1.1%        −6.8%         0.07       −1.0%        −7.6%
               4       0.30       −1.3%        +0.9%         0.35       −0.9%        +5.9%
               5       0.16       −1.6%        +1.4%         0.14       −1.5%        +1.0%
               6       0.25       −1.2%        +0.1%         0.22       −1.0%        +0.2%
               7       0.13       −0.7%        −0.9%         0.09       −0.6%        −1.2%
               8       0.02       +0.0%        −1.4%         0.004      +0.0%        −2.3%
    1.25 keV   any     1          −7.6%        −1.1%         1          −6.0%        +1.1%
               1       0.002      −13.2%       −49.8%        0.001      −8.6%        −20.3%
               2       0.082      −16.5%       −5.9%         0.03       −14.0%       −3.4%
               3       0.353      −11.3%       −3.3%         0.27       −11.3%       −1.5%
               4       0.526      −4.9%        +1.2%         0.63       −4.6%        +2.4%
               5       0.034      −0.4%        +0.7%         0.06       −2.2%        +1.9%
               6       0.000      ...          ...           0.002      +2.6%        +4.9%

a: 'Peak shift' is defined as the difference between the emission line energy and the peak (mode) of a Gaussian fit to the spectrum.
We thank Michelle Gabutti for assistance with data acquisition, Craig Lage for providing key updates to and generous assistance with the POISSON CCD code, and Sven Herrmann for valuable discussions. We gratefully acknowledge support of this work by NASA through Strategic Astrophysics Technology grants 80NSSC18K0138 and 80NSSC19K0401 to MIT and by MKI's Kavli Research Infrastructure Fund.

Appendix A: Tabulation of Spectral Response Parameters

We include here in Tables 2 and 3 a tabulation of the spectral response parameters plotted in Figure 9 for different event pixel multiplicities, as discussed in Section 3.3. The spectral FWHM and peak shift are measured by fitting a single Gaussian to the spectra shown in Figures 8 and 10.

a: 'Fano + noise' is the theoretical Fano-limit spectral FWHM added in quadrature with Gaussian readout noise in each pixel. b: Conversion from measured analog-to-digital units (ADU) to eV uses a gain factor derived from the peak of the 5.9 keV histogram of events with pixel multiplicity ≥ 5 (see Fig. 8a). The simulations assume an electron liberation energy of 3.65 eV to convert to energy, and the same measured gain factor of 1.718 eV ADU −1 to compare to lab data ADU values.
References

1. B. Burke, R. Mountain, P. Daniels, et al., "CCD soft X-ray imaging spectrometer for the ASCA satellite," IEEE Transactions on Nuclear Science 41, 375-385 (1994).
2. G. P. Garmire, M. W. Bautz, P. G. Ford, et al., "Advanced CCD imaging spectrometer (ACIS) instrument on the Chandra X-ray Observatory," in X-Ray and Gamma-Ray Telescopes and Instruments for Astronomy, J. E. Truemper and H. D. Tananbaum, Eds., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 4851, 28-44 (2003).
3. F. Jansen, D. Lumb, B. Altieri, et al., "XMM-Newton observatory. I. The spacecraft and operations," Astronomy and Astrophysics 365, L1-L6 (2001).
4. N. Gehrels, G. Chincarini, P. Giommi, et al., "The Swift Gamma-Ray Burst Mission," Astrophysical Journal 611, 1005-1020 (2004).
5. K. Mitsuda, M. Bautz, H. Inoue, et al., "The X-Ray Observatory Suzaku," Publications of the Astronomical Society of Japan 59, S1-S7 (2007).
6. P. Predehl, R. Andritschke, V. Arefiev, et al., "The eROSITA X-ray telescope on SRG," Astronomy and Astrophysics 647, A1 (2021).
7. J. A. Gaskin, D. A. Swartz, A. Vikhlinin, et al., "Lynx X-Ray Observatory: an overview," Journal of Astronomical Telescopes, Instruments, and Systems 5, 021001 (2019).
8. R. Mushotzky, "AXIS: a probe class next generation high angular resolution x-ray imaging satellite," in Space Telescopes and Instrumentation 2018: Ultraviolet to Gamma Ray, Proc. SPIE 10699 (2018).
9. D. E. Groom, S. E. Holland, M. E. Levi, et al., "Back-illuminated, fully-depleted CCD image sensors for use in optical and near-IR astronomy," Nuclear Instruments and Methods in Physics Research A 442, 216-222 (2000).
10. F. W. High, J. Rhodes, R. Massey, et al., "Pixelation Effects in Weak Lensing," Publications of the Astronomical Society of the Pacific 119, 1295-1307 (2007).
11. H. Marshall, S. Heine, A. Garner, et al., "A small satellite version of a soft x-ray polarimeter," in Space Telescopes and Instrumentation: Ultraviolet to Gamma Ray, Proc. SPIE 11444, 11444Y (2020).
12. M. Schattenburg, R. Heilmann, H. Marshall, et al., "A diffraction limited Wolter nested-shell telescope concept with pico-radian resolution," in Space Telescopes and Instrumentation: Ultraviolet to Gamma Ray, Proc. SPIE 11444 (2020).
13. M. W. Bautz, B. E. Burke, M. Cooper, et al., "Toward fast, low-noise charge-coupled devices for Lynx," Journal of Astronomical Telescopes, Instruments, and Systems 5, 021015 (2019).
14. A. D. Falcone, R. P. Kraft, M. W. Bautz, et al., "Overview of the high-definition x-ray imager instrument on the Lynx x-ray surveyor," Journal of Astronomical Telescopes, Instruments, and Systems 5, 021019 (2019).
15. S. V. Hull, A. D. Falcone, E. Bray, et al., "Hybrid CMOS detectors for the Lynx x-ray surveyor high definition x-ray imager," Journal of Astronomical Telescopes, Instruments, and Systems 5, 021018 (2019).
16. A. Kenter, R. Kraft, and T. Gauron, "Monolithic CMOS detectors for use as x-ray imaging spectrometers," in UV, X-Ray, and Gamma-Ray Space Instrumentation for Astronomy XXI, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 11118, 1111806 (2019).
17. S. Herrmann, J. Wong, T. Chattopadhay, et al., "MCRC V1: development of integrated readout electronics for next generation X-ray CCD detectors for future satellite observatories," in X-ray, Optical, and Infrared Detectors for Astronomy, Proc. SPIE 11454 (2020).
18. C. Lage, A. Bradshaw, J. Anthony Tyson, et al., "Poisson CCD: A dedicated simulator for modeling CCDs," Journal of Applied Physics 130, 164502 (2021).
19. Ž. Ivezić, S. M. Kahn, J. A. Tyson, et al., "LSST: From Science Drivers to Reference Design and Anticipated Data Products," Astrophysical Journal 873, 111 (2019).
20. I. V. Kotov, J. Haupt, P. Kubanek, et al., "X-ray analysis of fully depleted CCDs with small pixel size," Nuclear Instruments and Methods in Physics Research A 787, 12-19 (2015).
21. G. Bredthauer, "Archon: a modern controller for high performance astronomical CCDs," in Ground-based and Airborne Instrumentation for Astronomy V, Proc. SPIE 9147 (2014).
22. C. B. Markwardt, "Non-linear Least-squares Fitting in IDL with MPFIT," in Astronomical Data Analysis Software and Systems XVIII, D. A. Bohlender, D. Durand, and P. Dowler, Eds., Astronomical Society of the Pacific Conference Series 411, 251 (2009).
23. M. A. Green, "Intrinsic concentration, effective densities of states, and effective mass in silicon," Journal of Applied Physics 67(6), 2944-2954 (1990).
24. M. S. Haro, G. Fernandez Moroni, and J. Tiffenberg, "Studies on Small Charge Packet Transport in High-Resistivity Fully Depleted CCDs," IEEE Transactions on Electron Devices 67(5), 1993-2000 (2020).
25. I. Prigozhin, S. Dominici, and E. Bellotti, "FBMC3D - A Large-Scale 3-D Monte Carlo Simulation Tool for Modern Electronic Devices," IEEE Transactions on Electron Devices 68(1), 279-287 (2021).
26. G. Prigozhin, N. R. Butler, S. E. Kissel, et al., "An experimental study of charge diffusion in the undepleted silicon of X-ray CCDs," IEEE Transactions on Electron Devices 50, 246-253 (2003).
27. E. Miyata, M. Miki, J. Hiraga, et al., "Application of the Mesh Experiment for the Back-Illuminated Charge-Coupled Device: I. Experiment and the Charge Cloud Shape," Japanese Journal of Applied Physics 41, 5827 (2002).
28. G. G. Pavlov and J. A. Nousek, "Charge diffusion in CCD X-ray detectors," Nuclear Instruments and Methods in Physics Research A 428, 348-366 (1999).
29. I. V. Kotov, S. Hall, D. Gopinath, et al., "Analysis of the EMCCD point-source response using x-rays," Nuclear Instruments and Methods in Physics Research A 985, 164706 (2021).
| [] |
Electronic structure of undoped and potassium doped coronene investigated by electron energy-loss spectroscopy

Friedrich Roth, Johannes Bauer, Benjamin Mahns, Bernd Büchner, and Martin Knupfer
IFW Dresden, P.O. Box 270116, D-01171 Dresden, Germany

arXiv:1201.2002v1 [cond-mat.supr-con] 10 Jan 2012
DOI: 10.1103/PhysRevB.85.014513
PDF: https://arxiv.org/pdf/1201.2002v1.pdf
(Dated: December 21, 2013)

Abstract: We performed electron energy-loss spectroscopy studies in transmission in order to obtain insight into the electronic properties of potassium intercalated coronene, a recently discovered superconductor with a rather high transition temperature of about 15 K. A comparison of the loss function of undoped and potassium intercalated coronene shows the appearance of several new peaks in the optical gap upon potassium addition. Furthermore, our core level excitation data clearly signal filling of the conduction bands with electrons.
I. INTRODUCTION
Organic molecular crystals, built from π conjugated molecules, have been the subject of intense research for a number of reasons. 1-4 Due to their relatively open crystal structure, their electronic properties can easily be modified by the addition of electron acceptors and donors, which can lead to novel and, in some cases, intriguing or unexpected physical properties. A prominent example for the latter is the formation of metallic, superconducting or insulating phases in the alkali metal doped fullerides, depending on their stoichiometry. 5-8 In particular the superconducting fullerides have attracted a lot of attention, and rather high transition temperatures (Tc's) in e.g. K3C60 (Tc = 18 K) 9, Cs2RbC60 (Tc = 33 K) 10 or Cs3C60 (Tc = 38 K 11,12 and Tc = 40 K under 15 kbar 13) have been reported. In this context, further interesting phenomena were reported in alkali metal doped molecular materials, such as the observation of an insulator-metal-insulator transition in alkali doped phthalocyanines 14, a transition from a Luttinger to a Fermi liquid in potassium doped carbon nanotubes 15, or the formation of a Mott state in potassium intercalated pentacene. 16 However, in the case of organic superconductors, no new systems with high Tc's similar to those of the fullerides have been discovered in the past decade. Recently, the field was renewed by the discovery of superconductivity in alkali doped picene with a Tc up to 18 K. 17 Furthermore, after this discovery superconductivity was also reported in other alkali metal intercalated polycyclic aromatic hydrocarbons, such as phenanthrene (Tc = 5 K) 18,19 and coronene (Tc = 15 K). 20

Motivated by these discoveries and numerous publications on picene, both experimental 17,21-23 and theoretical 24-30, an investigation of the physical properties of coronene in the undoped and doped state is required in order to develop an understanding of the superconducting and normal state properties.
The coronene molecule is made up of six benzene rings arranged in a circle, as depicted in the left panel of Fig. 1. In the condensed phase, coronene adopts a monoclinic crystal structure, with lattice constants a = 16.094 Å, b = 4.690 Å, c = 10.049 Å, and β = 110.79°; the space group is P21/a, and the unit cell contains two inequivalent molecules. 31 The molecules arrange in a herringbone manner (cf. Fig. 1), which is typical for many aromatic molecular solids. Furthermore, coronene crystals show two structural phase transitions, depending on pressure and temperature, in the range between 140-180 K 32,33 and at 50 K. 34 In this contribution we report on an investigation of the electronic properties of undoped and potassium doped coronene using electron energy-loss spectroscopy (EELS). EELS studies of other undoped and doped molecular materials have in the past provided useful insight into their electronic properties. 35-37 We demonstrate that potassium addition leads to a filling of the conduction bands and to the appearance of several new low energy excitations. In addition, our analysis of the spectra allows a determination of the dielectric function of K doped coronene.
II. EXPERIMENTAL
Thin films of coronene were prepared by thermal evaporation under high vacuum onto single crystalline KBr substrates kept at room temperature, with a deposition rate of 0.8 nm/min and an evaporation temperature of about 500 K. The film thickness was about 100 nm. These coronene films were floated off in distilled water, mounted onto standard electron microscopy grids and transferred into the spectrometer. Prior to the EELS measurements the films were characterized in-situ using electron diffraction. All observed diffraction peaks were consistent with the crystal structure of coronene. 20,31 Moreover, the diffraction spectra show no pronounced texture, which leads to the conclusion that our films are essentially polycrystalline.
All electron diffraction studies and loss function measurements were carried out using the 172 keV spectrometer described in detail elsewhere. 38 We note that at this high primary beam energy only singlet excitations are possible. The energy and momentum resolution were chosen to be 85 meV and 0.03 Å−1, respectively. We have measured the loss function Im[-1/ε(q,ω)], which is proportional to the dynamic structure factor S(q,ω), for a momentum transfer q parallel to the film surface [ε(q,ω) is the dielectric function]. The C 1s and K 2p core level studies were measured with an energy resolution of about 200 meV and a momentum resolution of 0.03 Å−1. In order to obtain direction independent core level excitation information, we have determined the core level data for three different momentum directions, such that the sum of these spectra represents an averaged polycrystalline sample. 39 The core excitation spectra have been corrected for a linear background, which has been determined by a linear fit of the data 10 eV below the excitation threshold.
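The measured quantity Im[-1/ε(q,ω)] follows from the complex dielectric function via the identity Im(-1/ε) = ε2/(ε1² + ε2²). A minimal sketch of this relation (illustrative numbers, not measured data):

```python
def loss_function(eps1, eps2):
    """EELS loss function Im(-1/eps) from the real and imaginary
    parts of the dielectric function eps = eps1 + i*eps2."""
    return eps2 / (eps1**2 + eps2**2)

# Illustrative check against a direct complex evaluation:
eps = 1.0 + 1.0j
print(loss_function(eps.real, eps.imag))   # 0.5
print((-1.0 / eps).imag)                   # 0.5
```

Note that the loss function therefore peaks not at maxima of ε2 (absorption) but where both ε1 and ε2 are small, i.e. at collective (plasmon) excitations.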
Potassium was added in several steps by evaporation from a commercial SAES (SAES GETTERS S.p.A., Italy) getter source under ultra-high vacuum conditions (base pressure lower than 10 −10 mbar) until a doping level of about K 3 coronene was achieved, which is reported to be the superconducting phase. 20 In detail, in each doping step, the sample was exposed to potassium for 5 min, the current through the SAES getter source was 6 A and the distance to the sample was about 30 mm. During potassium addition, the film was kept at room temperature.
In order to perform a Kramers-Kronig analysis (KKA), the raw data have been corrected by subtracting contributions of multiple scattering processes and by eliminating contributions of the direct beam by fitting the plasmon peak with a model function. 40
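A minimal numerical sketch of the Kramers-Kronig step itself, run on model Drude data over a hypothetical energy grid (not the actual correction procedure applied to the raw spectra): from the loss function Im[-1/ε](ω) the real part of 1/ε follows from Re[1/ε](ω) = 1 − (2/π) P∫ ω′ Im[-1/ε](ω′)/(ω′² − ω²) dω′, after which ε = 1/(Re[1/ε] − i·Im[-1/ε]).

```python
import numpy as np

# Model "measured" loss function: a Drude metal (hypothetical parameters, eV)
wp, gamma = 4.35, 0.40
w = np.arange(0.005, 60.0, 0.01)            # energy grid in eV
eps_exact = 1.0 - wp**2 / (w**2 + 1j * gamma * w)
loss = (-1.0 / eps_exact).imag              # Im(-1/eps)

def kk_reconstruct_eps(w, loss):
    """Discrete principal-value Kramers-Kronig transform of the
    loss function, returning the complex dielectric function."""
    dw = w[1] - w[0]
    re_inv = np.empty_like(w)
    for i, wi in enumerate(w):
        integrand = w * loss / (w**2 - wi**2)
        integrand[i] = 0.0                  # skip the singular point (PV)
        re_inv[i] = 1.0 - (2.0 / np.pi) * integrand.sum() * dw
    return 1.0 / (re_inv - 1j * loss)       # eps = 1 / (Re[1/eps] + i Im[1/eps])

eps_rec = kk_reconstruct_eps(w, loss)
i0 = np.abs(w - 2.0).argmin()
print(eps_rec[i0], eps_exact[i0])           # reconstructed vs exact dielectric function
```

On this model the reconstruction reproduces the exact Drude ε to within the discretization error of the grid.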
III. RESULTS AND DISCUSSION
In the left panel of Fig. 2 we show C 1s and K 2p core level excitations of undoped and potassium doped coronene. These data can be used to analyze the doping induced changes and furthermore to determine the stoichiometry of the potassium doped coronene films.
Moreover, the C 1s excitations represent transitions into empty C 2p-derived levels, and thus allow us to probe the projected unoccupied electronic density of states of carbon-based materials. 41-43 Both spectra were normalized at the step-like structure in the region between 291 eV and 293 eV, i.e. to the σ* derived intensity, which is proportional to the number of carbon atoms. For the undoped case (black circles), we can clearly identify a sharp and strong feature in the range between 284-286 eV, and some additional small features at 286.1 eV, 286.9 eV, 288.6 eV and 290.2 eV as well as a broad excitation at ∼ 294 eV. Below ∼ 291 eV the structures can be assigned to transitions into π* states representing the unoccupied electronic states. Also, because of the higher symmetry of the coronene molecules compared to other carbon based materials like picene, we expect a degeneracy of the higher molecular orbitals, which can be directly seen in the well separated features above 286 eV. Such a well pronounced structure representing higher lying molecular orbitals is very similar to what was observed for fullerene. 43-46 When we focus on the dominating excitation feature right after the excitation onset, as shown in the right panel of Fig. 2, we can identify a characteristic fine structure with maxima at 284.75 eV, 284.85 eV, 285.15 eV and 285.35 eV. These features can be identified with maxima in the unoccupied density of states, since the carbon 1s levels of the different C atoms in coronene are virtually equivalent, as revealed by x-ray photoemission spectra (cf. Fig. 2 and Ref. 47). The peak width of the C 1s photoemission line is smaller than 1 eV, as seen in the inset of Fig. 2. (Note: the energy resolution for the XPS measurements is ≈ 0.35 eV; the broadening of the spectral linewidth is a result of lifetime effects, very similar to what was observed for C60, where all carbon atoms are symmetrically equivalent. 48) The observation of four structures in Fig. 2 (right panel) is in very good agreement with first-principles band structure calculations for undoped coronene, which predict four close lying conduction bands (arising from the doubly degenerate LUMO with e1g symmetry as well as the doubly degenerate LUMO+1 with e2u symmetry) in this energy region. 49 Interestingly, a related fine structure was reported for picene, both experimentally 21 and theoretically 50, which also shows superconductivity upon doping. The step-like structure above 291 eV corresponds to the onset of transitions into σ*-derived unoccupied levels.

Also in the case of potassium doped coronene (red open squares), the spectrum is dominated by a sharp excitation feature in the range between 284-286 eV and, in addition, by K 2p core excitations, which can be observed at 297.2 eV and 300 eV and can be seen as first evidence of the successful doping of the sample. In particular, the well structured K 2p core excitations signal the presence of K+ ions, in agreement with other studies of potassium doped molecular films. 7,51,52 Their spectral shape is clearly different from the much broader and less structured K 2p core excitation spectrum of a pure potassium multilayer. 53 We observe a clear broadening of the first C 1s feature, which might arise from a change of the binding energy of the C 1s core levels due to the introduced potassium atoms, as well as from lifetime effects in the metallic doped coronene. Moreover, upon charging, the coronene molecules will most likely undergo a Jahn-Teller distortion 54, which leads to a splitting of the electronic molecular levels and thus to a spectral broadening in our data. Also, band structure calculations of K3 coronene 20 predict shifts of the conduction bands as compared to pristine coronene, which would result in a broadening as seen in Fig. 2. Additionally, a mixing of different phases, which we cannot exclude, can result in a broadening of the C 1s signal.

Importantly, a clear reduction of the spectral weight of the first C 1s excitation feature is observed in Fig. 2 upon doping. Taking into account the four conduction bands that contribute to this intensity in the undoped case, this reduction is a clear signal of successful doping. The stoichiometry analysis can further be substantiated by comparing the K 2p and C 1s core excitation intensities to those of other doped molecular films with known stoichiometry, such as K6C60, 7 as in previous publications. 41,51,52 The results shown in the left panel of Fig. 2 indicate a doping level of K2.8 coronene, which again is in very good agreement with the other results discussed above.

Doping of coronene also causes major changes in the electronic excitation spectrum, as revealed in Fig. 3, where we show a comparison of the loss functions in an energy range of 0-10 eV measured using EELS for different doping steps. These data were taken with a small momentum transfer q of 0.1 Å−1, which represents the optical limit. For undoped coronene, we can clearly identify two main maxima at about 4.3 eV and 6.9 eV, which are due to excitations between the occupied and unoccupied electronic levels. In addition, zooming into the energy region around the excitation onset in the experimental spectra reveals an optical gap of 2.8 eV (cf. Fig. 3). This onset also represents a lower limit for the band gap of solid coronene. The excitation onset of coronene is followed by five additional well separated features at 3 eV, 3.3 eV, 3.5 eV, 3.7 eV and 3.95 eV. The main features of our spectrum are in good agreement with previous EEL measurements in the gas phase 55,56 and optical absorption data. 57,58

In general, the lowest electronic excitations in organic molecular solids usually are excitons, i.e. bound electron-hole pairs. 59-62 The decision criterion that has to be considered in order to analyse the excitonic character and binding energy of an excitation is the energy of the excitation with respect to the so-called transport energy gap, which represents the energy needed to create an unbound, independent electron-hole pair. Values from 3.29 eV 47 up to 3.54 eV 63 and 3.62 eV 64 have been published for coronene previously. Consequently, the lowest excitation that is observed can safely be attributed to a singlet exciton.

Upon doping, the spectral features become broader, and a downshift of the second, major excitation can be observed. We assign this downshift to a relaxation of the molecular structure of coronene as a consequence of the filling of anti-bonding π* levels. Furthermore, a decrease of intensity of the feature at 4.3 eV upon potassium intercalation is visible. In addition, for the doped films new structures at 1.15 eV, 1.9 eV, 2.55 eV, 3.3 eV and 3.75 eV are observed in the former gap of coronene. Taking into account the structural relaxation upon doping and the fact that the doped molecules are susceptible to a Jahn-Teller distortion 54, one would expect additional excitations in this energy region, similar to what has been observed previously for other doped π-conjugated materials. 51,65,66 These then arise from excitations between the split former HOMO and LUMO of coronene and from excitations from the former LUMO to the LUMO+1, which become possible as soon as the LUMO is occupied. A direct assignment of the observed features, however, requires further investigations.

In order to obtain deeper insight into potassium induced variations, we have analyzed the measured loss function, Im(-1/ε), of doped coronene using a Kramers-Kronig analysis (KKA). 38 This analysis has been carried out for a metallic ground state, since the observation of superconductivity 20 as well as band structure calculations 49 signal such a ground state for the stoichiometry of K3 coronene. Furthermore, the evolution of the loss function in Fig. 3 (left panel) indicates a filling of the former energy gap.

In Fig. 4 we present the results of this analysis in a wide energy range between 0-40 eV. The loss function (cf. Fig. 4, upper panel) is dominated by a broad maximum in the range between 20-27 eV, which can be assigned to the π + σ plasmon, a collective excitation of all valence electrons in the system. Various interband excitations at 2-20 eV can be observed as maxima in the imaginary part of the dielectric function, ε2 (lower panel). Most interestingly, in the energy range between 0-6 eV ε2 shows only 4 maxima, in contrast to the 5 features in the loss function. The zero crossing near 1.8 eV in the real part of the dielectric function, ε1 (middle panel), which can be seen as a definition of a charge carrier plasmon, as well as the absence of a peak at the same energy in the imaginary part of the dielectric function and accordingly in the optical conductivity, σ (cf. Fig. 5), lead to the conclusion that the second spectral feature at 1.9 eV represents a collective excitation (density oscillation), and we assign it to the charge carrier plasmon of doped coronene.

To test the consistency of our KKA, we check the sum rules as described elsewhere. 67 An evaluation of the f-sum rule for the loss function and the dielectric function after our KKA results in very good agreement of the two values, and further with the value expected from a calculation of the electron density of doped coronene. Moreover, a further sum rule, which is only valid for metallic systems, can be employed 67:

∫_0^∞ dω Im[-1/ε(q,ω)]/ω = π/2.

Here, after our KKA we arrive at a value of 1.562, very close to the expectation of π/2.

In order to obtain access to further information, we show in Fig. 5 the optical conductivity, σ, of potassium doped coronene. As can be seen from Fig. 5, the optical conductivity consists of a free electron contribution at low energies, due to intraband transitions in the conduction bands, and some additional interband contributions. From the optical conductivity at ω = 0 we can derive the dc conductivity of K2.8 coronene of about 6300 Ω−1 cm−1. We have additionally fitted the optical conductivity using a simple Drude-Lorentz model:

ε(ω) = 1 − ωD²/(ω² + iγDω) + Σj fj/(ωjo² − ω² − iγjω)   (1)

Within this model, the free electron contribution is described by the Drude part (ωD, γD) and the interband transitions are represented by the Lorentz oscillators (fj, γj and ωjo).

The resulting fit parameters are given in Table I and are also shown in Fig. 5 as a red line. This figure demonstrates that our model description of the data is very good. We note that the result of our fit also describes ε1 very well, which demonstrates the consistency of our description. We arrive at an unscreened plasma frequency ωD of about 4.35 eV. Furthermore, we can compare this value with the plasma frequency which we expect if we take the additional six conduction electrons per unit cell (2 coronene molecules per unit cell) in K3 coronene into account. We arrive at a value of 3.42 eV, which is somewhat lower than what we have derived using our fit procedure. This might be related to an effective mass of the charge carriers in doped coronene, which is reduced as compared to the free electron value, m0. For related (undoped) organic crystals such as rubrene 68, PTCDA 69,70 or pentacene 71, an effective mass also lower than m0 has been deduced previously. Finally, a comparison of the unscreened and screened plasma frequencies can be used to derive the averaged screening ε∞ of the charge carrier plasmon. 52 This would give a value of ε∞ ∼ (4.35/1.8)² ∼ 5.8. We note however that this is a very rough approximation for K3 coronene, since there are close lying interband excitations in the corresponding energy region.

IV. SUMMARY

To summarize, we have investigated the electronic properties of potassium doped coronene compared to undoped coronene using electron energy-loss spectroscopy in transmission. Core level excitation data signal the formation of a doped phase with a stoichiometry close to K3 coronene, which is reported to be superconducting. The reduction of the lowest lying C 1s excitation features clearly demonstrates that potassium addition leads to a filling of the coronene conduction bands. Furthermore, the electronic excitation spectrum changes substantially upon doping. In particular, several new low energy features show up upon potassium intercalation, and one of these features can be associated with the charge carrier plasmon.

FIG. 1. Left panel: Schematic representation of the molecular structure of coronene. Right panel: Crystal structure of coronene.

FIG. 3. Left panel: Evolution of the loss function of coronene in the range of 0-10 eV upon potassium doping. All spectra were normalized in the high energy region between 9-10 eV (the K content increases from bottom, red open circles, to top, blue open squares). Right panel: Loss function of solid coronene measured with a momentum transfer of q = 0.1 Å−1 at 20 K.

FIG. 4. Loss function (Im(-1/ε)), real part (ε1) and imaginary part (ε2) of the dielectric function of K doped coronene. The momentum transfer is q = 0.1 Å−1. Note that in contrast to Fig. 3, the loss function is corrected for the contribution of the direct beam and multiple scattering.

FIG. 5. Optical conductivity σ = ωε0ε2 of K doped coronene for a momentum transfer of q = 0.1 Å−1 (black circles), derived by a Kramers-Kronig analysis of the EELS intensity. Additionally, the result of a Drude-Lorentz fit is shown (red line).
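The metallic sum rule quoted above can be verified numerically on a model metallic loss function; a sketch with an illustrative Drude dielectric function (not the measured spectrum, whose integral gave 1.562):

```python
import numpy as np

wp, gamma = 4.35, 0.40                        # illustrative Drude parameters (eV)
w = np.arange(1e-4, 400.0, 2e-4)              # dense energy grid in eV
eps = 1.0 - wp**2 / (w**2 + 1j * gamma * w)
loss = (-1.0 / eps).imag                      # Im(-1/eps)

# Metallic sum rule: integral of Im(-1/eps)/w over w equals pi/2
integral = np.sum(loss / w) * (w[1] - w[0])
print(integral, np.pi / 2)
```

The rule holds because a metal screens perfectly, Re[1/ε](ω=0) = 0, so the Kramers-Kronig relation for 1/ε fixes the weighted integral of the loss function exactly.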
FIG. 2. Left panel: C 1s and K 2p core level excitations of undoped (black circles) and potassium doped (red open squares) coronene. Right panel: Zoom of the dominant C 1s excitation right after the excitation onset at 284.4 eV for undoped coronene, measured with higher energy resolution (85 meV). The inset shows the C 1s core-level spectrum of an undoped coronene film grown on SiO2, measured using x-ray photoemission spectroscopy (XPS).
TABLE I. Parameters derived from a Drude-Lorentz fit of the optical conductivity (as shown in Fig. 5) using formula (1). The Drude part is given by the plasma energy ωD and the width of the plasma (damping) γD, while fj, γj and ωjo are the oscillator strength, the width and the energy position of the Lorentz oscillators.

 i   ωjo (eV)   γj (eV)   fj (eV)   γD (eV)   ωD (eV)
 1     1.47      1.01      3.12      0.40      4.35
 2     2.25      0.65      2.55
 3     2.90      0.34      2.22
 4     3.57      0.22      1.06
 5     4.52      0.77      3.67
 6     5.48      1.83      5.26
 7     9.15      2.75      7.00
 8    10.64      2.84      6.76
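Several numbers quoted in the text can be reproduced directly from the Drude parameters in Table I and the crystal structure given in the introduction; a sketch of these consistency checks (the screened plasmon energy of ~1.8 eV is the ε1 zero crossing read off the text):

```python
import numpy as np

# Physical constants (SI)
eps0, qe, me, hbar = 8.854e-12, 1.602e-19, 9.109e-31, 1.0546e-34

# (1) dc conductivity from the Drude part: sigma(0) = eps0 * wD^2 / gD
wD = 4.35 * qe / hbar                         # Drude plasma frequency, rad/s
gD = 0.40 * qe / hbar                         # Drude damping, rad/s
sigma_dc = eps0 * wD**2 / gD                  # S/m
print(sigma_dc / 100.0)                       # ~6.4e3 Ohm^-1 cm^-1 (text: ~6300)

# (2) plasma frequency for 6 extra electrons per unit cell of K3 coronene
a, b, c = 16.094e-10, 4.690e-10, 10.049e-10   # monoclinic lattice constants, m
beta = np.radians(110.79)
V = a * b * c * np.sin(beta)                  # unit cell volume
n = 6.0 / V                                   # conduction electron density
wp = np.sqrt(n * qe**2 / (eps0 * me))
print(hbar * wp / qe)                         # ~3.42 eV

# (3) averaged screening of the charge carrier plasmon
print((4.35 / 1.8)**2)                        # ~5.8
```

The free-electron estimate in (2) comes out below the fitted ωD, consistent with the reduced effective mass discussed in the text.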
ACKNOWLEDGMENTS

We thank R. Schönfelder, R. Hübel and S. Leger for technical assistance. This work has been supported by the Deutsche Forschungsgemeinschaft (grant number KN393/14).
1. M. Granström, K. Petritsch, A. Arias, A. Lux, M. Andersson, and R. Friend, Nature 395, 257 (1998).
2. A. Dodabalapur, H. Katz, L. Torsi, and R. Haddon, Science 269, 1560 (1995).
3. G. Tsivgoulis and J. Lehn, Adv. Mater. 9, 39 (1997).
4. Y. Kaji, R. Mitsuhashi, X. Lee, H. Okamoto, T. Kambe, N. Ikeda, A. Fujiwara, M. Yamaji, K. Omote, and Y. Kubozono, Org. Electron. 10, 432 (2009).
5. O. Gunnarson, Alkali Doped Fullerides (World Scientific, Singapore, 2004).
6. J. Weaver and D. Poirier, in Solid State Physics, Vol. 48, edited by H. Ehrenreich and F. Spaepen (Academic Press, 1994), pp. 1-108.
7. M. Knupfer, Surf. Sci. Rep. 42, 1 (2001).
8. O. Gunnarsson, Rev. Mod. Phys. 69, 575 (1997).
9. A. Hebard, M. Rosseinsky, R. Haddon, D. Murphy, S. Glarum, T. Palstra, A. Ramirez, and A. Kortan, Nature 350, 600 (1991).
10. K. Tanigaki, T. Ebbesen, S. Saito, J. Mizuki, J. Tsai, Y. Kubo, and S. Kuroshima, Nature 352, 222 (1991).
11. A. Y. Ganin, Y. Takabayashi, Y. Z. Khimyak, S. Margadonna, A. Tamai, M. J. Rosseinsky, and K. Prassides, Nat. Mater. 7, 367 (2008).
12. A. Y. Ganin, Y. Takabayashi, P. Jeglic, D. Arcon, A. Potocnik, P. J. Baker, Y. Ohishi, M. T. McDonald, M. D. Tzirakis, A. McLennan, G. R. Darling, M. Takata, M. J. Rosseinsky, and K. Prassides, Nature 466, 221 (2010).
13. T. Palstra, O. Zhou, Y. Iwasa, P. Sulewski, R. Fleming, and B. Zegarski, Solid State Commun. 93, 327 (1995).
14. M. Craciun, S. Rogge, M. den Boer, S. Margadonna, K. Prassides, Y. Iwasa, and A. Morpurgo, Adv. Mater. 18, 320 (2006).
15. H. Rauf, T. Pichler, M. Knupfer, J. Fink, and H. Kataura, Phys. Rev. Lett. 93, 096805 (2004).
16. M. F. Craciun, G. Giovannetti, S. Rogge, G. Brocks, A. F. Morpurgo, and J. van den Brink, Phys. Rev. B 79, 125116 (2009).
17. R. Mitsuhashi, Y. Suzuki, Y. Yamanari, H. Mitamura, T. Kambe, N. Ikeda, H. Okamoto, A. Fujiwara, M. Yamaji, N. Kawasaki, Y. Maniwa, and Y. Kubozono, Nature 464, 76 (2010).
18. X. Wang, R. Liu, Z. Gui, Y. Xie, Y. Yan, J. Ying, X. Luo, and X. Chen, Nat. Commun. 2, 507 (2011).
19. P. L. de Andres, A. Guijarro, and J. A. Vergés, Phys. Rev. B 84, 144501 (2011).
20. Y. Kubozono, H. Mitamura, X. Lee, X. He, Y. Yamanari, Y. Takahashi, Y. Suzuki, Y. Kaji, R. Eguchi, K. Akaike, T. Kambe, H. Okamoto, A. Fujiwara, T. Kato, T. Kosugi, and H. Aoki, Phys. Chem. Chem. Phys. 13, 16476 (2011).
21. F. Roth, M. Gatti, P. Cudazzo, M. Grobosch, B. Mahns, B. Büchner, A. Rubio, and M. Knupfer, New J. Phys. 12, 103036 (2010).
22. F. Roth, B. Mahns, B. Büchner, and M. Knupfer, Phys. Rev. B 83, 165436 (2011).
23. H. Okazaki, T. Wakita, T. Muro, Y. Kaji, X. Lee, H. Mitamura, N. Kawasaki, Y. Kubozono, Y. Yamanari, T. Kambe, T. Kato, M. Hirai, Y. Muraoka, and T. Yokoya, Phys. Rev. B 82, 195114 (2010).
24. P. Cudazzo, M. Gatti, F. Roth, B. Mahns, M. Knupfer, and A. Rubio, Phys. Rev. B 84, 155118 (2011).
25. T. Kato, T. Kambe, and Y. Kubozono, Phys. Rev. Lett. 107, 077001 (2011).
26. A. Subedi and L. Boeri, Phys. Rev. B 84, 020508 (2011).
27. M. Kim, B. I. Min, G. Lee, H. J. Kwon, Y. M. Rhee, and J. H. Shim, Phys. Rev. B 83, 214510 (2011).
28. G. Giovannetti and M. Capone, Phys. Rev. B 83, 134508 (2011).
29. P. L. de Andres, A. Guijarro, and J. A. Vergés, Phys. Rev. B 83, 245113 (2011).
30. M. Casula, M. Calandra, G. Profeta, and F. Mauri, Phys. Rev. Lett. 107, 137006 (2011).
31. T. Echigo, M. Kimara, and T. Maruoka, Am. Miner. 92, 1262 (2007).
32. T. Yamamoto, S. Nakatani, T. Nakamura, K. Mizuno, A. H. Matsui, Y. Akahama, and H. Kawamura, Chem. Phys. 184, 247 (1994).
33. R. Totoki, T. Aoki-Matsumoto, and K. Mizuno, J. Lumin. 112, 308 (2005).
34. S. Nakatani, T. Nakamura, K. Mizuno, and A. Matsui, J. Lumin. 58, 343 (1994).
35. R. Schuster, M. Knupfer, and H. Berger, Phys. Rev. Lett. 98, 037402 (2007).
36. F. Roth, A. König, R. Kraus, M. Grobosch, T. Kroll, and M. Knupfer, Eur. Phys. J. B 74, 339 (2010).
37. M. Knupfer, T. Pichler, M. S. Golden, J. Fink, M. Murgia, R. H. Michel, R. Zamboni, and C. Taliani, Phys. Rev. Lett. 83, 1443 (1999).
38. J. Fink, Adv. Electron. Electron Phys. 75, 121 (1989).
39. R. Egerton, Electron Energy-Loss Spectroscopy in the Electron Microscope, 2nd ed. (Springer, 1996).
40. R. Schuster, R. Kraus, M. Knupfer, H. Berger, and B. Büchner, Phys. Rev. B 79, 045134 (2009).
41. F. Roth, A. König, R. Kraus, and M. Knupfer, J. Chem. Phys. 128, 194711 (2008).
42. M. Knupfer, T. Pichler, M. S. Golden, J. Fink, A. Rinzler, and R. E. Smalley, Carbon 37, 733 (1999).
43. M. Knupfer, J. Fink, J. Armbruster, and H. Romberg, Z. Phys. B 98, 9 (1995).
44. C. Chen, L. Tjeng, P. Rudolf, G. Meigs, J. Rowe, J. Chen, J. McCauley, A. Smith, A. McGhie, W. Romanow, and E. Plummer, Nature 352, 603 (1991).
. B Wästberg, S Lunell, C Enkvist, P A Brühwiler, A J Maxwell, N Mårtensson, 10.1103/PhysRevB.50.13031Phys. Rev. B. 5013031B. Wästberg, S. Lunell, C. Enkvist, P. A. Brühwiler, A. J. Maxwell, and N. Mårtensson, Phys. Rev. B, 50, 13031 (1994).
. E Sohmen, J Fink, W Krätschmer, Z. Phys. B-Condens. Mat. 8687E. Sohmen, J. Fink, and W. Krätschmer, Z. Phys. B-Condens. Mat., 86, 87 (1992).
. P G Schroeder, C B France, B A Parkinson, R Schlaf, 10.1063/1.1473217J. Appl. Phys. 919095P. G. Schroeder, C. B. France, B. A. Parkinson, and R. Schlaf, J. Appl. Phys., 91, 9095 (2002).
. D Poirier, T Ohno, G Krolll, Y Chen, P Benning, J Weaver, L Chibante, R Smalley, {10.1126/science.253.5020.646}Science. 253646D. Poirier, T. Ohno, G. KrollL, Y. Chen, P. Benning, J. Weaver, L. Chibante, and R. Smalley, Science, 253, 646 (1991).
. T Kosugi, T Miyake, S Ishibashi, R Arita, H Aoki, 10.1103/PhysRevB.84.020507Phys. Rev. B. 8420507T. Kosugi, T. Miyake, S. Ishibashi, R. Arita, and H. Aoki, Phys. Rev. B, 84, 020507 (2011).
. T Kosugi, T Miyake, S Ishibashi, R Arita, H Aoki, 10.1143/JPSJ.78.113704J. Phys. Soc. Jpn. 78113704T. Kosugi, T. Miyake, S. Ishibashi, R. Arita, and H. Aoki, J. Phys. Soc. Jpn, 78, 113704 (2009).
. K Flatz, M Grobosch, M Knupfer, 10.1063/1.2741539J. Chem. Phys. 126214702K. Flatz, M. Grobosch, and M. Knupfer, J. Chem. Phys., 126, 214702 (2007).
. F Roth, B Mahns, B Büchner, M Knupfer, 10.1103/PhysRevB.83.144501Phys. Rev. B. 83144501F. Roth, B. Mahns, B. Büchner, and M. Knupfer, Phys. Rev. B, 83, 144501 (2011).
. Y Ma, P Rudolf, C T Chen, F Sette, 10.1116/1.578011J. Vac. Sci. Technol. A. 101965Y. Ma, P. Rudolf, C. T. Chen, and F. Sette, J. Vac. Sci. Technol. A, 10, 1965 (1992).
. T Sato, H Tanaka, A Yamamoto, Y Kuzumoto, K Tokunaga, 10.1016/S0301-0104(02)00981-3Chem. Phys. 28791T. Sato, H. Tanaka, A. Yamamoto, Y. Kuzumoto, and K. Tokunaga, Chem. Phys., 287, 91 (2003).
. M A Khakoo, J M Ratliff, S Trajmar, 10.1063/1.459248J. Chem. Phys. 938616M. A. Khakoo, J. M. Ratliff, and S. Trajmar, J. Chem. Phys., 93, 8616 (1990).
. R Abouaf, S Diaz-Tendero, 10.1039/b904614cPhys. Chem. Chem. Phys. 1114R. Abouaf and S. Diaz-Tendero, Phys. Chem. Chem. Phys., 11, 5686 (2009). 14
. N Nijegorodov, R Mabbs, W Downey, 10.1016/S1386-1425(01)00457-7Spectroc. Acta Pt. A-Molec. Biomolec. Spectr. 572673N. Nijegorodov, R. Mabbs, and W. Downey, Spectroc. Acta Pt. A-Molec. Biomolec. Spectr., 57, 2673 (2001).
. K Ohno, H Inokuchi, T Kajiwara, 10.1246/bcsj.45.996Bull. Chem. Soc. Jpn. 45996K. Ohno, H. Inokuchi, and T. Kajiwara, Bull. Chem. Soc. Jpn., 45, 996 (1972).
M Pope, C E Swenberg, Electronic processes in organic crystals and polymers. New YorkOxford University PressSecond EditionM. Pope and C. E. Swenberg, Electronic processes in organic crystals and polymers (Oxford University Press, Second Edition, New York, 1999).
. M Knupfer, 10.1007/s00339-003-2182-9Appl. Phys. A. 77623M. Knupfer, Appl. Phys. A, 77, 623 (2003).
. R W Lof, M A Van Veenendaal, B Koopmans, H T Jonkman, G A Sawatzky, 10.1103/PhysRevLett.68.3924Phys. Rev. Lett. 683924R. W. Lof, M. A. van Veenendaal, B. Koopmans, H. T. Jonkman, and G. A. Sawatzky, Phys. Rev. Lett., 68, 3924 (1992).
. I Hill, A Kahn, Z Soos, R Pascal, 10.1016/S0009-2614(00)00882-4Chem. Phys. Lett. 327181I. Hill, A. Kahn, Z. Soos, and R. Pascal, Chem. Phys. Lett., 327, 181 (2000).
. R Rieger, M Kastler, V Enkelmann, K Müllen, 10.1002/chem.200800832Chem. Eur. J. 146322R. Rieger, M. Kastler, V. Enkelmann, and K. Müllen, Chem. Eur. J., 14, 6322 (2008).
. P I Djurovich, E I Mayo, S R Forrest, M E Thompson, 10.1016/j.orgel.2008.12.011Organic Electronics. 10515P. I. Djurovich, E. I. Mayo, S. R. Forrest, and M. E. Thompson, Organic Electronics, 10, 515 (2009).
. P A Lane, X Wei, Z V Vardeny, 10.1103/PhysRevLett.77.1544Phys. Rev. Lett. 771544P. A. Lane, X. Wei, and Z. V. Vardeny, Phys. Rev. Lett., 77, 1544 (1996).
. M Golden, M Knupfer, J Fink, J Armbruster, T Cummins, H Romberg, M Roth, M Sing, M Schmidt, E Sohmen, 10.1088/0953-8984/7/43/004J. Phys.-Condes. Matter. 78219M. Golden, M. Knupfer, J. Fink, J. Armbruster, T. Cummins, H. Romberg, M. Roth, M. Sing, M. Schmidt, and E. Sohmen, J. Phys.-Condes. Matter, 7, 8219 (1995).
G D Mahan, Many Particle Physics. Springer3nd editionG. D. Mahan, Many Particle Physics (Physics of Solids and Liquids) (Springer; 3nd edition, 2000).
. S.-I Machida, Y Nakayama, S Duhm, Q Xin, A Funakoshi, N Ogawa, S Kera, N Ueno, H Ishii, 10.1103/PhysRevLett.104.156401Phys. Rev. Lett. 104156401S.-I. Machida, Y. Nakayama, S. Duhm, Q. Xin, A. Funakoshi, N. Ogawa, S. Kera, N. Ueno, and H. Ishii, Phys. Rev. Lett., 104, 156401 (2010).
. R Temirov, S Soubatch, A Luican, F S Tautz, Nature. 444350R. Temirov, S. Soubatch, A. Luican, and F. S. Tautz, Nature, 444, 350 (2006).
. N Ueno, S Kera, Prog. Surf. Sci. 83490N. Ueno and S. Kera, Prog. Surf. Sci., 83, 490 (2008).
. K Doi, K Yoshida, H Nakano, A Tachibana, T Tanabe, Y Kojima, K Okazaki, J. Appl. Phys. 98K. Doi, K. Yoshida, H. Nakano, A. Tachibana, T. Tanabe, Y. Kojima, and K. Okazaki, J. Appl. Phys., 98 (2005).
| [] |
arXiv:1903.08892 (21 Mar 2019)

A CERTAIN VECTOR-VALUED FUNCTION SPACE AND ITS APPLICATIONS TO MULTILINEAR OPERATORS

BAE JUN PARK
In this paper we present several (quasi-)norm equivalences involving the L^p(l^q) norm of certain vector-valued functions and extend the equivalences to p = ∞ and 0 < q < ∞ in the scale of Triebel-Lizorkin spaces, motivated by Frazier, Jawerth [13]. We also study a Fourier multiplier theorem for L^p_A(l^q). By applying the results, we will improve the multilinear Hörmander multiplier theorems in Tomita [27], Grafakos, Si [18], and the boundedness results for bilinear pseudo-differential operators in Koezuka, Tomita [20].
1. Introduction
Let N and Z be the collections of all natural numbers and all integers, respectively, and N_0 := N ∪ {0}. We will work on d-dimensional Euclidean space R^d. We denote by S = S(R^d) the space of all Schwartz functions on R^d and by S′ the set of all tempered distributions. Let D denote the set of all dyadic cubes in R^d, and for each k ∈ Z let D_k be the subset of D consisting of the cubes with side length 2^{−k}. For each Q ∈ D we denote the side length of Q by l(Q). Let us denote the characteristic function of a measurable set Q by χ_Q. The symbol X ≲ Y means that there exists a positive constant C, possibly different at each occurrence, such that X ≤ CY, and X ≈ Y signifies C^{−1}Y ≤ X ≤ CY for a positive unspecified constant C. For f ∈ S the Fourier transform is defined by the formula f̂(ξ) := ∫_{R^d} f(x) e^{−2πi⟨x,ξ⟩} dx, while f^∨(ξ) := f̂(−ξ) denotes the inverse Fourier transform. We also extend these transforms to S′.
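The dyadic-cube bookkeeping used throughout (every point lies in exactly one cube of each generation D_k, and the generations are nested) can be made concrete with a small computation. The following Python sketch is an illustration of ours, not part of the paper; it locates, for a point x and a generation k, the cube Q ∈ D_k containing x via its lower-left corner.

```python
import math

def dyadic_cube(x, k):
    """Lower-left corner and side length of the cube Q in D_k containing x.

    Cubes in D_k have side length 2**(-k) and lower-left corners on the
    lattice 2**(-k) * Z^d, matching the cubes Q_{k,l} used in Section 4.
    """
    side = 2.0 ** (-k)
    corner = tuple(math.floor(xi / side) * side for xi in x)
    return corner, side

# The generation-2 cube containing x sits inside the generation-1 cube
# containing x (dyadic nesting).
corner2, side2 = dyadic_cube((0.3, 0.7), 2)  # side length 1/4
corner1, side1 = dyadic_cube((0.3, 0.7), 1)  # side length 1/2
```

In this notation l(Q) = 2^{−k}, and χ_Q(x) = 1 exactly when dyadic_cube(x, k) returns the corner of Q.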
For r > 0 let E(r) denote the space of all distributions whose Fourier transforms are supported in {ξ ∈ R^d : |ξ| ≤ 2r}. Let A > 0. For 0 < p < ∞ and 0 < q ≤ ∞, or for p = q = ∞, we define

L^p_A(l^q) := { f = {f_k}_{k∈Z} ⊂ S′ : f_k ∈ E(A2^k), ‖f‖_{L^p(l^q)} < ∞ }.

Then it is known in [28] that L^p_A(l^q) is a quasi-Banach space (a Banach space if p, q ≥ 1) with the (quasi-)norm ‖·‖_{L^p(l^q)}. We will study some (quasi-)norm equivalences on L^p_A(l^q), and one of the main results is an extension of the norm equivalences to the case p = ∞ and 0 < q < ∞ in the scale of Triebel-Lizorkin spaces. Suppose 0 < q < ∞, q ≤ t ≤ ∞, and σ > d/q. Then for each Q ∈ D and f_k ∈ E(A2^k) there exists a proper measurable subset S_Q of Q, depending on {f_k}_{k∈Z}, σ, and t, such that (1.1)
sup P ∈D 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q k∈Z L ∞ (l q ) .
Here M t σ,2 k means a variant of Peetre's maximal operator, defined in Section 2. This can be compared with the estimate that for 0 < p < ∞ or p = q = ∞
(1.2) f L p (l q ) ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q k∈Z L p (l q ) , f ∈ L p A (l q ).
if min (p, q) ≤ t ≤ ∞ and σ > d/ min (p, q). Note that for 1 < p < ∞, according to Littlewood-Paley theory,
f L p ≈ φ k * f k∈Z L p (l 2 )
and, using (1.2), this is also equivalent to
(1.3) Q∈D k inf y∈Q M t σ,2 k φ k * f (y) χ S Q k∈Z L p (l 2 )
where {φ k } k∈Z is a homogeneous Littlewood-Paley partition of unity, defined in Section 2.
On the other hand, using a deep connection between BM O and Carleson measure,
f BM O ≈ sup P ∈D 1 |P | P ∞ k=− log 2 l(P ) φ k * f (x) 2 dx 1/2 .
The main value of (1.1) is that · BM O can be expressed in the form · L ∞ (l 2 ) as an extension of (1.3) to p = ∞. To be specific,
(1.4) f BM O ≈ Q∈D k inf y∈Q M t σ,2 k φ k * f (y) χ S Q k∈Z L ∞ (l 2 ) .
The results (1.1) and (1.2) are stated in Theorem 3.4 and Lemma 3.1 (1), respectively. Other (quasi-)norm equivalences will be provided by using a method of ϕ-transform and by using Fefferman-Stein's sharp maximal function in Sections 4 and 5. Based on those results we will improve the L^p_A(l^q)-multiplier theorem of Triebel [28, 1.6.3, 2.4.9]. Suppose f_k ∈ E(A2^k). Then for s > d/min(1, p, q) − d/2 one has

‖{(m_k f̂_k)^∨}_{k∈Z}‖_{L^p(l^q)} ≲ sup_{l∈Z} ‖m_l(2^l·)‖_{L^2_s} ‖f‖_{L^p(l^q)}, p < ∞ or p = q = ∞.

Note that m_k is not necessarily compactly supported. The main technique of Triebel to prove the multiplier theorem is a complex interpolation theorem for analytic families of operators, but the interpolation method cannot be applied to the endpoint cases q = ∞ or p = ∞. In our proof we adopt the idea in the author [24], which provides a totally different and elementary proof. The multiplier theorem is stated in Theorem 6.1.
One of the basic problems in bilinear operator theory is L^p × L^q → L^r boundedness estimates for 1/r = 1/p + 1/q, where Hölder's inequality is primarily required. Note that the BMO-norm equivalence (1.4) enables us to still utilize Hölder's inequality to obtain some boundedness results involving BMO-type function spaces. In this context the reader will readily notice that (1.1) may be very suitable for endpoint estimates for bilinear or multilinear operators. Indeed, in Theorems 7.1 and 7.2, we will extend and improve the multilinear version of Hörmander's multiplier theorems of Tomita [27] and Grafakos, Si [18] using (1.1). We will also improve the boundedness result for multilinear pseudo-differential operators of Koezuka and Tomita [20] by using the (quasi-)norm equivalence results and the multiplier theorem in Theorem 6.1; furthermore, this immediately establishes a bmo-endpoint Kato-Ponce inequality.
The paper is organized as follows. In Section 2 we present definitions of several function spaces and some maximal inequalities, which will be basic ingredients of our results. Sections 3-5 contain some (quasi-)norm equivalences on the space L^p_A(l^q) and their extensions to p = ∞ and 0 < q < ∞. In Section 6, we state and prove a generalized L^p_A(l^q)-multiplier theorem. Sections 7 and 8 are devoted to applications of our results to multilinear multiplier operators and multilinear pseudo-differential operators of type (1, 1).
2. Function spaces and some maximal inequalities
2.1. Function spaces. Let Φ_0 ∈ S satisfy Supp(Φ̂_0) ⊂ {ξ ∈ R^d : |ξ| ≤ 1} and Φ̂_0(ξ) = 1 for |ξ| ≤ 1/2. Define φ := Φ_0 − 2^{−d} Φ_0(2^{−1}·) and φ_k := 2^{kd} φ(2^k·). Then {Φ_0} ∪ {φ_k}_{k∈N} and {φ_k}_{k∈Z} form inhomogeneous and homogeneous Littlewood-Paley partitions of unity, respectively. Note that Supp(φ̂_k) ⊂ {ξ ∈ R^d : 2^{k−2} ≤ |ξ| ≤ 2^k} and
Φ 0 (ξ) + k∈N φ k (ξ) = 1, (inhomogeneous) k∈Z φ k (ξ) = 1, ξ = 0. (homogeneous)
For 0 < p, q ≤ ∞ and s ∈ R d , inhomogeneous Triebel-Lizorkin space F s,q p is the collection of all f ∈ S ′ such that
f F s,q p := Φ 0 * f L p + 2 sk φ k * f k∈N L p (l q ) < ∞, 0 < p < ∞ or p = q = ∞, f F s,q ∞ := Φ 0 * f L ∞ + sup P ∈D,l(P )<1 1 |P | P ∞ k=− log 2 l(P ) 2 skq φ k * f (x) q dx 1/q , 0 < q < ∞
where the supremum is taken over all dyadic cubes whose side length is less than 1. Similarly, homogeneous Triebel-Lizorkin spaceḞ s,q p is defined to be the collection of all f ∈ S ′ /P (tempered distribution modulo polynomials) such that
f Ḟ s,q p := 2 sk φ k * f k∈Z L p (l q ) < ∞, 0 < p < ∞ or p = q = ∞, f Ḟ s,q ∞ := sup P ∈D 1 |P | P ∞ k=− log 2 l(P ) 2 skq φ k * f (x) q dx 1/q , 0 < q < ∞.
Then these spaces provide a general framework that unifies classical function spaces.
L^p space: Ḟ^{0,2}_p = F^{0,2}_p = L^p, 1 < p < ∞
Hardy space: Ḟ^{0,2}_p = H^p, F^{0,2}_p = h^p, 0 < p ≤ 1
Fractional Sobolev space: Ḟ^{s,2}_p = L̇^p_s, F^{s,2}_p = L^p_s, 1 < p < ∞
Hardy-Sobolev space: Ḟ^{s,2}_p = H^p_s, F^{s,2}_p = h^p_s, 0 < p ≤ 1
BMO, bmo: Ḟ^{0,2}_∞ = BMO, F^{0,2}_∞ = bmo
Sobolev-BMO: Ḟ^{s,2}_∞ = BMO_s, F^{s,2}_∞ = bmo_s

The Hardy space h^p, 0 < p ≤ ∞, consists of all f ∈ S′ such that

(2.1) ‖f‖_{h^p} := sup_{0<t<1} ‖Φ^t_0 * f‖_{L^p} < ∞, where Φ^t_0 := t^{−d} Φ_0(t^{−1}·)
, and the space bmo is a localized version of BM O defined as the set of locally integrable functions f satisfying
f bmo := sup l(Q)≤1 1 |Q| Q f (x) − f Q dx + sup l(Q)>1 1 |Q| Q |f (x)|dx < ∞
where f Q is the average of f over a cube Q. It is known that (h 1 ) * = bmo and h p = L p for 1 < p ≤ ∞. For s ∈ R the spaces h p s and bmo s are defined similarly, using where the supremum is taken over all cubes containing x, and for 0 < r < ∞ let M r f := M(|f | r ) 1/r . Then Fefferman-Stein's vector-valued maximal inequality in [9] says that for 0 < p < ∞, 0 < q ≤ ∞, and 0 < r < min (p, q) one has
‖f‖_{h^p_s} := ‖J_s f‖_{h^p}, ‖f‖_{bmo_s} := ‖J_s f‖_{bmo}, where J_s := (1 − ∆)^{s/2}.

2.2. Maximal inequalities. For a locally integrable function f the Hardy-Littlewood maximal function is defined by

M f(x) := sup_{x∈Q} (1/|Q|) ∫_Q |f(y)| dy,

where the supremum is taken over all cubes containing x, and for 0 < r < ∞ let M_r f := (M(|f|^r))^{1/r}. Then Fefferman-Stein's vector-valued maximal inequality in [9] says that for 0 < p < ∞, 0 < q ≤ ∞, and 0 < r < min(p, q) one has

(2.2) ‖{M_r f_k}_{k∈Z}‖_{L^p(l^q)} ≲ ‖{f_k}_{k∈Z}‖_{L^p(l^q)}.
Clearly, (2.2) also holds when p = q = ∞. We now introduce a variant of Hardy-Littlewood maximal function. For ǫ ≥ 0, r > 0, and k ∈ Z, let M k,ǫ r be defined by
M k,ǫ r f (x) := sup x∈Q,2 k l(Q)≤1 1 |Q| Q |f (y)| r dy 1/r + sup x∈Q,2 k l(Q)>1 2 k l(Q) −ǫ 1 |Q| Q |f (y)| r dy 1/r .
Note that M k,ǫ r f (x) is decreasing function of ǫ, and M k,0 r f (x) ≈ M r f (x). Then the following maximal inequality holds.
Lemma 2.1. [23] Let 0 < r < q < ∞, ǫ > 0, and µ ∈ Z. For k ∈ Z let f k ∈ E(A2 k ) for some A > 0. Then one has
sup P ∈Dµ 1 |P | P ∞ k=µ M k,ǫ r f k (x) q dx 1/q sup R∈Dµ 1 |R| R ∞ k=µ |f k (x)| q dx 1/q .
Here, the implicit constant of the inequality is independent of µ.
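As a rough numerical companion to the operators M and M_r above, the following sketch discretizes the uncentered maximal function in one dimension, with intervals of grid points standing in for "all cubes containing x". The data and the grid-based averaging are our own illustration, not constructions from the paper.

```python
import numpy as np

def maximal_function(f, r=1.0):
    """Discrete 1-D analogue of M_r f = (M(|f|^r))^{1/r} on a uniform grid.

    For each index x we take the sup of averages of |f|^r over all index
    intervals [a, b] containing x.
    """
    g = np.abs(np.asarray(f, dtype=float)) ** r
    n = len(g)
    out = np.zeros(n)
    for x in range(n):
        best = 0.0
        for a in range(x + 1):
            for b in range(x, n):
                best = max(best, g[a:b + 1].mean())
        out[x] = best ** (1.0 / r)
    return out

f = np.array([0.0, 3.0, 0.0, 1.0, 0.0])
m1 = maximal_function(f, r=1.0)
m2 = maximal_function(f, r=2.0)
```

Two defining features are visible numerically: M_r f ≥ |f| pointwise (singleton intervals are allowed), and M_r f is nondecreasing in r by the power-mean inequality.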
For k ∈ Z and σ > 0 we define the Peetre's maximal operator M σ,2 k by
M σ,2 k f (x) := sup y∈R d |f (x − y)| (1 + 2 k |y|) σ .
It is proved in [23] that if σ > d/r and f ∈ E(A2 k ) for some A > 0 then
(2.3) M σ,2 k f (x) M k,σ−d/r r f (x) uniformly in k
and this allow us to replace M r and M k,ǫ r by M σ,2 k in (2.2) and Lemma 2.1. Now we generalize the Peetre's maximal operator. For k ∈ Z, σ > 0, and 0 < t ≤ ∞ let
M^t_{σ,2^k} f(x) := 2^{kd/t} ‖ f(x − ·)/(1 + 2^k|·|)^σ ‖_{L^t},

which is an extension of M^∞_{σ,2^k} f(x) = M_{σ,2^k} f(x).

Lemma 2.2. Let σ > 0 and k ∈ Z. Suppose 0 < t ≤ s ≤ ∞ and f ∈ E(A2^k) for some A > 0. Then M^s_{σ,2^k} f(x) ≲ M^t_{σ,2^k} f(x).
Proof. Since the case t = s is trivial, we only consider the case t < s. Let Ψ 0 ∈ S satisfy Supp( Ψ 0 ) ⊂ ξ : |ξ| ≤ 2 2 A and Ψ 0 (ξ) = 1 for |ξ| ≤ 2A.
Then we note that f = Ψ k * f . We first assume s = ∞ and 0 < t < ∞. If 1 < t < ∞, then it follows from Hölder's inequality that
|f (x − y)| (1 + 2 k |y|) σ ≤ R d |f (x − z)| |Ψ k (z − y)| (1 + 2 k |y|) σ dz ≤ R d |f (x − z)| (1 + 2 k |z|) σ |Ψ k (z − y)|(1 + 2 k |z − y|) σ dz ≤ M t σ,2 k f (x)2 −kd/t R d |Ψ k (z)|(1 + 2 k |z|) σ t ′ dz 1/t ′ M t σ,2 k f (x).
If 0 < t ≤ 1 then we apply Nikolskii's inequality to obtain
|f (x − y)| 2 kd(1/t−1) R d |f (x − z)| t |Ψ k (z − y)| t dz 1/t and thus |f (x − y)| (1 + 2 k |y|) σ 2 kd(1/t−1) R d |f (x − z)| t (1 + 2 k |z|) σt |Ψ k (z − y)| t (1 + 2 k |z − y|) σt dz 1/t M t σ,2 k f (x).
This proves
(2.4) M σ,2 k f (x) M t σ,2 k f (x).
Now assume 0 < t < s < ∞. Then one has
M s σ,2 k f (x) = 2 kd/s R d |f (x − y)| (1 + 2 k |y|) σ s dy 1/s ≤ M σ,2 k f (x) 1−t/s 2 kd/s R d |f (x − y)| (1 + 2 k |y|) σ t dy 1/s M t σ,2 k f (x) 1−t/s M t σ,2 k f (x) t/s = M t σ,2 k f (x)
. by applying (2.4).
Lemma 2.3. Let σ > 0 and k ∈ Z. Suppose 0 < t ≤ s ≤ ∞. Then M s σ,2 k M t σ,2 k f (x) ≈ M t σ,2 k f (x). Proof. One direction is clear because M t σ,2 k f (x) ≤ M σ,2 k M t σ,2 k f (x) M s σ,2 k M t σ,2 k f (x), which follows from Lemma 2.2.
For the opposite direction we only concern ourselves with the case 0 < t < s < ∞ since the other cases follow in a similar and simpler way. By applying Minkowski's inequality with s/t > 1, one has
M s σ,2 k M t σ,2 k f (x) = 2 kd/s 2 kd/t R d R d |f (x − z)| q (1 + 2 k |y − z|) σq (1 + 2 k |y|) σq dz s/t dy 1/s ≤ 2 kd/s 2 kd/t R d |f (x − z)| t R d 1 (1 + 2 k |y − z|) σs (1 + 2 k |y|) σs dy t/s dz 1/t
and a standard computation (see [16, Appendix K]) yields that

∫_{R^d} 1/[(1 + 2^k|y − z|)^{σs} (1 + 2^k|y|)^{σs}] dy ≲ 2^{−kd}/(1 + 2^k|z|)^{σs}.

Therefore, M^s_{σ,2^k} M^t_{σ,2^k} f(x) ≲ M^t_{σ,2^k} f(x).
Elementary considerations reveal that for σ > 0 and Q ∈ D k
(2.5) sup y∈Q |f (y)| inf y∈Q M σ,2 k f (y)
and then it follows from Lemma 2.3 that for 0 < t ≤ ∞
(2.6) sup_{y∈Q} M^t_{σ,2^k} f(y) ≲ inf_{y∈Q} M_{σ,2^k} M^t_{σ,2^k} f(y) ≈ inf_{y∈Q} M^t_{σ,2^k} f(y)

if f ∈ E(A2^k) for some A > 0.

Lemma 2.4. Let 0 < t ≤ s ≤ ∞, 0 < t < ∞, k ∈ Z, σ > d/t, and 0 < ǫ < σ − d/t. Suppose f ∈ E(A2^k) for some A > 0. Then M^s_{σ,2^k} f(x) ≲ M^{k,ǫ}_t f(x).
Proof. Due to Lemma 2.2 it suffices to show
M t σ,2 k f (x) M k,ǫ t f (x) Let E 0 := y ∈ R d : |y| ≤ 2 −k and E j := y ∈ R d : 2 −k+j−1 < |y| ≤ 2 −k+j , j ≥ 1.
Then one has
R d |f (x − y)| t (1 + 2 k |y|) σt dy ∞ j=0 2 −jσt E j |f (x − y)| t dy ≤ 2 −kd M k,ǫ t f (x) t ∞ j=0 2 −jt(σ−d/t−ǫ) ,
which concludes the proof since σ − d/t − ǫ > 0.
Note that, from (2.3), Lemma 2.4 also holds for s = ∞ and ǫ = σ − d/t. As a immediate consequence of (2.2) and lemma 2.4, one has the following inequality.
Lemma 2.5. Let 0 < p, q ≤ ∞, min (p, q) ≤ t ≤ ∞, and σ > d/ min (p, q). Suppose A > 0 and f k ∈ E(A2 k ) for each k ∈ Z. (1) For 0 < p < ∞ or p = q = ∞ M t σ,2 k f k k∈Z L p (l q ) {f k } k∈Z L p (l q ) .
(2) For p = ∞, 0 < q < ∞, and µ ∈ Z
sup P ∈Dµ 1 |P | P ∞ k=− log 2 l(P ) M t σ,2 k f k (x) q dx 1/q sup P ∈Dµ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q
where the constant in the inequality is independent of µ.
As an application of Lemma 2.5 (2), for µ ∈ Z, q 1 < q 2 < ∞, and f k ∈ E(A2 k ) for some A > 0, one has
{f k } k≥µ L ∞ (l ∞ ) sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q 2 dx 1/q 2 sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q 1 dx 1/q 1 . (2.7)
We omit the proof and refer to [23].
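The two inequalities in (2.7) run in the opposite direction to Hölder's inequality, which for a general function g on a cube P gives the monotonicity ((1/|P|)∫_P |g|^{q_1} dx)^{1/q_1} ≤ ((1/|P|)∫_P |g|^{q_2} dx)^{1/q_2} for q_1 < q_2; it is the band-limitedness of the f_k that allows this comparison to be reversed. The trivial (Hölder) direction is easy to check numerically. The snippet below is an illustration of ours, not part of the paper.

```python
import numpy as np

def avg_power_mean(samples, q):
    """Discrete analogue of ((1/|P|) int_P |g|^q dx)^(1/q) on a uniform grid."""
    return float(np.mean(np.abs(samples) ** q) ** (1.0 / q))

rng = np.random.default_rng(0)
g = rng.random(1000)  # samples of |g| over a cube P, values in [0, 1)

p1 = avg_power_mean(g, 1.0)
p2 = avg_power_mean(g, 2.0)
p4 = avg_power_mean(g, 4.0)
# The averaged power means increase with the exponent (Jensen's inequality).
```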
3. Equivalence of (quasi-)norms by using M^t_{σ,2^k}

Let f_k ∈ E(A2^k) for some A > 0 and f := {f_k}_{k∈Z}. For convenience in notation we will occasionally write
M σ (f ) := M σ,2 k f k k∈Z , M t σ (f ) := M t σ,2 k f k k∈Z . If 0 < p < ∞ or p = q = ∞, then one has the (quasi-)norm equivalence f L p (l q ) ≈ M σ (f ) L p (l q ) ≈ M t σ f L p (l q )
. for t ≥ min (p, q) and σ > d/ min (p, q) due to Lemma 2.5. Moreover, it follows from (2.5) and (2.6) that
|f k (x)| = Q∈D k |f k (x)|χ Q (x) Q∈D k inf y∈Q M t σ,2 k f k (y) χ Q (x) ≤ M t σ,2 k f k (x)
and thus Lemma 2.5 (1) implies
(3.1) f L p (l q ) ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ Q k∈Z L p (l q ) .
Similarly, if 0 < q < ∞ and µ ∈ Z, then
sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q ≈ sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) Q∈D k inf y∈Q M t σ,2 k f k (y) χ Q (x) q dx 1/q (3.2) = sup P ∈D,l(P )≤2 −µ 1 |P | ∞ k=− log 2 l(P ) Q∈D k ,Q⊂P inf y∈Q M t σ,2 k f k (y) q |Q| 1/q (3.3)
for t ≥ q and σ > d/q.
Lemma 3.1. Let 0 < p, q ≤ ∞, min (p, q) ≤ t ≤ ∞, A > 0, and µ ∈ Z. For each Q ∈ D let S Q be a measurable subset of Q with |S Q | > γ|Q| for some 0 < γ < 1. Suppose σ > d/ min (p, q) and f k ∈ E(A2 k ) for each k ∈ Z. (1) For 0 < p < ∞ or p = q = ∞ f L p (l q ) ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q k∈Z L p (l q ) .
(2) For p = ∞ and 0 < q < ∞
sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q ≈ sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q (x) q dx 1/q .
Note that the constants in the estimates are independent of S Q as long as S Q satisfies |S Q | > γ|Q|.
Proof of Lemma 3.1. The second assertion follows immediately from (3.3) and the condition |S Q | > γ|Q|. Thus we only consider the first one. Assume 0 < p < ∞ or p = q = ∞. Since χ Q ≥ χ S Q one direction is obvious due to (3.1). We will base the converse on the pointwise estimate that for 0 < r < ∞
(3.4) χ Q (x) M r (χ S Q )(x)χ Q (x),
which is due to the observation that for x ∈ Q
1 < 1 γ 1/r |S Q | 1/r |Q| 1/r = 1 γ 1/r 1 |Q| Q χ S Q (y)dy 1/r ≤ γ −1/r M r (χ S Q )(x).
Choose r < p, q and then apply (3.1) and (3.4) to obtain
f L p (l q ) Q∈D k inf y∈Q M t σ,2 k f k (y) M r (χ S Q )χ Q k∈Z L p (l q ) ≤ inf y∈Q M t σ,l(Q) −1 f − log 2 l(Q) (y) M r (χ S Q ) Q∈D L p (l q ) inf y∈Q M t σ,l(Q) −1 f − log 2 l(Q) (y) χ S Q Q∈D L p (l q ) = Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q k∈Z L p (l q )
where the maximal inequality (2.2) is applied in the third inequality (with a different countable index set D).
We are mainly interested in the following result.
Theorem 3.2. Let 0 < q < ∞, q ≤ t ≤ ∞, µ ∈ Z, and σ > d/q. Suppose f k ∈ E(A2 k ) for some A > 0. For 0 < γ < 1 and Q ∈ D there exists a proper measurable subset S Q of Q, depending on γ, q, σ, t, f , such that |S Q | > (1 − γ)|Q| and
sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q k≥µ L ∞ (l q )
, uniformly in µ.
We note that the constant in the equivalence is independent of f , just depending on γ. Moreover, by taking the supremum over µ ∈ Z we may replace " sup P ∈D,l(P )≤2 −µ " by " sup P ∈D " and " {· · · } k≥µ L ∞ (l q ) " by " {· · · } k∈Z L ∞ (l q ) ". Then as a corollary we have the following BM O characterization.
Corollary 3.3. Let 2 ≤ t ≤ ∞ and σ > d/2. Suppose f ∈ BM O. For 0 < γ < 1 and Q ∈ D there exists a proper measurable subset S Q of Q, depending on γ, σ, t, f , such that |S Q | > (1 − γ)|Q| and f BM O ≈ Q∈D k inf y∈Q M t σ,2 k φ k * f (y)χ S Q k∈Z L ∞ (l 2 ) .
A simple application of Corollary 3.3 is the estimate
(3.5) f, g f BM O g H 1 .
This provides one direction of the duality between H^1 and BMO, which was first announced in [8]; the proof appeared in [4, 9]. By using Corollary 3.3 and Hölder's inequality, (3.5) can also be proved. To be specific,
f, g = R d k∈Z φ k * f (x) φ k * g(x)dx ≤ R d k∈Z φ k * f (x) φ k * g(x) dx = R d k∈Z Q∈D k φ k * f (x) φ k * g(x) χ Q (x)dx R d k∈Z Q∈D k inf y∈Q M σ,2 k (φ k * f )(y) inf y∈Q M σ,2 k ( φ k * g)(y) χ Q (x)dx R d k∈Z Q∈D k inf y∈Q M σ,2 k (φ k * f )(y) inf y∈Q M σ,2 k ( φ k * g)(y) χ S Q (x)dx ≤ Q∈D k inf y∈Q M σ,2 k φ k * f (y) χ S Q k∈Z L ∞ (l 2 ) × Q∈D k inf y∈Q M σ,2 k φ k * g (y) χ Q k∈Z L 1 (l 2 ) ≈ f BM O g H 1
where the third inequality follows from (3.4) and Peetre's maximal inequality and S Q is the subset of Q for BM O norm equivalence of f as in Corollary 3.3.
Here { φ k } is a sequence of Schwartz functions such that Supp( φ k ) ⊂ {ξ ∈ R d : |ξ| ≈ 2 k } and k∈Z φ k (ξ) φ k (ξ) = 1 for ξ = 0.
3.1. Proof of Theorem 3.2. One direction is clear, for any subset S Q of Q with |S Q | > (1 − γ)|Q|, due to Lemma 3.1. Therefore, we will prove that there exists a measurable subset
S Q such that |S Q | > (1 − γ)|Q| and Q∈D k inf y∈Q M t σ,2 k f k (y) χ S Q k≥µ L ∞ (l q ) sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q . (3.6)
To choose such a subset S Q we set up notation and terminology. For 0 < q ≤ ∞ and P ∈ D we define
G q P (f )(x) := Q∈D k ,Q⊂P inf y∈Q |f k (y)| χ Q (x) k≥− log 2 l(P ) l q .
Recall that the nonincreasing rearrangement f * of a non-negative measurable function f is given by
f*(γ) := inf{λ > 0 : |{x ∈ R^d : f(x) > λ}| ≤ γ}

and satisfies

(3.7) |{x ∈ R^d : f(x) > f*(γ)}| ≤ γ, γ > 0.
For P ∈ D, 0 < γ < 1, and a non-negative measurable function f , the "γ-median of f over P " is defined as
m^γ_P(f) := inf{λ > 0 : |{x ∈ P : f(x) > λ}| ≤ γ|P|}.
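In discrete form the γ-median is simply an order statistic: after sorting the samples in nonincreasing order (the discrete analogue of the rearrangement f*), it is the (⌊γN⌋+1)-th largest value. The following sketch is our own illustration rather than part of the paper; it checks the defining property of the γ-median on sample data.

```python
import numpy as np

def gamma_median(values, gamma):
    """Discrete gamma-median: the smallest sample value m with
    #{i : values[i] > m} <= gamma * N, read off from the nonincreasing
    rearrangement of the samples."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]  # nonincreasing order
    j = int(np.floor(gamma * len(v)))
    return float(v[j])

f = np.array([5.0, 1.0, 4.0, 2.0, 3.0])
m = gamma_median(f, 0.4)  # at most 40% of the samples may exceed m
```

This mirrors the identity of the γ-median with a value of the nonincreasing rearrangement, used in (3.8) below in the form m^{γ,q}_P(f) = (G^q_P(f)χ_P)*(γ|P|).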
We consider the γ-median of G q P (f ) over P and the supremums of the quantity over P ∈ D, l(P ) ≤ 2 −µ . That is,
m γ,q P (f ) := m γ P G q P (f ) = inf λ > 0 : x ∈ P : G q P (f )(x) > λ ≤ γ|P | , m γ,q,µ (f )(x) := sup P ∈D,l(P )≤2 −µ m γ,q P (f )χ P (x),
Observe that
m γ,q P (f ) = G q P (f )χ P * (γ|P |)
and by (3.7) one has
x ∈ P : G q P (f )(x) > m γ,q,− log 2 l(P ) (f )(x) ≤ x ∈ P : G q P (f )(x) > m γ,q P (f ) ≤ γ|P |. (3.8) Moreover, (3.9) m γ,q,µ 1 (f )(x) ≤ m γ,q,µ 2 (f )(x) for µ 1 ≥ µ 2 .
Now for each P ∈ D we define
S γ,q P (f ) := x ∈ P : G q P (f )(x) ≤ m γ,q,− log 2 l(P ) f (x) . Then (3.8) yields that (3.10) S γ,q P (f ) ≥ (1 − γ)|P |.
Then (3.6) can be established by the following proposition.
Proposition 3.4. Let 0 < q < ∞, q ≤ t ≤ ∞, µ ∈ Z, and σ > d/q. Suppose 0 < γ < 1 and f k ∈ E(A2 k ) for k ∈ Z. Then Q∈D k inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) k≥µ L ∞ (l q ) sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q uniformly in µ. Remark. For 0 < q < ∞ f L ∞ (l q ) ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) k∈Z L ∞ (l q ) while for 0 < p < ∞ or p = q = ∞ f L p (l q ) ≈ Q∈D k inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) k∈Z L p (l q ) ,
which is due to Lemma 3.1 (1).
Proof of Proposition 3.4. Assume 0 < q < ∞, q ≤ t ≤ ∞, µ ∈ Z, and σ > d/q. Our claim is Q∈D k inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) k≥µ L ∞ (l q ) = sup P ∈D,l(P )≤2 −µ Q∈D k ,Q⊂P inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) k≥µ L ∞ (l q ) ≤ m γ,q,µ M t σ (f ) L ∞ (Claim 1) sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q . (Claim 2)
To verify (Claim 1) let ν ≥ µ and fix P ∈ D ν and x ∈ P . Then it suffices to show that
(3.11) Q∈D k ,Q⊂P inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) (x) k≥µ l q ≤ m γ,q,ν M t σ (f ) (x)
due to (3.9). Suppose that the left hand side of (3.11) is a nonzero number. Then there exists the "maximal" dyadic cube P 0 (x) ⊂ P such that x ∈ S γ,q P 0 (x) (M t σ (f )), and thus
(3.12) G q P 0 (x) M t σ (f ) (x) ≤ m γ,q,− log 2 l(P 0 (x)) M t σ (f ) (x) ≤ m γ,q,ν M t σ (f ) (x)
where the second inequality holds due to (3.9). The maximality of P 0 (x) yields that the left hand side of (3.11) is
Q∈D k ,Q⊂P 0 (x) inf y∈Q M t σ,2 k f k (y) χ S γ,q Q (M t σ (f )) (x) k≥− log 2 l(P 0 (x)) l q ≤ Q∈D k ,Q⊂P 0 (x) inf y∈Q M t σ,2 k f k (y) χ Q (x) k≥− log 2 l(P 0 (x)) l q = G q P 0 (x) M t σ (f ) (x) ≤ m γ,q,ν M t σ (f ) (x),
where the last one follows from (3.12). This proves (3.11).
To achieve (Claim 2) fix ν ≥ µ and let us assume
(3.13) ǫ > γ −1/q sup P ∈D,l(P )≤2 −ν 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q .
Then, using Chebyshev's inequality, (3.2), and (3.13), there exists a constant C A,q,t,σ > 0 such that for R ∈ D ν
|{x ∈ R : G^q_R(M^t_σ(f))(x) > ǫ}| ≤ (1/ǫ^q) ‖G^q_R(M^t_σ(f))‖^q_{L^q} = (1/ǫ^q) ∫_R Σ_{k=−log_2 l(R)}^∞ ( Σ_{Q∈D_k} inf_{y∈Q} M^t_{σ,2^k} f_k(y) χ_Q(x) )^q dx ≤ (C_{A,q,t,σ}|R|/ǫ^q) sup_{P∈D, l(P)≤2^{−ν}} (1/|P|) ∫_P Σ_{k=−log_2 l(P)}^∞ |f_k(x)|^q dx ≤ C_{A,q,t,σ} γ|R|.
This yields that
m C A,q,t,σ γ,q R M t σ (f ) ≤ ǫ < 2ǫ. So far, we have proved that for any R ∈ D ν , m C A,q,t,σ γ,q R M t σ (f ) ≤ 2γ −1/q sup P ∈D,l(P )≤2 −ν 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q , which is equivalent to m γ,q R M t σ (f ) ≤ 2C 1/q A,q,t,σ γ −1/q sup P ∈D,l(P )≤2 −ν 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q .
We complete the proof by taking the supremum over R ∈ D, l(R) = 2 −ν ≤ 2 −µ .
4. Equivalence of (quasi-)norms by using the method of ϕ-transform
For a sequence of complex numbers b := {b_Q}_{Q∈D} we define

‖b‖_{ḟ^{0,q}_p} := ‖g^q(b)‖_{L^p}, 0 < p < ∞ or p = q = ∞,
‖b‖_{ḟ^{0,q}_∞} := sup_{P∈D} ( (1/|P|) ∫_P Σ_{Q⊂P} (|b_Q||Q|^{−1/2} χ_Q(x))^q dx )^{1/q}, 0 < q < ∞,

where
Furthermore, for C > 0 let ϕ, ϕ̃ ∈ S be such that the Fourier transforms of ϕ and ϕ̃ are supported in {ξ : 1/2 ≤ |ξ| ≤ 2}, satisfy |ϕ̂(ξ)|, |(ϕ̃)^∧(ξ)| ≥ C > 0 for 3/4 ≤ |ξ| ≤ 5/3, and

Σ_{k∈Z} ϕ̂_k(ξ) (ϕ̃_k)^∧(ξ) = 1, ξ ≠ 0,

where ϕ_k(x) := 2^{kd} ϕ(2^k x) and ϕ̃_k(x) := 2^{kd} ϕ̃(2^k x) for k ∈ Z.
Then the norms in Ḟ^{0,q}_p can be characterized by the discrete ḟ^{0,q}_p norms. For each Q ∈ D let x_Q denote the lower left corner of Q. Every f ∈ Ḟ^{0,q}_p can be decomposed as

(4.2) f(x) = Σ_{Q∈D} b_Q ϕ_Q(x), where ϕ_Q(x) := |Q|^{1/2} ϕ_k(x − x_Q) for Q ∈ D_k and b_Q := ⟨f, ϕ̃_Q⟩ (with ϕ̃_Q defined analogously from ϕ̃).

Moreover, in this case, one has

(4.3) ‖b‖_{ḟ^{0,q}_p} ≲ ‖f‖_{Ḟ^{0,q}_p}.

The converse estimate also holds. For any sequence b = {b_Q}_{Q∈D} of complex numbers satisfying ‖b‖_{ḟ^{0,q}_p} < ∞, f(x) := Σ_{Q∈D} b_Q ϕ_Q(x) belongs to Ḟ^{0,q}_p and

(4.4) ‖f‖_{Ḟ^{0,q}_p} ≲ ‖b‖_{ḟ^{0,q}_p}.

See [11, 12] for more details.
In this section we will give an analogous properties of
{f k } k∈Z with f k ∈ E(A2 k ) for some A > 0, like (4.2), (4.3), and (4.4). From now on we fix A > 0 and suppose f k ∈ E(A2 k ). Let Ψ 0 ∈ S satisfy Supp( Ψ 0 ) ⊂ ξ : |ξ| ≤ 2 2 A and Ψ 0 (ξ) = 1 for |ξ| ≤ 2A.
For each k ∈ Z and Q ∈ D k let Ψ k := 2 kd Ψ 0 (2 k ·) and
Ψ Q (x) := |Q| 1/2 Ψ k (x − x Q ). Lemma 4.1. Let 0 < p < ∞ or p = q = ∞. (1) Assume f k ∈ E(A2 k ) for each k ∈ Z.
Then there exists a sequence of complex
numbers b := {b Q } Q∈D such that f k (x) = Q∈D k b Q Ψ Q (x) and b ḟ 0,q p f L p (l q ) . (2) For any sequence b = {b Q } Q∈D of complex numbers satisfying b ḟ 0,q p < ∞, f k (x) := Q∈D k b Q Ψ Q (x) satisfies (4.5) f L p (l q ) b ḟ 0,q p .
For the case p = ∞ and 0 < q < ∞ we fix µ ∈ Z and let
b ḟ 0,q ∞ (µ) := sup P ∈D,l(P )≤2 −µ 1 |P | P Q⊂P |b Q ||Q| −1/2 χ Q (x) q dx 1/q . Lemma 4.2. Let 0 < q < ∞ and µ ∈ Z.
(1) Assume f k ∈ E(A2 k ) for each k ≥ µ. Then there exists a sequence of complex
numbers b := {b Q } Q∈D,l(Q)≤2 −µ such that f k (x) = Q∈D k b Q Ψ Q (x) and b ḟ 0,q ∞ (µ) sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q . (2) For any sequence b = {b Q } Q∈D,l(Q)≤2 −µ of complex numbers satisfying b ḟ 0,q ∞ (µ) < ∞, f k (x) := Q∈D k b Q Ψ Q (x) satisfies sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q b ḟ 0,q ∞ (µ) .
For the proof we assume A = 2 −2 , for otherwise the conclusions follow from a standard dyadic dilation method (with an inessential change of constant).
Proof of Lemma 4.1. Let 0 < p < ∞ or p = q = ∞. (1) Since Supp( f k (2 k ·)) ⊂ {|ξ| ≤ 1/2}, using the Fourier series representation of f k (2 k ·), one has f k (ξ) = 2 −kd l∈Z d f k (2 −k l)e −2πi l/2 k ,ξ .
Then
f k (x) = f k Ψ k ∨ (x) = 2 −kd l∈Z d f k (2 −k l)Ψ k (x − 2 −k l) = l∈Z d 2 −kd/2 f k (2 −k l)2 −kd/2 Ψ k (x − 2 −k l).
For any Q ∈ D k we write
Q = Q k,l := {x ∈ R d : 2 −k l i ≤ x i ≤ 2 −k (l i + 1), i = 1, . . . , d} where l = (l 1 , . . . , l d ) ∈ Z d . That is, Q k,l is the dyadic cube in D k whose lower left corner is 2 −k l. Now let b Q k,l := 2 −kd/2 f k (2 −k l) = |Q k,l | 1/2 f k (x Q k,l ) Ψ Q k,l (x) := 2 −kd/2 Ψ k (x − 2 −k l) = |Q k,l | 1/2 Ψ k (x − x Q k,l )
and then one can write
(4.6) f k (x) = Q∈D k b Q Ψ Q (x).
Furthermore, for a.e. x ∈ R d there exists Q 0 ∈ D k whose interior contains x. Therefore, for any σ > 0 one has
(4.7) Q∈D k |b Q ||Q| −1/2 χ Q (x) = |b Q 0 ||Q 0 | −1/2 = |f k (x Q 0 )| M σ,2 k f k (x) a.e. x.
Now we select σ > d/ min (p, q) and then
b ḟ 0,q p = |b Q ||Q| −1/2 χ Q Q∈D L p (l q ) = Q∈D k |b Q ||Q| −1/2 χ Q k∈Z L p (l q ) M σ,2 k f k k∈Z L p (l q ) f L p (l q )
where (4.7) and Lemma 2.5 (1) are applied.
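As an aside, the sampling identity (4.6) underlying part (1) is a Shannon-type reconstruction of a band-limited function from its samples. The following quick numerical sanity check (illustration only, not part of the paper) takes d = 1 and a function band-limited to [−1/2, 1/2], so that unit-spaced samples together with the sinc kernel (whose Fourier transform is the indicator of [−1/2, 1/2]) play the roles of f_k and Ψ_k at scale k = 0:

```python
# Numerical sanity check (illustration only, not from the paper) of the
# sampling identity behind (4.6): a band-limited function is recovered
# from its integer samples via the sinc kernel.
import numpy as np

def g(x):
    # (sin(pi x/2)/(pi x/2))^2: its Fourier transform is the triangle
    # function supported in [-1/2, 1/2], so g is band-limited.
    return np.sinc(x / 2.0) ** 2

def reconstruct(x, N=5000):
    # Truncated sampling series: sum over |l| <= N of g(l) * sinc(x - l).
    l = np.arange(-N, N + 1)
    return float(np.sum(g(l) * np.sinc(x - l)))

xs = [0.0, 0.37, -1.9, 2.5]
errs = [abs(reconstruct(x) - g(x)) for x in xs]
```

The quadratic decay of g makes the tail of the truncated series negligible, so the reconstruction matches g to high accuracy at the test points.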
(2) For a given b = {b Q } Q∈D and k ∈ Z let
f k (x) := Q∈D k b Q Ψ Q (x).
For each k ∈ Z and x ∈ R d let
E k 0 (x) := Q ∈ D k : |x − x Q | < 2 −k E k j (x) := Q ∈ D k : 2 −k+j−1 ≤ |x − x Q | < 2 −k+j , j ∈ N. Choose 0 < ǫ < min (1, p, q) and M > d/ǫ. Since |Ψ Q (x)| 2 −jM |Q| −1/2 on E k j , by decomposing Q∈D k = ∞ j=0
Q∈E k j (x) and using l ǫ ֒→ l 1 one has
|f k (x)| ∞ j=0 2 −jM Q∈E k j (x) |b Q ||Q| −1/2 ǫ 1/ǫ ≈ ∞ j=0 2 −j(M −d/ǫ) 1 2 −kd 2 jd R d Q∈E k j (x) |b Q ||Q| −1/2 χ Q (y) ǫ dy 1/ǫ M ǫ Q∈D k |b Q ||Q| −1/2 χ Q (x). (4.8)
Then, using the estimate (4.8) and the maximal inequality (2.2), one has
f L p (l q ) Q∈D k |b Q ||Q| −1/2 χ Q k∈Z L p (l q ) = b ḟ 0,q p ,
as required.
Proof of Lemma 4.2. Assume 0 < q < ∞ and µ ∈ Z.
(1) We apply (4.6), (4.7) and Lemma 2.5 (2), choosing σ > d/q, to obtain
b ḟ 0,q ∞ (µ) = sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) Q∈D k |b Q ||Q| −1/2 χ Q (x) q dx 1/q sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) M σ,2 k f k (x) q dx 1/q sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q .
(2) We first observe that
(4.9) b ḟ 0,q ∞ (µ) = sup P ∈D,l(P )≤2 −µ 1 |P | Q⊂P |b Q ||Q| −1/2 q |Q| 1/q . Let f k (x) := Q∈D k b Q Ψ Q (x)
and choose M > d/ min (1, q). Using Hölder's inequality if q > 1 or the embedding l q ֒→ l 1 if q ≤ 1, one has
|f k (x)| Q∈D k |b Q ||Q| −1/2 1 (1 + 2 k |x − x Q |) 2M Q∈D k |b Q ||Q| −1/2 q 1 (1 + 2 k |x − x Q |) M q 1/q and thus sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q sup P ∈D,l(P )≤2 −µ ∞ k=− log 2 l(P ) Q∈D k |b Q ||Q| −1/2 q 1 |P | P 1 (1 + 2 k |x − x Q |) M q dx 1/q .
For each P ∈ D and m ∈ Z d let P + l(P )m := x + l(P )m : x ∈ P and denote by D k (P, m) the subfamily of D k that contains any dyadic cubes contained in P + l(P )m. Then in the last expression we decompose
I P k,M 1/q m∈Z d ,|m|≤2d 1 |P | ∞ k=− log 2 l(P ) Q∈D k (P,m) |b Q ||Q| −1/2 q |Q| 1/q sup R:l(R)=l(P ) 1 |R| ∞ k=− log 2 l(R) Q∈D k ,Q⊂R |b Q ||Q| −1/2 q |Q| 1/q .
For the other term corresponding to J P k,M , observe that if |m| > 2d and Q ∈ D k (P, m) then
|x − x Q | l(P )|m| which gives J P k,M m∈Z d ,|m|>2d 1 |m| M q 1 2 kM q 1 l(P ) M q Q∈D k (P,m) |b Q ||Q| −1/2 q .
Now we apply triangle inequality if q ≥ 1 or l q ֒→ l 1 if q < 1 to obtain that ∞ k=− log 2 l(P )
J P k,M min (1,q)/q m∈Z d |m|>2d 1 |m| M min (1,q) ∞ k=− log 2 l(P ) 1 2 kM q 1 l(P ) M q Q∈D k Q⊂P +ml(P ) |b Q ||Q| −1/2 q min (1,q)/q ≤ m∈Z d |m|>2d 1 |m| M min (1,q) 1 |P | ∞ k=− log 2 l(P ) Q∈D k Q⊂P +ml(P ) |b Q ||Q| −1/2 q |Q| min (1,q)/q sup R:l(R)=l(P ) 1 |R| ∞ k=− log 2 l(R) Q∈D k ,Q⊂R |b Q ||Q| −1/2 q |Q| min (1,q)/q because M min (1, q) > d and 2 k l(P ) ≥ 1.
Combining these estimates, one has
sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k (x)| q dx 1/q sup R∈D,l(R)≤2 −µ 1 |R| ∞ k=− log 2 l(R) Q∈D k ,Q⊂R |b Q ||Q| −1/2 q |Q| 1/q ≤ b ḟ 0,q ∞ (µ) .
Equivalence of (quasi-)norms by using sharp maximal functions
Given a locally integrable function f on R d the Fefferman-Stein sharp maximal function f ♯ is defined by
f ♯ (x) = sup P :x∈P 1 |P | P |f (y) − f P |dy
where f P := 1 |P | P f (z)dz and the supremum is taken over all cubes P containing x. Then a fundamental inequality of Fefferman and Stein [10] says that for 1 < p < ∞,
1 ≤ p 0 ≤ p if f ∈ L p 0 (R d ) then we have (5.1) Mf L p f ♯ L p .
Now one has the following characterization of Ḟ 0,q p by sharp maximal functions. For 0 < q < p < ∞,
(5.2) f Ḟ 0,q p ≈ sup x∈P 1 |P | P k≥− log 2 l(P ) φ k * f (y) q dy 1/q L p (x)
where the supremum is taken over all cubes containing x (not necessarily dyadic cubes).
Observe that one can actually replace the maximal functions by dyadic maximal ones in (5.1). That is, for a locally integrable function f we define the dyadic maximal function M (d) f and the dyadic sharp maximal function M ♯ f by
M (d) f (x) := sup x∈Q∈D 1 |Q| Q |f (y)|dy, M ♯ f (x) := sup x∈Q∈D 1 |Q| Q |f (y) − f Q |dy
where the supremums are taken over all dyadic cubes Q containing x. Then for 1 < p < ∞, 1 ≤ p 0 ≤ p, and f ∈ L p 0 one has
(5.3) M (d) f L p M ♯ f L p .
The proof of (5.2) is based on (5.1), and by applying (5.3) instead of (5.1) one may replace "sup x∈P " in (5.2) by " sup x∈P ∈D ", which means the supremum over all dyadic cubes containing x.
We refer the reader to [25], [26, Proposition 6.1 and 6.2] for details. We characterize L p A (l q ), 0 < q < p < ∞, by using analogous sharp maximal functions.
Lemma 5.1. Let 0 < q < p < ∞ and A > 0. Suppose f k ∈ E(A2 k ) for each k ∈ Z. Then f L p (l q ) ≈ sup x∈P ∈D 1 |P | P ∞ k=− log 2 l(P ) |f k (y)| q dy 1/q L p (x)
where the supremum is taken over all dyadic cubes containing x.
The proof of Lemma 5.1 is essentially the same as the proof of (5.2), which is given in [26], replacing (5.1) by (5.3). We omit the proof and refer to [25,26].
A multiplier theorem for a vector-valued function space
In this section we will study Fourier multipliers for L p A (l q ) for 0 < p < ∞ or p = q = ∞, and a proper extension to the case p = ∞ and 0 < q < ∞. We continue to use the notation f := {f k } k∈Z .
Theorem A. ( [28, 1.6.3, 2.4.9] ) Let 0 < p < ∞, 0 < q ≤ ∞, and A > 0. Suppose f k ∈ E(A2 k ) for each k ∈ Z, and {m k } k∈Z satisfies
(6.1) sup l∈N m l (2 l ·) L 2 s < ∞ for s > d/ min (1, p, q) − d/2 if q < ∞ d/p + d/2 if q = ∞ . Then (6.2) m k f k ∨ k∈N L p (l q ) sup l∈N m l (2 l ·) L 2 s f L p (l q ) .
It was first proved, via the classical Mikhlin-Hörmander multiplier theorem, that for 1 < p, q < ∞ the estimate (6.2) holds whenever (6.1) holds for s > d/2. Moreover, for 0 < p < ∞ and 0 < q ≤ ∞ it is straightforward to see that (6.2) is true under the assumption (6.1) with s > d/2 + d/ min (p, q). A complex interpolation method then yields the condition s > d/ min (1, p, q) − d/2 when 0 < p, q < ∞. However, this method cannot be applied to the endpoint case p = ∞ or q = ∞; thus no conclusion is available when p = ∞, and the assumption s > d/p + d/2 is required when q = ∞, which is stronger than the seemingly "natural" condition s > d/ min (1, p) − d/2. In [24] the author proved the Ḟ 0,q p -multiplier theorem for the full range 0 < p, q ≤ ∞ under the condition s > d/ min (1, p, q) − d/2 by a different method. By using some techniques in [24] we will improve Theorem A. Theorem 6.1. Let 0 < p, q ≤ ∞, j ≥ 0, and µ ∈ Z. Suppose f k ∈ E(A2 k ) for each k ∈ Z, and {m k } k∈Z satisfies
sup l∈Z m l (2 l+j ·) L 2 s < ∞ for s > d/ min (1, p, q) − d/2.
(1) For 0 < p < ∞ or p = q = ∞,
m k f k+j ∨ k∈Z L p (l q ) sup l∈Z m l (2 l+j ·) L 2 s f L p (l q ) uniformly in j. (2) For p = ∞ and 0 < q < ∞ sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) m k f k+j ∨ (x) q dx 1/q sup l≥µ m l (2 l+j ·) L 2 s sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) |f k+j (x)| q dx 1/q
uniformly in µ and j.
Note that the condition s > d/ min (1, p, q) − d/2 is sharp except in the case p = ∞ and q < 1, and counterexamples for the sharpness can be found in [24].
Since the inequality in Theorem 6.1 (2) holds uniformly in µ, by taking supremum over µ ∈ Z, one has
sup P ∈D 1 |P | P ∞ k=− log 2 l(P ) m k f k+j ∨ (x) q dx 1/q sup l∈Z m l (2 l+j ·) L 2 s sup P ∈D 1 |P | P ∞ k=− log 2 l(P ) |f k+j (x)| q dx 1/q .
Observe that for j ≥ 0
m k f k+j ∨ (x) = m k (2 j ·) f k+j (·/2 j ) ∧ ∨ (2 j x)
and by using a change of variables, one may assume j = 0 in the proof of Theorem 6.1. Indeed, once the theorem is established for j = 0, then for 0 < p < ∞ or p = q = ∞
m k f k+j ∨ k∈Z L p (l q ) = 2 −jd/p m k (2 j ·) f k+j (·/2 j ) ∧ ∨ k∈Z L p (l q ) sup l∈Z m l (2 l+j ·) L 2 s 2 −jd/p f k+j (·/2 j ) k∈Z L p (l q ) = sup l∈Z m l (2 l+j ·) L 2 s f L p (l q )
uniformly in j, since f k+j (2 j ·) ∈ E(A2 k ). When p = ∞ and 0 < q < ∞, one has
sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P ) m k f k+j ∨ (x) q dx 1/q = sup R∈D,l(R)≤2 −µ+j 1 |R| R ∞ k=− log 2 l(R)+j m k (2 j ·) f k+j (·/2 j ) ∧ ∨ (x) q dx 1/q sup l≥µ m l (2 l+j ) L 2 s sup R∈D,l(R)≤2 −µ+j 1 |R| R ∞ k=− log 2 l(R) f k+j (x/2 j ) q dx 1/q = sup l≥µ m l (2 l+j ·) L 2 s sup P ∈D,l(P )≤2 −µ 1 |P | P ∞ k=− log 2 l(P )
|f k+j (x)| q dx 1/q uniformly in j.
As mentioned above, the proof of Theorem A in [28] relies on the classical Mikhlin-Hörmander multiplier theorem. In order to prove Theorem 6.1 we will, instead, make use of the following lemma. Lemma 6.2. Suppose 0 < p ≤ ∞, k ∈ Z, and s > 0. Suppose f k ∈ E(A2 k ) and {m k } k∈Z satisfies
m k (2 k ·) L 2 s < ∞ for s > d/ min (1, p) − d/2. Then (6.3) m k f k ∨ L p m k (2 k ·) L 2 s f k L p uniformly in k.
Proof. Let Ψ k ∈ S be defined as before. That is,
Supp( Ψ 0 ) ⊂ {|ξ| ≤ 2 2 A}, Ψ 0 (ξ) = 1 for |ξ| ≤ 2A, and Ψ k := 2 kd Ψ 0 (2 k ·). Then f k = Ψ k * f k and m k f k ∨ = m k Ψ k ∨ * f k .
Our claim is that
(6.4) m k Ψ k ∨ * f k L p m k (2 k ·) Ψ 0 L 2 s f k L p .
Then (6.3) follows from the observation that
m k (2 k ·) Ψ 0 L 2 s = R d 1 + |ξ| 2 s R d m k (2 k ·)(ξ − η)Ψ 0 (η)dη 2 dξ 1/2 R d 1 + |η| 2 s/2 Ψ 0 (η) R d 1 + |ξ − η| 2 s m k (2 k ·)(ξ − η) 2 dξ 1/2 dη m k (2 k ·) L 2 s (6.5)
where Minkowski's inequality is applied with 1 + |ξ| 2 ≤ 2(1 + |η| 2 )(1 + |ξ − η| 2 ).
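For completeness, the pointwise inequality just invoked is the elementary Peetre-type estimate, which follows from the triangle inequality:

```latex
% Peetre-type inequality used in (6.5):
% |\xi| \le |\eta| + |\xi - \eta| and (a+b)^2 \le 2(a^2 + b^2) give
1 + |\xi|^2 \le 1 + 2|\eta|^2 + 2|\xi - \eta|^2
            \le 2\bigl(1 + |\eta|^2\bigr)\bigl(1 + |\xi - \eta|^2\bigr),
% since expanding the right-hand side yields
% 2 + 2|\eta|^2 + 2|\xi-\eta|^2 + 2|\eta|^2|\xi-\eta|^2.
```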
In order to prove (6.4) we first assume 1 ≤ p ≤ ∞ and s > d/2. Using Young's inequality,
m k Ψ k ∨ * f k L p ≤ m k Ψ k ∨ L 1 f k L p and one has m k Ψ k ∨ L 1 = 2 −kd m k Ψ k ∨ · /2 k L 1 ≤ 2 −kd m k Ψ k ∨ · /2 k Φ 0 L 1 + ∞ l=1 2 −kd m k Ψ k ∨ · /2 k φ l L 1 Φ 0 * m k (2 k ·) Ψ 0 L 2 + ∞ l=1 2 ld/2 φ l * m k (2 k ·) Ψ 0 L 2 m k (2 k ·) Ψ 0 F s,2 2 ≈ m k (2 k ·) Ψ 0 L 2 s
where the second inequality follows from the Schwarz inequality and Plancherel's theorem, and the third one from the Schwarz inequality with s > d/2. This proves (6.4) for 1 ≤ p ≤ ∞. For 0 < p < 1 assume s > d/p − d/2 and apply Nikolskii's inequality to obtain
m k Ψ k ∨ * f k L p 2 kd(1/p−1) m k Ψ k ∨ L p f k L p .
Now we observe that
2 kd(1/p−1) m k Ψ k ∨ L p = 2 −kd m k Ψ k ∨ (·/2 k ) L p = 2 −kd m k Ψ k ∨ (·/2 k ) Φ 0 p L p + ∞ l=1 2 −kd m k Ψ k ∨ (·/2 k ) φ l p L p 1/p Φ 0 * m k (2 k ·) Ψ 0 L 2 + ∞ l=1 2 lp(d/p−d/2) φ l * m k (2 k ·) Ψ 0 p L 2 1/p m k (2 k ·) Ψ 0 F s,2 2 ≈ m k (2 k ·) Ψ 0 L 2 s
where Hölder's inequality and Plancherel's theorem are applied in the first inequality. This completes the proof of (6.4) for 0 < p < 1.
We now proceed with the proof of Theorem 6.1. We assume j = 0 as mentioned above. Since the constant A plays a minor role and affects the result only up to a constant, we fix A = 2 −2 in the proof to avoid unnecessary complications. Moreover, if f k ∈ E(2 k−2 ), then m k f k ∨ = m k Ψ k ∨ * f k and due to (6.5), one has
m k Ψ k (2 k ·) L 2 s = m k (2 k ·) Ψ 0 L 2 s m k (2 k ·) L 2 s .
This enables us to assume (6.6) Supp(m k ) ⊂ |ξ| ≤ 2 k in the proof. With this assumption, m k f k ∨ = m ∨ k * f k . We first deal with the case p = ∞ and 0 < q < ∞.
Proof of Theorem 6.1 (2). Suppose ν ≥ µ and P ∈ D ν . Let P * = 9P denote the dilate of P by a factor of 9 with the same center; then P * is a union of some dyadic cubes near P , and we decompose
1 |P | P ∞ k=ν m ∨ k * f k (x) q dx 1/q 1 |P | P ∞ k=ν m ∨ k * χ P * f k (x) q dx 1/q + 1 |P | P ∞ k=ν m ∨ k * χ (P * ) c f k (x) q dx 1/q =: U P + V P Note that (6.7) m ∨ k * χ P * f k = m ∨ k * Ψ k+1 * χ P * f k . Therefore, using Lemma 6.2 for s > d/ min (1, q) − d/2 one has U P 1 |P | ∞ k=ν m ∨ k * Ψ k+1 * χ P * f k q L q 1/q sup l≥ν m l (2 l ·) L 2 s 1 |P | ∞ k=ν Ψ k+1 * χ P * f k q L q 1/q .
Then for any σ > 0
(6.8) Ψ k+1 * χ P * f k L q P * M σ,2 k f k (y) q dy 1/q
. This follows immediately from Young's inequality if q ≥ 1. If q < 1 we apply Hölder's inequality with 1/q > 1 and (2.5) to obtain
Ψ k+1 * χ P * f k q L q = Q∈D k ,Q⊂P * Ψ k+1 * χ Q f k q L q ≤ Q∈D k ,Q⊂P * Q |Ψ k+1 (x − y)|dy q dx f k q L ∞ (Q) Q∈D k ,Q⊂P * 2 −kd inf y∈Q M σ,2 k f k (y) q ≤ Q∈D k ,Q⊂P * Q M σ,2 k f k (y) q dy = P * M σ,2 k f k (y) q dy
and this proves (6.8). Therefore
U P sup l≥ν m l (2 l ·) L 2 s 1 |P | P * ∞ k=ν M σ,2 k f k (y) q dy 1/q (6.9) sup l≥µ m l (2 l ·) L 2 s sup R∈Dν 1 |R| R ∞ k=ν M σ,2 k f k (y) q dy 1/q .
Choosing σ > d/q and applying Lemma 2.5 (2), one obtains
U P sup l≥µ m l (2 l ·) L 2 s sup R∈D,l(R)≤2 −µ 1 |R| R ∞ k=− log 2 l(R) f k (x) q dx 1/q . To estimate V P choose ǫ > 0 such that s > d/ min (1, q) − d/2 + ǫ ≥ d/2 + ǫ.
Then for x ∈ P and for some C > 0
m ∨ k * χ (P * ) c f k (x) ≤ |z| l(P ) |m ∨ k (z)||f k (x − z)|dz ≤ M ǫ,2 k f k (x) |z| l(P ) 1 + 2 k |z| ǫ |m ∨ k (z)|dz ≤ M ǫ,2 k f k (x) ∞ l=k−ν+C |z|≈2 l−k 1 + 2 k |z| ǫ |m ∨ k (z)|| φ l (2 k z)|dz M ǫ,2 k f k (x) ∞ l=k−ν 2 l(ǫ+d/2) φ l * m k (2 k ·) L 2 M ǫ,2 k f k (x)2 −(k−ν)(s−ǫ−d/2) m k (2 k ·) L 2 s .
where the penultimate inequality follows from the Schwarz inequality and Plancherel's theorem, and the last one from the Schwarz inequality and the fact that F s,2 2 = L 2 s . Now choose t > max (d/ǫ, q) and then
V P sup l≥ν m l (2 l ·) L 2 s 1 |P | P ∞ k=ν 2 −q(k−ν)(s−ǫ−d/2) M ǫ,2 k f k (x) q dx 1/q sup l≥µ m l (2 l ·) L 2 s 1 |P | P ∞ k=ν M ǫ,2 k f k (x) t dx 1/t (6.10) sup l≥µ m l (2 l ·) L 2 s sup R∈Dν 1 |R| R ∞ k=ν |f k (x)| t dx 1/t sup l≥µ m l (2 l ·) L 2 s sup R∈D,l(R)≤2 −µ 1 |R| R ∞ k=− log 2 l(R) |f k (x)| q dx 1/q
where the second inequality follows from Hölder's inequality, the third and fourth ones from Lemma 2.5 and (2.7), respectively. By taking the supremum of U P and V P over all dyadic cubes P whose side length is at most 2 −µ , the proof of Theorem 6.1 (2) is complete.
Proof of Theorem 6.1 (1). The proof of the case 0 < p = q ≤ ∞ is a straightforward application of Lemma 6.2, and therefore we only treat the cases p ≠ q with 0 < p < ∞.
The case 0 < p ≤ 1 and p < q ≤ ∞. Assume s > d/p − d/2. The proof is based on "∞-atoms" for ḟ 0,q p . We recall from [14] that a sequence of complex numbers r = {r Q } Q∈D is called an ∞-atom for ḟ 0,q p if there exists a dyadic cube Q 0 such that
r Q = 0 if Q ⊄ Q 0 and (6.11) g q (r) L ∞ ≤ |Q 0 | −1/p .
where g q (r) is defined as in (4.1). Then one has the following atomic decomposition of ḟ 0,q p , which is analogous to the H p atomic decomposition for 0 < p ≤ 1.
Lemma B. ([14]) Suppose 0 < p ≤ 1, p ≤ q ≤ ∞, and b = {b Q } Q∈D ∈ḟ 0,q p .
Then there exist C d,p,q > 0, a sequence of scalars {λ j }, and a sequence of ∞-atoms
r j = {r j,Q } Q∈D for ḟ 0,q p such that b = {b Q } = ∞ j=1 λ j {r j,Q } = ∞ j=1 λ j r j and ∞ j=1 |λ j | p 1/p ≤ C d,p,q b ḟ 0,q p . Moreover, b ḟ 0,q p ≈ inf ∞ j=1 |λ j | p 1/p : b = ∞ j=1 λ j r j , r j is an ∞-atom for ḟ 0,q p .
According to Lemma 4.1 and Lemma B, if Supp( f k ) ⊂ {ξ : |ξ| ≤ 2 k−1 } for each k ∈ Z, then there exist {b Q } Q∈D ∈ḟ 0,q p , a sequence of scalars {λ j }, and a sequence of ∞-atoms {r j,Q } forḟ 0,q p such that
f k (x) = Q∈D k b Q Ψ Q (x) = ∞ j=1 λ j Q∈D k r j,Q Ψ Q (x), k ∈ Z, and ∞ j=1 |λ j | p 1/p b ḟ 0,q p f k k∈Z L p (l q ) .
Then by applying l p ֒→ l 1 and Minkowski's inequality with q/p > 1, one has
m ∨ k * f k k∈Z L p (l q ) ∞ j=1 |λ j | p 1/p sup n≥1 m ∨ k * Q∈D k r n,Q Ψ Q k∈Z L p (l q ) f k k∈Z L p (l q ) sup n≥1 m ∨ k * Q∈D k r n,Q Ψ Q k∈Z L p (l q )
. Therefore, it suffices to show that the supremum in the last expression is dominated by a constant times sup l∈Z m l (2 l ·) L 2 s , which is equivalent to
m ∨ k * A Q 0 ,k k∈Z L p (l q ) sup l∈Z m l (2 l ·) L 2 s uniformly in Q 0
where {r Q } is an ∞-atom forḟ 0,q p associated with Q 0 ∈ D and
A Q 0 ,k (x) := Q∈D k ,Q⊂Q 0 r Q Ψ Q (x).
Suppose Q 0 ∈ D ν for some ν ∈ Z. Then the condition Q ⊂ Q 0 ensures that A Q 0 ,k vanishes unless ν ≤ k, and thus we actually need to prove
m ∨ k * A Q 0 ,k k≥ν L p (l q ) sup l∈Z m l (2 l ·) L 2 s uniformly in ν and Q 0 .
We observe that for x ∈ R d (6.12)
|r Q ||Q| −1/2 χ Q (x) Q⊂Q 0 l q ≤ |Q 0 | −1/p and for 0 < r < ∞ (6.13) A Q 0 ,k L r Q∈D k ,Q⊂Q 0 |r Q ||Q| −1/2 χ Q L r ≤ |Q 0 | −1/p+1/r
by using the argument in (4.5) and the estimate (6.11). Moreover,
Supp( A Q 0 ,k ) = Supp( Ψ k ) ⊂ ξ : |ξ| ≤ 2 k .
Let Q * 0 := 9Q 0 be the dilate of Q 0 , concentric with Q 0 , with side length 9l(Q 0 ), and Q * * 0 := 9Q * 0 (= 81Q 0 ). Then
m ∨ k * A Q 0 ,k k≥ν L p (l q ) ≲ Q * * 0 m ∨ k * A Q 0 ,k (x) k≥ν p l q dx 1/p + (Q * * 0 ) c m ∨ k * A Q 0 ,k (x) k≥ν p l q dx 1/p (6.14)
where (6.14) refers to the second term, and the first one is dominated by
|Q * * 0 | 1/p−1/q m ∨ k * A Q 0 ,k k≥ν l q (L q ) ≲ sup l∈Z m l (2 l ·) L 2 s |Q 0 | 1/p−1/q A Q 0 ,k k≥ν L q (l q ) ≲ sup l∈Z m l (2 l ·) L 2 s |Q 0 | 1/p−1/q {r Q } Q∈D,Q⊂Q 0 ḟ 0,q q ≲ sup l∈Z m l (2 l ·) L 2 s
where the first inequality follows from Lemma 6.2, the second one from (4.5), and the last one from (6.12).
To handle the term (6.14) we apply the embedding l p ֒→ l q and then obtain
(6.14) ≤ ∞ k=ν m ∨ k * A Q 0 ,k p L p ((Q * * 0 ) c ) 1/p . Writing m ∨ k * A Q 0 ,k p L p ((Q * * 0 ) c ) ≤ m ∨ k * A Q 0 ,k χ Q * 0 p L p ((Q * * 0 ) c ) + m ∨ k * A Q 0 ,k χ (Q * 0 ) c p L p ((Q * * 0 ) c )
, the proof will be finished once we establish the estimates that for some ǫ > 0
(6.15) m ∨ k * A Q 0 ,k χ Q * 0 L p ((Q * * 0 ) c ) 2 −ǫ(k−ν) sup l∈Z m l (2 l ·) L 2 s and (6.16) m ∨ k * A Q 0 ,k χ (Q * 0 ) c L p ((Q * * 0 ) c ) 2 −ǫ(k−ν) sup l∈Z m l (2 l ·) L 2 s .
By applying the embedding l p ֒→ l 1
m ∨ k * A Q 0 ,k χ Q * 0 L p ((Q * * 0 ) c ) ≤ Q∈D k ,Q⊂Q * 0 (Q 0 * * ) c m ∨ k * A Q 0 ,k χ Q (x) p dx 1/p ≤ Q∈D k ,Q⊂Q * 0 A Q 0 ,k p L ∞ (Q) (Q 0 * * ) c Q |m ∨ k (x − y)|dy p dx 1/p . Since s > d/p − d/2 there exists M > d(1 − p) such that s > M/p + d/2 > d/p − d/2. We choose ǫ > 0 such that s > M/2 + d/2 + ǫ.
Recall that x Q denotes the left lower corner of Q ∈ D and observe that for
Q ⊂ Q * 0 (Q 0 * * ) c Q |m ∨ k (x − y)|dy p dx 2 −kM l(Q 0 ) −M +d(1−p) Q (Q 0 * * ) c 1 + 2 k |x − x Q | M/p m ∨ k (x − y) dxdy p 2 −k(M +pd) l(Q 0 ) −M +d(1−p) R d 1 + 2 k |y| M/p |m ∨ k (y)|dy p 2 −k(M +pd) l(Q 0 ) −M +d(1−p) 2 −kdp 1 + | · | M/p+d/2+ǫ m ∨ k (·/2 k ) p L 2 ≤ 2 −k(M +pd) l(Q 0 ) −M +d(1−p) sup l∈Z m l (2 l ·) p L 2 s
where the first one follows from Hölder's inequality if 0 < p < 1 (it is trivial if p = 1), the second one from the fact that |x − x Q | |x − y| for x ∈ (Q * * 0 ) c and y ∈ Q ⊂ Q * 0 , and the third one from Schwarz inequality. By applying (2.5) one obtains that for any σ > 0
m ∨ k * A Q 0 ,k χ Q * 0 L p ((Q * * 0 ) c ) sup l∈Z m l (2 l ·) L 2 s 2 −k(M/p+d) l(Q 0 ) −M/p+d/p−d Q∈D k ,Q⊂Q * 0 inf y∈Q M σ,2 k A Q 0 ,k (y) p 1/p ≤ sup l∈Z m l (2 l ·) L 2 s 2 −(k−ν)(M/p−d(1/p−1)) Q * 0 M σ,2 k A Q 0 ,k (x) p dx 1/p
and then Lemma 2.5 (1) with σ > d/p and (6.13) prove (6.15) with ǫ = M/p−d(1/p−1) > 0. To verify (6.16) we observe that, similar to (6.7) under the assumption (6.6),
m ∨ k * A Q 0 ,k χ (Q * 0 ) c = m ∨ k * Ψ k+1 * A Q 0 ,k χ (Q * 0 ) c
and, it follows from Lemma 6.2 that
m ∨ k * A Q 0 ,k χ (Q * 0 ) c L p sup l∈Z m l (2 l ·) L 2 s Ψ k+1 * A Q 0 ,k χ (Q * 0 ) c L p .
Moreover,
Ψ k+1 * (A Q 0 ,k χ (Q * 0 ) c ) L p R d Q∈D k ,Q⊂Q 0 |r Q ||Q| −1/2 (Q * 0 ) c Ψ k+1 (x − y) 1 (1 + 2 k |y − x Q |) 2L dy p dx 1/p 2 −kL Q∈D k ,Q⊂Q 0 |r Q ||Q| −1/2 R d (Q * 0 ) c |Ψ k+1 (x − y)| |y − x Q 0 | L dy p dx 1/p because |y − x Q | l(Q 0 ) and 1 (1 + 2 k |y − x Q |) 2L 2 k l(Q 0 ) −L 1 + 2 k |x Q − x Q 0 | L 1 + 2 k |y − x Q 0 | L 1 2 k |y − x Q 0 | L for y ∈ (Q * 0 ) c and Q ⊂ Q 0 .
Notice that due to (6.12)
Q∈D k ,Q⊂Q 0 |r Q ||Q| −1/2 ≤ 2 νd(1/p−1) 2 kd
and, using Hölder's inequality (if p < 1), one obtains
R d (Q * 0 ) c |Ψ k+1 (x − y)| |y − x Q 0 | L dy p dx 1/p 2 −kd(1/p−1) (Q * 0 ) c 1 |y − x Q 0 | L R d 1 + 2 k |x − x Q 0 | N/p Ψ k+1 (x − y) dxdy 2 −kd(1/p−1) (Q * 0 ) c 1 + 2 k |y − x Q 0 | N/p |y − x Q 0 | L dy 2 −kd(1/p−1) 2 kN/p (Q * 0 ) c 1 |y − x Q 0 | L−N/p dy 2 −kd(1/p−1) 2 kN/p 2 ν(L−N/p−d) for N > d(1 − p) and L − N/p > d.
In conclusion, one has
Ψ k+1 * (A Q 0 ,k χ (Q * 0 ) c ) L p 2 −(k−ν)(L−N/p+d/p−2d)
and this proves (6.16) with ǫ = L − N/p + d/p − 2d > 0.
The case 1 < p < ∞ and p < q ≤ ∞. Assume s > 0 and interpolate two estimates
{m ∨ k * f k } k∈Z L 1 (l q ) sup l∈Z m l (2 l ·) L 2 s f L 1 (l q ) and {m ∨ k * f k } k∈Z L q (l q ) sup l∈Z m l (2 l ·) L 2 s f L q (l q ) ,
which have already been proved.
The case 0 < q < p < ∞. Assume s > d/ min (1, q) − d/2, and choose ǫ > 0 and t > 0 such that s > d/ min (1, q) − d/2 + ǫ ≥ d/2 + ǫ and t > max (d/ǫ, q).
We first consider the case t < p < ∞. In this case we apply Lemma 5.1 to obtain
m ∨ k * f k k∈Z L p (l q ) sup x∈P ∈D 1 |P | P ∞ k=− log 2 l(P ) m ∨ k * f k (y) q dy 1/q L p (x)
. Now let x ∈ P ∈ D ν for some ν ∈ Z and define P * = 9P as before. Then, using (6.9),
1 |P | P ∞ k=ν m ∨ k * χ P * f k (x) q dy 1/q sup l∈Z m l (2 l ·) L 2 s 1 |P | P * ∞ k=ν M σ,2 k f k (y) q dy 1/q sup l∈Z m l (2 l ·) L 2 s M k∈Z M σ,2 k f k q (x) 1/q
for σ > d/q. By the L p/q boundedness of M and Lemma 2.5 (1)
sup x∈P ∈D 1 |P | P ∞ k=− log 2 l(P ) m ∨ k * χ P * f k (y) q dy 1/q L p (x) sup l∈Z m l (2 l ·) L 2 s f L p (l q ) .
Furthermore, one obtains, from (6.10), that
1 |P | P ∞ k=ν m ∨ k * χ (P * ) c f k (x) q dy 1/q sup l∈Z m l (2 l ·) L 2 s 1 |P | P ∞ k=ν M ǫ,2 k f k (y) t dy 1/t sup l∈Z m l (2 l ·) L 2 s M k∈Z M ǫ,2 k f k t (x) 1/t .
Then by the L p/t boundedness of M (since p/t > 1 from our assumption), Lemma 2.5 (1) with ǫ > d/t, and the embedding l q ֒→ l t one has
sup x∈P ∈D 1 |P | P ∞ k=− log 2 l(P ) m ∨ k * χ (P * ) c f k (y) q dy 1/q L p (x) ≲ sup l∈Z m l (2 l ·) L 2 s f L p (l q ) .
This proves that for s > d/ min (1, q) − d/2
(6.17) m ∨ k * f k k∈Z L p (l q ) sup l∈Z m l (2 l ·) L 2 s f L p (l q ) , q < t < p.
The general case q < p follows from interpolation between (6.17) and the L q (l q ) estimate with the same value s > d/ min (1, q) − d/2. Here l q , 0 < q ≤ ∞, is a quasi-Banach space and Peetre's real interpolation method, the so-called K-method, works. See [13, Chapter 6] for more details about the interpolation method.
Hörmander multiplier theorem for multilinear operators
For notational convenience we will occasionally write f := (f 1 , . . . , f n ), ξ := (ξ 1 , . . . , ξ n ), k := (k 1 , . . . , k n ), v := (v 1 , . . . , v n ), d ξ := dξ 1 · · · dξ n , and d v := dv 1 · · · dv n .
For m ∈ L ∞ (R d ) n the n-linear multiplier operator T m is defined by
T m f (x) := (R d ) n m( ξ) n j=1 f j (ξ j ) e 2πi x, n j=1 ξ j d ξ for f j ∈ S(R d ).
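As an illustration (not from the paper, and with names of our own choosing), the discrete periodic analogue of this operator for n = 2 can be written down directly with the FFT; the sanity check below verifies that the multiplier m ≡ 1 reduces T_m to the pointwise product f·g, as the definition suggests:

```python
# Discrete periodic analogue (illustration only) of the n-linear
# multiplier operator T_m for n = 2.  On Z/NZ,
#   T_m(f, g)(x) = (1/N^2) sum_{xi1, xi2} m[xi1, xi2] fhat[xi1] ghat[xi2]
#                  * exp(2*pi*i*x*(xi1 + xi2)/N),
# mirroring the integral above.  With m identically 1 this collapses to
# the pointwise product f(x) g(x).
import numpy as np

def bilinear_multiplier(m, f, g):
    N = len(f)
    fhat, ghat = np.fft.fft(f), np.fft.fft(g)
    # E[x, xi] = exp(2 pi i x xi / N) / N, i.e. one inverse-DFT factor.
    E = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / N
    # Sum m[a, b] * fhat[a] * ghat[b] * E[x, a] * E[x, b] over a, b.
    return np.einsum('ab,a,b,xa,xb->x', m, fhat, ghat, E, E)

rng = np.random.default_rng(0)
N = 32
f = rng.standard_normal(N)
g = rng.standard_normal(N)
T1 = bilinear_multiplier(np.ones((N, N)), f, g)  # should equal f * g
```

The direct summation is O(N^3) and is meant only to make the definition concrete, not to be efficient.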
Let ϑ (n) ∈ S((R d ) n ) satisfy the properties that 0 ≤ ϑ (n) ≤ 1, ϑ (n) = 1 for 2 −1 ≤ | ξ| ≤ 2, and Supp(ϑ (n) ) ⊂ ξ ∈ (R d ) n : 2 −2 ≤ | ξ| ≤ 2 2 . Define
L r,ϑ (n) s [m] := sup l∈Z m(2 l · 1 , . . . , 2 l · n )ϑ (n) L r s ((R d ) n )
. A multilinear version of Hörmander's multiplier theorem was established by Tomita [27].
Theorem C. Suppose 1 < p, p 1 , . . . , p n < ∞ and 1/p = 1/p 1 +· · ·+1/p n . If m ∈ L ∞ ((R d ) n ) satisfies L 2,ϑ (n) s [m] < ∞ for s > nd/2, then there exists a constant C > 0 so that
T m f L p ≤ CL 2,ϑ (n) s [m] n j=1 f j L p j .
Another boundedness result was obtained by Grafakos and Si [18]. Theorem D. Let 0 < p < ∞ and 1/p = 1/p 1 + · · · + 1/p n . Suppose 1 < r ≤ 2 and m satisfies L r,ϑ (n) s [m] < ∞ for s > nd/r. Then there exists a number δ > 0 satisfying 0 < δ ≤ r − 1, such that
T m f L p ≲ L r,ϑ (n) s [m] n j=1 f j L p j whenever r − δ < p j < ∞ for 1 ≤ j ≤ n.
In this section we will generalize Theorem C and D. Let
X p := H p if p < ∞, BMO if p = ∞.
Theorem 7.1. Let 1 < p < ∞ and let 1 < p i,j ≤ ∞, 1 ≤ i, j ≤ n, satisfy
1 p = 1 p i,1 + · · · + 1 p i,n for 1 ≤ i ≤ n. Suppose m satisfies L 2,ϑ (n) s [m] < ∞ for s > nd/2.
Then
T m f L p L 2,ϑ (n) s [m] n i=1 f i X p i,i 1≤j≤n,j =i f j L p i,j .
Theorem 7.2. Let 1 < p < ∞ and let 1 < p i,j ≤ ∞, 1 ≤ i, j ≤ n, satisfy
1 p = 1 p i,1 + · · · + 1 p i,n for 1 ≤ i ≤ n.
Suppose 1 < u ≤ 2, 0 < r ≤ 2, and m satisfies L u,ϑ (n) s [m] < ∞ for s > nd/r. Then there exists a number δ > 0 such that
T m f L p L u,ϑ (n) s [m] n i=1 f i X p i,i 1≤j≤n,j =i f j H p i,j whenever r − δ < p i,j ≤ ∞ for 1 ≤ i, j ≤ n.
Under the same hypothesis s > nd/r, the condition L r,ϑ (n) s [m] < ∞ in Theorem D is weakened to L u,ϑ (n) s [m] < ∞ for any 1 < u ≤ 2 in Theorem 7.2. In turn, since the exponent u in the condition L u,ϑ (n) s [m] < ∞ is independent of r, one gains more freedom in the ranges 0 < r ≤ 2 and
r − δ < p i,j ≤ ∞. Note that L u 1 ,ϑ (n) s [m] ≲ L u 2 ,ϑ (n) s [m] if 1 < u 1 < u 2 ≤ 2.
We will first prove Theorem 7.2 and then turn to the proof of Theorem 7.1. 7.1. Proof of Theorem 7.2. Choose 0 < t < r such that s > nd/t > nd/r and let δ = r − t > 0. Suppose p 1 , . . . , p n > t = r − δ.
Let ϑ (n) be a cutoff function on (R d ) n such that 0 ≤ ϑ (n) ≤ 1, ϑ (n) ( ξ) = 1 for 2 −1 n −1/2 ≤ | ξ| ≤ 2n 1/2 , and Supp( ϑ (n) ) ⊂ ξ ∈ (R d ) n : 2 −2 n −1/2 ≤ | ξ| ≤ 2 2 n 1/2 . Then, using the Calderón reproducing formula, a Littlewood-Paley partition of unity {φ k } k∈Z , and the triangle inequality, we first see that
L u, ϑ (n) s [m] L u,ϑ (n) s [m].
Thus it suffices to prove the estimate that
(7.1) T m f L p ≲ L u, ϑ (n) s [m] n i=1 f i X p i,i 1≤j≤n,j =i f j H p i,j .
We use the notation L u s [m] := L u, ϑ (n) s [m].
By using a Littlewood-Paley partition of unity {φ k } k∈Z , m( ξ) can be decomposed as
m( ξ) = k∈Z n m( ξ) φ k 1 (ξ 1 ) · · · φ kn (ξ n ) = k 1 ∈Z k 2 ,...,kn≤k 1 · · · + k 2 ∈Z k 1 <k 2 k 3 ,...,kn≤k 2 · · · + · · · + kn∈Z k 1 ,...,k n−1 <kn · · · =: m (1) ( ξ) + m (2) ( ξ) + · · · + m (n) ( ξ). Then (7.1) is a consequence of the following estimates: for s > nd/r
T m (i) f L p L u s [m] f i X p i,i 1≤j≤n,j =i f j H p i,j for each 1 ≤ i ≤ n.
We only concern ourselves with the case i = 1; the other cases follow by symmetry. Set p j := p 1,j for 1 ≤ j ≤ n.
We write
m (1) ( ξ) = k∈Z k 2 ,...,kn≤k m( ξ) φ k (ξ 1 ) φ k 2 (ξ 2 ) · · · φ kn (ξ n ) = k∈Z m( ξ) ϑ (n) ( ξ/2 k ) φ k (ξ 1 ) k 2 ,...,kn≤k φ k 2 (ξ 2 ) · · · φ kn (ξ n )
since ϑ (n) ( ξ/2 k ) = 1 for 2 k−1 ≤ |ξ 1 | ≤ 2 k+1 and |ξ j | ≤ 2 k+1 for 2 ≤ j ≤ n. Let
m k ( ξ) := m( ξ) ϑ (n) ( ξ/2 k )
and then we note that
(7.2) m k (2 k ·) L u s ((R d ) n ) ≤ L u s [m]
and
m (1) ( ξ) = k∈Z m k ( ξ) φ k (ξ 1 ) k 2 ,...,kn≤k φ k 2 (ξ 2 ) · · · φ kn (ξ n ).
We further decompose m (1) as
m (1) ( ξ) = m (1) low ( ξ) + m (1) high ( ξ) where m (1) low ( ξ) := k∈Z m k ( ξ) φ k (ξ 1 ) k 2 ,...,kn≤k max 2≤j≤n (k j )≥k−3−⌊log 2 n⌋ φ k 2 (ξ 2 ) · · · φ kn (ξ n ), m (1) high ( ξ) := k∈Z m k ( ξ) φ k (ξ 1 ) k 2 ,...,kn≤k−4−⌊log 2 n⌋ φ k 2 (ξ 2 ) · · · φ kn (ξ n ).
We refer to T m (1) low and T m (1) high as the low and high frequency parts of T m (1) , and treat them in turn. 7.1.1. Low frequency part. One has
T m (1) low f (x) = k∈Z k 2 ,...,kn≤k max 2≤j≤n (k j )≥k−3−⌊log 2 n⌋ T m k (f 1 ) k , (f 2 ) k 2 , . . . , (f n ) kn (x)
where (g) l := φ l * g for g ∈ S and l ∈ Z. It suffices to consider only the sum over k 3 , . . . , k n ≤ k 2 and k − 3 − ⌊log 2 n⌋ ≤ k 2 ≤ k, and we will actually prove that k∈Z k−3−⌊log 2 n⌋≤k 2 ≤k k 3 ,...,kn≤k 2
T m k (f 1 ) k , (f 2 ) k 2 , . . . , (f n ) kn L p L u s [m] f 1 X p 1 n j=2 f j H p j .
We define Φ l := 2 ld Φ 0 (2 l ·) for l ∈ Z and then observe that for any g ∈ S m≤l φ m * g = Φ l * g and
(7.3) sup k∈Z |Φ k * f | L p f H p , 0 < p ≤ ∞.
We see that k∈Z k−3−⌊log 2 n⌋≤k 2 ≤k k 3 ,...,kn≤k 2
T m k (f 1 ) k , (f 2 ) k 2 , . . . , (f n ) kn (x) = k∈Z k−3−⌊log 2 n⌋≤k 2 ≤k T m k (f 1 ) k , (f 2 ) k 2 , (f 3 ) k 2 , . . . , (f n ) k 2 (x)
where (f j ) k 2 := Φ k 2 * f j . Since the second sum is a finite sum over k 2 near k, we may only consider the case k 2 = k and thus our claim is
(7.4) k∈Z T m k (f 1 ) k , (f 2 ) k , (f 3 ) k , . . . , (f n ) k L p L u s [m] f 1 X p 1 n j=2 f j H p j .
To prove (7.4) let 0 < ǫ < min (1, t) be such that 1/ǫ = 1 − 1/u + 1/t, which implies u ′ = 1/(1/ǫ − 1/t) where 1/u + 1/u ′ = 1. Then using Nikolskii's inequality and Hölder's inequality with u ′ /ǫ > 1 one has
T m k (f 1 ) k , (f 2 ) k , (f 3 ) k , . . . , (f n ) k (x) = (R d ) n m ∨ k ( v)(f 1 ) k (x − v 1 )(f 2 ) k (x − v 2 ) n j=3 (f j ) k (x − v j )d v 2 nkd(1/ǫ−1) (R d ) n |m ∨ k ( v)| ǫ (f 1 ) k (x − v 1 ) ǫ (f 2 ) k (x − v 2 ) ǫ n j=3 (f j ) k (x − v j ) ǫ d v 1/ǫ ≤ 2 nkd(1/ǫ−1) (R d ) n 1 + 2 k |v 1 | + · · · + 2 k |v n | su ′ |m ∨ k ( v)| u ′ d v 1/u ′ × (R d ) n (f 1 ) k (x − v 1 ) t (f 2 ) k (x − v 2 ) t 1 + 2 k |v 1 | + · · · + 2 k |v n | st n j=3 (f j ) k (x − v n ) t d v 1/t .
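For the reader's convenience, the exponent bookkeeping behind this choice of ǫ, including the identity 1/ǫ − 1 + 1/u − 1/t = 0 used for (7.5) below, is the following elementary computation:

```latex
% With 1/u + 1/u' = 1 and 1/\epsilon = 1 - 1/u + 1/t:
\frac{1}{\epsilon} = 1 - \frac{1}{u} + \frac{1}{t}
  = \frac{1}{u'} + \frac{1}{t}
\quad\Longrightarrow\quad
\frac{1}{u'} = \frac{1}{\epsilon} - \frac{1}{t},
\qquad
\frac{1}{\epsilon} - 1 + \frac{1}{u} - \frac{1}{t} = 0.
```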
We observe that, by using the Hausdorff-Young inequality with u ′ ≥ 2 and (7.2),
(R d ) n 1 + 2 k |v 1 | + · · · + 2 k |v n | su ′ |m ∨ k ( v)| u ′ d v 1/u ′ 2 nkd/u m k (2 k ·) L u s 2 nkd/u L u s [m], and (R d ) n (f 1 ) k (x − v 1 ) t (f 2 ) k (x − v 2 ) t 1 + 2 k |v 1 | + · · · + 2 k |v n | st n j=3 (f j ) k (x − v n ) t d v 1/t ≤ R d |(f 1 ) k (x − v 1 )| t (1 + 2 k |v 1 |) st/n dv 1 1/t R d |(f 2 ) k (x − v 2 )| t (1 + 2 k |v 2 |) st/n dv 2 1/t n j=3 R d |(f j ) k (x − v j )| t (1 + 2 k |v j |) st/n dv j 1/t ≤ 2 −nkd/t M t s/n,2 k (f 1 ) k (x)M t s/n,2 k (f 2 ) k (x) n j=3 M t s/n,2 k (f j ) k (x). Therefore T m k (f 1 ) k , (f 2 ) k , (f 3 ) k , . . . , (f n ) k (x) L u s [m]M t s/n,2 k (f 1 ) k (x)M t s/n,2 k (f 2 ) k (x) n j=3
M t s/n,2 k (f j ) k (x) (7.5) because 1/ǫ − 1 + 1/u − 1/t = 0. Now fix 0 < γ < 1 and for each Q ∈ D k let S Q := S γ,2 Q M t s/n,2 k (f 1 ) k . Then it follows from (3.10) and (3.4) that
|S Q | ≥ (1 − γ)|Q|, χ Q (x) M τ (χ S Q )(x)χ Q (x) for 0 < τ < ∞.
Now we choose τ < min (1, p) and apply (2.6) to obtain
k∈Z T m k (f 1 ) k , (f 2 ) k , (f 3 ) k , . . . , (f n ) k L p L u s [m] k∈Z Q∈D k M t s/n,2 k (f 1 ) k M t s/n,2 k (f 2 ) k χ Q L 1 1/p 1 +1/p 2 n j=3 M t s/n,2 k (f j ) k k∈Z L p j (l ∞ ) L u s [m] k∈Z Q∈D k inf y∈Q M t s/n,2 k (f 1 ) k (y) inf y∈Q M t s/n,2 k (f 2 ) k (y) M τ (χ S Q ) L 1 1/p 1 +1/p 2 × n j=3 (f j ) k k∈Z L p j (l ∞ ) L u s [m] k∈Z Q∈D k inf y∈Q M t s/n,2 k (f 1 ) k (y) inf y∈Q M t s/n,2 k (f 2 ) k (y) χ S Q L 1 1/p 1 +1/p 2 n j=3 f j H p j L u s [m] Q∈D k inf y∈Q M t s/n,2 k (f 1 ) k (y) χ S Q k∈Z L p 1 (l 2 ) n j=2 f j H p j L u s [m] f 1 X p 1 n j=2 f j H p j
where the first inequality follows from Hölder's inequality, the second one from Lemma 2.5 (1), the third one from (7.3), the fourth one from Hölder's inequality, and the last one from
Lemma 3.1 (1) if p 1 < ∞ or Theorem 3.4 if p 1 = ∞.
7.1.2. High frequency part. The proof for the high frequency part is based on the property that if the Fourier transform of g k is supported on {ξ : A −1 2 k ≤ |ξ| ≤ A2 k } for A > 1 then
(7.6) φ k * k+h l=k−h g l k∈Z L p (l q ) g k k∈Z L p (l q )
for h ∈ N. The proof of (7.6) is elementary and standard, so we omit it; one simply uses the estimate |φ k * g l (x)| ≲ M σ,2 l g l (x) for k − h ≤ l ≤ k + h and applies Lemma 2.5 (1). We note that
T m (1) high f (x) = k∈Z T m k (f 1 ) k , (f 2 ) k,n , . . . , (f n ) k,n (x) where (f j ) k,n := Φ k−4−⌊log 2 n⌋ * f j .
Observe that the Fourier transform of T m k (f 1 ) k , (f 2 ) k,n , . . . , (f n ) k,n is supported in ξ ∈ R d : 2 k−2 ≤ |ξ| ≤ 2 k+2 and thus (7.6) yields that
T m (1) high f L p T m (1) high f H p ≈ T m (1) high f Ḟ 0,2 p T m k (f 1 ) k , (f 2 ) k,n , . . . , (f n ) k,n k∈Z L p (l 2 ) .
Using the argument that led to (7.5), one has
T m k (f 1 ) k , (f 2 ) k,n , . . . , (f n ) k,n (x) L u s [m]M t s/n,2 k (f 1 ) k (x) n j=2 M t s/n,2 k (f j ) k,n (x).
Fix 0 < γ < 1 and for Q ∈ D k let S Q := S γ,2 Q M t s/n,2 k (f 1 ) k as before and proceed the similar arguments to obtain that
T m (1) high f L p L u s [m] M t s/n,2 k (f 1 ) k n j=2 M t s/n,2 k (f j ) k,n k∈Z L p (l 2 ) L u s [m] Q∈D k inf y∈Q M t s/n,2 k (f 1 ) k (y) n j=2 inf y∈Q M t s/n,2 k (f j ) k,n (y) χ S Q k∈Z L p (l 2 ) L u s [m] Q∈D k inf y∈Q M t s/n,2 k (f 1 )(y) χ S Q k∈Z L p 1 (l 2 ) n j=2 f j H p j L u s [m] f 1 X p 1 n j=2 f j H p j
for p 1 , . . . , p n > t.
7.2. Proof of Theorem 7.1. As in the proof of Theorem 7.2, it suffices to deal with T m (1) . Suppose 1 < p < ∞ and 1 < p j ≤ ∞ for 1 ≤ j ≤ n. Then we will prove
(7.7) T m (1) f L p L 2 s [m] f i X p 1 n j=2 f j L p j , s > nd/2
for each 1 ≤ i ≤ n. First of all, it follows from Theorem 7.2 with r = u = 2 that (7.7) holds for 2 ≤ p j ≤ ∞. Now assume 1 < p ≤ min (p 1 , . . . , p n ) < 2. Observe that only one of the p j 's can be less than 2, because 1/p = 1/p 1 + · · · + 1/p n < 1, so we need only consider the two cases 1 < p 1 < 2 ≤ p 2 , . . . , p n and 1 < p 2 < 2 ≤ p 1 , p 3 , . . . , p n . Let T * j m (1) be the jth transpose of T m (1) , that is, the unique operator satisfying

⟨T * j m (1) (f 1 , . . . , f n ), h⟩ := ⟨T m (1) (f 1 , . . . , f j−1 , h, f j+1 , . . . , f n ), f j ⟩

for f 1 , . . . , f n , h ∈ S. It is known from [27] that T * j m (1) = T (m (1) ) * j where

(m (1) ) * j (ξ 1 , . . . , ξ n ) = m (1) (ξ 1 , . . . , ξ j−1 , −(ξ 1 + · · · + ξ n ), ξ j+1 , . . . , ξ n ),

and then

(7.8)  L 2 s [(m (1) ) * j ] ≲ L 2 s [m (1) ] ≲ L 2 s [m].
7.2.1. The case 1 < p < p 1 < 2. Let 2 < p ′ , p ′ 1 < ∞ be the conjugates of p, p 1 , respectively. That is, 1/p + 1/p ′ = 1/p 1 + 1/p ′ 1 = 1. Then X p 1 = L p 1 and 1/p ′ 1 = 1/p ′ + 1/p 2 + · · · + 1/p n . Therefore
T m (1) (f 1 , . . . , f n ) L p = sup h L p ′ =1 T (m (1) ) * 1 (h, f 2 , . . . , f n ), f 1 ≤ f 1 L p 1 sup h L p ′ =1 T (m (1) ) * 1 (h, f 2 , . . . , f n ) L p ′ 1 L 2 s (m (1) ) * 1 n j=1 f j L p j L 2 s [m] n j=1 f j L p j
where the second inequality follows from Theorem 7.2 and the last one from (7.8).
7.2.2. The case 1 < p < p 2 < 2. Similarly, let 2 < p ′ , p ′ 2 < ∞ be the conjugates of p, p 2 and then
T m (1) (f 1 , . . . , f n ) L p = sup h L p ′ =1 T (m (1) ) * 2 (f 1 , h, f 3 , . . . , f n ), f 2 ≤ f 2 L p 2 sup h L p ′ =1 T (m (1) ) * 2 (f 1 , h, f 3 , . . . , f n ) L p ′ 2 L 2 s (m (1) ) * 2 f 1 X p 1 n j=2 f j L p j L 2 s [m] f 1 X p 1 n j=2 f j L p j .
8. Multilinear pseudo-differential operators of type (1, 1)
We use the notations f := (f 1 , . . . , f n ), ξ := (ξ 1 , . . . , ξ n ), l := (l 1 , . . . , l n ), d ξ := dξ 1 · · · dξ n , d η := dη 1 · · · dη n , α := (α 1 , . . . , α n ), |α| := |α 1 | + · · · + |α n |, ∂ α ξ := ∂ α 1 ξ 1 · · · ∂ αn ξn , and ∂ α η := ∂ α 1 η 1 · · · ∂ αn ηn . Multilinear pseudo-differential operators were studied by Coifman and Meyer [5,6,7] and there have been a large number of variants of their results. In this section, we will study boundedness of n-linear pseudo-differential operators associated with forbidden symbols. The n-linear Hörmander symbol class M n S m 1,1 consists of all a ∈ C ∞ ((R d ) n+1 ) having the property that for all multi-indices α 1 , . . . , α n , β there exists a constant C = C α,β such that |∂ α ξ ∂ β x a(x, ξ)| ≤ C (1 + Σ n j=1 |ξ j |) m−|α|+|β| , and the corresponding n-linear pseudo-differential operator T [a] is defined by
T [a] f (x) := ∫ (R d ) n a(x, ξ) Π n j=1 f̂ j (ξ j ) e 2πi⟨x, ξ 1 +···+ξ n ⟩ d ξ for f 1 , . . . , f n ∈ S(R d ). Denote by OpM n S m 1,1 the class of n-linear pseudo-differential operators with symbols in M n S m 1,1 . Bilinear pseudo-differential operators (n = 2) in OpM 2 S 0 1,1 have bilinear Calderón-Zygmund kernels, but in general they are not bilinear Calderón-Zygmund operators. In particular, they do not always give rise to a bounded mapping L p 1 × L p 2 → L p for 1 < p, p 1 , p 2 ≤ ∞ with 1/p = 1/p 1 + 1/p 2 .
The boundedness properties of operators in OpM 2 S 0 1,1 have been studied by Bényi and Torres [2], and Bényi, Nahmod, and Torres [1] in the scale of Lebesgue-Sobolev spaces. To be specific, Bényi and Torres [2] proved that if a ∈ M 2 S 0 1,1 , then
T [a] (f 1 , f 2 ) L p s f 1 L p 1 s f 2 L p 2 + f 1 L p 1 f 2 L p 2
s for 1 < p 1 , p 2 , p < ∞, 1/p 1 + 1/p 2 = 1/p, and s > 0. Moreover, this result was generalized to a ∈ M 2 S m 1,1 , m ∈ R, by Bényi-Nahmod-Torres [1]. Naibo [22] investigated bilinear pseudo-differential operators on Triebel-Lizorkin spaces and Koezuka and Tomita [20] slightly developed the result of Naibo. These results can be readily extended to multilinear operators. For a ∈ M n S m 1,1 and N ∈ N 0 we define where 1/p = 1/p 1 + · · · + 1/p n . Observe that
a (1) (x, ξ) = Σ ∞ k=0 a(x, ξ) φ k (ξ 1 ) Φ k (ξ 2 ) · · · Φ k (ξ n ) =: Σ ∞ k=0 a k (x, ξ).
Of course, we regard φ 0 as Φ 0 when k = 0. Then each a k belongs to M n S m 1,1 and for each N ∈ N 0 Let φ 0 := Φ 1 and φ k := φ k−1 + φ k + φ k+1 for k ≥ 1. By using Fourier series expansion and the fact that φ k = 1 on Supp( φ k ) and Φ k+1 = 1 on Supp( Φ k ), one can write
a k (x, ξ) = l∈(Z d ) n c l k (x)ϕ l 1 k (ξ 1 )ϑ l 2 k (ξ 2 ) · · · ϑ ln k (ξ n ) where c l k (x) := (R d ) n
a k (x, 2 k η 1 , . . . , 2 k η n )e −2πi η 1 ,l 1 . . . e −2πi ηn,ln d η ϕ l 1 k (ξ 1 ) := e 2πi l 1 ,2 −k ξ 1 φ k (ξ 1 ), ϑ l j k (ξ j ) := e 2πi l j ,2 −k ξ j Φ k+1 (ξ j ), 2 ≤ j ≤ n. It can be verified that for l ∈ Z d and multi-index α one has where the first inequality follows from integration by parts and the second inequality is due to (8.5) and the fact that the domain of the integral is actually |η 1 | ≤ 2 ×· · ·× |η n | ≤ 2 .
Supp(ϕ l 0 ) ⊂ {|ξ| ≤ 2}, Supp(ϕ l k ) ⊂ {2 k−3 ≤ |ξ| ≤ 2 k+1 } for k ≥ 1 Supp(ϑ l k ) ⊂ {|ξ| ≤ 2 k+1 } for k ≥ 0 ∂ α ξ ϕ l k (ξ) ,
Moreover, we decompose m l,N m l,N k,j := φ k+j * m l,N k , j ≥ 1 (high frequency part) and then (8.6) yields that for j ∈ N 0 (8.7) m l,N k,j (x) a M n S m 1,1,N 2 −jN 2 km uniformly in l. To be specific, m l,N k,0 (x) m l,N k L ∞ a M nS m 1,1,N 2 km and for j ≥ 1, using vanishing moment condition of φ k+j and Taylor expansion,
m l,N k,j (x) = R d φ k+j (x − y)m l,N k (y)dy = R d φ k+j (x − y) m l,N k (y) − |β|<N 1 β! ∂ β x m l,N k (x)(y − x) β dy |β|=N ∂ β x m l,N k L ∞ R d φ k+j (x − y) |x − y| N dy
a M nS m 1,1,N 2 km 2 −jN . We also observe that due to the Fourier support of Φ k and φ k+j Supp( m l,N k,j ) ⊂ |ξ| ≤ 2 k+j . Then (1 + |l j |) N m l,N k (x)ϕ l 1 k (ξ 1 )ϑ l 2 k (ξ 2 ) · · · ϑ ln k (ξ n ) = l∈(Z d ) n n j=1 1 (1 + |l j |) N k,j∈N 0 m l,N k,j (x)ϕ l 1 k (ξ 1 )ϑ l 2 k (ξ 2 ) · · · ϑ ln k (ξ n ).
Setting
A l,N k,j (x, ξ) := m l,N k,j (x)ϕ l 1 k (ξ 1 )ϑ l 2 k (ξ 2 ) · · · ϑ ln k (ξ n ), we write
T [a (1) ] f = l∈(Z d ) n n j=1 1 (1 + |l j |) N k,j∈N 0 T [A l,N k,j ]
f . Choose N > 0 sufficiently large so that N > s and N > d/ min (1, p, q)+d/ min (p 1 , . . . , p n , q). Then the proof of (8.4) can be deduced from the following estimate that (1 + |l j |) σ f 1 F s+m,q p 1 n j=2 f j h p j for d/min (p 1 , . . . , p n , q) < σ < N − d/ min (1, p, q). From now on we shall prove (8.8). Let φ * 0 := Φ 2 and φ * k be Schwartz functions such that Supp( φ * k ) ⊂ ξ : 2 k−2 ≤ |ξ| ≤ 2 k+2 , φ * k = 1 on Supp( φ k ), for k ≥ 1.
Now one has
log 2 l(P ) |f k (x)| q dx 1/q , p = ∞ and q < ∞.
possible because P and Q's are dyadic cubes with l(Q) ≤ l(P ). Using (4.9), one has ∞ k=− log 2 l(P )
M (d) f (x) := sup P ∈D, x∈P (1/|P |) ∫ P |f (y)| dy, and the dyadic sharp maximal function M ♯ f (x) := sup P ∈D: x∈P
the low frequency part, and T m(1) high as the high frequency part of T m(1) (due to the Fourier supports of T m Low frequency part. To obtain the estimates for the operator T m
k (x, 2 k η 1 , . . . , 2 k η n )e −2πi n j=1 η j ,l j d η |α 1 |,...,|αn|≤N (R d ) n ∂ α η ∂ β x a k (x, 2 k η 1 , . . . , 2 k η n ) 2 k|α| d η
2 is the fractional Laplacian operator. See[13,15,28] for further details.2.2. Maximal inequalities. Let M be the Hardy-Littlewood maximal operator, defined
by

Mf (x) := sup x∈Q (1/|Q|) ∫ Q |f (y)| dy
where the supremum is taken over (x, ξ) ∈ (R d ) n+1 and the maximum is taken over |α 1 |, . . . , |α n |, |β| ≤ N . For 0 < p, q ≤ ∞ let τ p := d/ min (1, p) − d, τ p,q := d/ min (1, p, q) − d.Theorem E.[20,22]Let 0 < p < ∞, 0 < q ≤ ∞, m ∈ R, and a ∈ M n S m 1,1 . Let {p i,j } 1≤i,j≤n satisfy 1 < p i,j < ∞ and (8.1) 1 p = 1 p i,1 + · · · + 1 p i,n for 1 ≤ i ≤ n.then there exists a positive integer N such that1≤j≤n,j =i f j h p i,j for f 1 , . . . , f n ∈ S(R d ). Moreover, the inequality also holds for p i,j = ∞, i = j.We recall that h p = L p for 1 < p ≤ ∞ and F s,2 p = h p s for 0 < p < ∞. In this section we extend Theorem E to the full range 0 < p, p i,j ≤ ∞ with the weaker condition s > τ p,q , instead of (8.2), using Theorem 3.4 and Theorem 6.1.Theorem 8.1. Suppose 0 < p, q ≤ ∞, m ∈ R, and a ∈ M n S m 1,1 . Let {p i,j } 1≤i,j≤n satisfy 0 < p i,j ≤ ∞ and (8.1). If s > τ p,q , then there exists a positive integer N such thatAs a corollary, from h p s = F s,2 p and bmo s = F s,2 ∞ , the following estimates hold. LetCorollary 8.2. Suppose 0 < p ≤ ∞, m ∈ R, and a ∈ M n S m 1,1 . Let {p i,j } 1≤i,j≤n satisfy 0 < p i,j ≤ ∞ and (8.1). If s > τ p , then there exist positive integers N > 0 such thatGeneralization of Kato-Ponce inequality. The classical Kato-Ponce commutator estimate[19]plays a key role in the wellposedness theory of Navier-Stokes and Euler equations in Sobolev spaces. The commutator estimate has been recast later on into the following fractional Leibniz rule, so called Kato-Ponce inequality. Recall that J s := (1 − ∆) s/2 be the fractional Laplacian operators. Thenwhere 1/p = 1/p 1 + 1/p 2 = 1/ p 1 + 1/ p 2 , 1 < p < ∞, and 1 < p 1 , p 2 , p 1 , p 2 ≤ ∞. Grafakos, Oh[17]and Muscalu, Schlag[21]extended the inequality (8.3) to the wider range 1/2 < p < ∞ under the assumption that s > τ p or s ∈ 2N. 
The case p = ∞ was settled by Bourgain and Li[3].As a consequence of Corollary 8.2 in the case a ≡ 1, one obtains the following extension of Kato-Ponce inequality, which includes an endpoint case of bmo type.The proof is based on the decomposition technique by Bényi and Torres[2]. By using Littlewood-Paley partition of unity a ∈ M n S m 1,1 can be written aswhere we use Φ 0 , instead of φ 0 . Then, due to the symmetry, it is enough to work only with a (1) and our actual goal is to show that if s > τ p,q thenChoose σ satisfying σ > d/min (p 1 , . . . , p n , q) and letObserve thatand a similar analysis reveals that for each 2 ≤ j ≤ nMoreover, from (2.1),Now (8.7), (8.9), and (8.10) establish the pointwise estimate thatWe observe thatf ) ⊂ |ξ| ≤ 2 k+j + n2 k and this yields, with the support condition of φ h , that for h ∈ N 0By assuming A l,N k,j = 0 for k < 0 and applying a change of variables the last expression isf .8.1.1.The case 0 < p < ∞ or p = q = ∞. From (8.14) one has.It follows from the observation (8.13) that the Fourier transform of Tf is supported on |ξ| ≤ 2 u+h . We choose t > 0 such that s > t − d/2 > τ p,q and apply Theorem 6.1 (1) to obtainwhere we applied a change of variables in the last inequality. Then the estimate (8.12) provesand one has. because N > s > t − d/2. Moreover, using (2.6),Now let S Q := S γ,q Q ({M σ,2 k (f 1 ) k } k∈N 0 ) and apply (3.4) and (2.2) for 0 < r < min (1, p) to show that the last expression iswhere Hölder's inequality, Lemma 3.1 (1) (for p 1 < ∞), Theorem 3.4 (with µ = 0 for p 1 = ∞), and (8.11) are applied.Combining all together the proof of (8.8) ends for 0 < p < ∞ or p = q = ∞.8.1.2.The case p = ∞ and 0 < q < ∞. Suppose σ > d/q. First of all, by using (8.8) for p = q = ∞ and the embedding F s+m,qNow we fix a dyadic cube P ∈ D with l(P ) < 1. Then it follows from (8.14) thatWe choose t > 0 such that s > t − d/2 > τ q and apply Theorem 6.1 (2) with µ = 1. 
ThenWe only concern ourselves with the case u − j − 3 − ⌊log 2 n⌋ ≤ −1 since the other case follows in a similar and simpler way. The supremum in the last expression is less than a constant times the sum of (8.17) supBy using (8.12) with the estimate , which proves (8.8) for the part corresponding to(8.18).Similarly,(8.12)yields that for N > s , which completes the proof of (8.8).
[1] Á. Bényi, A. R. Nahmod, and R. H. Torres, Sobolev space estimates and symbolic calculus for bilinear pseudodifferential operators, J. Geom. Anal. 16 (2006) 431-453.
[2] Á. Bényi and R. H. Torres, Symbolic calculus and the transposes of bilinear pseudodifferential operators, Comm. Partial Differ. Equ. 28 (2003) 1161-1181.
[3] J. Bourgain and D. Li, On an endpoint Kato-Ponce inequality, Diff. Integr. Equ. 27 (2014) 1037-1072.
[4] L. Carleson, Two remarks on H 1 and B.M.O., Advances in Math. 22 (1976) 269-277.
[5] R. R. Coifman and Y. Meyer, On commutators of singular integrals and bilinear singular integrals, Trans. Amer. Math. Soc. 212 (1975) 315-331.
[6] R. R. Coifman and Y. Meyer, Au delà des opérateurs pseudo-différentiels, Astérisque 57 (1978) 1-185.
[7] R. R. Coifman and Y. Meyer, Wavelets. Calderón-Zygmund and Multilinear Operators, Cambridge Stud. Adv. Math. 48, Cambridge University Press, Cambridge, 1997; translated from the 1990 and 1991 French originals by David Salinger.
[8] C. Fefferman, Characterizations of bounded mean oscillation, Bull. Amer. Math. Soc. 77 (1971) 587-588.
[9] C. Fefferman and E. M. Stein, Some maximal inequalities, Amer. J. Math. 93 (1971) 107-115.
[10] C. Fefferman and E. M. Stein, H p spaces of several variables, Acta Math. 129 (1972) 137-193.
[11] M. Frazier and B. Jawerth, Decomposition of Besov spaces, Indiana Univ. Math. J. 34 (1985) 777-799.
[12] M. Frazier and B. Jawerth, The ϕ-transform and applications to distribution spaces, in "Function Spaces and Applications", Lecture Notes in Math. 1302, Springer-Verlag, New York/Berlin (1988) 223-246.
[13] M. Frazier and B. Jawerth, A discrete transform and decomposition of distribution spaces, J. Func. Anal. 93 (1990) 34-170.
[14] M. Frazier and B. Jawerth, Applications of the φ and wavelet transforms to the theory of function spaces, in Wavelets and Their Applications, Jones and Bartlett, Boston, MA (1992) 377-417.
[15] D. Goldberg, A local version of real Hardy spaces, Duke Math. J. 46 (1979) 27-42.
[16] L. Grafakos, Classical Fourier Analysis, Springer (2008).
[17] L. Grafakos and S. Oh, The Kato-Ponce inequality, Comm. Partial Differ. Equ. 39 (2014) 1128-1157.
[18] L. Grafakos and Z. Si, The Hörmander multiplier theorem for multilinear operators, J. Reine Angew. Math. 668 (2012) 133-147.
[19] T. Kato and G. Ponce, Commutator estimates and the Euler and Navier-Stokes equations, Comm. Pure Appl. Math. 41 (1988) 891-907.
[20] K. Koezuka and N. Tomita, Bilinear pseudo-differential operators with symbols in BS m 1,1 on Triebel-Lizorkin spaces, J. Fourier Anal. Appl. 24 (2018) 309-319.
[21] C. Muscalu and W. Schlag, Classical and Multilinear Harmonic Analysis, vol. II, Cambridge University Press, Cambridge (2013).
[22] V. Naibo, On the bilinear Hörmander classes in the scales of Triebel-Lizorkin and Besov spaces, J. Fourier Anal. Appl. 21 (2015) 1077-1104.
[23] B. Park, Some maximal inequalities on Triebel-Lizorkin spaces for p = ∞, accepted in Math. Nachr.
[24] B. Park, Fourier multiplier theorems for Triebel-Lizorkin spaces, accepted in Math. Z.
[25] B. Park, Boundedness of pseudo-differential operators of type (0, 0) on Triebel-Lizorkin and Besov spaces, submitted.
[26] A. Seeger, Remarks on singular convolution operators, Studia Math. 97(2) (1990) 91-114.
[27] N. Tomita, A Hörmander type multiplier theorem for multilinear operators, J. Func. Anal. 259 (2010) 2028-2044.
[28] H. Triebel, Theory of Function Spaces, Birkhäuser, Basel-Boston-Stuttgart (1983).
School of Mathematics, Korea Institute for Advanced Study, Seoul, Republic of Korea. E-mail address: [email protected]
| [] |
[
"Soft Physics from STAR"
] | [
"Fuqiang Wang \nDepartment of Physics\nPurdue University\n47906West LafayetteIndianaUSA\n"
] | [
"Department of Physics\nPurdue University\n47906West LafayetteIndianaUSA"
] | [] | New results on soft hadron distributions and correlations measured with the STAR experiment are presented. Knowledge about the bulk properties of relativistic heavy-ion collisions offered by these results is discussed. * For full list of STAR authors and acknowledgments, see appendix 'Collaborations' of this volume. | 10.1016/j.nuclphysa.2006.06.035 | [
"https://arxiv.org/pdf/nucl-ex/0510068v2.pdf"
] | 11,781,384 | nucl-ex/0510068 | eb86bd09e3a3c776d248dfa016986e99d5bdd0cf |
Soft Physics from STAR
27 Oct 2005
Fuqiang Wang
Department of Physics
Purdue University
West Lafayette, Indiana 47906, USA
(for the STAR Collaboration*)
New results on soft hadron distributions and correlations measured with the STAR experiment are presented. Knowledge about the bulk properties of relativistic heavy-ion collisions offered by these results is discussed. * For full list of STAR authors and acknowledgments, see appendix 'Collaborations' of this volume.
INTRODUCTION
The mission of the Relativistic Heavy-Ion Collider (RHIC) is to create and study the Quark-Gluon Plasma (QGP), a thermalized system of deconfined quarks and gluons, long predicted to exist at high energy densities by Quantum Chromodynamics (QCD). Since the turn-on of RHIC in 2000, a large amount of data has been accumulated. STAR [ 1] and the other RHIC experiments [ 2] have critically assessed the first three years' wealth of data and concluded that a new form of hot and dense matter, with energy density far above the predicted critical energy density for hadron-QGP phase-transition [ 3], has been created in heavy-ion collisions at RHIC.
Two new, successful runs, Run-IV and Run-V, were conducted. Results from these runs were presented at this Quark Matter conference. This proceedings presents an overview of soft physics results from STAR [ 4] and discusses what new knowledge, beyond that in [ 1], can be learned from the results on the bulk properties of RHIC collisions. The overview focuses on three areas:
(1) interactions between jets and the bulk medium, which probe the early stage of the medium,
(2) elliptic flow, which is sensitive to the early stage equation of state, and
(3) freeze-out bulk properties, which contain information accumulated over the evolution of the collision.
For Hanbury Brown-Twiss (HBT) and small-momentum correlations, which are not covered in this overview, the reader is referred to [ 5,6]; for event-by-event fluctuations to [ 7]; and for forward-rapidity physics to [ 8].
JET-MEDIUM INTERACTIONS
Jets are produced early, by hard parton-parton scatterings. The scattered partons traverse the dense medium being created in heavy-ion collisions. They are coupled to the medium via strong interactions, lose energy, and fragment into hadrons, likely inside the medium. These fragment hadrons are predominantly soft, but possess characteristic jet-like angular correlations. The degree of energy loss and the changes in jet-like correlations depend on, and thus provide information on, the gluon density of the medium [ 9]. The STAR detector, with its large acceptance and full azimuthal coverage, is ideal for jet correlation measurements.
Soft-Soft Correlations
STAR has measured angular correlations between two low-p ⊥ particles without requiring a coincident high-p ⊥ particle. Clear jet correlation structures, even at p ⊥ as low as 0.6-0.8 GeV/c, are observed in p+p collisions [ 10]. Such correlation analyses have also been carried out in heavy-ion collisions. Figure 1 shows angular correlations in 130 GeV Au+Au collisions between two soft particles of 0.15 < p ⊥ < 2 GeV/c [ 11]. The first and second harmonic terms have been subtracted. The small-angle correlation peak, characteristic of (mini-)jets, narrows in φ with centrality and, more dramatically, broadens in η by a factor of 2.3 from peripheral to central collisions [ 11]. The results demonstrate the strong coupling between these correlated particles and the medium.

Figure 1: Angular correlations between soft particle pairs of 0.15 < p ⊥ < 2 GeV/c, from (a) central to (d) peripheral collisions [ 11]. The η ∆ -independent first and second harmonic terms in φ ∆ have been subtracted.
Hard-Soft Correlations
Jets can be more cleanly selected by triggering on high-p ⊥ particles; a high-p ⊥ particle likely selects di-jets from hard parton-parton scatterings. The hard-scattered partons lose energy in the medium, emerging as lower-p ⊥ particles than expected from fragmentation in vacuum. Due to energy loss, the measured high-p ⊥ particles come preferentially from jets produced on the surface of the collision zone and directed outward. The partner jet, directed inward, suffers maximal energy loss, resulting in the observed depletion of high-p ⊥ particles and enhancement of low-p ⊥ particles [ 12,13]. The low-p ⊥ particles are broadly distributed and appear not much harder than the inclusive hadrons from medium decay [ 13]. This illustrates, experimentally, the thermalization processes in heavy-ion collisions: particles from two distinctly different sources, jets and medium, approach equilibration via parton-parton interactions.

Figure 2: Left: the number and p ⊥ -weighted correlation functions in pp, d+Au, and central Au+Au collisions [ 14]. Right: the ⟨p ⊥ ⟩ of associated hadrons on the away side for the three systems (upper) and three trigger p ⊥ selections (lower). The shaded areas are systematic uncertainties.
STAR has further studied the azimuthal dependence of the associated-particle ⟨p ⊥ ⟩. Figure 2 (left) shows the number and p ⊥ -weighted correlation functions in pp, d+Au, and central Au+Au collisions [ 14,15]. The subtracted background is obtained from mixed events, modulated by elliptic flow and normalized to the signal in the 0.8 < |∆φ| < 1.2 region, and is the major source of systematic uncertainties [ 13]. The pp and d+Au data are similar, and the Au+Au correlation is significantly broader. Figure 2 (right) shows the ⟨p ⊥ ⟩, obtained from the ratio of the two correlation functions, as a function of ∆φ on the away side. The inclusive hadron ⟨p ⊥ ⟩ is shown as a line. The ⟨p ⊥ ⟩ for pp and d+Au is peaked at ∆φ = π, as expected from jet fragmentation, and is much larger than the inclusive value. For central Au+Au collisions, however, the ⟨p ⊥ ⟩ is smallest at ∆φ = π for the two low trigger-p ⊥ selections and appears to be equal to the inclusive ⟨p ⊥ ⟩, while at other angles it lies between the values for the simpler systems (pp and d+Au) and the inclusive data.
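The ⟨p⊥⟩ extraction described above, dividing a p⊥-weighted pair histogram by the pair-count histogram, can be illustrated with a short sketch. The binning and function names here are illustrative assumptions, not STAR analysis code:

```python
import numpy as np

def mean_pt_vs_dphi(pt_assoc, dphi, nbins=24):
    """<pT>(dphi) as the ratio of the pT-weighted histogram to the
    pair-count histogram; bins with no entries are returned as zero."""
    edges = np.linspace(-np.pi / 2, 3 * np.pi / 2, nbins + 1)
    counts, _ = np.histogram(dphi, bins=edges)
    pt_sum, _ = np.histogram(dphi, bins=edges, weights=pt_assoc)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean_pt = np.divide(pt_sum, counts,
                        out=np.zeros_like(pt_sum, dtype=float),
                        where=counts > 0)
    return centers, mean_pt
```

In the real measurement the flow-modulated combinatorial background is subtracted from both histograms before the ratio is taken.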
The ⟨p ⊥ ⟩ versus ∆φ result indicates that the spectrum at ∆φ = π is softer than that further away from π. In other words, relatively fewer high-p ⊥ associated particles are found at π. This is also demonstrated by correlation functions with varying associated p ⊥ : with increasing associated p ⊥ , the correlation function flattens and even develops a double-hump structure [ 14,16].
Three-Particle Correlations
Our correlation function results are qualitatively consistent with a number of scenarios. In one scenario, the away-side jets directed at π are maximally modified due to the maximum path length through the medium. In an alternate scenario, the measured away-side correlation is the sum of away-side jets deflected by radial flow, each individual jet relatively confined in azimuth. Because the jet axis is now deflected at an angle away from π and high-p ⊥ jet fragments are relatively confined to the jet axis, a dip at π results in the correlation function at high associated p ⊥ . In a third scenario, the energy deposited by the away-side jet generates a sonic shock wave in the medium, producing a Mach-cone effect: a larger number of particles, and possibly larger p ⊥ , along the conical flow direction, approximately at ∆φ = π ± 1 [ 17].
In order to distinguish various scenarios, 3-particle correlations are analyzed. Figure 3 shows 3-particle correlation results in minimum bias d+Au and 10% central Au+Au collisions for trigger p ⊥ =3-4 GeV/c and associated p ⊥ =1-2 GeV/c [ 14]. A number of combinatoric backgrounds have been subtracted:
(a) Correlated trigger-associated pair (see section 2.2) with a background particle. This background is constructed by coupling the 2-particle correlation function measured in the same event with the v 2 modulated background.
(b) Correlated soft-soft hadron pair (see section 2.1) in the underlying event that is not related to the trigger particle. This background is constructed from soft-soft correlations in non-triggered events.
(c) Random triplets that are correlated by elliptic flow. This background is constructed
by

1 + 2 v2^trig v2^(1) cos(2∆φ 1 ) + 2 v2^trig v2^(2) cos(2∆φ 2 ) + 2 v2^(1) v2^(2) cos(2(∆φ 1 − ∆φ 2 )),

and normalized to the signal in the region 0.8 < |∆φ 1,2 | < 1.2.

Figure 3: (color online) Three-particle correlations in min-bias d+Au and 10% central Au+Au collisions [ 14], plotted versus ∆φ 1 = φ 1 − φ trig and ∆φ 2 = φ 2 − φ trig . The p ⊥ ranges are p trig ⊥ = 3-4 GeV/c for the trigger and p ⊥ = 1-2 GeV/c for the associated particles.
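The flow-correlated triplet background in item (c) is a simple function of the two relative angles. A minimal sketch of the second-harmonic (elliptic flow) modulation, with variable names of our choosing:

```python
import numpy as np

def triplet_flow_background(dphi1, dphi2, v2_trig, v2_a1, v2_a2):
    """Elliptic-flow-induced correlation shape of otherwise random triplets.
    The v2 values are external inputs from a separate flow measurement;
    the overall normalization is fixed to a signal region afterwards."""
    return (1.0
            + 2.0 * v2_trig * v2_a1 * np.cos(2.0 * dphi1)
            + 2.0 * v2_trig * v2_a2 * np.cos(2.0 * dphi2)
            + 2.0 * v2_a1 * v2_a2 * np.cos(2.0 * (dphi1 - dphi2)))
```

With all v2 set to zero the background is flat, as expected for truly uncorrelated triplets.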
As seen in Fig. 3, four peaks are evident in both d+Au and Au+Au, corresponding to near-near, near-away, away-near, and away-away correlations for the two associated particles. The away-away peak is elongated along the diagonal axis, in d+Au perhaps due to k ⊥ broadening, and in Au+Au with possible additional effect from deflected jets. The correlation strength is stronger in Au+Au than in d+Au and is qualitatively consistent with the relative two-particle correlation strengths.
Conical flow would yield 3-particle correlation signals at (π ± 1, π ∓ 1), along the off-diagonal axis. To investigate this possibility, we take the differences between the averaged correlation signals in three regions, 'center' (|∆φ 1,2 − π| < 0.4), 'deflected' (|∆φ 1,2 − (π ± 1)| < 0.4), and 'cone' (|∆φ 1 − (π ± 1)| < 0.4 and |∆φ 2 − (π ∓ 1)| < 0.4). The differences in 10% central Au+Au collisions, per radian², are center − deflected = 0.3 ± 0.3(stat) ± 0.4(syst) and center − cone = 2.6 ± 0.3(stat) ± 0.8(syst). The systematic uncertainties are obtained from those in v 2 , taken to span the range between the 4-particle cumulant and reaction-plane results, and from those in the background normalization. The latter greatly affects the absolute correlation magnitudes, but not the differences. The results indicate that the correlation strength is significantly lower in the conical-flow 'cone' area than in the 'center' or 'deflected' areas; the distinctive features of conical flow are not observed in the present data.
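A common, though not unique, way to gauge the significance of such differences is to combine the statistical and systematic uncertainties in quadrature. A small sketch of that simplification (our choice; it treats the two uncertainties as independent):

```python
import math

def significance(diff, stat, syst):
    """Difference in units of the quadrature-combined uncertainty."""
    return diff / math.sqrt(stat ** 2 + syst ** 2)
```

Applied to the numbers above, center − cone = 2.6 ± 0.3 ± 0.8 corresponds to roughly a 3σ effect under this treatment, while center − deflected is consistent with zero.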
ELLIPTIC FLOW
In non-central collisions, the overlap region is anisotropic (nearly elliptic). The large pressure built up in the collision center results in a pressure gradient that depends on azimuthal angle, which generates a momentum-space anisotropy, or elliptic flow. Once the spatial anisotropy disappears due to the anisotropic expansion, the development of elliptic flow ceases. This self-quenching process happens quickly; elliptic flow is therefore primarily sensitive to the early-stage equation of state (EOS) [ 18]. Figure 4 shows the measured elliptic flow parameter, v 2 , as a function of p ⊥ for K 0 S , Λ, φ, Ξ and Ω [ 19,20]. Large v 2 is observed for all particle species, indicating strong interactions at the early stage. Since φ, Ξ and Ω have low hadronic cross-sections, their observed large v 2 may suggest that the elliptic flow is built up in the partonic stage.

Figure 4: (color online) Azimuthal anisotropy v 2 for strange (left) and hidden- and multi-strange (right) hadrons [ 19]. The curves are empirical fits. The shaded areas are ranges of hydro results.

Figure 4 also shows the range of v 2 from hydrodynamic calculations. A more detailed comparison is shown in Fig. 5, where the hydro results, using an EOS with a first-order hadron-QGP phase-transition [ 18], are compared to pion, kaon, proton, and lambda v 2 measurements [ 21]. The mass dependence observed in data, characteristic of a common flow velocity, is well described by hydrodynamics. The magnitude of the v 2 , however, is described only at the 20-30% level. The description with a hadronic EOS is worse [ 18]. The discrepancy perhaps indicates that the hydro calculation is too simplistic and a more realistic EOS is needed. On the other hand, non-flow effects in the measured v 2 need to be addressed rigorously.
Nevertheless, since an ideal hydrodynamic fluid is a thermalized system with zero mean free path, yielding the maximum possible v 2 , the approximate consistency between the measured v 2 and the hydro results suggests early thermalization in heavy-ion collisions at RHIC.
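For reference, v2 is the second Fourier coefficient of the azimuthal particle distribution relative to the reaction plane. A minimal event-plane-style estimator, with the event-plane resolution correction and the non-flow caveats mentioned above deliberately omitted (function and variable names are ours):

```python
import numpy as np

def v2_event_plane(phis, psi2):
    """v2 = <cos 2(phi - Psi_2)> with respect to a given event-plane angle."""
    phis = np.asarray(phis, dtype=float)
    return float(np.mean(np.cos(2.0 * (phis - psi2))))
```

An isotropic azimuthal distribution gives v2 consistent with zero, while particles aligned with the event plane give v2 close to unity.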
Hydrodynamic Description
Constituent Quark Scaling
While well described by hydrodynamics at low p ⊥ , v 2 was found to saturate at high p ⊥ > 2 GeV/c [ 19]. The saturation value for mesons is about 2/3 of that for baryons. This separation pattern holds for π, K, p, Λ, and Ξ, and seems to hold for φ and Ω [ 20,22]. This result, together with the baryon-meson splitting of the high-p ⊥ suppression pattern [ 23], suggests the relevance of constituent quark degrees of freedom in the intermediate-p ⊥ region [ 24]. Figure 6 shows a new, high-precision measurement of v 2 , scaled by the number of valence quarks n, as a function of p ⊥ also scaled by n [ 19]. Although deviations are visible, constituent quark scaling still holds to a good degree.

Figure 6: (color online) Measurements of scaled v 2 (p ⊥ /n)/n for identified hadrons, and ratios between the measurements and a polynomial fit through all data points [ 19].
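The scaling test in Fig. 6 amounts to replotting each species at (p⊥/n, v2/n), with n = 2 for mesons and n = 3 for baryons. Schematically (a trivial helper of our own naming):

```python
def nq_scale(points, n_quarks):
    """Map (pT, v2) measurements to constituent-quark-scaled coordinates."""
    return [(pt / n_quarks, v2 / n_quarks) for pt, v2 in points]
```

If the flow develops at the quark level, meson points scaled by n = 2 and baryon points scaled by n = 3 should fall on a common curve.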
If constituent quark coalescence is indeed the production mechanism giving rise to the baryon-meson difference at intermediate p ⊥ , i.e. hadronization of the bulk medium does occur, then it seems natural to conclude that a deconfined phase of quarks and gluons is created, prior to hadronization.
Angular Correlations with Baryons and Mesons
Constituent quark coalescence/recombination [ 25] is expected to produce either no jet-like angular correlations, or weaker ones than those from jet fragmentation. Figure 7 shows correlation results for charged hadrons with Λ, K S , p, or π triggers of 3 < p trig ⊥ < 4 GeV/c [ 14]. The Λ and K S are identified by their topological decays in the TPC; the p and π at high p ⊥ are statistically identified by the relativistic rise of the specific ionization energy loss in the TPC and are required to have 50% and 95% purity, respectively. Little difference is found in Fig. 7 between baryon and meson triggers, or between particle and anti-particle triggers. The lack of difference in jet-like correlations between leading baryons and mesons and the success of coalescence and recombination models in describing intermediate-p ⊥ particle yields and v 2 are hard to reconcile, but may provide a unique window to advance our understanding of hadronization in heavy-ion collisions.
FREEZE-OUT BULK PROPERTIES
If thermalization is reached at an initial stage, the final stage hadrons will possess thermal distributions. Indeed, the measured particle spectra and yields [ 26] and event-by-event p ⊥ fluctuations [ 27] indicate a nearly chemically and (local-)kinetically equilibrated system at the final freeze-out stage.
Chemical and Kinetic Freeze-out Parameters
STAR [ 28,29] has now measured hadron distributions at 62 GeV, and extracted chemical freeze-out properties from stable particle ratios within the thermal model [ 30] and kinetic freeze-out properties from particle p ⊥ distributions within the blast-wave model [ 31]. Figure 8 shows the extracted chemical freeze-out temperature (T CH ) and strangeness suppression factor (γ S ), and Fig. 9 the extracted kinetic freeze-out temperature (T KIN ) and average radial flow velocity (⟨β ⊥ ⟩). Resonance decays are found to have no significant effect on the extracted kinetic freeze-out parameters [ 28]. The results at 62 GeV are qualitatively the same as those obtained at 200 GeV.
It is found that the extracted T CH is independent of centrality and is close to the predicted hadronization temperature [ 3]. The T KIN , extracted from distributions of common particle species (π, K, p), is found to steadily decrease with centrality, while the corresponding β ⊥ increases [ 26,32]. This provides evidence for further expansion from chemical to kinetic freeze-out, driving the system to lower temperature. Thus one may argue that the constant T CH , alone, suggests that chemical freeze-out measures hadronization; hadronic scatterings from hadronization (at a universal temperature) to chemical freeze-out must be negligible because they would result in a dropping T CH .
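A minimal sketch of why stronger radial flow hardens heavy-particle spectra while the fitted temperature drops: in a non-relativistic approximation the inverse slope of a hadron spectrum grows with mass as T_eff ≈ T_kin + (1/2) m ⟨β⊥⟩². The temperatures and flow velocities below are illustrative placeholders, not STAR fit values:

```python
# Non-relativistic approximation for the inverse slope ("effective
# temperature") of a hadron pT spectrum in the presence of radial flow:
#     T_eff ~ T_kin + (1/2) * m * <beta_T>^2
# All numbers are illustrative, not STAR fit values.

masses = {"pion": 0.140, "kaon": 0.494, "proton": 0.938}  # GeV/c^2

def t_eff(m, t_kin, beta):
    return t_kin + 0.5 * m * beta**2

# Toy scenario: central collisions freeze out cooler but flow faster.
peripheral = {h: t_eff(m, t_kin=0.120, beta=0.3) for h, m in masses.items()}
central    = {h: t_eff(m, t_kin=0.090, beta=0.6) for h, m in masses.items()}

# The mass-dependent flow term affects heavy particles the most:
for h in masses:
    print(h, round(peripheral[h], 3), round(central[h], 3))
```

Note how, in this toy scenario, the proton inverse slope increases from peripheral to central collisions even though the underlying kinetic temperature decreases — the qualitative pattern attributed above to growing ⟨β⊥⟩.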
F. Wang
The extracted γ S is found to increase from pp and peripheral collisions to central collisions. In central Au+Au collisions, γ S is found to be consistent with unity, suggesting that strangeness is equilibrated similarly to the light u, d quarks. The extracted T KIN for rare particles (φ, Ξ, Ω) is higher than that for the common particles and appears to be independent of centrality and close to T CH . The flow velocity is lower, but sizeable. This may indicate, especially noting their low hadronic cross-sections, that these rare particles kinetically and chemically freeze out at the same time; the measured β ⊥ for the rare particles can then be considered as the flow velocity at chemical freeze-out (and perhaps also at hadronization). This is in agreement with the v 2 results: a significant amount of flow is built up at the partonic stage.

Figure 8. (color online) Extracted chemical freeze-out temperature (left) and strangeness suppression factor (right) from stable particle ratios [ 28,29].
System Size and Time Span
It is found that resonance yields are lower than predicted by the thermal model using the parameters fitted to stable particle ratios [ 33]. This implies a finite time span from chemical to kinetic freeze-out during which resonances decay and the daughter particles rescatter, resulting in a loss of resonance signals. On the other hand, regeneration of resonances from particle rescattering is also possible. Thus, yields of short-lived resonances are determined by conditions after chemical freeze-out, and effectively measure the kinetic freeze-out temperature.
System size and time information can be measured by HBT. Figure 10 shows the sideward size (R side ) of the freeze-out system measured by HBT of pion pairs at low k ⊥ [ 5]. Due to radial flow, the true R side , to be found at k ⊥ = 0, is about 20% larger than those in Fig. 10. R side is equal to the initial RMS size in pp and peripheral d+Au and becomes twice as large in central Au+Au collisions, with a smooth trend in-between. This result indicates an expansion by a factor of 2 from initial to final state in central Au+Au. There remain a number of interesting questions: What is the size at chemical freeze-out? Will it, together with flow velocity, be consistent with the resonance results? Will hydrodynamic-like models, successful for v 2 and hadron spectra results, be able to explain HBT results?
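The k ⊥ → 0 extrapolation mentioned above can be sketched as a least-squares line through measured (k ⊥ , R side ) points; the numbers below are invented and only illustrate the procedure:

```python
# Hypothetical sketch: radial flow makes the HBT homogeneity length
# R_side shrink with pair momentum k_T, so the geometric source size is
# estimated by extrapolating measurements back to k_T = 0.
# Invented (k_T [GeV/c], R_side [fm]) points lying on a straight line:

pts = [(0.15, 5.8), (0.25, 5.4), (0.35, 5.0), (0.45, 4.6)]

# Ordinary least-squares fit R_side = intercept + slope * k_T
n = len(pts)
sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
r_side_0 = (sy - slope * sx) / n  # intercept = R_side at k_T = 0

# ~20% above the value measured near k_T ~ 0.25 GeV/c, as in the text
print(round(r_side_0, 2), "fm")
```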
CONCLUSIONS
STAR has measured a wealth of data pertinent to properties of the bulk medium created in relativistic heavy-ion collisions. The new results from Run-IV and Run-V, some of which are reviewed here, have furthered our understanding and yielded new insights. We summarize the results as follows.
(1) Jet correlations are strongly modified by the medium. Correlated hadrons are nearly equilibrated with the medium via strong interactions of partons, or jets, or both with the medium. The distinctive features of conical flow are not observed in our present three-particle correlation data.
(2) Elliptic flow is large, even for rare particles such as φ, Ξ, and Ω. Elliptic flow can be reasonably well described by hydrodynamics at low p ⊥ , and at high p ⊥ displays saturation patterns grouping into baryons and mesons. The results suggest partonic collectivity, early thermalization, and the relevance of the constituent quark degrees of freedom. Quark coalescence offers a simple mechanism for hadronization; however, it appears at odds with the jet-like correlation results with trigger baryons and mesons.
(3) The extracted chemical freeze-out temperature does not change with centrality; chemical freeze-out may coincide with hadronization with significant radial flow. Pions, kaons, and protons favor a kinetic freeze-out changing with centrality and, together with resonance yields, suggest a finite time span from chemical to kinetic freeze-out.
Figure 1. (color online)
Figure 2. (color online) Left: Background subtracted number (upper) and p ⊥ -weighted (lower) correlation functions in pp, central 20% d+Au, and 5% Au+Au collisions.
Figure 3. (color online) Three-particle correlations in min-bias d+Au and 10% central Au+Au collisions [ 14]. The p ⊥ ranges are p trig ⊥ = 3-4 GeV/c for the trigger and p ⊥ = 1-2 GeV/c for the associated particles.
Figure 4. (color online) Azimuthal anisotropy v 2 for strange (left) and (hidden) multi-strange (right) hadrons [ 19]. The curves are empirical fits. The shaded areas are ranges of hydro results.
Figure 5. (color online) Elliptic flow v 2 as a function of p ⊥ from minimum bias Au+Au collisions [ 21]. Hydrodynamic calculations are shown in dot-dashed lines.
Figure 7. (color online) Azimuthal correlation functions of charged hadrons with identified trigger particles, Λ and K S (p trig ⊥ = 3-3.5 GeV/c, p ⊥ = 1-2 GeV/c) and p, p̄, π + and π − (p trig ⊥ = 3-4 GeV/c, p ⊥ = 0.15-3 GeV/c) [ 14].
Figure 9. (color online) Extracted kinetic freeze-out temperature versus average flow velocity within the blast-wave model [ 28,29].

Figure 10. (color online) Sideward size from 2π HBT measurements versus initial RMS size of the collision system calculated by Glauber model [ 5].
REFERENCES

J. Adams et al. (STAR Collaboration), Nucl. Phys. A757, 102-183 (2005).
K. Adcox et al. (PHENIX Collaboration), Nucl. Phys. A757, 184-283 (2005);
I. Arsene et al. (BRAHMS Collaboration), ibid., A757, 1-27 (2005);
B.B. Back et al. (PHOBOS Collaboration), ibid., A757, 28-101 (2005).
F. Karsch, Nucl. Phys. A698, 199c (2002).
M. Anderson et al., Nucl. Instrum. Meth. A499, 659 (2003);
K.H. Ackermann et al., ibid., A499, 713 (2003).
Z. Chajecki et al. (STAR Collaboration), these proceedings.
P. Chaloupka et al. (STAR Collaboration), these proceedings.
C. Pruneau et al. (STAR Collaboration), these proceedings.
B. Mohanty et al. (STAR Collaboration), these proceedings.
X.-N. Wang and M. Gyulassy, Phys. Rev. Lett. 68, 1480 (1992);
R. Baier, D. Schiff, and B.G. Zakharov, Ann. Rev. Nucl. Part. Sci. 50, 37 (2000).
T. Trainor et al. (STAR Collaboration), poster presented at this conference.
J. Adams et al. (STAR Collaboration), nucl-ex/0411031.
C. Adler et al. (STAR Collaboration), Phys. Rev. Lett. 90, 082302 (2003).
J. Adams et al. (STAR Collaboration), Phys. Rev. Lett. 95, 152301 (2005) [nucl-ex/0501016];
F. Wang (STAR Collaboration), J. Phys. G 30, S1299 (2004).
J. Ulery et al. (STAR Collaboration), these proceedings.
F. Wang et al. (STAR Collaboration), nucl-ex/0508021.
M. Horner et al. (STAR Collaboration), poster presented at this conference.
H. Stoecker, nucl-th/0406018;
J. Casalderrey-Solana, E.V. Shuryak, and D. Teaney, hep-ph/0411315; H. Stoecker, nucl-th/0506013.
P. Huovinen, nucl-th/0305064;
P. Kolb and U. Heinz, nucl-th/0305084; E.V. Shuryak, Prog. Part. Nucl. Phys. 53, 273 (2004).
M. Oldenburg et al. (STAR Collaboration), these proceedings.
X. Cai et al. (STAR Collaboration), these proceedings.
J. Adams et al. (STAR Collaboration), Phys. Rev. C 72, 014904 (2005).
J. Adams et al. (STAR Collaboration), Phys. Rev. Lett. 92, 052302 (2004);
J. Adams et al. (STAR Collaboration), nucl-ex/0504022.
J. Dunlop et al. (STAR Collaboration), these proceedings.
D. Molnar and S.A. Voloshin, Phys. Rev. Lett. 91, 92301 (2003).
V. Greco, C.M. Ko, and P. Levai, Phys. Rev. C 68, 34904 (2003);
R.J. Fries et al., Phys. Rev. C 68, 44902 (2003);
R.C. Hwa and C.B. Yang, Phys. Rev. C 70, 024905 (2004);
R.J. Fries, S.A. Bass, and B. Muller, Phys. Rev. Lett. 94, 122301 (2005).
J. Adams et al. (STAR Collaboration), Phys. Rev. Lett. 92, 112301 (2004).
J. Adams et al. (STAR Collaboration), nucl-ex/0308033.
L. Molnar et al. (STAR Collaboration), nucl-ex/0507027.
J. Speltz et al. (STAR Collaboration), poster presented at this conference.
P. Braun-Munzinger, I. Heppe, and J. Stachel, Phys. Lett. B 465, 15 (1999);
N. Xu and M. Kaneta, Nucl. Phys. A698, 306 (2002).
E. Schnedermann, J. Sollfrank, and U.W. Heinz, Phys. Rev. C 48, 2462 (1993);
U.A. Wiedemann and U.W. Heinz, Phys. Rev. C 56, 3265 (1997).
It is worth noting that a similar T CH is also found in pp and e + e − collisions, which signifies the statistical nature of particle production (or hadronization). The extracted β ⊥ is not zero in pp, which may be due to contamination of soft jet fragments in the measured p ⊥ spectra.
S. Salur et al. (STAR Collaboration), these proceedings.
AMALGAMATED FREE PRODUCT RIGIDITY FOR GROUP VON NEUMANN ALGEBRAS
24 Jun 2017
Ionuţ Chifan
Adrian Ioana
We provide a fairly large family of amalgamated free product groups Γ = Γ1 * Σ Γ2 whose amalgam structure can be completely recognized from their von Neumann algebras. Specifically, assume that Γi is a product of two icc non-amenable bi-exact groups, and Σ is icc amenable with trivial one-sided commensurator in Γi, for every i = 1, 2. Then Γ satisfies the following rigidity property: any group Λ such that L(Λ) is isomorphic to L(Γ) admits an amalgamated free product decomposition Λ = Λ1 * ∆ Λ2 such that the inclusions L(∆) ⊆ L(Λi) and L(Σ) ⊆ L(Γi) are isomorphic, for every i = 1, 2. This result significantly strengthens some of the previous Bass-Serre rigidity results for von Neumann algebras. As a corollary, we obtain the first examples of amalgamated free product groups which are W * -superrigid.

Theorem A strengthens such Bass-Serre rigidity results in the case Σ is icc, by removing all assumptions on the group Λ. This is the strongest type of rigidity that one can expect for II 1 factors of general AFP groups. To make this precise, note that if Γ = Γ 1 * Σ Γ 2 , then L(Γ) is determined up to isomorphism by the isomorphism classes of the inclusions L(Σ) ⊆ L(Γ 1 ) and L(Σ) ⊆ L(Γ 2 ). Conversely, Theorem A asserts that, under certain assumptions, these isomorphism classes can be reconstructed from the isomorphism class of L(Γ).

Remark 1.2. We do not know whether Theorem A holds for plain free product groups, i.e. when Σ = {e}. Note in this respect that by a result in [DR01] isomorphism of the free group factors would imply that L(Γ 1 * Γ 2 ) ≅ L(Γ 1 * Γ 2 * F ∞ ), for any icc groups Γ 1 and Γ 2 . However, even in the case Σ = {e}, we can still sometimes deduce that any group Λ with L(Γ) = L(Λ) must contain subgroups Λ 1 , Λ 2 such that L(Γ i ) is unitarily conjugate to L(Λ i ), for every i ∈ {1, 2}. Indeed, in the context of Theorem A, this holds if Γ 1 has property (T) (or Haagerup's property) while Γ 2 does not (see Corollary 3.9).

Remark 1.3. The groups covered by Theorem A are typically not W * -superrigid. Indeed, the groups (A * B × A * B) * A×A (A * B × A * B), where A and B are any icc amenable groups, satisfy the hypothesis of Theorem A (see Example 1.1(a)) but produce isomorphic II 1 factors by [Co76].

Nevertheless, by combining Theorem A with results from [IPV10, CdSS15] we prove the following:

Corollary B. Let Γ 0 be an icc, non-amenable, bi-exact group, and Σ 0 < Γ 0 be an icc, amenable subgroup such that the following two conditions hold:
(1) [Σ 0 : Σ 0 ∩ gΣ 0 g −1 ] = ∞, for every g ∈ Γ 0 \ Σ 0 ;
(2) the centralizer in Γ 0 of any finite index subgroup of Σ 0 ∩ gΣ 0 g −1 is trivial, for any g ∈ Γ 0 .
If Λ is any countable group and θ : L(Γ) → L(Λ) is any * -isomorphism, then there exist a group isomorphism δ : Γ → Λ, a unitary u ∈ L(Λ), and a character η : Γ → T such that θ(u g ) = η(g)uv δ(g) u * , for every g ∈ Γ.
Here, {u g } g∈Γ and {v h } h∈Λ denote the canonical unitaries generating L(Γ) and L(Λ).

Corollary B provides a new class of W * -superrigid groups. Note that, unlike all of the known classes of W * -superrigid groups, our examples are not generalized wreath product groups nor special subgroups of such groups, as in [IPV10, BV13, Be14]. As such, Corollary B gives the first examples of AFP groups which are W * -superrigid.
Introduction
In [MvN36,MvN43], Murray and von Neumann found a natural way to associate a von Neumann algebra, denoted by L(Γ), to every countable discrete group Γ. More precisely, L(Γ) is defined as the weak operator closure of the complex group algebra CΓ acting by left convolution on the Hilbert space ℓ 2 (Γ). The classification of group von Neumann algebras has since been a central theme in operator algebras driven by the following question: what aspects of the group Γ are remembered by L(Γ)? This question is the most interesting when Γ is icc (i.e., the conjugacy class of every non-trivial element of Γ is infinite), which corresponds to L(Γ) being a II 1 factor. Von Neumann algebras tend to forget a lot of information about the groups they are constructed from. This is best illustrated by Connes' theorem asserting that all II 1 factors arising from icc amenable groups are isomorphic to the hyperfinite II 1 factor [Co76]. Consequently, amenable groups manifest a striking lack of rigidity: any algebraic property of the group (e.g., being torsion free or finitely generated) is completely lost in the passage to von Neumann algebras.
In sharp contrast, in the non-amenable case, Popa's deformation/rigidity theory has led to the discovery of several instances when various properties of a group Γ can be recovered from L(Γ). We only highlight three developments in this direction here, and refer the reader to the surveys [Po06a, Va10, Io12] for more information. Thus, it was shown in [Po03, Po04] that within a large class of icc groups (containing the wreath product Z/2Z ≀ Γ, for any infinite property (T) group Γ) isomorphism of the associated II 1 factors implies isomorphism of the groups. A few years later, the first examples of groups, called W * -superrigid groups, that can be entirely reconstructed from their von Neumann algebras were discovered in [IPV10] (see [BV13, Be14] for the only other known examples). Specifically, a group Γ is called W * -superrigid if whenever L(Λ) is isomorphic to L(Γ), Λ must be isomorphic to Γ. Most recently, the following product rigidity phenomenon was found in [CdSS15]: if Γ 1 and Γ 2 are icc hyperbolic groups, then any group Λ such that L(Λ) is isomorphic to L(Γ 1 × Γ 2 ) admits a decomposition Λ = Λ 1 × Λ 2 such that L(Λ i ) is isomorphic to L(Γ i ), up to amplifications, for every i ∈ {1, 2}. In other words, the von Neumann algebra L(Γ) completely remembers the product structure of the underlying group Γ.

I.C. was partially supported by NSF grants DMS # 1600688 and DMS # 1301370. Part of this work was done during a Flex Load Semester awarded by the CLAS at University of Iowa. A.I. was partially supported by NSF Career Grant DMS #1253402 and a Sloan Foundation Fellowship.
Motivated by these advances, it seems natural to investigate instances when other constructions in group theory can be recognized from the von Neumann algebraic structure. We make progress on this general problem here by providing a class of amalgamated free product (abbreviated AFP) groups Γ whose von Neumann algebra L(Γ) entirely remembers the amalgam structure of Γ.
Before stating our main result, let us recall the definition of bi-exact groups [BO08, Definition 15.1.2]. A countable group Γ is said to be bi-exact (or to belong to Ozawa's class S [Oz04]) if it is exact and admits a map µ : Γ → Prob(Γ) satisfying lim x→∞ ‖µ(gxh) − g · µ(x)‖ 1 = 0, for all g, h ∈ Γ. The class of bi-exact groups includes all hyperbolic groups [Oz03], the wreath product A ≀ Γ of any amenable group A with a bi-exact group Γ [Oz04], the group Z 2 ⋊ SL 2 (Z) [Oz08], and is closed under free products.
Theorem A. Let Γ = Γ 1 * Σ Γ 2 be an amalgamated free product group satisfying the following:
(1) Σ is an icc amenable group and [Σ : Σ ∩ gΣg −1 ] = ∞, for every g ∈ Γ i \ Σ and i ∈ {1, 2}.
(2) Γ i = Γ 1 i × Γ 2 i , where Γ j i is an icc, non-amenable, bi-exact group, for every i, j ∈ {1, 2}. Denote M = L(Γ) and let Λ be an arbitrary group such that M = L(Λ).
Then there exist a decomposition Λ = Λ 1 * ∆ Λ 2 and a unitary u ∈ M such that uL(Λ 1 )u * = L(Γ 1 ), uL(Λ 2 )u * = L(Γ 2 ), and uL(∆)u * = L(Σ).
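The statement rests on a standard identification: the von Neumann algebra of an amalgamated free product of groups is the amalgamated free product of the group von Neumann algebras over L(Σ), taken with respect to the canonical trace-preserving conditional expectations. In the notation of Theorem A:

```latex
% Standard identification used throughout: the group-level amalgam
% passes to the von Neumann algebras,
\[
  M \;=\; L(\Gamma_1 *_{\Sigma} \Gamma_2)
  \;\cong\; L(\Gamma_1) *_{L(\Sigma)} L(\Gamma_2),
\]
% the amalgamated free product of tracial von Neumann algebras being
% taken with respect to the trace-preserving conditional expectations
% $E_{L(\Sigma)} \colon L(\Gamma_i) \to L(\Sigma)$, $i = 1, 2$.
```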
Before placing Theorem A into context, let us present some classes of groups to which it applies.
Example 1.1. Let Σ 0 < Γ 0 be an inclusion of groups satisfying the following condition: (⋆) Γ 0 is icc, non-amenable, bi-exact, Σ 0 is icc, amenable, and [Σ 0 : Σ 0 ∩ gΣ 0 g −1 ] = ∞, for all g ∈ Γ 0 \ Σ 0 . Having such an inclusion Σ 0 < Γ 0 , the hypothesis of Theorem A is satisfied for Γ 1 = Γ 2 = Γ 0 ×Γ 0 , and Σ < Γ 1 ∩ Γ 2 equal to either Σ 0 × Σ 0 or Σ = {(g, g)|g ∈ Σ 0 }.
On the other hand, (⋆) is verified by the following group inclusions:
(a) A < A * B, where A is any icc amenable group, and B is any non-trivial bi-exact group.
(b) A ≀ C < A ≀ D, where A is any non-trivial amenable group and C is any infinite maximal amenable subgroup of any icc hyperbolic group D (see Section 2.4 for a proof of this assertion). For instance, take C = Z and D = C * F n−1 = F n , for some 2 ≤ n ≤ +∞.
(c) Z 2 ⋊ ⟨M⟩ < Z 2 ⋊ F n , where we view F n as a subgroup of SL 2 (Z), for some 2 ≤ n ≤ +∞, and M ∈ F n is any matrix such that |Tr(M)| > 2 and M ≠ M 0 ℓ , for every M 0 ∈ F n and ℓ ≥ 2. For instance, define F 2 = ⟨M 1 , M 2 ⟩ < SL 2 (Z) as the group generated by M 1 = [[1, 2], [0, 1]], M 2 = [[1, 0], [2, 1]], and let M = M 1 M 2 = [[5, 2], [2, 1]].
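The matrix arithmetic in Example 1.1(c) is easy to check directly. The sketch below verifies only the product, determinant, and trace; the freeness of the group generated by M 1 and M 2 is the classical Sanov/ping-pong fact and is not verified here:

```python
# Arithmetic check for Example 1.1(c): with the Sanov generators
# M1 = [[1,2],[0,1]] and M2 = [[1,0],[2,1]] of a free subgroup of
# SL2(Z), the product M = M1*M2 is a hyperbolic element: |Tr(M)| > 2.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

M1 = [[1, 2], [0, 1]]
M2 = [[1, 0], [2, 1]]
M = matmul(M1, M2)

assert M == [[5, 2], [2, 1]]  # as stated in the example
assert det(M) == 1            # M stays in SL2(Z)
trace = M[0][0] + M[1][1]
assert abs(trace) > 2         # hyperbolic element
print(M, trace)
```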
The fact that a large class of AFP groups satisfy Theorem A should not be surprising since II 1 factors of such groups (and, more generally, AFP II 1 factors M = M 1 * B M 2 ) have been shown to be extremely rigid (see, e.g., [Oz04, IPP05, Pe06, Po06b, CH08, Ki09, PV09, HPV09, Io12, Va13,HU15]). In particular, Bass-Serre-type rigidity results for AFP II 1 factors have been discovered in [IPP05]. For instance, assume that Γ = Γ 1 * Σ Γ 2 and Λ = Λ 1 * ∆ Λ 2 are AFP groups, where Γ 1 , Γ 2 , Λ 1 , Λ 2 are icc property (T) groups. It follows from [IPP05] that if θ : L(Γ) → L(Λ) is any * -isomorphism, then θ(L(Γ i )) is unitarily conjugate to L(Λ i ), up to a permutation of indices. Later on, the same conclusion was shown to hold assuming that Γ 1 , Γ 2 , Λ 1 , Λ 2 are products of icc non-amenable groups and Σ, ∆ are amenable [CH08] (see also [Oz04,Pe06] in the case Σ and ∆ are trivial).
Example 1.4. To give some concrete examples of inclusions Σ 0 < Γ 0 to which Corollary B applies, let A be any icc amenable group and C be any infinite maximal amenable subgroup of any icc hyperbolic group D. Put Σ 0 := A ≀ C and Γ 0 := A ≀ D. Then condition (1) from Corollary B is satisfied by Example 1.1(b). If g ∈ Γ 0 , then Σ 0 ∩ gΣ 0 g −1 ⊇ A (C) = ⊕ c∈C A. Thus, any finite index subgroup of Σ 0 ∩ gΣ 0 g −1 contains A 0 (C) , for some finite index subgroup A 0 < A. Since A is icc, the centralizer of A 0 (C) in Γ 0 is trivial, which proves that condition (2) from Corollary B is also satisfied.
In particular, Corollary B applies in the case Σ 0 = S ∞ ≀ Z and Γ 0 = S ∞ ≀ F n , for some n ≥ 2, where S ∞ denotes the group of finite permutations of N.
We end by noticing that the groups Γ from Corollary B are also C * -superrigid, in the sense that they can be completely recovered from their reduced C * -algebras. Recall in this respect that the reduced C * -algebra C * r (Γ) of Γ is defined as the operator norm closure of the linear span of the left regular representation {u g } g∈Γ ⊆ U (ℓ 2 (Γ)). A countable group Γ is then called C * -superrigid if whenever C * r (Λ) is isomorphic to C * r (Γ), Λ must be isomorphic to Γ.
Corollary C. Let Γ be any AFP group as in Corollary B.
If Λ is any countable group and θ : C * r (Γ) → C * r (Λ) is any * -isomorphism, then there exist a group isomorphism δ : Γ → Λ, a unitary u ∈ L(Λ), and a character η : Γ → T such that θ(u g ) = η(g)uv δ(g) u * , for every g ∈ Γ.
The first examples of non-abelian torsion-free C * -superrigid groups were recently found in [KRTW]. The groups considered in [KRTW] are virtually abelian, hence amenable. In contrast, Corollary C provides the first examples of non-amenable groups that are C * -superrigid.
Comments on the proof of Theorem A. We end the introduction with some brief and informal comments on the proof of Theorem A. Let Γ = Γ 1 * Σ Γ 2 be as in the hypothesis of Theorem A, where Γ i = Γ 1 i × Γ 2 i is a product of icc, non-amenable, bi-exact groups, for every i ∈ {1, 2}. Our goal is to investigate all possible group von Neumann algebra decompositions of M = L(Γ). To this end, let Λ be a countable group such that M = L(Λ), and consider the * -homomorphism △ : M → M⊗M given by △(v h ) = v h ⊗ v h , for all h ∈ Λ [PV09].
In the first part of the proof, we use Ozawa's work on subalgebras with non-amenable commutant inside von Neumann algebras of (relatively) bi-exact groups [Oz03,Oz04,BO08] to conclude that
(1.1) △(L(Γ 1 1 )) ≺ M⊗L(Γ n m ), for some m, n ∈ {1, 2}.
Here, P ≺ Q denotes the fact that a corner of P embeds into a corner of Q inside the ambient algebra, in the sense of Popa [Po03].
The second part of the proof consists of combining (1.1) with an ultrapower technique from [Io11] to deduce the existence of a subgroup Ω < Λ such that (1.2) L(Γ 1 1 ) ≺ L(Ω) and the centralizer of Ω in Λ is non-amenable.
For these first two parts, see Theorem 3.3.
Further, by using (1.2) and building on techniques from [IPP05,CdSS15] we find a subgroup Θ < Λ such that L(Θ)z and L(Θ)(1 − z) are unitarily conjugate to corners of L(Γ 1 ) and L(Γ 2 ), respectively, for some non-zero central projection z ∈ L(Θ) (see Theorems 3.6 and 3.2). More precisely, Θ is defined as the one-sided commensurator of Ω in Λ [FGS10].
The final part of the proof is the subject of Section 4. Thus, we first use that Σ is icc to derive that z = 1 (see Proposition 4.1). As a consequence of this and the analogous analysis for Γ 1 2 instead of Γ 1 1 , we obtain subgroups Θ 1 , Θ 2 < Λ such that L(Θ i ) is unitarily conjugate to L(Γ i ). With some additional work, we finally prove that Λ = Θ 1 * Θ 1 ∩ hΘ 2 h −1 (hΘ 2 h −1 ), for some h ∈ Λ, and that the conclusion holds for Λ 1 = Θ 1 and Λ 2 = hΘ 2 h −1 .
Preliminaries
We begin this section by reviewing several concepts in von Neumann algebras and group theory. We then record several technical ingredients for the proofs of our main results.
2.1. Terminology. All von Neumann algebras M considered in this article are tracial, i.e., they are endowed with a unital, faithful, normal linear functional τ : M → C satisfying τ(xy) = τ(yx), for all x, y ∈ M . Given x ∈ M , we denote by ‖x‖ its operator norm, and by ‖x‖ 2 = τ(x * x) 1/2 its so-called 2-norm. For a tracial von Neumann algebra M , we denote by U (M ) its unitary group, by Z(M ) its center, by P(M ) the set of its projections, and by (M ) 1 = {x ∈ M | ‖x‖ ≤ 1} its unit ball with respect to the operator norm. For a set S ⊆ M , we denote by W * (S) the smallest von Neumann subalgebra of M which contains S. If S is closed under adjoint, then W * (S) is equal to the bicommutant S ′′ of S.
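A finite-dimensional toy model makes this terminology concrete: the matrix algebra M n (C) with the normalized trace is a tracial von Neumann algebra. The sketch below (with hypothetical sample matrices) checks the trace property τ(xy) = τ(yx) and computes the 2-norm:

```python
# Finite-dimensional model of a tracial von Neumann algebra: M_n(C)
# with the normalized trace tau(x) = Tr(x)/n.  We check traciality
# tau(xy) = tau(yx) and compute ||x||_2 = tau(x* x)^{1/2}.

import random

n = 3

def tau(x):
    return sum(x[i][i] for i in range(n)) / n

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(x):
    # conjugate transpose; entries here are real, so just transpose
    return [[x[j][i] for j in range(n)] for i in range(n)]

def two_norm(x):
    return tau(matmul(adjoint(x), x)) ** 0.5

random.seed(0)
x = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
y = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

# traciality: Tr(XY) = Tr(YX)
assert abs(tau(matmul(x, y)) - tau(matmul(y, x))) < 1e-12
print(round(two_norm(x), 4))
```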
All inclusions P ⊆ M of von Neumann algebras are assumed unital, unless otherwise specified. Given von Neumann subalgebras P, Q ⊆ M , we denote by E P : M → P the conditional expectation onto P , by P ′ ∩ M = {x ∈ M | xy = yx, for all y ∈ P } the relative commutant of P in M , and by P ∨ Q = W * (P ∪ Q) the von Neumann algebra generated by P and Q.
All groups considered in this article are countable and discrete. For a group Γ, we denote by {u g } g∈Γ ⊆ U (ℓ 2 (Γ)) its left regular representation given by u g (δ h ) = δ gh , where δ h is the Dirac mass at h. The weak operator closure of the linear span of {u g } g∈G in B(ℓ 2 (Γ)) is called the group von Neumann algebra of Γ and is denoted by L(Γ). L(Γ) is a II 1 factor precisely when Γ has infinite non-trivial conjugacy classes (icc) [MvN43].
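The left regular representation can be illustrated on a finite group (a toy only: L(Γ) is a II 1 factor only for infinite icc Γ). For Z/NZ, each u g acts on ℓ 2 (G) by u g (δ h ) = δ g+h , i.e. as a permutation matrix, and the multiplicativity u g u h = u gh is verified directly:

```python
# Toy model of the left regular representation for the finite group
# Z/NZ: u_g(delta_h) = delta_{g+h}, a permutation matrix on l^2(G).
# (Only infinite icc groups give II_1 factors; a finite group is used
# here purely to illustrate the construction.)

N = 4  # Z/NZ

def u(g):
    # matrix of u_g in the basis {delta_h}: column h has a 1 in row g+h
    return [[1 if (g + h) % N == r else 0 for h in range(N)]
            for r in range(N)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

for g in range(N):
    for h in range(N):
        assert matmul(u(g), u(h)) == u((g + h) % N)  # u_g u_h = u_{gh}

# u_e is the identity operator
assert u(0) == [[1 if i == j else 0 for j in range(N)] for i in range(N)]
print("left regular representation verified for Z/%dZ" % N)
```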
Let Γ be a group. Given subsets K, F ⊆ Γ, we put KF = {kf | k ∈ K, f ∈ F }. Given a subgroup Σ < Γ, we denote by C Σ (K) = {g ∈ Σ | gk = kg, for all k ∈ K} the centralizer of K in Σ. For a positive integer n, we denote by 1, n the set {1, 2, ..., n}.

Theorem 2.1 ([Po03]). Let (M, τ ) be a tracial von Neumann algebra and P, Q ⊆ M be von Neumann subalgebras. Then the following two conditions are equivalent:
(1) There exist p ∈ P(P ), q ∈ P(Q), a * -homomorphism θ : pP p → qQq and a non-zero partial isometry v ∈ qM p such that θ(x)v = vx, for all x ∈ pP p.
(2) For any group U ⊂ U (P ) such that U ′′ = P there is no sequence (u n ) n ⊂ U satisfying ‖E Q (xu n y)‖ 2 → 0, for all x, y ∈ M .
If one of the two equivalent conditions from Theorem 2.1 holds we say that a corner of P embeds into Q inside M , and write P ≺ M Q. If we moreover have that P p ′ ≺ M Q, for any nonzero projection p ′ ∈ P ′ ∩ 1 P M 1 P , then we write P ≺ s M Q. Next, we record two basic intertwining results that will be used later on.
Lemma 2.2. Let Γ 1 , Γ 2 < Γ be countable groups such that L(Γ 1 ) ≺ L(Γ) L(Γ 2 ). Then there exists g ∈ Γ such that [Γ 1 : Γ 1 ∩ gΓ 2 g −1 ] < ∞.
Proof. Denote by e the orthogonal projection from ℓ 2 (Γ) onto ℓ 2 (Γ 2 ). Consider Jones' basic construction ⟨L(Γ), e⟩ ⊆ B(ℓ 2 (Γ)) of the inclusion L(Γ 2 ) ⊆ L(Γ), endowed with the usual semi-finite trace T r and 2-norm ‖x‖ 2,T r = T r(x*x)^{1/2} (see e.g. [Jo81, PP86]). Denote by H := L 2 (⟨L(Γ), e⟩) the Hilbert space obtained by completing ⟨L(Γ), e⟩ with respect to ‖·‖ 2,T r . Let π : Γ → U (H) denote the unitary representation given by π(g)(ξ) = u g ξu * g . Claim 2.3. If ξ ∈ H is a π(Γ 1 )-invariant vector, then ξ belongs to the ‖·‖ 2,T r -closure of the linear span of {u g eu h | g, h ∈ Γ with [Γ 1 : Γ 1 ∩ gΓ 2 g −1 ] < ∞}.
Proof of Claim 2.3. Let H k ⊆ H be the ‖·‖ 2,T r -closure of the linear span of {u gk eu * g | g ∈ Γ}. Then H k is π(Γ)-invariant, for every k ∈ Γ, and if S ⊆ Γ is such that Γ = ⊔ k∈S {hkh −1 | h ∈ Γ 2 }, then H = ⊕ k∈S H k . Thus, in order to prove the claim, we may assume that ξ belongs to H k , for some fixed k ∈ Γ. Notice that
⟨u g u k eu * g , u k e⟩ = 1 if g ∈ C Γ 2 (k), and ⟨u g u k eu * g , u k e⟩ = 0 if g ∉ C Γ 2 (k).
Thus, {u gk eu * g } g∈Γ/C Γ 2 (k) is a π(Γ)-invariant orthonormal basis of H k . Let ξ ∈ H k be a π(Γ 1 )-invariant vector and write ξ = Σ g∈Γ/C Γ 2 (k) c g u gk eu * g , for some scalars c g . If c g ≠ 0, for some g ∈ Γ/C Γ 2 (k), then the π(Γ 1 )-orbit of u gk eu * g must be finite, or equivalently [Γ 1 : Γ 1 ∩ gC Γ 2 (k)g −1 ] < ∞. This implies that [Γ 1 : Γ 1 ∩ (gk)Γ 2 (gk) −1 ] < ∞, which yields the claim.
We are now ready to derive the lemma. Since L(Γ 1 ) ≺ L(Γ) L(Γ 2 ), [Po03, Theorem 2.1] implies that the L(Γ)-L(Γ)-bimodule H contains a non-zero L(Γ 1 )-central vector. Thus, H contains a non-zero π(Γ 1 )-invariant vector, and Claim 2.3 implies the conclusion.
Lemma 2.4. Let Γ 1 , Γ 2 < Γ be countable groups, and Q ⊆ qL(Γ)q be a von Neumann subalgebra. For every i ∈ 1, 2, suppose that p i ∈ P(Q ′ ∩ qL(Γ)q) and u i ∈ U (L(Γ)) satisfy u i Qp i u * i ⊆ L(Γ i ). Assume that p 1 p 2 ≠ 0 and let p ∈ Q ′ ∩ qL(Γ)q be a projection such that pp 1 p 2 ≠ 0.
Then there exists g ∈ Γ such that Qp ≺ L(Γ) L(Γ 1 ∩ gΓ 2 g −1 ).
Moreover, if Q ⊆ L(Γ 1 ) ∩ vL(Γ 2 )v * , for some partial isometry v ∈ L(Γ) satisfying vv * = q and v * v ∈ L(Γ 2 ), then we can find g ∈ Γ such that Q ≺ L(Γ) L(Γ 1 ∩ gΓ 2 g −1 ) and τ(vu * g ) ≠ 0.
Proof. Assume that pp 1 p 2 ≠ 0, and put δ := ‖pp 1 p 2 ‖ 2 > 0. For a set S ⊆ Γ, we denote by e S the orthogonal projection from ℓ 2 (Γ) onto the closed linear span of {u g | g ∈ S}. Note that

(2.1) upp 1 p 2 ∈ pp 1 u * 2 (L(Γ 2 )) 1 u 2 ∩ pu * 1 (L(Γ 1 )) 1 u 1 p 2 , for all u ∈ U (Q).
Let S 1 ⊆ Γ be a finite set such that ‖pp 1 u * 2 − v 1 ‖ 2 ≤ δ/5, where v 1 = e S 1 (pp 1 u * 2 ). By Kaplansky's density theorem, for any i ∈ 2, 4, we can find a finite set S i ⊆ Γ and v i ∈ (L(Γ)) 1 belonging to the linear span of {u g | g ∈ S i } such that ‖v 2 − u 2 ‖ 2 ≤ δ/5, ‖v 3 − pu * 1 ‖ 2 ≤ δ/5, and ‖v 4 − u 1 p 2 ‖ 2 ≤ δ/5. By using these inequalities and (2.1) we get that ‖upp 1 p 2 − e S 1 Γ 2 S 2 ∩S 3 Γ 1 S 4 (upp 1 p 2 )‖ 2 ≤ 4δ/5, for all u ∈ U (Q). Since ‖upp 1 p 2 ‖ 2 = δ, we get that ‖e S 1 Γ 2 S 2 ∩S 3 Γ 1 S 4 (upp 1 p 2 )‖ 2 ≥ δ/5, for every u ∈ U (Q).
It is easy to see that there is a finite set S ⊆ Γ such that S 1 Γ 2 S 2 ∩ S 3 Γ 1 S 4 ⊆ ∪ g,h,k∈S h(Γ 1 ∩ gΓ 2 g −1 )k. Then the last inequality implies that

Σ g,h,k∈S ‖E L(Γ 1 ∩gΓ 2 g −1 ) (u * h upp 1 p 2 u * k )‖ 2 2 ≥ δ 2 /25, for every u ∈ U (Q).

By Theorem 2.1, we deduce that Qp ≺ L(Γ) L(Γ 1 ∩ gΓ 2 g −1 ), for some g ∈ S.

For the moreover assertion, put δ := ‖q‖ 2 . Let T 1 , T 2 ⊆ Γ be finite sets such that ‖v − w 1 ‖ 2 ≤ δ/3 and ‖v * − w 2 ‖ 2 ≤ δ/3, where w 1 = e T 1 (v) and w 2 ∈ (L(Γ)) 1 satisfies w 2 = e T 2 (w 2 ). We may moreover assume that τ(vu * g ) ≠ 0, for all g ∈ T 1 . Then as above we find that ‖e Γ 1 ∩T 1 Γ 2 T 2 (u)‖ 2 ≥ δ/3, for all u ∈ U (Q). Since Γ 1 ∩ T 1 Γ 2 T 2 ⊆ ∪ g∈T 1 ,h∈T (Γ 1 ∩ gΓ 2 g −1 )h, for some finite set T ⊆ Γ, we conclude that Q ≺ L(Γ) L(Γ 1 ∩ gΓ 2 g −1 ), for some g ∈ T 1 . This finishes the proof.
2.3. Commensurators and quasi-normalizers. Let Σ < Γ be an inclusion of countable groups, and P ⊆ M be an inclusion of tracial von Neumann algebras.
The commensurator Comm Γ (Σ) of Σ in Γ is defined as the subgroup of all g ∈ Γ for which there exists a finite set F ⊆ Γ such that Σg ⊆ F Σ and gΣ ⊆ ΣF . Thus, g ∈ Comm Γ (Σ) if and only if [Σ :
Σ ∩ gΣg −1 ] < ∞ and [gΣg −1 : Σ ∩ gΣg −1 ] < ∞. The quasi-normalizer qN M (P ) of P in M is defined as the * -algebra of all x ∈ M for which there exist x 1 , x 2 , ..., x k ∈ M such that P x ⊆ Σ i x i P and xP ⊆ Σ i P x i (see [Po99, Definition 4.8]).
In this paper, we will use the following one sided versions of these notions, considered in [FGS10]. The one sided commensurator Comm^{(1)}_Γ(Σ) is defined as the semigroup of all g ∈ Γ for which there exists a finite set F ⊆ Γ such that Σg ⊆ F Σ. Thus, g ∈ Comm^{(1)}_Γ(Σ) if and only if [Σ : Σ ∩ gΣg −1 ] < ∞. The one sided quasi-normalizer qN^{(1)}_M(P ) is defined as the set of all x ∈ M for which there exist x 1 , x 2 , ..., x k ∈ M such that P x ⊆ Σ i x i P .
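Two simple illustrations of these notions (supplied here for concreteness; they are not part of the source):

```latex
% (i) If \Sigma \trianglelefteq \Gamma is normal, then g\Sigma g^{-1} = \Sigma for
% every g \in \Gamma, so \mathrm{Comm}^{(1)}_{\Gamma}(\Sigma) = \Gamma.
% (ii) In \Gamma = \mathbb{F}_2 = \langle a, b\rangle with \Sigma = \langle a\rangle,
% the subgroup \Sigma is malnormal: for every g \notin \Sigma,
\Sigma \cap g\Sigma g^{-1} = \{e\},
\qquad\text{hence}\qquad
[\Sigma : \Sigma \cap g\Sigma g^{-1}] = \infty
\quad\text{and}\quad
\mathrm{Comm}^{(1)}_{\Gamma}(\Sigma) = \Sigma .
```

Case (ii) is the same kind of smallness condition as the hypothesis Comm^{(1)}_{Γ i}(Σ) = Σ imposed on the amalgam later in the paper.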
We begin this subsection with two general results on quasi-normalizers. Firstly, we record the following formula for one sided quasi-normalizers of corners.
Lemma 2.5. Let P ⊆ M be an inclusion of tracial von Neumann algebras. Then W * (qN^{(1)}_{pM p}(pP p)) = p W * (qN^{(1)}_M(P )) p, for any projection p ∈ P . Moreover, qN^{(1)}_{p ′ M p ′}(P p ′ ) = p ′ qN^{(1)}_M(P ) p ′ , for any projection p ′ ∈ P ′ ∩ M .
Proof. The main assertion follows from the proof of [Po03, Lemma 3.5], where a similar formula for the usual quasi-normalizer is provided. It appears as such in [FGS10, Proposition 6.2]. The moreover assertion is immediate.
Secondly, we establish a useful property of subalgebras having a trivial one sided quasi-normalizer.
Lemma 2.6. Let M be a tracial von Neumann algebra, and P, Q ⊆ M von Neumann subalgebras. Assume that qN (1) M (P ) = P and Q is a II 1 factor. Suppose also that P ≺ s M Q and that qP q = qQq, for some non-zero projection q ∈ P .
Then there exists u ∈ U (M ) such that uP u * = Q. Moreover, if P ⊆ Q, then P = Q.
Proof. Let us first show that P can be unitarily conjugated into Q. To this end, let r ∈ Z(P ) be a non-zero projection. Since P r ≺ M Q, we can find projections r 0 ∈ P r and q 0 ∈ Q, a non-zero partial isometry v ∈ q 0 M r 0 , and a * -homomorphism θ : r 0 P r 0 → q 0 Qq 0 such that θ(x)v = vx, for all x ∈ r 0 P r 0 . Moreover, after replacing r 0 with a smaller projection, we may assume that τ(q 0 ) ≤ τ(q). Since Q is a II 1 factor we can find a unitary η ∈ Q such that q 1 := ηq 0 η * ≤ q. Let ϕ : r 0 P r 0 → q 1 Qq 1 be given by ϕ(x) = ηθ(x)η * . If we put w = ηv ∈ q 1 M r 0 , then ϕ(x)w = wx, for all x ∈ r 0 P r 0 . Since q 1 ≤ q we have that q 1 Qq 1 = q 1 P q 1 , and thus wP r 0 = ϕ(r 0 P r 0 )w ⊆ P w.
We claim that w ∈ P . This follows from the proof of [Po03, Lemma 3.5]. For completeness, we include the argument here. Let z ∈ Z(P ) be the central support of r 0 . If ε > 0, then we can find a projection z ′ ∈ Z(P )z such that τ(z − z ′ ) < ε and there exist partial isometries ξ 1 , ..., ξ n ∈ r 0 P satisfying Σ_{i=1}^n ξ * i ξ i = z ′ . Then wz ′ P = wP z ′ ⊆ Σ_{i=1}^n wP ξ * i ξ i ⊆ Σ_{i=1}^n P wξ i , and therefore wz ′ ∈ qN^{(1)}_M(P ) = P . Since ε > 0 is arbitrary and w = wz, the claim follows. Put r 1 = w * w ∈ r 0 P r 0 and q 2 = ww * ∈ q 1 P q 1 = q 1 Qq 1 . Then wP w * = q 2 P q 2 = q 2 Qq 2 , so in particular r 1 P r 1 can be unitarily conjugated into Q. Since Q is a II 1 factor, we deduce that P r ′ can be unitarily conjugated into Q, where r ′ ∈ Z(P )r denotes the central support of r 1 . Thus, for every non-zero projection r ∈ Z(P ), there is a non-zero projection r ′ ∈ Z(P )r such that P r ′ can be unitarily conjugated into Q. Since Q is a II 1 factor, a maximality argument implies the existence of u ∈ U (M ) such that uP u * ⊆ Q.
Finally, we prove that uP u * = Q, which will imply both assertions of the lemma. Let r 0 ∈ P be a projection such that τ(r 0 ) ≤ τ(q) and put q 0 = ur 0 u * ∈ Q. Then we can find η ∈ U (Q) such that q 1 := ηq 0 η * ≤ q. Since ur 0 P r 0 u * ⊆ q 0 Qq 0 = η * q 1 Qq 1 η = η * q 1 P q 1 η, it follows as above that ηur 0 ∈ P . This implies that ur 0 P r 0 u * = η * q 1 P q 1 η, thus ur 0 P r 0 u * = q 0 Qq 0 . Hence we have that r 0 P r 0 = r 0 (u * Qu)r 0 , for any projection r 0 ∈ P with τ(r 0 ) ≤ τ(q). This clearly implies that P = u * Qu, which finishes the proof.
In the rest of this subsection, we establish several results controlling quasi-normalizers in group von Neumann algebras.
Lemma 2.7 ([Po03]). Let Γ 1 < Γ be countable groups, and P ⊆ pL(Γ 1 )p be a von Neumann subalgebra, for a projection p ∈ L(Γ 1 ). Assume that P ⊀ L(Γ 1 ) L(Γ 1 ∩ gΓ 1 g −1 ), for all g ∈ Γ \ Γ 1 . If x ∈ L(Γ) satisfies xP ⊆ Σ_{i=1}^n L(Γ 1 )x i , for some x 1 , ..., x n ∈ L(Γ), then xp ∈ L(Γ 1 ).

Proof. Since P ⊀ L(Γ 1 ) L(Γ 1 ∩ gΓ 1 g −1 ), for all g ∈ Γ \ Γ 1 , we can find a net u n ∈ U (P ) such that ‖E L(Γ 1 ∩gΓ 1 g −1 ) (u n a)‖ 2 → 0, for all a ∈ L(Γ 1 ) and g ∈ Γ \ Γ 1 (see the proof of [IPP05]). We claim that

(2.2) ‖E L(Γ 1 ) (bu n c)‖ 2 → 0, for every b, c ∈ L(Γ) with E L(Γ 1 ) (b) = 0.
By Kaplansky's density theorem, in order to prove (2.2), we may assume that
b = u g , c = u h , for some g ∈ Γ \ Γ 1 and h ∈ Γ. If gΓ 1 h ∩ Γ 1 = ∅, then E L(Γ 1 ) (u g u n u h ) = 0, for all n. If gΓ 1 h ∩ Γ 1 is non-empty, fix k ∈ gΓ 1 h ∩ Γ 1 , and put l = g −1 kh −1 ∈ Γ 1 . Then gΓ 1 h ∩ Γ 1 = (gΓ 1 g −1 ∩ Γ 1 )k. Thus, if γ ∈ Γ 1 , then gγh ∈ gΓ 1 h ∩ Γ 1 if and only if γ ∈ (Γ 1 ∩ g −1 Γ 1 g)l.
Therefore,
‖E L(Γ 1 ) (u g u n u h )‖ 2 = ‖E L(Γ 1 ∩g −1 Γ 1 g) (u n u * l )‖ 2 → 0.
This altogether proves (2.2), and finishes the proof.
The next result strengthens the conclusion of Lemma 2.7 in the case of inclusions of group von Neumann algebras.
Lemma 2.8. Let Γ 1 , Γ 2 < Γ be countable groups. Denote by S ⊆ Γ the set of g ∈ Γ such that [Γ 1 : Γ 1 ∩ gΓ 2 g −1 ] < ∞. If x ∈ L(Γ) satisfies L(Γ 1 )x ⊆ Σ_{i=1}^n x i L(Γ 2 ), for some x 1 , ..., x n ∈ L(Γ), then x belongs to the ‖·‖ 2 -closure of the linear span of {u g } g∈S .
This result generalizes [FGS10, Theorem 5.1], which addressed the case Γ 1 = Γ 2 . Although later on we will only use this particular case of Lemma 2.8, for completeness we provide a different proof that at the same time handles the general case. The proof that we include follows closely the proof of [Po03, Theorem 2.1].
Proof. Let K ⊆ ℓ 2 (Γ) be the ‖·‖ 2 -closure of the linear span of L(Γ 1 )xL(Γ 2 ), and f the orthogonal projection from ℓ 2 (Γ) onto K. Since K is a L(Γ 1 )-L(Γ 2 )-bimodule, f ∈ L(Γ 1 ) ′ ∩ ⟨L(Γ), e L(Γ 2 ) ⟩. Since K is contained in the ‖·‖ 2 -closure of Σ_{i=1}^n x i L(Γ 2 ), we also have that T r(f ) < ∞.
Viewing f as an element of L 2 (⟨L(Γ), e L(Γ 2 ) ⟩), Claim 2.3 gives that f belongs to the ‖·‖ 2,T r -closure of the linear span of {u g eu h } g∈S,h∈Γ . This implies that f (ℓ 2 (Γ)) is contained in the ‖·‖ 2 -closure of the linear span of {u g } g∈S . Since x ∈ f (ℓ 2 (Γ)), the conclusion follows.
Corollary 2.9 ([FGS10]). Let Σ < Γ be countable groups. Then W * (qN^{(1)}_{L(Γ)}(L(Σ))) = L(∆), where ∆ < Γ is the subgroup generated by Comm^{(1)}_Γ(Σ). In particular, if Comm^{(1)}_Γ(Σ) = Σ, then qN^{(1)}_{L(Γ)}(L(Σ)) = L(Σ).
Proof. This result is part (ii) of [FGS10, Corollary 5.2]. For completeness, we show how it follows from Lemma 2.8. If g ∈ Comm^{(1)}_Γ(Σ), then u g ∈ qN^{(1)}_{L(Γ)}(L(Σ)). This implies the inclusion ⊇. If x ∈ qN^{(1)}_{L(Γ)}(L(Σ)), then Lemma 2.8 gives that x ∈ L(∆), which implies the reverse inclusion.

We end this section with two results concerning von Neumann algebras of amalgamated free product groups.
Corollary 2.10 ([IPP05]). Let Γ = Γ 1 * Σ Γ 2 be an amalgamated free product group. Let P ⊆ pL(Γ 1 )p be a von Neumann subalgebra, for a projection p ∈ L(Γ 1 ). Assume that P ⊀ L(Γ 1 ) L(Σ).
If x ∈ L(Γ)p satisfies xP ⊆ Σ_{i=1}^n L(Γ 1 )x i , for some x 1 , ..., x n ∈ L(Γ), then x ∈ L(Γ 1 ).
Proof. This result is a particular case of [IPP05, Theorem 1.1]. Since Γ 1 ∩ gΓ 1 g −1 ⊆ Σ, for all g ∈ Γ \ Γ 1 , it also follows from Lemma 2.7.
Lemma 2.11. Let Γ = Γ 1 * Σ Γ 2 be an amalgamated free product group with Comm^{(1)}_{Γ i}(Σ) = Σ, for every i ∈ 1, 2. Then we have the following:

(1) Comm^{(1)}_Γ(Σ) = Σ.
(2) L(Σ) ⊀ L(Γ) L(Σ ∩ gΣg −1 ), for every g ∈ Γ \ Σ.
(3) L(Σ) ⊀ L(Γ) L(Γ i ∩ gΓ i g −1 ), for every g ∈ Γ \ Γ i and i ∈ 1, 2.
Proof.
(1) Let g ∈ Γ \ Σ. Let g = g 1 g 2 ...g n be the reduced form of g, where n ≥ 1 is an integer, j(k) ∈ 1, 2 and g k ∈ Γ j(k) \ Σ, for all k ∈ 1, n, and j(1) ≠ j(2) ≠ ... ≠ j(n). If x ∈ Σ ∩ gΣg −1 , then
g −1 xg = g −1 n ...g −1 2 g −1 1 xg 1 g 2 ...g n ∈ Σ, which forces g −1 1 xg 1 ∈ Σ. Thus, Σ ∩ gΣg −1 ⊆ Σ ∩ g 1 Σg −1 1 . Since g 1 ∈ Γ j(1) \ Σ, we have that [Σ : Σ ∩ g 1 Σg −1 1 ] = ∞, which implies that g ∉ Comm^{(1)}_Γ(Σ).

(2) Let g ∈ Γ be such that L(Σ) ≺ L(Γ) L(Σ ∩ gΣg −1 ). By applying Lemma 2.2, we find h ∈ Γ such that [Σ : Σ ∩ h(Σ ∩ gΣg −1 )h −1 ] < ∞, and thus [Σ : Σ ∩ hΣh −1 ] < ∞ and [Σ : Σ ∩ (hg)Σ(hg) −1 ] < ∞. By using part (1) we deduce that h, hg ∈ Σ, and thus g ∈ Σ.
(3) Assume that L(Σ) ≺ L(Γ) L(Γ i ∩ gΓ i g −1 ), for some g ∈ Γ \ Γ i and i ∈ 1, 2. Let g = g 1 g 2 ...g n be the reduced form of g, where n ≥ 1 is an integer, j(k) ∈ 1, 2 and g k ∈ Γ j(k) \ Σ, for all k ∈ 1, n, and j(1) ≠ j(2) ≠ ... ≠ j(n). Let x ∈ Γ i ∩ gΓ i g −1 . Then g −1 xg = g −1 n ...g −1 2 g −1 1 xg 1 g 2 ...g n ∈ Γ i . Since g ∈ Γ \ Γ i , we can find k ∈ 1, n with j(k) ≠ i. If j(1) = i, then we must have that g −1 1 xg 1 ∈ Σ and g −1 2 g −1 1 xg 1 g 2 ∈ Σ, and hence Γ i ∩ gΓ i g −1 ⊆ g 1 (Σ ∩ g 2 Σg −1 2 )g −1 1 . If j(1) ≠ i, then we must have that x ∈ Σ and g −1 1 xg 1 ∈ Σ, and hence Γ i ∩ gΓ i g −1 ⊆ Σ ∩ g 1 Σg −1 1 .
In either case, we would conclude that L(Σ) ≺ L(Γ) L(Σ ∩ hΣh −1 ), for some h ∈ Γ \ Σ. By (2) this is a contradiction.
2.4. Almost malnormality of maximal amenable subgroups in hyperbolic groups. In this section, we justify an assertion made in Example 1.1(b).
Lemma 2.12. Let C < D be an infinite maximal amenable subgroup of a hyperbolic group D.
Then C ∩ gCg −1 is finite, for every g ∈ D \ C.
This result is likely well-known, but for lack of a reference, we include a proof. Note that it implies that if Σ 0 := A ≀ C and Γ 0 :
= A ≀ D, then [Σ 0 : Σ 0 ∩ gΣ 0 g −1 ] = ∞, for all g ∈ Γ 0 \ Σ 0 , as claimed in Example 1.1(b).
Proof. By [GdH90, Théorème 8.37], C admits an infinite cyclic subgroup C 0 = {a n | n ∈ Z} of finite index. By [GdH90, Théorème 8.29], if ∂D denotes the boundary of D, then the limit set of C 0 in ∂D consists of exactly two points {x 1 , x 2 }, both fixed by a.
We claim that if g ∈ D and C 0 ∩ gC 0 g −1 ≠ {e}, then g stabilizes the set {x 1 , x 2 }. Let m, n ∈ Z \ {0} be such that ga m g −1 = a n . If i ∈ {1, 2}, we can find a sequence {p k } such that x i = lim k→∞ a^{p k }. Then x i = lim k→∞ a^{m⌊p k /m⌋} and thus gx i = lim k→∞ ga^{m⌊p k /m⌋} = lim k→∞ a^{n⌊p k /m⌋} g = lim k→∞ a^{n⌊p k /m⌋} ∈ {x 1 , x 2 }. Since [C : C 0 ] < ∞, the claim implies that C ⊆ Stab D {x 1 , x 2 }. By [GdH90, Théorème 8.30], Stab D {x 1 , x 2 } contains a finite index cyclic subgroup, hence is amenable. Thus, we get that C = Stab D {x 1 , x 2 }.
Using the claim again gives that C ∩ gCg −1 is finite, for all g ∈ D \ C.
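A concrete instance of Lemma 2.12 (an illustration added here, not taken from the source): free groups are hyperbolic, and their maximal amenable subgroups are exactly the maximal cyclic ones.

```latex
% Take D = \mathbb{F}_2 = \langle a, b\rangle and C = \langle a\rangle, a maximal
% amenable subgroup (here C_0 = C is already cyclic). The boundary \partial D is the
% space of infinite reduced words, and the two boundary points fixed by a are
x_1 = a^{\infty} = \lim_{n\to\infty} a^{n},
\qquad
x_2 = a^{-\infty} = \lim_{n\to\infty} a^{-n}.
% Malnormality of \langle a\rangle in \mathbb{F}_2 gives the conclusion of the lemma
% in the strongest possible form:
C \cap gCg^{-1} = \{e\} \quad \text{for every } g \in D \setminus C .
```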
2.5. Property (T) and Haagerup's property for groups and algebras. In this section, we record the well-known relationship between property (T) and Haagerup's property for countable groups and their von Neumann algebras. We refer the reader to [Po01] for the definitions of these notions. As shown in [CJ85] and [Ch83], a countable icc group Γ has property (T), respectively Haagerup's property, if and only if the II 1 factor L(Γ) does. Moreover, this result holds if Γ is not necessarily icc (see [Po01, Propositions 3.1 and 5.1]). Here we note that arguments from [Po01] show that the result remains true if in addition L(Γ) is replaced by one of its corners.
Lemma 2.13. Let Γ be a countable group and p ∈ L(Γ) be a non-zero projection. Then:

(1) Γ has property (T) if and only if pL(Γ)p has property (T).
(2) Γ has Haagerup's property if and only if pL(Γ)p has Haagerup's property.

Proof. (1) If Γ has property (T), then pL(Γ)p has property (T) (see [Po01]).

Conversely, assume that pL(Γ)p has property (T). Denoting by z ∈ L(Γ) the central support of p, [Po01, Proposition 4.7 (3)] implies that L(Γ)z has property (T). Let ϕ n : Γ → C be a sequence of positive definite functions such that ϕ n (e) = 1, for all n, and ϕ n (g) → 1, for all g ∈ Γ. Then the formula Φ n (x) = Σ g ϕ n (g)a g u g , for every x = Σ g a g u g ∈ L(Γ), defines a sequence Φ n : L(Γ) → L(Γ) of unital, tracial, completely positive maps such that ‖Φ n (x) − x‖ 2 → 0, for every x ∈ L(Γ). Thus, if we let Ψ n (x) = Φ n (x)z, for every x ∈ L(Γ)z, then Ψ n : L(Γ)z → L(Γ)z is a sequence of unital, subtracial, completely positive maps such that ‖Ψ n (x) − x‖ 2 → 0, for every x ∈ L(Γ)z.
Since L(Γ)z has property (T), we get that sup{‖Ψ n (u) − u‖ 2 | u ∈ U (L(Γ)z)} → 0. Since sup{‖Φ n (uz) − Φ n (u)z‖ 2 | u ∈ U (L(Γ))} → 0, we get that sup{‖Φ n (u g )z − u g z‖ 2 | g ∈ Γ} → 0. As ‖Φ n (u g )z − u g z‖ 2 = |ϕ n (g) − 1| ‖z‖ 2 , for all g ∈ Γ, we conclude that sup{|ϕ n (g) − 1| | g ∈ Γ} → 0.
Thus, Γ has property (T).
(2) Assume that Γ has Haagerup's property. Then [Po01, Propositions 3.1 and 2.4 (1)] together imply that pL(Γ)p has Haagerup's property.
Conversely, assume that pL(Γ)p has Haagerup's property. Denoting by z ∈ L(Γ) the central support of p, [Po01, Proposition 2.4 (2)] implies that L(Γ)z has Haagerup's property. Let Φ n : L(Γ)z → L(Γ)z be a sequence of completely positive maps such that τ ∘ Φ n ≤ τ and the set {Φ n (x) | x ∈ L(Γ)z, ‖x‖ ≤ 1} is ‖·‖ 2 -precompact, for all n, and ‖Φ n (x) − x‖ 2 → 0, for all x ∈ L(Γ)z. Then ϕ n : Γ → C given by ϕ n (g) = τ(z) −1 τ(Φ n (u g z)(u g z) * ) is a c 0 positive definite function. As ϕ n (g) → 1, for all g ∈ Γ, we get that Γ has Haagerup's property.
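The transference between positive definite functions on Γ and completely positive multipliers on L(Γ) used in this proof can be made concrete with Haagerup's classical functions on free groups (an illustration added here; the specific example is not in the source):

```latex
% On \Gamma = \mathbb{F}_n with word length |\cdot|, the functions
\varphi_t(g) = e^{-t|g|}, \qquad t > 0,
% are positive definite (Haagerup), vanish at infinity, and satisfy
% \varphi_t(g) \to 1 pointwise as t \to 0^{+}. The corresponding multipliers
\Phi_t\Big(\sum_{g} a_g u_g\Big) = \sum_{g} e^{-t|g|}\, a_g u_g
% are unital, tracial, completely positive maps on L(\mathbb{F}_n) with
% \|\Phi_t(x) - x\|_2 \to 0 for every x; since \varphi_t \in c_0(\mathbb{F}_n),
% each \Phi_t has \|\cdot\|_2-precompact image on the unit ball, exhibiting
% Haagerup's property for L(\mathbb{F}_n) in the sense used above.
```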
3. Identification of Peripheral Subgroups via W*-Equivalence
In this section we establish the main technical result needed in the proof of Theorem A. Throughout the section, we will work with amalgamated free product groups Γ satisfying the following:
Assumption 3.1. Γ = Γ 1 * Σ Γ 2 is an amalgamated free product group, where
(1) Σ is a common amenable subgroup of Γ 1 and Γ 2 .
(2) Γ i = Γ 1 i × Γ 2 i , where Γ j i is an icc, non-amenable, bi-exact group, for every i, j ∈ 1, 2.
The main goal of this section is to establish the following structural result for groups Λ in the W*-equivalence class of an amalgamated free product group Γ as in Assumption 3.1.
Theorem 3.2. Let Γ = Γ 1 * Σ Γ 2 be as in Assumption 3.1, and put M = L(Γ). Let Λ be an arbitrary group such that M = L(Λ). Then one of the following two conditions holds:
(1) For every i ∈ 1, 2, there exists a subgroup Θ i < Λ such that u i L(Θ i )u * i = L(Γ i ) for some u i ∈ U (M ).
(2) There exist a subgroup Θ < Λ, non-zero projections r 1 , r 2 ∈ Z(L(Θ)) with r 1 +r 2 = 1, and u ∈ U (M ) such that q i := ur i u * ∈ L(Γ i ) and uL(Θ)r i u * = q i L(Γ i )q i , for every i ∈ 1, 2.
The rest of this section is devoted to the proof of Theorem 3.2. The starting point is the following key result showing that any such group Λ must contain commuting non-amenable subgroups.

Theorem 3.3. Let Γ = Γ 1 * Σ Γ 2 be as in Assumption 3.1, and put M = L(Γ). Let Λ be an arbitrary group such that M = L(Λ). Then for every i, j ∈ 1, 2 we can find a non-amenable subgroup ∆ < Λ such that C Λ (∆) is non-amenable and L(Γ j i ) ≺ M L(∆).
There are two main ingredients in the proof of Theorem 3.3. First, we use repeatedly Ozawa's work on the structure of subalgebras with non-amenable commutant inside von Neumann algebras of relatively bi-exact groups [Oz03, Oz04, BO08] (see also the more recent developments [CS11, CSU11]). By applying [BO08, Theorem 15.1.5], we deduce that △(P ) ≺ M⊗M L(G), for some G ∈ G. Since the flip automorphism of M⊗M acts identically on △(P ), we may assume that G = Γ × Γ m , for some m ∈ 1, 2. Thus, there exist projections p ∈ △(P ), q ∈ M⊗L(Γ m ), a non-zero partial isometry v ∈ q(M⊗M )p, and a * -homomorphism ϕ : p△(P )p → q(M⊗L(Γ m ))q such that
(3.1) ϕ(x)v = vx, for all x ∈ p△(P )p.
Notice that vv * ∈ ϕ(p△(P )p) ′ ∩ q(M⊗M )q and v * v ∈ (p△(P )p) ′ ∩ p(M⊗M )p. We may assume that q is equal to the support projection of E q(M⊗L(Γm))q (vv * ). Let us show that
(3.2) ϕ(p△(P )p) ⊀ M⊗L(Γm) M⊗L(Σ).
Indeed, if (3.2) does not hold, [IPP05, Lemma 1.12] would imply that △(P ) ≺ M⊗M M⊗L(Σ).
Since P has no amenable direct summand, by [IPV10, Proposition 7.2(4)], this would contradict the amenability of Σ.
Since M⊗M = L(Γ × Γ 1 ) * L(Γ×Σ) L(Γ × Γ 2 ), by combining (3.2) and Corollary 2.10 we conclude that ϕ(p△(P )p) ′ ∩ q(M⊗M )q ⊂ q(M⊗L(Γ m ))q, hence vv * ∈ M⊗L(Γ m ). Thus, equation (3.1) implies that v△(P )v * ⊆ M⊗L(Γ m ). Since △(P ) ⊀ M⊗M M⊗L(Σ), applying Corollary 2.10 again gives that v(△(P ) ∨ (△(P ) ′ ∩ M⊗M ))v * ⊂ M⊗L(Γ m ). Since M⊗L(Γ m ) is a factor, we can thus find a non-zero projection e ∈ Z(△(P ) ′ ∩ (M⊗M )) and u ∈ U (M⊗M ) such that
u(△(P ) ∨ (△(P ) ′ ∩ M⊗M ))eu * ⊆ M⊗L(Γ m ).
In particular, we get that u△(P ∨ Q)eu * ⊆ M⊗L(Γ m ). Thus, u△(P )eu * and u△(Q)eu * are commuting non-amenable subfactors of M⊗L(Γ m ) = L(Γ × Γ m ). Since Γ × Γ m is bi-exact relative to the family H = {Γ × Γ 1 m , Γ × Γ 2 m , Γ 1 × Γ m , Γ 2 × Γ m }, [BO08, Theorem 15.1.5] yields some H ∈ H with △(P )e ≺ M⊗M L(H). If H = Γ × Γ 1 m or H = Γ × Γ 2 m , then the claim follows. Therefore, it remains to analyze the case when H = Γ r × Γ m , for some r ∈ 1, 2. In this case, by arguing as above, we find a non-zero projection f ∈ Z((△(P )e) ′ ∩ e(M⊗M )e) and w ∈ U (M⊗M ) such that w△(P ∨ Q)f w * ⊆ L(Γ r )⊗L(Γ m ). In particular, w△(P )f w * and w△(Q)f w * are commuting, non-amenable subfactors of L(Γ r × Γ m ). Since Γ r × Γ m is bi-exact relative to K = {Γ 1 r × Γ m , Γ 2 r × Γ m , Γ r × Γ 1 m , Γ r × Γ 2 m }, [BO08, Theorem 15.1.5] applies once more.
Then there exists a decreasing sequence of subgroups
Λ k < Λ such that A ≺ M L(Λ k ), for every k ≥ 1, and B ′ ∩ M ≺ M L(∪ k≥1 C Λ (Λ k )).
Going back to the proof of Theorem 3.3, by combining Claim (3.4) and Theorem 3.5, we deduce the existence of a decreasing sequence of subgroups Λ k < Λ such that L(Γ j i ) ≺ M L(Λ k ), for every k ≥ 1, and L(Γ n m ) ′ ∩ M ≺ M L(∪ k≥1 C Λ (Λ k )). Since L(Γ n m ) ′ ∩ M is non-amenable, as it contains L(Γ s m ), where {n, s} = {1, 2}, we get that ∪ k≥1 C Λ (Λ k ) is non-amenable. Thus, there is k ≥ 1 such that C Λ (Λ k ) is non-amenable. Letting ∆ := Λ k the conclusion follows since L(Γ j i ) ≺ M L(∆) and in particular ∆ is non-amenable.
We continue with the second step towards proving Theorem 3.2. More precisely, we use the commuting subgroups of the mysterious group Λ provided by Theorem 3.3, to identify the algebras of the peripheral subgroups Γ 1 , Γ 2 of Γ with algebras of certain subgroups of Λ. Our proof is inspired by the analysis performed in [CdSS15, Theorem 4.3].
Theorem 3.6. Let Γ = Γ 1 * Σ Γ 2 be as in Assumption 3.1, and put M = L(Γ). Let Λ be a group such that M = L(Λ). Assume that there exists a non-amenable subgroup ∆ < Λ such that C Λ (∆) is non-amenable and L(Γ 1 1 ) ≺ M L(∆). Then we can find a group Θ < Λ, projections r 1 , r 2 ∈ Z(L(Θ)) with r 1 ≠ 0, r 1 + r 2 = 1, and u ∈ U (M ) such that ur 1 u * ∈ L(Γ 1 ), uL(Θ)r 1 u * = ur 1 u * L(Γ 1 )ur 1 u * and uL(Θ)r 2 u * ⊆ L(Γ 2 ).
Proof.
Let Ω be the group of all h ∈ Λ such that {δhδ −1 | δ ∈ ∆} is finite. Then Ω is normalized by ∆, hence ∆Ω is a subgroup of Λ. Let Θ < Λ be the subgroup generated by Comm^{(1)}_Λ(∆Ω), so that W * (qN^{(1)}_M(L(∆Ω))) = L(Θ) by Corollary 2.9. Since L(Γ 2 ) is a II 1 factor, there is a maximal projection r 2 ∈ Z(L(Θ)) such that L(Θ)r 2 can be unitarily conjugated into L(Γ 2 ). Let w 2 ∈ U (M ) be such that w 2 L(Θ)r 2 w * 2 ⊆ L(Γ 2 ). Since L(Σ) and L(Γ 2 ) are II 1 factors, we may assume that w 2 r 2 w * 2 ∈ L(Σ). Let r 1 = 1 − r 2 . We will prove that Θ, r 1 , r 2 satisfy the conclusion of the theorem.
Our first goal is to prove the following:
Claim 3.7. There is u ∈ U (M ) such that uL(Θ)r i u * ⊆ L(Γ i ), for every i ∈ 1, 2.
Proof of Claim 3.7. To prove the claim, it suffices to show that L(Θ)r 1 can be unitarily conjugated into L(Γ 1 ). Indeed, then we can find w 1 ∈ U (M ) such that w 1 L(Θ)r 1 w * 1 ⊆ L(Γ 1 ). Since L(Γ 1 ) is a II 1 factor and τ(r 1 ) = τ(1 − w 2 r 2 w * 2 ), we may moreover assume that w 1 r 1 w * 1 = 1 − w 2 r 2 w * 2 . Since w 2 L(Θ)r 2 w * 2 ⊆ L(Γ 2 ), it is now clear that u = w 1 r 1 + w 2 r 2 is a unitary operator which satisfies the claim.
Towards showing that L(Θ)r 1 can be unitarily conjugated into L(Γ 1 ), let q ∈ Z(L(Θ))r 1 be a non-zero projection. As L(∆) ′ ∩ M ⊆ L(Ω) ⊆ L(Θ) and L(∆) ⊆ L(Θ), we have that L(Θ) ′ ∩ M ⊆ Z(L(∆) ′ ∩ M ). Thus, q ∈ Z(L(∆) ′ ∩ M )r 1 . Since C Λ (∆) is non-amenable, L(∆) ′ ∩ M has no amenable direct summand. Since Γ is bi-exact relative to {Γ 1 , Γ 2 }, [BO08, Theorem 15.1.5] implies that L(∆)q ≺ M L(Γ j ), for some j ∈ 1, 2. Since ∆ is non-amenable and Σ is amenable, we have that L(∆) ⊀ M L(Σ). By proceeding as in the proof of [IPP05, Theorem 5.1] it follows that we can find a non-zero projection r ∈ Z(L(∆) ′ ∩ M )q ⊆ L(Θ)q and v ∈ U (M ) such that vL(∆)rv * ⊆ L(Γ j ).
Since L(∆Ω) ⊆ W * (qN^{(1)}_M(L(∆))) and rL(Θ)r = W * (qN^{(1)}_{rM r}(rL(∆Ω)r)), and since L(∆) ⊀ M L(Σ), by applying Corollary 2.10 twice we get that vrL(Θ)rv * ⊆ L(Γ j ). Since Γ j is icc, this implies that L(Θ)z can be unitarily conjugated into L(Γ j ), where z ∈ Z(L(Θ)) denotes the central support of r. Then we must have that j = 1. Otherwise, we would get that z ≤ r 2 , hence r ≤ r 2 , which contradicts that r ≤ q ≤ r 1 . Therefore, q ′ = zq is a non-zero projection belonging to Z(L(Θ))q such that L(Θ)q ′ can be unitarily conjugated into L(Γ 1 ). Since this statement holds for every non-zero projection q ∈ Z(L(Θ))r 1 , and L(Γ 1 ) is a II 1 factor, we deduce the existence of u ∈ U (M ) such that uL(Θ)r 1 u * ⊆ L(Γ 1 ).
We continue with the following:
Claim 3.8. There is a non-zero projection r ∈ (L(∆) ′ ∩ M )r 1 such that if e = uru * ∈ L(Γ 1 ), then ur(L(∆) ∨ (L(∆) ′ ∩ M ))ru * ⊆ eL(Γ 1 )e is a finite index inclusion of II 1 factors.
Proof of Claim 3.8. First, let us show that L(Γ 1 1 ) ≺ M L(∆)r 1 . Otherwise, since L(Γ 1 1 ) ≺ M L(∆), we would get that L(Γ 1 1 ) ≺ M L(∆)r 2 , which would imply that L(Γ 1 1 ) ≺ M L(Γ 2 ). Then Lemma 2.2 would provide g ∈ Γ such that [Γ 1 1 : Γ 1 1 ∩ gΓ 2 g −1 ] < ∞. This contradicts the fact that Γ 1 1 is non-amenable and Γ 1 1 ∩ gΓ 2 g −1 < Σ is amenable, for every g ∈ Γ. Denote e 1 = ur 1 u * ∈ L(Γ 1 ) and P := uL(∆)r 1 u * ⊆ e 1 L(Γ 1 )e 1 . By the previous paragraph we have L(Γ 1 1 ) ≺ M P . Since Γ 1 1 is non-amenable and Σ is amenable, we have L(Γ 1 1 ) ⊀ M L(Σ). Thus, applying [IPP05, Theorem 1.1] gives that L(Γ 1 1 ) ≺ L(Γ 1 ) P . By [DHI16, Lemma 2.4(4)] we get that there is a non-zero projection e 2 ∈ Z(P ′ ∩ e 1 L(Γ 1 )e 1 ) such that L(Γ 1 1 ) ≺ L(Γ 1 ) P f , for any non-zero projection f ∈ P ′ ∩ e 1 L(Γ 1 )e 1 with f ≤ e 2 . Since L(Γ 2 1 ) = L(Γ 1 1 ) ′ ∩ L(Γ 1 ), by applying [Va07, Lemma 3.5] it follows that (P ′ ∩ e 1 L(Γ 1 )e 1 )e 2 ≺ s L(Γ 1 ) L(Γ 2 1 ).
Next, let us show that P e 2 ≺ s L(Γ 1 ) L(Γ 1 1 ). Otherwise, we can find a non-zero projection f ∈ Z(P ′ ∩ e 1 L(Γ 1 )e 1 ) with f ≤ e 2 such that P f ⊀ L(Γ 1 ) L(Γ 1 1 ). On the other hand, the commutant of P f in f L(Γ 1 )f contains uL(C Λ (∆))r 1 u * , and thus has no amenable direct summand. By applying [BO08, Theorem 15.1.5], we derive that P f ≺ s L(Γ 1 ) L(Γ 2 1 ). Since L(Γ 1 1 ) ≺ L(Γ 1 ) P f , by using [Va07, Lemma 3.7], we get that L(Γ 1 1 ) ≺ L(Γ 1 ) L(Γ 2 1 ), which is false. Since P e 2 ≺ s L(Γ 1 ) L(Γ 1 1 ) and (P ′ ∩ e 1 L(Γ 1 )e 1 )e 2 ≺ s L(Γ 1 ) L(Γ 2 1 ), we get Z(P )e 2 ≺ s L(Γ 1 ) L(Γ 1 1 ) and Z(P )e 2 ≺ s L(Γ 1 ) L(Γ 2 1 ). By using [DHI16, Lemma 2.8(2)], this implies that Z(P )e 2 is completely atomic. Using again that P e 2 ≺ s L(Γ 1 ) L(Γ 1 1 ) and [Va07, Lemma 3.7], we derive that L(Γ 2 1 ) ≺ L(Γ 1 ) (P ′ ∩ e 1 L(Γ 1 )e 1 )f , for any non-zero projection f ∈ P ′ ∩ e 1 L(Γ 1 )e 1 satisfying f ≤ e 2 . In combination with the fact that (P ′ ∩ e 1 L(Γ 1 )e 1 )e 2 ≺ s L(Γ 1 ) L(Γ 2 1 ), we similarly get that Z(P ′ ∩ e 1 L(Γ 1 )e 1 )e 2 is completely atomic.
In conclusion, both Z(P )e 2 and Z(P ′ ∩ e 1 L(Γ 1 )e 1 )e 2 are completely atomic. This implies the existence of a non-zero projection e 3 ∈ P ′ ∩ e 1 L(Γ 1 )e 1 with e 3 ≤ e 2 such that both P e 3 and e 3 (P ′ ∩ e 1 L(Γ 1 )e 1 )e 3 are II 1 factors. Since P e 3 ≺ L(Γ 1 ) L(Γ 1 1 ), [OP03, Proposition 12] then gives a decomposition e 3 L(Γ 1 )e 3 = L(Γ 1 1 ) t 1 ⊗ L(Γ 2 1 ) t 2 , for some t 1 , t 2 > 0 with t 1 t 2 = τ(e 3 ), and a unitary element w ∈ e 3 L(Γ 1 )e 3 such that wP e 3 w * ⊆ L(Γ 1 1 ) t 1 . Since e 3 ≤ e 2 , we get that L(Γ 1 1 ) t 1 ≺ e 3 L(Γ 1 )e 3 P e 3 . This gives that L(Γ 1 1 ) t 1 ≺ L(Γ 1 1 ) t 1 wP e 3 w * . From this we get that there is a non-zero projection e 4 ∈ (wP e 3 w * ) ′ ∩ L(Γ 1 1 ) t 1 such that the inclusion of II 1 factors (wP e 3 w * )e 4 ⊆ e 4 L(Γ 1 1 ) t 1 e 4 has finite index. If we put e = w * e 4 w, then e ∈ P ′ ∩ e 1 L(Γ 1 )e 1 , e ≤ e 3 , and (3.4) wP ew * ⊆ e 4 L(Γ 1 1 ) t 1 e 4 has finite index. Since w(e(P ′ ∩ e 1 L(Γ 1 )e 1 )e)w * = (wP ew * ) ′ ∩ (e 4 ⊗ 1)(e 3 L(Γ 1 )e 3 )(e 4 ⊗ 1) contains e 4 ⊗ L(Γ 2 1 ) t 2 , we derive that we(P ∨ (P ′ ∩ e 1 L(Γ 1 )e 1 ))ew * contains wP ew * ⊗ L(Γ 2 1 ) t 2 . By using (3.4), we get that the inclusion of II 1 factors we(P ∨ (P ′ ∩ e 1 L(Γ 1 )e 1 ))ew * ⊆ (e 4 ⊗ 1)(e 3 L(Γ 1 )e 3 )(e 4 ⊗ 1) has finite index. Thus, the inclusion e(P ∨ (P ′ ∩ e 1 L(Γ 1 )e 1 ))e ⊆ eL(Γ 1 )e has finite index, which implies that r = u * eu satisfies the claim.
Since L(∆) ∨ (L(∆) ′ ∩ M ) ⊆ L(∆Ω) we get that L(∆Ω) ′ ∩ M ⊆ Z(L(∆) ∨ (L(∆) ′ ∩ M )
). Thus, Claim 3.8 implies that rL(∆Ω)r is a II 1 factor and the inclusion urL(∆Ω)ru * ⊆ eL(Γ 1 )e has finite index. By using [PP86, Proposition 1.3] this entails that
eL(Γ 1 )e ⊆ W * (qN^{(1)}_{urM ru *}(urL(∆Ω)ru * )). Since urL(∆Ω)ru * ⊆ eL(Γ 1 )e has no amenable direct summand and Σ is amenable, by Corollary 2.10(1) we also get the reverse inclusion. In combination with Lemma 2.5, we deduce that urL(Θ)ru * = eL(Γ 1 )e. Since uL(Θ)r 1 u * ⊆ L(Γ 1 ), urL(Θ)ru * = eL(Γ 1 )e, and L(Γ 1 ) is a II 1 factor, by Lemma 2.6 we deduce that uL(Θ)r 1 u * = ur 1 u * L(Γ 1 )ur 1 u * . This finishes the proof of the theorem.
Proof of Theorem 3.2. By combining Theorems 3.3 and 3.6, for every i ∈ 1, 2, we can find a subgroup Θ i < Λ, a unitary element u i ∈ M , and a non-zero projection r i ∈ Z(L(Θ i )) such that
Comm^{(1)}_Λ(Θ i ) = Θ i , q i := u i r i u * i ∈ L(Γ i ), and

(3.5) u 1 L(Θ 1 )r 1 u * 1 = q 1 L(Γ 1 )q 1 , u 1 L(Θ 1 )(1 − r 1 )u * 1 ⊆ L(Γ 2 ), u 2 L(Θ 2 )(1 − r 2 )u * 2 ⊆ L(Γ 1 ), u 2 L(Θ 2 )r 2 u * 2 = q 2 L(Γ 2 )q 2 .
If r 1 = r 2 = 1, then conclusion (1) holds. Therefore, in order to complete the proof, it suffices to prove that if either r 1 ≠ 1 or r 2 ≠ 1, then conclusion (2) holds. Due to symmetry, we can further reduce to the case when r 1 ≠ 1.
Since r 1 ≠ 1, we can find a non-zero projection r ∈ L(Θ 1 )(1 − r 1 ) such that τ(r) ≤ τ(r 2 ). Since L(Γ 2 ) is a II 1 factor, (3.5) implies that we can find v ∈ U (M ) such that vL(Θ 1 )rv * ⊆ L(Θ 2 )r 2 . Thus, L(Θ 1 ) ≺ M L(Θ 2 ). By applying Lemma 2.2, we deduce the existence of h ∈ Λ such that [Θ 1 : Θ 1 ∩ hΘ 2 h −1 ] < ∞. Therefore, after replacing Θ 2 with hΘ 2 h −1 , we may assume that in addition to (3.5) we also have that [Θ 1 : Θ] < ∞, where Θ := Θ 1 ∩ Θ 2 . In particular, since Θ 1 is non-amenable, Θ is non-amenable.
Next, we claim that r 1 r 2 = (1 − r 1 )(1 − r 2 ) = 0. Otherwise, by using (3.5) and applying Lemma 2.4, it follows that we can find g ∈ Γ such that L(Θ) ≺ M L(Γ 1 ∩ gΓ 2 g −1 ). Since Γ 1 ∩ gΓ 2 g −1 ⊆ Σ, Σ is amenable, and Θ is non-amenable, this leads to a contradiction. Now, the claim implies that r 1 + r 2 = 1. Thus, by (3.5) we have that u 1 L(Θ)r 1 u * 1 ⊆ L(Γ 1 ) and u 2 L(Θ)r 1 u * 2 ⊆ L(Γ 1 ). Since Θ is non-amenable and Σ is amenable, L(Θ) ⊀ M L(Σ). By Lemma 2.10(1), we get that u 1 r 1 u * 2 ∈ L(Γ 1 ). In combination with (3.5), this implies that u 1 L(Θ 2 )r 1 u * 1 ⊆ L(Γ 1 ), hence u 1 L(Θ 2 )r 1 u * 1 ⊆ q 1 L(Γ 1 )q 1 . Similarly, we get that u 1 r 2 u * 2 ∈ L(Γ 2 ). Hence, if we put q̃ 2 := u 1 r 2 u * 1 , then q̃ 2 ∈ L(Γ 2 ) and (3.5) gives that u 1 L(Θ 2 )r 2 u * 1 = q̃ 2 L(Γ 2 )q̃ 2 . Since u 1 L(Θ 2 )r 1 u * 1 ⊆ q 1 L(Γ 1 )q 1 and u 1 L(Θ 1 )r 1 u * 1 = q 1 L(Γ 1 )q 1 , we get that L(Θ 2 )r 1 ⊆ L(Θ 1 )r 1 . This implies that v h r 1 = 0, for all h ∈ Θ 2 \ Θ 1 . Since v h r 1 ≠ 0, for every h ∈ Λ, we conclude that Θ 2 ⊆ Θ 1 and so Θ = Θ 2 . In particular, the inclusion Θ 2 < Θ 1 has finite index and therefore Θ 1 = Comm^{(1)}_Λ(Θ 1 ) = Comm^{(1)}_Λ(Θ 2 ) = Θ 2 .
It follows that Θ = Θ_1 = Θ_2 satisfies (2).

In the next section, we will prove that if Σ is icc and has trivial one-sided commensurator in Γ_1 and Γ_2, then condition (2) from Theorem 3.2 can be ruled out (see Proposition 4.1). Here, we point out another general situation in which this is the case.
Corollary 3.9. Let Γ = Γ 1 * Σ Γ 2 be as in Assumption 3.1, and put M = L(Γ). Assume additionally that either Γ 1 has property (T) and Γ 2 does not, or that Γ 1 has Haagerup's property and Γ 2 does not. Let Λ be an arbitrary group such that M = L(Λ).
Then for every i ∈ {1, 2}, there exist a subgroup Λ_i < Λ and a unitary u_i ∈ U(M) such that u_iL(Λ_i)u_i^* = L(Γ_i).
Proof. Using the assumptions made on Γ_1 and Γ_2, Proposition 2.13 guarantees that condition (2) from Theorem 3.2 does not hold. The conclusion now follows from Theorem 3.2.
Proof of Theorem A
This section is devoted to the proof of Theorem A, whose setup we now recall. Let M = L(Γ), where Γ = Γ 1 * Σ Γ 2 is an amalgamated free product group satisfying the following conditions:
(1) Σ is an icc amenable group and Comm^{(1)}_{Γ_i}(Σ) = Σ, for every i ∈ {1, 2}.
(2) Γ_i = Γ^1_i × Γ^2_i, where Γ^j_i is an icc, non-amenable, bi-exact group, for every i, j ∈ {1, 2}.
In order to derive Theorem A, we will need the following result (Proposition 4.1), whose proof we postpone until the end of this section: if Λ is a countable group such that M = L(Λ), then there do not exist a subgroup Θ < Λ, non-zero projections r_1, r_2 ∈ Z(L(Θ)), and a unitary u ∈ M, such that r_1 + r_2 = 1, q_i := ur_iu^* ∈ L(Γ_i) and uL(Θ)r_iu^* = q_iL(Γ_i)q_i, for every i ∈ {1, 2}.
Proof of Theorem A. Let Λ be a group such that M = L(Λ). Denote by {u_g}_{g∈Γ} and {v_h}_{h∈Λ} the canonical unitaries generating M. Theorem 3.2 implies that either condition (1) or (2) from its conclusion holds. By Proposition 4.1, condition (2) cannot hold. Thus, we deduce that for every i ∈ {1, 2}, we can find a subgroup Θ_i < Λ and v_i ∈ U(M) such that v_iL(Θ_i)v_i^* = L(Γ_i). In particular, we get that L(Σ) = L(Γ_1) ∩ L(Γ_2) = v_1L(Θ_1)v_1^* ∩ v_2L(Θ_2)v_2^*. Lemma 2.4 implies that L(Σ) ≺_M L(Θ_1 ∩ hΘ_2h^{−1}), for some h ∈ Λ. Define Λ_1 = Θ_1, Λ_2 = hΘ_2h^{−1}, and ∆ = Λ_1 ∩ Λ_2. Letting u_1 = v_1 and u_2 = v_2v_h^*, we have that
(4.1) u_iL(Λ_i)u_i^* = L(Γ_i)
, for every i ∈ {1, 2}. In particular, we get that L(∆) ⊆ u_1^*L(Γ_1)u_1 ∩ u_2^*L(Γ_2)u_2. Since Γ_1 ∩ gΓ_2g^{−1} ⊆ Σ, for all g ∈ Γ, by applying Lemma 2.4 we conclude that
(4.2) L(∆) ≺^s_M L(Σ).
We claim that for every i ∈ {1, 2}, g ∈ Γ \ Σ, and h ∈ Γ \ Γ_i we have that
(4.3) L(∆)z ⊀_M L(Σ ∩ gΣg^{−1}) and L(∆)z ⊀_M L(Γ_i ∩ hΓ_ih^{−1}).
Indeed, if L(∆)z ≺_M L(Σ ∩ gΣg^{−1}) (respectively, if L(∆)z ≺_M L(Γ_i ∩ gΓ_ig^{−1})
), for some g ∈ Γ, then by [DHI16, Lemma 2.4 (3)] we can find a non-zero projection q ′ ∈ (L(∆) ′ ∩ M )z such that
L(∆)q ′ ≺ s M L(Σ ∩ gΣg −1 ) (resp., L(∆)q ′ ≺ s M L(Γ i ∩ gΓ i g −1 )
). On the other hand, since q′ ≤ z, we have that L(Σ) ≺_M L(∆)q′. By [Va07, Lemma 3.7] we get that L(Σ) ≺_M L(Σ ∩ gΣg^{−1}) (resp., L(Σ) ≺_M L(Γ_i ∩ gΓ_ig^{−1})). Lemma 2.11(2) gives that g ∈ Σ (resp., g ∈ Γ_i), which proves (4.3).
Claim 4.2. We can find u ∈ U (M ) such that uL(∆)zu * ⊆ L(Σ).
Proof of Claim 4.2. Let q ′ ∈ (L(∆) ′ ∩ M )z be a non-zero projection. Since L(∆)q ′ ≺ M L(Σ), we can find projections q ∈ L(∆), r ∈ L(Σ), a non-zero isometry w ∈ rM qq ′ and a * -homomorphism ϕ : qL(∆)qq ′ → rL(Σ)r satisfying ϕ(x)w = wx, for all x ∈ qL(∆)qq ′ . Moreover, we may assume that r is equal to the support projection of E L(Σ) (ww * ). Put P := ϕ(qL(∆)qq ′ ) ⊆ rL(Σ)r.
Note that P ⊀ L(Σ) L(Σ ∩ gΣg −1 ), for all g ∈ Γ \ Σ. Otherwise, [IPP05, Lemma 1.12] would imply that qL(∆)qq ′ ≺ M L(Σ ∩ gΣg −1 ), which contradicts (4.3). By applying Lemma 2.7 we deduce that P ′ ∩ rL(Σ)r ⊆ L(Σ), hence ww * ∈ L(Σ). Since w * w ∈ q ′ (L(∆) ′ ∩ M )q ′ q, we can find a projection q 0 ∈ q ′ (L(∆) ′ ∩ M )q ′ such that w * w = qq 0 . Thus, w(qL(∆)qq 0 )w * ⊆ L(Σ). Let z 0 be the central support of q in L(∆). Since L(Σ) is a II 1 factor, it follows that there is η ∈ U (M ) such that ηL(∆)z 0 q 0 η * ⊆ L(Σ).
Thus, for any non-zero projection q ′ ∈ (L(∆) ′ ∩ M )z, we found a non-zero projection q ′′ in q ′ (L(∆) ′ ∩ M )q ′ such that L(∆)q ′′ can be unitarily conjugated into L(Σ). Since L(Σ) is a II 1 factor, the claim follows by a maximality argument (see the proof of [IPP05, Theorem 5.1]).
Next, we denote by Ω < Λ the subgroup generated by Comm^{(1)}_Λ(∆), and prove the following:
Claim 4.3. We have that uzL(Ω)zu^* ⊆ L(Σ) and there is a non-zero projection z′ ∈ zL(Ω)z such that uz′L(Ω)z′u^* = pL(Σ)p, where p = uz′u^*.
Proof of Claim 4.3. Put e := uzu^* ∈ L(Σ). First, since L(∆)z ⊀_M L(Σ ∩ gΣg^{−1}), for every g ∈ Γ \ Σ, by Lemma 2.7 we deduce that qN^{(1)}_{eMe}(uL(∆)zu^*) ⊆ eL(Σ)e.
Second, put Q := uL(∆)zu^* ⊆ eL(Σ)e. Since L(Σ) is a II_1 factor and L(Σ) ≺_M L(∆)z, we get that eL(Σ)e ≺_M Q. Thus, we can find projections p ∈ eL(Σ)e, q ∈ Q, a non-zero partial isometry v ∈ qMp, and a *-homomorphism θ : pL(Σ)p → qQq such that θ(x)v = vx, for all x ∈ pL(Σ)p.
Since qQq ⊆ L(Σ), it follows that v ∈ W^*(qN^{(1)}_M(L(Σ))). Since Comm^{(1)}_{Γ_i}(Σ) = Σ, for all i ∈ {1, 2}, combining Corollary 2.9 and Lemma 2.11(1) yields that v ∈ L(Σ). Thus, after shrinking p, we may assume that v^*v = p. Moreover, since L(Σ) is a II_1 factor and Q ⊆ eL(Σ)e is diffuse, we may assume that p ∈ Q. By combining the last two inclusions we deduce that pL(Σ)p ⊆ p(uzL(Ω)zu^*)p. Therefore, if z′ ∈ L(∆)z ⊆ zL(Ω)z is such that p = uz′u^*, then pL(Σ)p ⊆ uz′L(Ω)z′u^*. Since the reverse inclusion also holds, the second assertion of the claim follows.
Now, if x ∈ pL(Σ)p, then vx(pQp) ⊆ vpL(Σ)p ⊆ (qQq)v. This implies that vx ∈ W^*(qN^{(1)}_{eMe}(Q))
Before finishing the proof, we need one final claim: Since L(Σ) is a II 1 factor and uz ′ L(Ω)z ′ u * = pL(Σ)p, we can apply Lemma 2.6 and deduce the existence of w ∈ U (M ) such that wL(Ω)w * = L(Σ).
We are now ready to finish the proof. Let q ′ ∈ L(∆) ′ ∩M be a non-zero projection. Then q ′ ∈ L(Ω) and since [Ω : ∆] < ∞ we have that q ′ L(Ω)q ′ ≺ s M L(∆)q ′ . Moreover, L(Σ) ≺ M q ′ L(Ω)q ′ by Claim 4.3, hence [Va07, Lemma 3.7] allows us to conclude that L(Σ) ≺ M L(∆)q ′ . Since this holds for any non-zero projection q ′ ∈ L(∆) ′ ∩ M , the maximality property of z implies that z = 1.
By Claim 4.3 we get that Q := uL(∆)u^* ⊆ L(Σ). Let i ∈ {1, 2}. Since z = 1, (4.3) implies that L(∆) ⊀_M L(Γ_i ∩ gΓ_ig^{−1}), for all g ∈ Γ \ Γ_i. Since u_iu^*Q = u_iL(∆)u^* ⊆ u_iL(Λ_i)u^* = L(Γ_i)u_iu^* by equation (4.1), Lemma 2.7 gives that u_iu^* ∈ L(Γ_i). Thus, we get that
u^*L(Γ_i)u = u^*(uu_i^*)L(Γ_i)(uu_i^*)^*u = u_i^*L(Γ_i)u_i = L(Λ_i).
Therefore, L(∆) = L(Λ_1) ∩ L(Λ_2) = u^*(L(Γ_1) ∩ L(Γ_2))u = u^*L(Σ)u.
This finishes the proof.
Proof of Proposition 4.1. Assume by contradiction that the conclusion of Proposition 4.1 is false. After replacing Λ with uΛu * , we find a group Λ satisfying M = L(Λ), a subgroup Θ < Λ and non-zero projections r 1 , r 2 ∈ Z(L(Θ)) ∩ L(Σ) such that r 1 + r 2 = 1 and
(4.4) L(Θ)r i = r i L(Γ i )r i , for all i ∈ 1, 2.
Since L(Σ) is a II_1 factor, there is a non-zero partial isometry v ∈ L(Σ) such that vv^* ≤ r_1 and v^*v ≤ r_2. Then vv^*L(Σ)vv^* ⊆ r_1L(Σ)r_1 ∩ vr_2L(Σ)r_2v^* ⊆ L(Θ) ∩ vL(Θ)v^*. The moreover assertion of Lemma 2.4 implies that there exists h ∈ Λ such that L(Σ) ≺_M L(Θ ∩ hΘh^{−1}) and τ(vv_h^*) ≠ 0. Moreover, since v = r_1vr_2, we get that E_{L(Θ)}(v) = r_1E_{L(Θ)}(v)r_2 = r_1r_2E_{L(Θ)}(v) = 0, hence h ∈ Λ \ Θ. Thus, if we put ∆ := Θ ∩ hΘh^{−1}, then
(4.5) L(Σ) ≺_M L(∆).
We claim that (4.6) L(∆) ≺ s M L(Σ). For this, let p ∈ Z((L(∆)r 1 ) ′ ∩ r 1 L(Γ 1 )r 1 ) be the largest projection such that L(∆)p ⊀ L(Γ 1 ) L(Σ).
First, since we have L(∆)p(v h r 1 ) = pL(∆)(v h r 1 ) = pv h L(h −1 ∆h)r 1 ⊆ pv h L(Θ)r 1 ⊆ pv h L(Γ 1 ),
Corollary 2.10 allows us to conclude that pv_hr_1 ∈ r_1L(Γ_1)r_1. In particular, pv_hr_1 ∈ L(Θ) and since r_1, p ∈ L(Θ) while h ∈ Λ \ Θ, we get that pv_hr_1 = pE_{L(Θ)}(v_h)r_1 = 0. Secondly, since L(∆)p(v_hr_2) = pL(∆)(v_hr_2) = pv_hL(h^{−1}∆h)r_2 ⊆ pv_hL(Θ)r_2 ⊆ pv_hL(Γ_2), a similar argument (using [IPP05, Theorem 1.1]) gives that pv_hr_2 = 0. Since r_1 + r_2 = 1, the last paragraph gives that p = 0. This implies that L(∆)r_1 ≺^s_{L(Γ_1)} L(Σ). Similarly, it follows that L(∆)r_2 ≺^s_{L(Γ_2)} L(Σ). These together prove (4.6).
Let Ω < Λ be the subgroup generated by Comm^{(1)}_Λ(∆). In the proof of Theorem A, we showed that if ∆ < Λ satisfies conditions (4.5) and (4.6), then [Ω : ∆] < ∞ and wL(Ω)w^* = L(Σ), for some w ∈ U(M) (see Claim 4.4). In particular, Ω is icc.
Put Q := wL(∆)w^* ⊆ L(Σ). Since ∆ ⊆ Θ, we have r_iw^*Q = r_iL(∆)w^* ⊆ r_iL(Γ_i)r_iw^*, for all i ∈ {1, 2}. Note that Q ⊀_M L(Γ_i ∩ gΓ_ig^{−1}), for all g ∈ Γ_i \ Σ. Otherwise, since [Ω : ∆] < ∞ we have that L(Σ) ≺_M Qq, for any non-zero projection q ∈ Q′ ∩ M, and [Va07, Lemma 3.7] would imply that L(Σ) ≺_M L(Γ_i ∩ gΓ_ig^{−1}). This however contradicts Lemma 2.11(3).
We can therefore apply Lemma 2.7 to derive that r i w * ∈ L(Γ i ). Let p i = wr i w * ∈ P(L(Γ i )). Then p 1 , p 2 are non-zero and since p 1 + p 2 = 1 we get that p 1 , p 2 ∈ L(Σ). Moreover, we have that wL(Θ)w * = w(L(Θ)r 1 ⊕ L(Θ)r 2 )w * = wr 1 L(Γ 1 )r 1 w * ⊕ wr 2 L(Γ 2 )r 2 w * = p 1 L(Γ 1 )p 1 ⊕ p 2 L(Γ 2 )p 2 .
From this we deduce that wL(Ω ∩ Θ)w * = wL(Ω)w * ∩ wL(Θ)w * = L(Σ) ∩ (p 1 L(Γ 1 )p 1 ⊕ p 2 L(Γ 2 )p 2 ) = p 1 L(Σ)p 1 ⊕ p 2 L(Σ)p 2 .
In particular, since p_1, p_2 are non-zero, we conclude that L(Ω ∩ Θ) is not a factor and therefore Ω ∩ Θ is not icc. On the other hand, since [Ω : ∆] < ∞ and ∆ ⊆ Ω ∩ Θ, we get that [Ω : Ω ∩ Θ] < ∞. This altogether contradicts the fact that Ω is icc.
Proof of Corollary B
In this section, we prove Corollary B. Its proof relies on Theorem A and the following result:
Theorem 5.1. Let Γ_1, Γ_2 be icc, non-amenable, bi-exact groups. Put Γ = Γ_1 × Γ_2 and M = L(Γ). Let Σ be an icc group and π_i : Σ → Γ_i an injective homomorphism such that {π_i(g)hπ_i(g)^{−1} | g ∈ Σ} is infinite, for all h ∈ Γ_i \ {e} and i ∈ {1, 2}. We identify Σ with {(π_1(g), π_2(g)) | g ∈ Σ} < Γ.
Let ∆ < Λ be countable groups such that M = L(Λ) and L(Σ) = L(∆).
Then we can find a decomposition Λ = Λ 1 × Λ 2 and a unitary u ∈ M such that TΣ = uT∆u * and L(Γ i ) = uL(Λ i )u * , for all i ∈ 1, 2.
Recall from [Io10, Section 4] that the height of an element x ∈ L(Λ) is defined as
h_Λ(x) = max_{h∈Λ} |τ(xv_h^*)|.
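Unpacking the definition (a standard reformulation in terms of Fourier coefficients, not a verbatim quotation from [Io10]): writing x in its Fourier expansion with respect to the canonical unitaries {v_h}_{h∈Λ},

```latex
x = \sum_{h \in \Lambda} a_h v_h, \qquad a_h = \tau(x v_h^*), \qquad
h_\Lambda(x) = \max_{h \in \Lambda} |a_h|,
```

so h_Λ(x) is the largest absolute Fourier coefficient of x over Λ. In particular, h_Λ(v_k) = 1 for every k ∈ Λ, while, for instance, if Λ happens to contain an element k of order 2, then u = (1 + iv_k)/√2 is a unitary with h_Λ(u) = 1/√2 < 1.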
In the proof of Theorem 5.1, we will make crucial use of [IPV10, Theorem 3.1]. This asserts that if Γ is any countable group such that L(Γ) = L(Λ) and
inf g∈Γ h Λ (u g ) > 0,
then there is a unitary u ∈ L(Γ) = L(Λ) such that TΓ = uTΛu * .
Proof of Theorem 5.1. By [CdSS15, Corollary B] we can find a decomposition Λ = Λ_1 × Λ_2, where Λ_1, Λ_2 are icc groups, t_1, t_2 > 0 with t_1t_2 = 1, and x ∈ U(M) such that L(Λ_1) = xL(Γ_1)^{t_1}x^* and L(Λ_2) = xL(Γ_2)^{t_2}x^*. Let d ≥ max{t_1, t_2} be an integer. For i ∈ {1, 2}, let p_i ∈ M_d(L(Λ_i)) be a projection with (τ ⊗ Tr)(p_i) = t_i, where Tr denotes the non-normalized trace on M_d(C). Then the above implies that we can find a partial isometry v ∈ M_d(L(Λ_1))⊗M_d(L(Λ_2)) such that vv^* = e_{1,1} ⊗ e_{1,1}, v^*v = p_1 ⊗ p_2, where e_{1,1} ∈ M_d(C) denotes the elementary matrix corresponding to the (1, 1)-entry, and if we identify L(Λ_i) ≡ e_{1,1}M_d(L(Λ_i))e_{1,1} in the natural way, then
(5.1) L(Λ_1) ⊗ e_{1,1} = v(p_1M_d(L(Γ_1))p_1 ⊗ p_2)v^* and e_{1,1} ⊗ L(Λ_2) = v(p_1 ⊗ p_2M_d(L(Γ_2))p_2)v^*.
Let ρ_i be the restriction to ∆ of the projection Λ → Λ_i. We claim that ρ_i is one-to-one, for all i ∈ {1, 2}. We only treat the case i = 1, since the case i = 2 is similar. To this end, let Ω = ker(ρ_1). Since ∆ is icc, in order to show that Ω = {e}, it suffices to prove that Ω is finite. Assume that Ω is infinite, and let h_n ∈ Ω be a sequence satisfying h_n → ∞. Since v_{h_n} ∈ 1 ⊗ L(Λ_2), (5.1) implies the existence of a finite set T ⊆ Γ_1 such that ‖v_{h_n} − e(v_{h_n})‖_2 ≤ 1/2, for all n ≥ 1, where e denotes the orthogonal projection from ℓ²(Γ) = ℓ²(Γ_1) ⊗ ℓ²(Γ_2) onto the closed linear span of {u_g ⊗ L(Γ_2) | g ∈ T}. On the other hand, v_{h_n} ∈ L(∆) = L(Σ), for all n ≥ 1. Thus, we get that
‖e(v_{h_n})‖_2² = Σ_{g∈π_1^{−1}(T)} |τ(v_{h_n}u_g^*)|², for all n.
Since π_1 is one-to-one and T is finite, π_1^{−1}(T) ⊆ Σ is finite. Since v_{h_n} → 0 weakly, we conclude that ‖e(v_{h_n})‖_2 → 0, as n → ∞, which gives a contradiction and proves the claim.
We continue by establishing the following:
Claim 5.2. inf g∈Σ h ∆ (u g ) > 0.
Proof of Claim 5.2. Using (5.1), for every i ∈ {1, 2}, we can find a finite set S_i ⊆ Λ_i such that for every u_i ∈ U(L(Γ_i)), there is v_i in the linear span of {v_{h_1} ⊗ v_{h_2} | h_1 ∈ Λ_1, h_2 ∈ Λ_2, h_i ∈ S_i} satisfying ‖v_i‖ ≤ 1 and ‖u_i − v_i‖_2 ≤ 1/8.
Let g ∈ Σ. Then for every i ∈ {1, 2} we can find v_i ∈ (M)_1 such that ‖u_{π_i(g)} − v_i‖_2 ≤ 1/8 and
v_1 = Σ_{(h_1,h_2)∈Λ_1×S_2} c_{h_1,h_2}(v_{h_1} ⊗ v_{h_2}) and v_2 = Σ_{(h′_1,h′_2)∈S_1×Λ_2} d_{h′_1,h′_2}(v_{h′_1} ⊗ v_{h′_2}),
for some c_{h_1,h_2}, d_{h′_1,h′_2} ∈ C. Since u_g = u_{π_1(g)}u_{π_2(g)} ∈ L(∆), we get that ‖u_g − v_1v_2‖_2 ≤ 1/4, hence ‖u_g − E_{L(∆)}(v_1v_2)‖_2 ≤ 1/4. Write u_g = Σ_{k∈∆} a_kv_k, and notice that E_{L(∆)}(v_1v_2) = Σ_{k∈∆} b_kv_k, where
b_k = Σ_{(h_1,h_2)∈Λ_1×S_2, (h′_1,h′_2)∈S_1×Λ_2, h_1h′_1=ρ_1(k), h_2h′_2=ρ_2(k)} c_{h_1,h_2}d_{h′_1,h′_2}.
Now, fix (h_1, h_2) ∈ Λ_1 × S_2. If k ∈ ∆ is such that there is (h′_1, h′_2) ∈ S_1 × Λ_2 satisfying h_1h′_1 = ρ_1(k), then k ∈ ρ_1^{−1}(h_1S_1).
Since ρ 1 is one-to-one, there are at most |S 1 | such k ∈ ∆. Similarly, given
(h ′ 1 , h ′ 2 ) ∈ S 1 × Λ 2 , there are at most |S 2 | elements k ∈ ∆ for which there is (h 1 , h 2 ) ∈ Λ 1 × S 2 satisfying h 2 h ′ 2 = ρ 2 (k).
Using the inequality |cd| ≤ c² + d², for all c, d ∈ R, we conclude that
(5.2) Σ_{k∈∆} |b_k| ≤ |S_1| Σ_{(h_1,h_2)∈Λ_1×S_2} |c_{h_1,h_2}|² + |S_2| Σ_{(h′_1,h′_2)∈S_1×Λ_2} |d_{h′_1,h′_2}|² ≤ |S_1| + |S_2|.
Next, let T = {k ∈ ∆ | |a_k − b_k| ≤ |a_k|/2}. Then
(5.3) Σ_{k∈∆\T} |a_k|² ≤ Σ_{k∈∆} 4|a_k − b_k|² = 4‖u_g − E_{L(∆)}(v_1v_2)‖_2² ≤ 1/4.
On the other hand, if k ∈ T, then |a_k| ≤ 2|b_k|, and thus by using (5.2) we get that Σ_{k∈T} |a_k| ≤ 2(|S_1| + |S_2|). Combining this with (5.3) yields 1 = Σ_{k∈∆} |a_k|² ≤ h_∆(u_g) Σ_{k∈T} |a_k| + 1/4 ≤ 2(|S_1| + |S_2|)h_∆(u_g) + 1/4. Therefore, we have h_∆(u_g) ≥ 3/(8(|S_1| + |S_2|)) > 0, for any g ∈ Σ. This proves the claim.
We are now ready to finish the proof. First, combining Claim 5.2 and [IPV10, Theorem 3.1] allows us to deduce the existence of w ∈ L(Σ), an isomorphism δ : Σ → ∆, and a character η : Σ → T such that u_g = η(g)wv_{δ(g)}w^*, for all g ∈ Σ. Moreover, after replacing Λ with wΛw^*, we may assume that w = 1. In other words, we have
(5.5) u_{π_1(g)} ⊗ u_{π_2(g)} = u_g = η(g)v_{δ(g)} = η(g)(v_{ρ_1(δ(g))} ⊗ v_{ρ_2(δ(g))}), for all g ∈ Σ.
By equation (5.1), for every i ∈ 1, 2, we have a homomorphism σ i : Σ → U (p i M d (L(Γ i ))p i ) such that v ρ 1 (δ(g)) ⊗ e 1,1 = v(σ 1 (g) ⊗ p 2 )v * and e 1,1 ⊗ v ρ 2 (δ(g)) = v(p 1 ⊗ σ 2 (g))v * , for all g ∈ Σ. In combination with (5.5), we deduce that (5.6) u π 1 (g) ⊗ u π 2 (g) = η(g) v(σ 1 (g) ⊗ σ 2 (g))v * , for all g ∈ Σ.
For i ∈ 1, 2, we define a unitary representation α i : Σ → U (L 2 (e 1,1 M d (L(Γ i ))p i )) by letting α i (g)(ξ) = u π i (g) ξσ i (g) * . Then (5.6) implies that (α 1 (g) ⊗ α 2 (g))(v) = η(g)v, for all g ∈ Σ. Therefore, both α 1 and α 2 are not weakly mixing.
We continue by using an argument from the proof of [PS03, Lemma 2.5]. Let i ∈ {1, 2}. Since α_i is not weakly mixing, we can find a finite-dimensional α_i(Σ)-invariant subspace {0} ≠ H_i ⊆ L²(e_{1,1}M_d(L(Γ_i))p_i). Let B_i be an orthonormal basis of H_i. Then ξ = Σ_{ζ∈B_i} ζζ^* ∈ L¹(e_{1,1}M_d(L(Γ_i))e_{1,1}) = L¹(L(Γ_i)) is independent of the choice of the basis. Since {α_i(g)(ζ)}_{ζ∈B_i} is a basis of H_i, we get that ξ = Σ_{ζ∈B_i} α_i(g)(ζ)α_i(g)(ζ)^* = u_{π_i(g)}ξu_{π_i(g)}^*, for all g ∈ Σ.
Since {π i (g)hπ i (g) −1 |g ∈ Σ} is infinite, for all h ∈ Γ i \ {e}, this forces ξ ∈ C1. In particular, we derive that ζ ∈ L(Γ i ), for all ζ ∈ B i , and thus H i ⊆ L(Γ i ). Let K i ⊆ L(Γ i ) be the linear span {ζ 1 ζ * 2 |ζ 1 , ζ 2 ∈ H i }. Then K i is a finite dimensional space which is invariant under the unitary representation τ i : Σ → U (L 2 (Γ i )) given by τ i (g)(ζ) = u π i (g) ζu * π i (g) . Using again the fact that {π i (g)hπ i (g) −1 |g ∈ Σ} is infinite, for all h ∈ Γ i \ {e}, we deduce that K i ⊆ C1.
This implies the existence of a partial isometry ω_i ∈ e_{1,1}M_d(L(Γ_i))p_i such that ω_iω_i^* = e_{1,1} and H_i = Cω_i. In particular, we get that 1 = (τ ⊗ Tr)(e_{1,1}) ≤ (τ ⊗ Tr)(p_i) = t_i. Since this holds for all i ∈ {1, 2} and t_1t_2 = 1, we get that t_1 = t_2 = 1. Thus, we may assume that d = 1 and p_1 = p_2 = 1.
Let i ∈ 1, 2. Then ω i ∈ U (L(Γ i )), and since Cω i is α i (Σ)-invariant, we can find a character η i : Σ → T such that u π i (g) ω i σ i (g) * = η i (g)ω i and thus u π i (g) = η i (g)ω i σ i (g)ω * i , for all g ∈ Σ. Therefore, if we put ω = ω 1 ⊗ ω 2 ∈ L(Γ 1 )⊗L(Γ 2 ) = M , then u π i (g) = η i (g)ωσ i (g)ω * , for all g ∈ Σ. Denote u = ωv * ∈ U (M ). Since L(Λ i ) = vL(Γ i )v * , we get that uL(Λ i )u * = ωL(Γ i )ω * = L(Γ i ). Moreover, recalling that v ρ i (δ(g)) = vσ i (g)v * , we get that u π i (g) = η i (g)uv ρ i (δ(g)) u * , for all g ∈ Σ. This implies that TΣ = uT∆u * , which finishes the proof.
Before proving Corollary B, we also need the following elementary result:
Lemma 5.3. Let Γ be an icc group and put M = L(Γ). Let Σ < Γ be a subgroup such that the centralizer in Γ of any finite index subgroup of Σ ∩ gΣg −1 is trivial, for every g ∈ Γ.
If ∆ < Λ are countable groups such that M = L(Λ) and TΣ = T∆, then TΓ = TΛ.
Proof. In order to prove the lemma, it suffices to show that Γ ⊆ TΛ.
To this end, let g ∈ Γ and put u := u_g. Define ∆_0 := ∆ ∩ Tu∆u^*. Then we have that T∆_0 = T∆ ∩ Tu∆u^* = TΣ ∩ TgΣg^{−1} = T(Σ ∩ gΣg^{−1}), and there are a homomorphism δ : ∆_0 → ∆ and a character η : ∆_0 → T such that u^*v_hu = η(h)v_{δ(h)}, for all h ∈ ∆_0. Let k_1, k_2 ∈ Λ be such that τ(uv_{k_1}^*) ≠ 0 and τ(uv_{k_2}^*) ≠ 0. Then {hk_1δ(h)^{−1} | h ∈ ∆_0} and {hk_2δ(h)^{−1} | h ∈ ∆_0} are finite, and hence there is a finite index subgroup ∆_1 < ∆_0 such that hk_1δ(h)^{−1} = k_1 and hk_2δ(h)^{−1} = k_2, for all h ∈ ∆_1. From this a basic calculation shows that k := k_1k_2^{−1} commutes with ∆_1. Thus, v_k commutes with {v_h | h ∈ ∆_1} and hence with {u_g | g ∈ Σ_1}, where Σ_1 < Σ ∩ gΣg^{−1} is the finite index subgroup such that TΣ_1 = T∆_1. The assumption from the hypothesis implies that v_k ∈ C1, hence k = e and k_1 = k_2. Since this holds for every k_1, k_2 ∈ Λ in the support of u, we conclude that u ∈ TΛ.
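For completeness, the "basic calculation" showing that k = k_1k_2^{−1} commutes with ∆_1 is the following one-line computation: for every h ∈ ∆_1,

```latex
h k h^{-1} = h k_1 k_2^{-1} h^{-1}
= \big(h k_1 \delta(h)^{-1}\big)\big(h k_2 \delta(h)^{-1}\big)^{-1}
= k_1 k_2^{-1} = k.
```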
Proof of Corollary B. Let Γ 0 be an icc, non-amenable, bi-exact group, and Σ 0 < Γ 0 be an icc, amenable subgroup. Assume that (1) [Σ 0 : Σ 0 ∩ gΣ 0 g −1 ] = ∞, for every g ∈ Γ 0 \ Σ 0 , and (2) the centralizer in Γ 0 of any finite index subgroup of Σ 0 ∩ gΣ 0 g −1 is trivial, for every g ∈ Γ 0 . Note that (2) implies that the centralizer of any finite index subgroup of Σ 0 in Γ 0 is trivial, or, equivalently, that {ghg −1 |g ∈ Σ 0 } is infinite, for all h ∈ Γ 0 \ {e}.
Put Γ j i = Γ 0 and Γ i = Γ 1 i × Γ 2 i , for all i, j ∈ 1, 2. Let Σ = {(g, g)|g ∈ Σ 0 } < Γ 1 ∩ Γ 2 . Define Γ = Γ 1 * Σ Γ 2 and M = L(Γ). Let Λ be a countable group such that M = L(Λ).
conclude that ρ(u h ) = 0, for all h ∈ ∆ \ {e}. Thus, ρ(u g ) = ρ(u aga −1 ) = 0, which proves the claim. Note that one can alternatively prove the claim by showing that the amenable radical of Γ is trivial and applying [BKKO14, Theorem 1.3].
Finally, the claim implies that ρ is the restriction of the canonical trace of L(Γ) to C * r (Γ). Thus, θ is trace preserving and hence it extends to a * -isomorphism θ : L(Γ) → L(Λ). The conclusion now follows from Corollary B.
2.2. Popa's intertwining technique. In the early 2000s, S. Popa introduced in [Po03, Theorem 2.1 and Corollary 2.3] the following powerful criterion for the existence of intertwiners between arbitrary subalgebras of tracial von Neumann algebras.
Theorem 2.1 ([Po03]). Let (M, τ) be a separable tracial von Neumann algebra and let P, Q ⊆ M be (not necessarily unital) von Neumann subalgebras. Then the following are equivalent:
By the proof of [IPP05, Theorem 4.3] (see also [DHI16, Remark 2.3]) this implies the conclusion.
Lemma 2.5 ([Po03, FGS10]). Let P ⊆ M be an inclusion of tracial von Neumann algebras.
(1) Γ has property (T) if and only if pL(Γ)p does.
(2) Γ has Haagerup's property if and only if pL(Γ)p does.
Proof. (1) Assume that Γ has property (T). Then [Po01, Proposition 5.1] implies that L(Γ) has property (T), and [Po01, Proposition 4.7 (2)] further gives that pL(Γ)p has property (T).
Theorem 3.3. Let Γ = Γ_1 *_Σ Γ_2 be as in Assumption 3.1. Put M = L(Γ), and let Λ be an arbitrary group such that M = L(Λ).
Claim 3.4. For every i, j ∈ {1, 2}, there exist m, n ∈ {1, 2} such that △(L(Γ^j_i)) ≺_{M⊗M} M⊗L(Γ^n_m).
Proof of Claim 3.4. Let i ∈ {1, 2}, and denote P = L(Γ^1_i), Q = L(Γ^2_i). Then △(P) and △(Q) are commuting non-amenable subalgebras of M⊗M = L(Γ × Γ). On the other hand, by [BO08, Lemma 15.3.3 and Proposition 15.3.12], Γ × Γ is bi-exact relative to the family G = {Γ × Γ_m | m ∈ {1, 2}} ∪ {Γ_m × Γ | m ∈ {1, 2}}.
by applying [BO08, Theorem 15.1.5] we get that △(P )e ≺ M⊗M L(H), for some H ∈ H.
[BO08, Theorem 15.1.5] implies that △(P)f ≺_{M⊗M} L(K) and hence △(P) ≺_{M⊗M} L(K), for some K ∈ K. Since the flip automorphism of M⊗M acts identically on △(M), this concludes the proof of the claim.
We are now in position to apply the ultrapower technique from [Io11], which we recall in the following form. This result is essentially contained in the proof of [Io11, Theorem 3.1]. Stated as such, it is a particular case of [DHI16, Theorem 4.1].
Theorem 3.5 ([Io11]). Let Λ be a countable group, M = L(Λ), and {v_h}_{h∈Λ} the canonical unitaries generating M. Let △ : M → M⊗M be the *-homomorphism given by △(v_h) = v_h ⊗ v_h, for all h ∈ Λ. Let A, B ⊆ M be von Neumann subalgebras such that △(A) ≺ M⊗B.
qN^{(1)}_M(L(∆Ω))), by using [Po03, Lemma 3.5] and Lemma 2.5, we get rL(∆Ω)r ⊂ qN_{rMr}(L(∆)r)″ and rL(Θ)r ⊆ W^*(qN^{(1)}_{rMr}(urL(∆Ω)ru^*)) = ur W^*(qN^{(1)}_M(L(∆Ω))) ru^* = urL(Θ)ru^*. This implies in particular that rL(∆Ω)r ⊆ rL(Θ)r is a finite index inclusion of II_1 factors. By [PP86, Proposition 1.3] it follows that L(Θ) ≺_{L(Θ)} L(∆Ω). By Lemma 2.2 we get that ∆Ω < Θ has finite index. In particular, Comm
Proposition 4.1. Let Λ be a countable group such that M = L(Λ).
Since L(Σ) ≺ M L(∆), by [DHI16, Lemma 2.4(4)], there is a non-zero projection z ∈ Z(L(∆) ′ ∩M ) such that L(Σ) ≺ M L(∆)q ′ , for any non-zero projection q ′ ∈ (L(∆) ′ ∩ M )z. We may moreover assume that z is the largest projection belonging to Z(L(∆) ′ ∩ M ) with this property.
qN^{(1)}_{eMe}(uL(∆)zu^*) ⊆ eL(Σ)e. On the other hand, the combination of Lemma 2.5 and Corollary 2.9 yields that uzL(Ω)zu^* = W^*(qN^{(1)}_{eMe}(uL(∆)zu^*)). Putting together the last two facts, we deduce that indeed uzL(Ω)zu^* ⊆ L(Σ).
(see the proofs of [Po03, Lemma 3.5] or Lemma 2.6). Thus, pL(Σ)p ⊆ W^*(qN^{(1)}_{eMe}(Q)). On the other hand, the moreover assertion of Lemma 2.6 and Lemma 2.8 imply that qN^{(1)}_{eMe}(Q) = uz qN^{(1)}_M(L(∆)) zu^* ⊆ uzL(Ω)zu^*.
Claim 4.4. [Ω : ∆] < ∞ and there is w ∈ U(M) such that wL(Ω)w^* = L(Σ).
Proof of Claim 4.4. Note that vuzL(Ω)zu^* ⊆ vpL(Σ)p = θ(pL(Σ)p)v ⊆ uL(∆)zu^*v. Thus, if ξ = u^*vu, then ξ ∈ zMz and ξzL(Ω)z ⊆ L(∆)zξ. In particular, ξL(∆) ⊆ L(∆)ξ. Corollary 2.9 implies that ξ ∈ zL(Ω)z. Therefore, L(Ω) ≺_{L(Ω)} L(∆), and Lemma 2.2 gives that [Ω : ∆] < ∞. Since [Ω : ∆] < ∞, Comm_M(L(Ω)) = L(Ω). Moreover, since L(∆) ≺^s_M L(Σ) by (4.2), by using again that [Ω : ∆] < ∞, we get that L(Ω) ≺^s_M L(Σ).
and we have L(∆)p ⊀ L(Γ 1 ) L(Σ), [IPP05, Theorem 1.1] implies that pv h r 2 = 0.
Σ_{k∈T} |a_k| ≤ 2(|S_1| + |S_2|).
). We refer the reader to [BO08, Definition 15.1.2] for the notion of relative bi-exactness for groups. A second crucial ingredient is the ultrapower technique for group von Neumann algebras introduced by the second author in [Io11, Theorem 3.1]. Note that this technique has recently been used in several other works [CdSS15, KV16, DHI16].
Proof of Theorem 3.3. Denote by {u_g}_{g∈Γ} and {v_h}_{h∈Λ} the canonical unitaries generating M. Following [PV09], we consider a *-homomorphism △ : M → M⊗M, called the comultiplication along Λ, defined by △(v_h) = v_h ⊗ v_h, for all h ∈ Λ. Then we have
We first use Theorem A. Let h = (h_1, h_2) ∈ Γ_i = Γ_0 × Γ_0 be such that [Σ : Σ ∩ hΣh^{−1}] < ∞. Then h_1h_2^{−1} ∈ Γ_0 commutes with a finite index subgroup of Σ_0, and thus by (2) we get that h_1 = h_2. Further, it follows that [Σ_0 : Σ_0 ∩ h_1Σ_0h_1^{−1}] < ∞, which by (1) forces h_1 ∈ Σ_0. Hence h = (h_1, h_1) ∈ Σ. We may thus apply Theorem A to deduce that Λ = Λ_1 *_∆ Λ_2 and that, after unitary conjugacy, we have L(Σ) = L(∆) and L(Γ_i) = L(Λ_i), for all i ∈ {1, 2}.
Since {ghg^{−1} | g ∈ Σ_0} is infinite, for all h ∈ Γ_0 \ {e}, we are in position to apply Theorem 5.1. Thus, we deduce the existence of a decomposition Λ_i = Λ^1_i × Λ^2_i and a unitary u_i ∈ L(Γ_i) such that TΣ = u_iT∆u_i^* and L(Γ^j_i) = u_iL(Λ^j_i)u_i^*, for all i, j ∈ {1, 2}. In particular, we have that TΣ_0 = u_iTρ^j_i(∆)u_i^*, where we consider the canonical embedding Σ_0 < Γ^j_i and projection ρ^j_i : Λ_i → Λ^j_i. By using condition (2) again, Lemma 5.3 implies that TΓ_i = u_iTΛ_iu_i^*, for all i ∈ {1, 2}. Put u := u_1u_2^* ∈ U(M).
Then TΣ = uTΣu^*, hence we can find an isomorphism δ : Σ → Σ and a character η : Σ → T such that u_{δ(g)} = η(g)uu_gu^*, for all g ∈ Σ. Let k_1, k_2 ∈ Γ be such that τ(uu_{k_1}^*) ≠ 0 and τ(uu_{k_2}^*) ≠ 0. Then {δ(g)k_1g^{−1} | g ∈ Σ} and {δ(g)k_2g^{−1} | g ∈ Σ} are finite, hence there is a finite index subgroup Σ_1 < Σ such that δ(g)k_1g^{−1} = k_1 and δ(g)k_2g^{−1} = k_2, for all g ∈ Σ_1. Since [Σ : Σ ∩ hΣh^{−1}] = ∞, for all h ∈ Γ_i \ Σ and i ∈ {1, 2}, we deduce that k_1, k_2 ∈ Σ. But then k := k_2^{−1}k_1 ∈ Σ commutes with the finite index subgroup Σ_1 < Σ. Since Σ_0 is icc, k = e, thus k_1 = k_2 ∈ Σ. Since this holds for any k_1, k_2 ∈ Γ in the support of u, we derive that u ∈ TΣ.
Thus, since u_1 = uu_2 and TΓ_2 = u_2TΛ_2u_2^*, we get that u_1^*TΓ_2u_1 = u_2^*(u^*TΓ_2u)u_2 = u_2^*TΓ_2u_2 = TΛ_2. Since we also have TΓ_1 = u_1TΛ_1u_1^*, we conclude that TΓ = u_1TΛu_1^*. This finishes the proof.
Let θ : C*_r(Γ) → C*_r(Λ) be a *-isomorphism, for some countable group Λ. Denote by τ : L(Λ) → C the canonical trace and view C*_r(Λ) ⊆ L(Λ). Then ρ := τ ∘ θ : C*_r(Γ) → C is a tracial state. We claim that if g ∈ Γ \ {e}, then ρ(u_g) = 0. To this end, we will show that there exist a, b ∈ Γ such that b² ≠ e and {aga^{−1}, b} freely generate a subgroup of Γ.
First, assume that g ∈ Σ \ {e}. Then g = (g_0, g_0), for some g_0 ∈ Σ_0 \ {e}. Since Γ_0 is icc, we can find a_0 ∈ Γ_0 such that a_0 does not commute with g_0. If we put a = (a_0, e) ∈ Γ_1 = Γ_0 × Γ_0, then aga^{−1} = (a_0g_0a_0^{−1}, g_0) ∈ Γ_1 \ Σ. On the other hand, we can find b_0 ∈ Γ_0 \ Σ_0 such that b_0² ≠ e. Granting this and letting b = (b_0, e) ∈ Γ_2 \ Σ, we have that {aga^{−1}, b} freely generate a subgroup of Γ. Now, if we cannot find such a b_0, we would have that b_0² = e, for all b_0 ∈ Γ_0 \ Σ_0. Thus, if x, y ∈ Γ_0 \ Σ_0 are such that xΣ_0 ≠ yΣ_0, then x² = y² = (x^{−1}y)² = e, which implies that x, y commute. Thus, xΣ_0 and yΣ_0 commute, which would give that Σ_0 is abelian, a contradiction.
Secondly, assume that g ∈ Γ \ Σ. Let g = g_1g_2...g_k be the reduced form of g. Then the reduced form of g^n begins and ends with g_1^{±1} or g_k^{±1}, for every n ∈ Z \ {0}. Let a ∈ Γ_1 \ Σ be such that a ∉ {g_1^{±1}, g_k^{±1}}. Then the reduced form of (aga^{−1})^n = ag^na^{−1} begins with a and ends with a^{−1}, for every n ∈ Z \ {0}. As in the previous paragraph, let b ∈ Γ_2 \ Σ be such that b² ≠ e. Then it is clear that {aga^{−1}, b} freely generate a subgroup of Γ.
Thus, if ∆_1, ∆_2, ∆ < Γ denote the subgroups respectively generated by {aga^{−1}}, {b}, {aga^{−1}, b}, then ∆ = ∆_1 * ∆_2. Since |∆_1| ≥ 2 and |∆_2| ≥ 3, by Powers' work [Po75] and its extension [PS79] we get that C*_r(∆) has a unique tracial state. Viewing C*_r(∆) ⊆ C*_r(Γ) in the natural way, we
M. Berbec, W*-superrigidity for wreath products with groups having positive first ℓ²-Betti number, Internat. J. Math. 26 (2015), no. 1, 1550003, 27 pp.
E. Breuillard, M. Kalantar, M. Kennedy, and N. Ozawa, C*-simplicity and the unique trace property for discrete groups, preprint arXiv:1410.2518.
M. Berbec and S. Vaes, W*-superrigidity for group von Neumann algebras of left-right wreath products, Proc. Lond. Math. Soc. 108 (2014), 1116-1152.
N. P. Brown and N. Ozawa, C*-algebras and finite-dimensional approximations, Graduate Studies in Mathematics, vol. 88, AMS, Providence, RI.
M. Choda, Group factors of the Haagerup type, Proc. Japan Acad. Ser. A Math. Sci. 59 (1983), 174-177.
I. Chifan and C. Houdayer, Bass-Serre rigidity results in von Neumann algebras, Duke Math. J. 153 (2010), 23-54.
I. Chifan and T. Sinclair, On the structural theory of II_1 factors of negatively curved groups, Ann. Sci. Éc. Norm. Sup. 46 (2013), 1-33.
I. Chifan, T. Sinclair, and B. Udrea, On the structural theory of II_1 factors of negatively curved groups, II. Actions by product groups, Adv. Math. 245 (2013), 208-236.
I. Chifan, R. de Santiago, and T. Sinclair, W*-rigidity for the von Neumann algebras of products of hyperbolic groups, Geom. Funct. Anal. 26 (2016), 136-159.
A. Connes, Classification of injective factors, Ann. Math. 104 (1976), 73-115.
A. Connes and V. F. R. Jones, Property (T) for von Neumann algebras, Bull. Lond. Math. Soc. 17 (1985), 57-62.
D. Drimbe, D. Hoff, and A. Ioana, Prime II_1 factors arising from irreducible lattices in products of rank one simple Lie groups, preprint arXiv:1611.02209.
K. Dykema and F. Rădulescu, Rescalings of free products of II_1-factors, Proc. Amer. Math. Soc. 131 (2003), 1813-1816.
J. Fang, S. Gao, and R. Smith, The relative weak asymptotic homomorphism property for inclusions of finite von Neumann algebras, Intern. J. Math. 22 (2011), 991-1011.
E. Ghys and P. de la Harpe, Sur les groupes hyperboliques d'après Mikhael Gromov, Progress in Mathematics, 83 (1990), Birkhäuser.
C. Houdayer, S. Popa, and S. Vaes, A class of groups for which every action is W*-superrigid, Groups Geom. Dyn. 7 (2013), no. 3, 577-590.
C. Houdayer and Y. Ueda, Rigidity of free product von Neumann algebras, Compos. Math. 152 (2016), no. 12, 2461-2492.
A. Ioana, W*-superrigidity for Bernoulli actions of property (T) groups, J. Amer. Math. Soc. 24 (2011), 1175-1226.
A. Ioana, Classification and rigidity for von Neumann algebras, European Congress of Mathematics, 601-625, Eur. Math. Soc., Zürich, 2013.
A. Ioana, J. Peterson, and S. Popa, Amalgamated free products of weakly rigid factors and calculation of their symmetry groups, Acta Math. 200 (2008), 85-153.
A. Ioana, Uniqueness of the group measure space decomposition for Popa's HT factors, Geom. Funct. Anal. 22 (2012), 699-732.
A. Ioana, S. Popa, and S. Vaes, A class of superrigid group von Neumann algebras, Ann. of Math. (2) 178 (2013), 231-286.
V. F. R. Jones, Index for subfactors, Invent. Math. 72 (1983), 1-25.
Y. Kida, Rigidity of amalgamated free products in measure equivalence, J. Topol. 4 (2011), no. 3, 687-735.
S. Knuby, S. Raum, H. Thiel, and S. White, On C*-superrigidity of virtually abelian groups, preprint.
A. Krogager and S. Vaes, A class of II_1 factors with exactly two group measure space decompositions, preprint arXiv:1512.06677, to appear in Journal de Mathématiques Pures et Appliquées.
On rings of operators. F J Murray, J Von Neumann, Ann. of Math. 37F.J. Murray and J. von Neumann, On rings of operators, Ann. of Math. 37 (1936), 116-229.
Rings of operators IV. F J Murray, J Von Neumann, Ann. of Math. 44F.J. Murray and J. von Neumann, Rings of operators IV. Ann. of Math. 44 (1943), 716-808.
Solid von Neumann algebras. N Ozawa, Acta Math. 192N. Ozawa, Solid von Neumann algebras, Acta Math., 192 (2004), 111-117.
A Kurosh type theorem for type II1 factors. N Ozawa, Int. Math. Res. Not. Article ID97560 (21 pagesN. Ozawa, A Kurosh type theorem for type II1 factors, Int. Math. Res. Not., Volume 2006, Article ID97560 (21 pages).
An example of a solid von Neumann algebra. N Ozawa, Hokkaido Math. J. 383N. Ozawa: An example of a solid von Neumann algebra, Hokkaido Math. J. 38 (2009), no. 3, 557-561.
Some prime factorization results for type II1 factors. N Ozawa, S Popa, Invent. Math. 156N. Ozawa and S. Popa, Some prime factorization results for type II1 factors,Invent. Math., 156 (2004), 223-234.
L 2 -rigidity in von Neumann algebras. J Peterson, Invent. Math. 175J. Peterson, L 2 -rigidity in von Neumann algebras, Invent. Math. 175 (2009), 417-433.
Entropy and index for subfactors. M Pimsner, S Popa, Ann. Sci.Éc. Norm. Sup. 19M. Pimsner and S. Popa, Entropy and index for subfactors, Ann. Sci.Éc. Norm. Sup. 19 (1986), 57-106.
Simplicity of the C * -algebra associated with the free group on two generators. R T Powers, Duke Math. J. 42R. T. Powers, Simplicity of the C * -algebra associated with the free group on two generators, Duke Math. J. 42 (1975), 151-156.
Some properties of the symmetric enveloping algebra of a factor. S Popa, Doc. Math. 4with applications to amenability and property (TS. Popa, Some properties of the symmetric enveloping algebra of a factor, with applications to amenability and property (T), Doc. Math. 4 (1999), 665-744.
On a class of type II1 factors with Betti numbers invariants. S Popa, Ann. of Math. 163S. Popa, On a class of type II1 factors with Betti numbers invariants, Ann. of Math. 163 (2006), 809-899.
Strong rigidity of II1 factors arising from malleable actions of w-rigid groups I. S Popa, Invent. Math. 165S. Popa, Strong rigidity of II1 factors arising from malleable actions of w-rigid groups I, Invent. Math. 165 (2006), 369-408.
Strong rigidity of II1 factors arising from malleable actions of w-rigid groups II. S Popa, Invent. Math. 165S. Popa, Strong rigidity of II1 factors arising from malleable actions of w-rigid groups II, Invent. Math. 165 (2006), 409-452.
Deformation and rigidity for group actions and von Neumann algebras. S Popa, International Congress of Mathematicians. IEur. Math. Soc.S. Popa, Deformation and rigidity for group actions and von Neumann algebras, International Congress of Mathematicians. Vol. I, 445-477, Eur. Math. Soc., Zürich, 2007.
On Ozawa's property for free group factors. S Popa, Int. Math. Res. Not. IMRN. 1011ppS. Popa, On Ozawa's property for free group factors, Int. Math. Res. Not. IMRN 2007, no. 11, Art. ID rnm036, 10 pp.
C * -algebras associated with free products of groups. W L Paschke, N Salinas, Pacific J. Math. 82W.L. Paschke, N. Salinas, C * -algebras associated with free products of groups, Pacific J. Math. 82 (1979), 211-221.
On the cohomology of Bernoulli actions, Ergodic Theory Dynam. S Popa, R Sasyk, Systems. 271S. Popa and R. Sasyk, On the cohomology of Bernoulli actions, Ergodic Theory Dynam. Systems 27 (2007), no. 1, 241-251.
Group measure space decomposition of II1 factors and W * -superrigidity. S Popa, S Vaes, Invent. Math. 182S. Popa and S. Vaes, Group measure space decomposition of II1 factors and W * -superrigidity, Invent. Math. 182 (2010), 371-417.
Rigidity results for Bernoulli actions and their von Neumann algebras (after Sorin Popa). S Vaes, S. Vaes, Rigidity results for Bernoulli actions and their von Neumann algebras (after Sorin Popa).
. Séminaire Bourbaki, Astérisque. 311961Séminaire Bourbaki, exp. no. 961. Astérisque 311 (2007), 237-294.
Explicit computations of all finite index bimodules for a family of II1 factors. S Vaes, Ann. Sci.Éc. Norm. Sup. 41S. Vaes, Explicit computations of all finite index bimodules for a family of II1 factors, Ann. Sci.Éc. Norm. Sup. 41 (2008), 743-788.
Rigidity for von Neumann algebras and their invariants. S Vaes, Proceedings of the International Congress of Mathematicians. the International Congress of MathematiciansHyderabad, India; Hindustan Book Agency; New DelhiIIIS. Vaes, Rigidity for von Neumann algebras and their invariants, Proceedings of the International Con- gress of Mathematicians (Hyderabad, India, 2010), Vol. III, 1624-1650, Hindustan Book Agency, New Delhi, 2010.
Normalizers inside amalgamated free product von Neumann algebras. S Vaes, Publ. Res. Inst. Math. Sci. 504S. Vaes, Normalizers inside amalgamated free product von Neumann algebras, Publ. Res. Inst. Math. Sci. 50 (2014), no. 4, 695-721.
| [] |
[
"Power Spectrum Analysis of the ESP Galaxy Redshift Survey",
"Power Spectrum Analysis of the ESP Galaxy Redshift Survey"
] | [
"E Carretti \nIstituto Te.S.R.E\nVia Gobetti 101I-40129BolognaITALY\n\nDipartimento di Astronomia\nVia Ranzani 1I-40127BolognaITALY\n",
"C Bertoni \nIstituto di Radioastronomia\nVia Gobetti 101I-40129BolognaITALY\n",
"A Messina \nDipartimento di Scienze dell'informazione\nVia Mura Anteo Zamboni 7I-40126BolognaITALY",
"E Zucca \nOsservatorio Astronomico di Bologna\nVia Ranzani 1I-40127BolognaITALY",
"L Guzzo \nOsservatorio Astronomico di Brera\nVia Bianchi 46I-23807Merate(LC)ITALY\n"
] | [
"Istituto Te.S.R.E\nVia Gobetti 101I-40129BolognaITALY",
"Dipartimento di Astronomia\nVia Ranzani 1I-40127BolognaITALY",
"Istituto di Radioastronomia\nVia Gobetti 101I-40129BolognaITALY",
"Dipartimento di Scienze dell'informazione\nVia Mura Anteo Zamboni 7I-40126BolognaITALY",
"Osservatorio Astronomico di Bologna\nVia Ranzani 1I-40127BolognaITALY",
"Osservatorio Astronomico di Brera\nVia Bianchi 46I-23807Merate(LC)ITALY"
] | [
"Mon. Not. R. Astron. Soc"
] | We measure the power spectrum of the galaxy distribution in the ESO Slice Project (ESP) galaxy redshift survey. We develop a technique to describe the survey window function analytically, and then deconvolve it from the measured power spectrum using a variant of the Lucy method. We test the whole deconvolution procedure on ESP mock catalogues drawn from large N-body simulations, and find that it is reliable for recovering the correct amplitude and shape of P(k) at k > 0.065 h Mpc^{−1}. In general, the technique is applicable to any survey composed of a collection of circular fields with arbitrary pattern on the sky, as typical of surveys based on fibre spectrographs. The estimated power spectrum has a well-defined power-law shape k^n with n ≃ −2.2 for k ≥ 0.2 h Mpc^{−1}, and a smooth bend to a flatter shape (n ≃ −1.6) for smaller k's. The smallest wavenumber where a meaningful reconstruction can be performed (k ∼ 0.06 h Mpc^{−1}) does not allow us to explore the range of scales where other power spectra seem to show a flattening and hints for a turnover. We also find, by direct comparison of the Fourier transforms, that the estimate of the two-point correlation function ξ(s) is much less sensitive than the power spectrum to the effects of a problematic window function such as that of the ESP. Comparison to other surveys shows an excellent agreement with estimates from blue-selected surveys. In particular, the ESP power spectrum is virtually indistinguishable from that of the Durham-UKST survey over the common range of k's, an indirect confirmation of the quality of the deconvolution technique applied. | 10.1046/j.1365-8711.2001.04390.x | [
"https://arxiv.org/pdf/astro-ph/0007012v3.pdf"
] | 8,920,915 | astro-ph/0007012 | e4032ecaf7f57df91777c780fcaaba3b654883d7 |
Power Spectrum Analysis of the ESP Galaxy Redshift Survey
27 October 2018
E Carretti
Istituto Te.S.R.E
Via Gobetti 101I-40129BolognaITALY
Dipartimento di Astronomia
Via Ranzani 1I-40127BolognaITALY
C Bertoni
Istituto di Radioastronomia
Via Gobetti 101I-40129BolognaITALY
"A Messina
Dipartimento di Scienze dell'informazione
Via Mura Anteo Zamboni 7I-40126BolognaITALY
"E Zucca
Osservatorio Astronomico di Bologna
Via Ranzani 1I-40127BolognaITALY
L Guzzo
Osservatorio Astronomico di Brera
Via Bianchi 46I-23807Merate(LC)ITALY
27 October 2018. Accepted; received; in original form. Key words: surveys – galaxies: distances and redshifts – cosmology: large-scale structure of the Universe
We measure the power spectrum of the galaxy distribution in the ESO Slice Project (ESP) galaxy redshift survey. We develop a technique to describe the survey window function analytically, and then deconvolve it from the measured power spectrum using a variant of the Lucy method. We test the whole deconvolution procedure on ESP mock catalogues drawn from large N-body simulations, and find that it is reliable for recovering the correct amplitude and shape of P(k) at k > 0.065 h Mpc^{−1}. In general, the technique is applicable to any survey composed of a collection of circular fields with arbitrary pattern on the sky, as typical of surveys based on fibre spectrographs. The estimated power spectrum has a well-defined power-law shape k^n with n ≃ −2.2 for k ≥ 0.2 h Mpc^{−1}, and a smooth bend to a flatter shape (n ≃ −1.6) for smaller k's. The smallest wavenumber where a meaningful reconstruction can be performed (k ∼ 0.06 h Mpc^{−1}) does not allow us to explore the range of scales where other power spectra seem to show a flattening and hints for a turnover. We also find, by direct comparison of the Fourier transforms, that the estimate of the two-point correlation function ξ(s) is much less sensitive than the power spectrum to the effects of a problematic window function such as that of the ESP. Comparison to other surveys shows an excellent agreement with estimates from blue-selected surveys. In particular, the ESP power spectrum is virtually indistinguishable from that of the Durham-UKST survey over the common range of k's, an indirect confirmation of the quality of the deconvolution technique applied.
INTRODUCTION
The quantitative characterisation of the galaxy distribution is a major aim in the study of the large-scale structure of the Universe. During the last 20 years, several surveys of galaxy redshifts have shown that galaxies are grouped in clusters and superclusters, drawing structures surrounding large voids (see e.g. Guzzo 1999 for a review). The power spectrum of the galaxy distribution provides a concise statistical description of the observed clustering that, under some assumptions on its relation to the mass distribution, represents an important test for different structure formation scenarios (e.g. Peacock 1997 and references therein). Indeed, under the assumption of Gaussian fluctuations, the power spectrum completely describes the statistical properties of the matter density field (e.g. Peebles 1980).
In recent years, several estimates of the galaxy power spectrum have been obtained using galaxy samples selected at different wavelengths: radio (Peacock & Nicholson 1991), infrared (Feldman, Kaiser & Peacock 1994, Fisher et al. 1993, Sutherland et al. 1999) and optical (da Costa et al. 1994, Tadros & Efstathiou 1996, Hoyle et al. 1999), to mention the most recent ones.
The ESO Slice Project redshift survey (ESP; Vettolani et al. 1997) is one of the two deepest wide-angle surveys currently available, inferior only to the larger Las Campanas Redshift Survey (LCRS; Shectman et al. 1996). During the last few years, it has produced a number of statistical results on the properties of optically-selected galaxies, e.g. the luminosity function (Zucca et al. 1997) and the correlation function (Guzzo et al. 2000). The geometry of the survey (a thin row of circular fields, resulting in an essentially 2D slice in space) is such that an estimate of the power spectrum represents a true challenge. In this paper we present the results of a detailed analysis that overcomes these difficulties, producing a reliable measure of the power spectrum from the ESP redshift data. The technique developed here to cope with the specific geometry of the survey is potentially interesting also for application to other surveys consisting of separate patches on the sky, as could be the case, for example, of preliminary sub-samples of the ongoing SDSS (Margon 1998) and 2dF (Colless 1998) surveys.
The outline of the paper is as follows. We shall first recall the main features of the ESP survey and the sample selection (section 2), then discuss the power spectrum estimator adopted for the analysis (section 3) and the numerical tests performed in order to check its validity range (section 4). We shall then present the estimated power spectrum (section 5) and its consistency with the correlation function (section 6), and then discuss it in comparison to the results from other surveys (section 7). Section 8 summarises the results obtained, drawing some conclusions.
THE ESO SLICE PROJECT
The ESO Slice Project galaxy redshift survey (ESP; Vettolani et al. 1997) was constructed between 1993 and 1996 to fill the gap that existed at the time between shallow, wide-angle surveys such as the CfA2, and very deep, one-dimensional pencil beams. The survey was designed in order to allow the sampling of volumes larger than the maximum sizes of known structures and an unbiased estimate of the luminosity function of field galaxies to very faint absolute magnitudes. The survey and the data catalogue are described in detail in Vettolani et al. (1997, 1998). The survey region (equinox 1950 coordinates) was covered with a regular grid of adjacent circular fields, with a diameter of 32 arcmin each, corresponding to the field of view of the multifibre spectrograph OPTOPUS (Avila et al. 1989) at the ESO 3.6 m telescope. The total solid angle covered by the survey is 23.2 square degrees and its position on the sky was chosen in order to minimize galactic absorption (−75° ≲ b_II ≲ −60°). The target objects, with a limiting magnitude b_J ≤ 19.4, were selected from the Edinburgh-Durham Southern Galaxy Catalogue (EDSGC, Heydon-Dumbleton et al. 1989). A total of 4044 objects were observed, corresponding to ∼ 90% of the parent photometric sample and selected to be a random subset of the total catalogue with respect to both magnitude and surface brightness. The total number of confirmed galaxies with reliable redshift measurement is 3342, while 493 objects turned out to be stars and 1 object is a quasar at redshift z ∼ 1.174. No redshift measurement could be obtained for the remaining 208 spectra. As discussed in Vettolani et al. (1998), the magnitude distribution of the missed galaxies is consistent with a random extraction of the parent population. About half of the ESP galaxies present spectra with emission lines.
Particular attention was paid to the redshift quality, and several checks were applied to the data, using 1) multiple observations of ∼200 galaxies, and 2) ∼750 galaxies for which the redshift from both absorption and emission lines is available (Vettolani et al. 1998; Cappi et al. 1998). More details about the data reduction and sample completeness are reported in Vettolani et al. (1997, 1998).
Given the magnitude-limited nature of the survey, the computation of a clustering statistic such as the power spectrum requires knowledge of the selection function. This is defined as the expected probability of detecting a galaxy at redshift z and can be expressed as
s(z) = ∫_{max[L_1, L_min(z)]}^{+∞} φ(L) dL / ∫_{L_1}^{+∞} φ(L) dL ,  (1)
where φ(L) is the luminosity function, L1 is the minimum luminosity of the sample and Lmin(z) is the minimum luminosity detectable at redshift z, given the sample limiting magnitude.
In the ESP survey the minimum luminosity corresponds to an absolute magnitude M_{b_J,1} = −12.4 + 5 log h (h is the Hubble constant in units of 100 km s^{−1} Mpc^{−1}). L_min(z) is the luminosity of a galaxy at redshift z with an apparent magnitude equal to the apparent magnitude limit b_J = 19.4. The corresponding absolute magnitude is given by
b_J − M_{b_J} = 25 + 5 log D_L(z) + K(z) ,  (2)
where D_L is the luminosity distance in Mpc and K(z) is the K-correction. D_L is given by the Mattig (1958) formula, which depends on the assumed cosmological model. For all ESP computations we assume a flat universe with Ω_o = 1 and Λ = 0. Before proceeding to the computation of luminosity distances, we have converted the observed heliocentric redshifts in the catalogue to the Cosmic Microwave Background (CMB) rest frame using a standard procedure, as described in Carretti (1999). The luminosity distance is then given by
D_L(z) = (2c/H_0) (1 + z) [1 − 1/√(1 + z)]  (3)
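As a quick numerical illustration (ours, not part of the original analysis), the Ω_o = 1 luminosity distance of eq. (3) and the comoving distance D_c = D_L/(1 + z) used later for the P(k) estimate can be sketched in a few lines; the only input is the Hubble distance c/H_0 ≈ 2997.9 h^{−1} Mpc:

```python
# Luminosity and comoving distances for a flat Omega_0 = 1, Lambda = 0
# universe (eq. 3), in units of h^-1 Mpc. Illustrative sketch only.
C_OVER_H0 = 2997.9  # Hubble distance c/H0 in h^-1 Mpc


def d_lum(z):
    """Mattig luminosity distance D_L(z) = (2c/H0)(1+z)[1 - 1/sqrt(1+z)]."""
    return 2.0 * C_OVER_H0 * (1.0 + z) * (1.0 - 1.0 / (1.0 + z) ** 0.5)


def d_com(z):
    """Comoving distance D_c = D_L/(1+z), as used for the P(k) estimate."""
    return d_lum(z) / (1.0 + z)
```

For small z this reduces to the Hubble law D_L ≈ cz/H_0, and at a cut z_max = 0.2 it gives a comoving depth of ≈ 522 h^{−1} Mpc, consistent with the 523 h^{−1} Mpc sample quoted later in the text.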
The K-correction is a function of redshift and morphological type, but the latter is not directly available for ESP galaxies. Following Zucca et al. (1997), we use an average K-correction, weighted over the expected morphological mixture at each z. See Zucca et al. (1997; cf. their figure 1) for the details of this computation. A recent principal component analysis of the spectra (Scaramella, priv. comm.) confirms the adequacy of this mean correction.
The luminosity function is such that φ(L)dL gives the density of galaxies with luminosity L ∈ [L, L + dL[. The ESP luminosity function is well approximated by a Schechter (1976) function (Zucca et al. 1997)
φ(L) dL = φ* (L/L*)^α e^{−L/L*} d(L/L*)  (4)
with best-fit parameters α = −1.22, M*_{b_J} = −19.61 + 5 log h and φ* = 0.020 h³ Mpc^{−3}. In reality, as shown by Zucca et al. (1997), for M_{b_J} > −16 + 5 log h the faint end steepens with respect to the Schechter form, and the overall shape is better described by adding an extra power law. Nevertheless, this is relevant only for the very local part of the sample, and a description of the selection function using a simple Schechter fit is perfectly adequate for our purposes.
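The ratio in eq. (1) with the Schechter fit of eq. (4) is straightforward to evaluate numerically. The sketch below is our illustration, not the authors' code: the K-correction is set to zero for simplicity (the paper uses a type-averaged K(z)), function names are ours, and φ* cancels in the ratio, so only α, M* and the magnitude limits enter:

```python
# Sketch of the ESP selection function s(z) (eq. 1) with the Schechter fit
# of eq. (4): alpha = -1.22, M* = -19.61 + 5 log h. Simplification: K(z) = 0.
import math
from scipy.integrate import quad

ALPHA, M_STAR = -1.22, -19.61   # Schechter best fit (phi* cancels)
M_FAINT = -12.4                 # sample minimum luminosity M_1 (magnitudes)
C_OVER_H0 = 2997.9              # c/H0 in h^-1 Mpc


def schechter(l):
    """phi(L) up to the constant phi*, with l = L/L*."""
    return l ** ALPHA * math.exp(-l)


def s_of_z(z, k_corr=0.0):
    """Expected detectable fraction of the luminosity function at redshift z."""
    d_lum = 2.0 * C_OVER_H0 * (1 + z) * (1 - 1 / math.sqrt(1 + z))  # eq. (3)
    m_min = 19.4 - 25.0 - 5.0 * math.log10(d_lum) - k_corr          # eq. (2)
    l_min = 10 ** (-0.4 * (m_min - M_STAR))   # L_min(z)/L*
    l_1 = 10 ** (-0.4 * (M_FAINT - M_STAR))   # L_1/L*
    num, _ = quad(schechter, max(l_1, l_min), math.inf)
    den, _ = quad(schechter, l_1, math.inf)
    return num / den
```

At very low z the survey reaches below L_1, so s(z) = 1; it then falls monotonically with redshift, which is why the magnitude-limited weights of section 3 grow as 1/s(z).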
Another quantity to be taken into account for clustering analyses is the redshift completeness of the 107 fields, as not all target galaxies at the photometric limit were successfully measured. This can be expressed as (Vettolani et al. 1998)
C = N_Z / (N_T − N_S − 0.122 N_NO) ,  (5)
where, for each field, N_T is the total number of objects in the photometric catalogue, N_Z is the number of reliable galaxy redshifts, N_NO is the number of not observed objects, N_S is the number of stars and 0.122 is the fraction of stars in the spectroscopic sample. In Figure 3 we plot the completeness values for each field. Field numbers < 100 denote fields in the northern row, while the others refer to the southern one. The power spectrum analysis has been performed on both volume-limited and magnitude-limited subsamples of the survey. Volume-limited samples include all galaxies intrinsically more luminous than a given absolute magnitude M_lim and within the maximum redshift z_max at which such a magnitude can still be detected within the survey apparent magnitude limit. In such a case, the expected mean density of galaxies does not vary with distance. Magnitude-limited catalogues, by definition, are simply subsets of all galaxies in the survey to a given apparent magnitude, possibly with the addition of an upper distance cut z_max above which the value of the selection function becomes too small. Magnitude-limited samples contain more objects, but the mean ensemble properties (e.g. the galaxy luminosity distribution) vary with distance. We extract from the ESP survey two magnitude-limited samples with different z_max limits, plus a volume-limited one. (For simplicity, we shall omit hereafter the 5 log h term.) For the estimate of the power spectrum, comoving distances are computed for each galaxy as D_c(z) = D_L(z)/(1 + z). The uncertainty introduced in D_c because of our ignorance of the correct cosmological model amounts to less than 5% for a typical redshift z = 0.20, when the value of Ω_o is changed from 1 to 0.2.
POWER SPECTRUM ESTIMATOR
The galaxy power spectrum can be defined as
P(k) = ∫ ξ(x) e^{−ik·x} dx ,  (6)
where ξ(x) is the two-point correlation function, x and k are the comoving position and wavenumber vectors respectively, while x = |x| and k = |k|. Under the hypothesis of homogeneity and isotropy, P(k) and ξ(x) are functions only of k and x respectively. Equivalently, the power spectrum can also be written as

P(k) ∝ |δ(k)|² ,  (7)

where δ(k) is the Fourier transform of the density contrast of the galaxies.
In this paper we follow the Fourier notation
f̃(k) = ∫ f(x) e^{−ik·x} dx ,  (8)

f(x) = (2π)^{−3} ∫ f̃(k) e^{ik·x} dk .  (9)
To compute the power spectrum of galaxy density fluctuations from the observed galaxy distribution, we use a traditional Fourier method (cf. Carretti 1999 for details), as developed by several authors (e.g. Peebles 1980, Fisher et al. 1993, Feldman et al. 1994). We also apply a correction (Tegmark et al. 1998) that accounts for our ignorance of the true value of the mean density of galaxies (Peacock & Nicholson 1991).
Given a sample of N galaxies with positions x_j and weights w_j, an estimate of the Fourier transform of the density contrast is given by

δ̃(k) = [V / Σ_{j=1}^{N} w_j] Σ_{j=1}^{N} w_j e^{−ik·x_j} − Ŵ(k) ,  (10)
where V is the volume of the sample and Ŵ(k) is the Fourier transform of the survey window function (hereafter a tilde will denote quantities estimated from the data).

Figure 3. The redshift completeness within the ESP fields, i.e. the fraction of galaxies with measured redshift with respect to the total number of galaxies in the photometric sample in each field. Field numbers are as reported in the catalogue (Vettolani et al. 1998). The two panels are for the fields in the northern (a) and southern (b) rows respectively.

The window function W(x) is 1 within the volume covered by the sample and 0 elsewhere, so it can be described as an ensemble of 107 cones. This geometry allows us to obtain analytically the Fourier transform Ŵ(k) as the sum of the Fourier transforms of each cone. In equation 10 each galaxy contributes with some weight w_j. In a volume-limited catalogue all galaxies have equal weight, i.e. w_j ≡ 1. In a magnitude-limited catalogue the expected galaxy density decreases with distance according to the selection function; thus, the simplest weight for a galaxy is the inverse of the selection function at its redshift z_j:

w_j = 1 / s(z_j) .  (11)
If the catalogue completeness is C < 1, the previous weight should be modified as
w_j = 1 / C(x_j)  (12)
for volume-limited catalogues, and as
w_j = 1 / [s(z_j) C(x_j)] ,  (13)
for magnitude-limited catalogues. C(x_j) is the completeness of the sample at the position of the j-th galaxy. Our adopted power spectrum estimator is defined with respect to δ̃(k) by the following equation (Tegmark et al. 1998)
P̃_c(k) = [|δ̃(k)|² − b̃(k)] / A(k) ,  (14)
where
A(k) = [1 − |Ŵ(k)/Ŵ(0)|²] V  (15)
accounts for our ignorance of the mean galaxy density, while
b̃(k) = [V² / (Σ_{j=1}^{N} w_j)²] Σ_{j=1}^{N} w_j² |e^{−ik·x_j} − Ŵ(k)/Ŵ(0)|²  (16)
is the shot noise correction due to the finite size of the sample.
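To make eqs. (10)–(16) concrete, here is a minimal sketch (ours, not the authors' code) of the estimator in the simplest setting that can be verified directly: unit weights and a full periodic cubic volume, for which Ŵ(k) vanishes at the nonzero modes k = 2πn/L, so that A(k) = V and the shot noise of eq. (16) reduces to V²/N. This is of course not the ESP geometry, where the full Ŵ(k) must be kept:

```python
# Direct-sum sketch of the estimator of eqs. (10)-(16) for unit weights in a
# periodic unit cube, where What(k) = 0 at the nonzero modes k = 2*pi*n.
import numpy as np


def power_estimate(pos, kvec, volume):
    """P_c(k) of eq. (14) with w_j = 1 and What(k) = 0."""
    n = len(pos)
    delta = (volume / n) * np.exp(-1j * (pos @ kvec)).sum()   # eq. (10)
    shot = volume ** 2 / n                                    # eq. (16)
    return (abs(delta) ** 2 - shot) / volume                  # eq. (14), A = V

rng = np.random.default_rng(0)
pts = rng.random((10000, 3))          # unclustered (Poisson) points
modes = []
for i in range(1, 5):
    for j in range(1, 5):
        for n in ((i, j, 0), (i, 0, j), (0, i, j)):
            modes.append(2 * np.pi * np.asarray(n, dtype=float))
p_vals = [power_estimate(pts, k, 1.0) for k in modes]
p_mean = float(np.mean(p_vals))
```

For such a Poisson sample the shot-noise-corrected estimate scatters around zero, well below the uncorrected level V/N = 10^{−4}, illustrating what the b̃(k) term removes.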
The observed power spectrum estimated by equation 14 is related to the true power spectrum P(k) by

P̃_c(k) = [1/((2π)³ A(k))] ∫ P(k′) φ(k, k′) dk′ ,  (17)
where
φ(k, k′) = |Ŵ(k − k′) − Ŵ(k) Ŵ(−k′)/Ŵ(0)|² .  (18)
For wavenumbers k such that |Ŵ(k)| ≪ |Ŵ(0)| this equation reduces to the convolution between P(k) and |Ŵ(k)|².
To describe the convolved power spectrum we choose to average P̃_c(k) over all directions:

P̃_c(k) = ⟨P̃_c(k)⟩ = (1/4π) ∫_{Ω_k} P̃_c(k) dΩ_k = ∫_0^∞ k′² P(k′) χ(k, k′) dk′ ,  (19)
where Ω_k is the sphere defined by wavenumbers of amplitude k. The kernel of this integral equation is given by

χ(k, k′) = [1/(2(2π)⁴ V)] ∫_{Ω_k} ∫_{Ω_{k′}} ψ(k, k′) dΩ_k dΩ_{k′} ,  (20)
where
ψ(k, k′) = |Ŵ(k − k′) − Ŵ(−k′) Ŵ(k)/Ŵ(0)|² / [1 − |Ŵ(k)/Ŵ(0)|²]  (21)
and Ω_{k′} is the sphere defined by wavenumbers of amplitude k′. The Fourier transform of the window function has been computed analytically as the sum of the Fourier transforms of all the 107 cones. In reality, the cones slightly overlap, but the small common volume (2.85%) allows us to treat the cones as disjoint.
The small width of one ESP cone allows us to compute its Fourier transform analytically. Let r_o be the cone height and Δθ ≪ 1 rad its angular radius (Δθ = 16′ = 0.00465 rad). In the simple case of a cone centered on the z axis, the Fourier transform is

Ŵ_c(k) = ∫_0^{r_o} dr r² ∫_0^{2π} dφ ∫_0^{Δθ} dθ sinθ e^{−ik·r} .  (22)
Taking into account the small value of Δθ, the integrand can be approximated to first order in θ, resulting in

Ŵ_c(k) ≈ π Δθ² ∫_0^{r_o} dr r² e^{−ikr cos α} .  (23)
This expression depends on r_o, Δθ and cos α, where α is the angle between k and the z axis. For a generic cone along the direction (θ_o, φ_o) a rotation can be applied in order to bring the cone on the z axis, so the Fourier transform is

Ŵ_c(k) = ∫_0^{r_o} dr r² ∫_{Ω_{(θ_o,φ_o)}} dΩ e^{−ik·r} ≈ 2π[1 − cos(Δθ)] ∫_0^{r_o} dr r² e^{−ikr cos γ} ,  (24)
where Ω_{(θ_o,φ_o)} is the solid angle of the cone centered on (θ_o, φ_o), and γ is the angle between the wavenumber k and the direction (θ_o, φ_o). Solving the integral one gets

Ŵ_c(k) = (2πi [1 − cos(Δθ)] / (k cos γ)) × { r_o² e^{−ikr_o cos γ} − i 2r_o e^{−ikr_o cos γ}/(k cos γ) − 2 [e^{−ikr_o cos γ} − 1]/(k cos γ)² } .  (25)
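The closed form of eq. (25) is easy to validate against a direct numerical integration of the radial integral in eq. (24). The sketch below is our illustration, with arbitrary example values of r_o, Δθ and cos γ (the Δθ = 0.00465 rad is the ESP value quoted above):

```python
# Check of the closed-form single-cone transform, eq. (25), against direct
# numerical integration of eq. (24). Parameter values are illustrative.
import numpy as np
from scipy.integrate import quad


def w_cone_analytic(k, r_o, dtheta, cos_gamma):
    """Eq. (25): exact Fourier transform of a narrow cone of height r_o."""
    a = k * cos_gamma
    e = np.exp(-1j * a * r_o)
    pref = 2j * np.pi * (1.0 - np.cos(dtheta)) / a
    return pref * (r_o ** 2 * e
                   - 2j * r_o * e / a
                   - 2.0 * (e - 1.0) / a ** 2)


def w_cone_numeric(k, r_o, dtheta, cos_gamma):
    """Eq. (24): 2*pi*(1 - cos dtheta) * integral of r^2 exp(-i a r) dr."""
    a = k * cos_gamma
    re, _ = quad(lambda r: r ** 2 * np.cos(a * r), 0.0, r_o, limit=200)
    im, _ = quad(lambda r: -r ** 2 * np.sin(a * r), 0.0, r_o, limit=200)
    return 2.0 * np.pi * (1.0 - np.cos(dtheta)) * (re + 1j * im)
```

The two agree to within quadrature precision for any k cos γ ≠ 0, which is a useful sanity check before summing 107 such terms as in eq. (26).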
Figure 4. The power spectrum of the ESP window function (here we use this term for the real-space survey selection window, but in other papers the same term is used for its Fourier transform directly). The solid line is the spherically-averaged function, computed analytically using the machinery described in the text. The filled points give the same quantity, but computed numerically through a simple Montecarlo simulation. The figure also shows the three components of |W(k)|² in k-space. Note how broad the function is along the direction z, which has been chosen to be essentially perpendicular to the ESP main plane, evidencing its extreme anisotropy.

Finally, the Fourier transform of the whole ESP window function is given by the sum of the Fourier transforms of the cones, normalized to the true volume of the survey to account for the overlapping zones:

Ŵ(k) = [V / (107 V_c)] Σ_{i=1}^{107} Ŵ_{c,i}(k) ,  (26)

where i is the cone index, V_c is the volume of one cone and V is the true survey volume

V = 107 V_c (1 − β) ,  (27)
which accounts for the total volume fraction β = 0.0285 lost in the cone overlaps.
To check the reliability of our analytic approximation, we perform a numerical Fourier computation. We sample the survey volume on a regular grid by assigning 1 to the grid points inside the window function and 0 outside. We then perform an FFT. This numerical computation is limited by the finite size of the grid cells, but avoids the overlapping zones and considers the true window function. Figure 4 (filled points and solid line) compares both the analytical and numerical estimates of the window function power spectrum averaged over spherical shells. The difference is less than 5%.
The strongly anisotropic geometry of the ESP survey (see Figure 4) introduces important convolution effects between the survey window function and the galaxy distribution. To clean the observed power spectrum of these effects, we have adopted Lucy's deconvolution method (Lucy 1974; see also Efstathiou 1993 and Lin et al. 1996 for a discussion of its application to power spectrum estimates).
The Lucy technique is a general method to estimate the frequency distribution ψ(η) of a quantity η, when we know the frequency distribution φ(y) of a second quantity y, related to η by
φ(y) = ∫ ψ(η) Π(y|η) dη ,  (28)
where Π(y|η) dy is the probability that y ′ ∈ [y, y+dy[ when η ′ = η. The probability Π(y|η) must be known and the frequency distribution φ(y) is the observed one. The solution of equation 28 can be obtained by an iterative procedure. Let Q(η|y) dη be the probability that η ′ ∈ [η, η+dη[ when y ′ = y. The probability that y ′ ∈ [y, y+dy[ and η ′ ∈ [η, η + dη[ can be written as φ(y)dy Q(η|y)dη and ψ(η)dη Π(y|η)dy. From these two expressions and equation 28 we obtain
Q(η|y) = ψ(η) Π(y|η) / ∫ ψ(η) Π(y|η) dη ,  (29)
which provides the identity
ψ(η) ≡ ∫ φ(y) Q(η|y) dy .  (30)
The latter equation cannot be solved directly, since Q(η|y) depends on the unknown ψ(η) as well. Given a fiducial model for ψ(η) and the known probability Π(y|η), equation 29 provides an estimate for Q(η|y). This and the identity 30 allow us to compute an improved estimate for ψ(η). The process can then be repeated until convergence. In our specific case the equation to be solved is eq. 19, where k′² χ(k, k′) plays the role of the probability Π(y|η). If we sample k on logarithmic intervals the convolution integral becomes
P̃_c(k) = ∫ k′³ P(k′) χ(k, k′) ln(10) d(log_10 k′)  (31)
and an iterative scheme for the deconvolved spectrum can be written as
P^{m+1}(k_i) = P^m(k_i) { Σ_j [P̃_c(k_j)/P̃_c^m(k_j)] χ(k_j, k_i) } / { Σ_j χ(k_j, k_i) } ,  (32)
where

P̃_c^m(k_j) = Σ_r k_r³ P^m(k_r) χ(k_j, k_r) ln(10) Δ  (33)
and Δ = (log_10 k_{i+1} − log_10 k_i) is the logarithmic interval, while P^m denotes the m-th estimate of the spectrum. One problem with the Lucy method is that it produces a noisier and noisier solution as the iteration converges. To avoid this, Lucy suggests stopping the iteration after the first few steps. This is quite arbitrary, and we prefer to follow Baugh & Efstathiou (1993; see also Lin et al. 1996) in applying a smoothing procedure at each step:

P^m(k_i) = 0.25 P^m(k_{i−1}) + 0.5 P^m(k_i) + 0.25 P^m(k_{i+1}) .  (34)

We use P^0(k_i) = constant as the initial guess for the power spectrum, but we checked that the solution is independent of the shape of P^0(k_i). One consequence of this smoothing is that some degree of correlation is introduced among the bins of P(k).
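The iterative scheme of eqs. (32)–(34) can be demonstrated on a toy problem. The sketch below is ours: it replaces the ESP kernel χ(k, k′) of eq. (20) with a simple Gaussian mixing kernel in log k, generates a "convolved" spectrum from a known power law via the discretization of eq. (33), and then deconvolves it starting from a flat guess:

```python
# Toy demonstration of the Lucy scheme of eqs. (31)-(34). The Gaussian
# kernel in log k is a stand-in, NOT the ESP window kernel of eq. (20).
import numpy as np

logk = np.linspace(-1.5, 0.0, 25)
ks = 10.0 ** logk
DLOG = logk[1] - logk[0]                 # the Delta of eq. (33)
LN10 = np.log(10.0)
# chi[j, i] ~ chi(k_j, k_i): positive, smooth mixing kernel (toy model)
chi = np.exp(-0.5 * ((logk[:, None] - logk[None, :]) / 0.15) ** 2)


def forward(p):
    """Discretized convolution of eqs. (31)/(33)."""
    return chi @ (ks ** 3 * p) * LN10 * DLOG


def lucy_step(p, data):
    """One Lucy iteration, eq. (32)."""
    ratio = data / forward(p)
    return p * (chi.T @ ratio) / chi.sum(axis=0)


def smooth(p):
    """The 1-2-1 smoothing of eq. (34); end bins kept fixed."""
    q = p.copy()
    q[1:-1] = 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]
    return q

p_true = ks ** -1.6                       # input "true" power law
data = forward(p_true)                    # synthetic convolved spectrum
p = np.full_like(ks, p_true.mean())       # flat initial guess P^0
for _ in range(60):
    p = smooth(lucy_step(p, data))
```

Two properties are worth noting: starting from the truth, one step of eq. (32) leaves the solution unchanged (the ratio in the numerator is unity), and from the flat guess the iteration moves toward the input power law, while the smoothing of eq. (34) keeps the solution from becoming progressively noisier at the cost of some bin-to-bin correlation.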
The importance of the convolution effects on different scales can be estimated by plotting the integrand of eq. (31) as a function of k′, for different values of k (Figure 5). If the window were a large and regular sample of the Universe, the plots would be sharply peaked at k = k′, as indeed happens for large values of k (small scales). On the other hand, for small values of k, i.e. for spatial wavelengths comparable to the typical scales of the window (which are quite small, due to the strongly anisotropic shape), the true power is spread over a wide range of wavenumbers. We plot the quantity f(k′) = k′³ P(k′) χ(k, k′) ln(10) Δ / Pc(k) for k = 0.032, 0.057, 0.1, 0.18, 0.32 h Mpc⁻¹, considering the kernel relative to the geometry of the 523 h⁻¹ Mpc sample. The power spectrum P(k′) of a CDM model with Ω = 0.4 and Γ = 0.2 has been used.
NUMERICAL TESTS
We test the whole power spectrum estimation procedure through N-body simulations that we have run for selected cosmological models (Carretti 1999). The results of these simulations can be considered as a Universe from which we can extract mock catalogues with the same features as the ESP survey (geometry, galaxy density, field completeness, selection function). We then apply the whole power spectrum estimation procedure (convolved power spectrum estimator and deconvolution technique) to such mock catalogues, and we compare the result with the true power spectrum obtained from the whole set of particles.
The simulations were performed on a Cray T3E at the CINECA supercomputing centre (Bologna) using a Particle-Mesh (PM) code (Carretti & Messina 1999), adopting two cosmological models: an unbiased Standard Cold Dark Matter (SCDM: Ω₀ = 1, h = 0.5, σ₈ = 1) and an unbiased Open Cold Dark Matter with shape parameter Γ = 0.2 (OCDM: Ω₀ = 0.4, h = 0.5, σ₈ = 1). They were run with a box size of 700 h⁻¹ Mpc, 512³ grid points and 512³ particles, in order to reproduce a volume which can contain all catalogues selected for the analysis (maximum depth 633 h⁻¹ Mpc) and to select a realistic number of mock galaxies for the magnitude-limited catalogues. From each simulation box, we randomly choose a particle as origin and extract sets of particles with the same features as the three ESP catalogues. The magnitude-limited selection is then reproduced by simply assigning a weight corresponding to the observed selection function.
From each simulation and for each ESP subsample we construct 50 independent mock catalogues. The average number of particles over the 50 realisations is set to the number of galaxies observed in the corresponding true ESP sample. The power spectrum estimator is then applied to each of the 50 mock galaxy catalogues, producing an independent estimate of P (k) for that specific model and sample geometry. From each set of realisations, a mean P (k) and its standard deviation can finally be computed and compared to the true power spectrum obtained from all the particles.
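The ensemble statistics described here reduce to a per-bin mean and standard deviation over the 50 independent estimates. A trivial sketch (array names are ours, not from the paper):

```python
import numpy as np

def ensemble_stats(pk_mocks):
    """Mean P(k) and scatter over mock realisations.

    pk_mocks : shape (n_mocks, n_k); one power spectrum estimate per mock catalogue.
    Returns the ensemble mean and the unbiased standard deviation per k bin.
    """
    pk = np.asarray(pk_mocks, dtype=float)
    return pk.mean(axis=0), pk.std(axis=0, ddof=1)
```

The resulting mean and scatter are what is compared to the true spectrum of the parent simulation.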
A general result from this exercise is that for k > 0.065 h Mpc⁻¹ the systematic power suppression by the window function convolution is properly corrected for by our procedure, i.e. we are able to fully recover the input P(k). In figure 6 we show the reconstructed power spectra from the three sets of mock catalogues, compared to the "true" ones, both for the SCDM and OCDM simulations. In panel a), in particular, we also show (open circles) the raw power spectrum before applying the Lucy deconvolution, to emphasize the dramatic effect of the ESP window function on all scales. It is evident that for k > 0.065 h Mpc⁻¹ the mean deconvolved power spectra are a very good reconstruction of the original ones. In particular, in the SCDM case, where the spectrum turnover scale is well sampled by the simulation box, the technique is able to nicely follow the change of shape at small k's. This is important, because it guarantees that the deconvolution method has enough resolution to follow possible features in the data power spectrum, while at the same time recovering the correct amplitude. At smaller k's the error bars explode, and the results become meaningless. In the case of the deepest sample, ESPm633, the reconstruction shows a small systematic overestimate of the amplitude for the SCDM spectrum on the largest scales, i.e. the reconstruction algorithm seems to have difficulty in following the curvature of the spectrum accurately. This is probably due to the rather small value of the selection function in the most distant part of the sample, which puts too large a weight on the distant objects. Rather than indicating a difficulty in the technique, this is probably telling us that it is safer to truncate the data at smaller distances, as for sample ESPm523.
Comparing the results from the SCDM and OCDM mock catalogues, we have checked that the fractional errors for the two cases are quite similar. Not knowing a priori the correct cosmological model, rather than choosing one of the two models as representative, we prefer to average the fractional errors measured from the two models.
Using the mock catalogues, we can also evaluate directly the possible effects of the field incompleteness on the power spectrum estimate. Figure 7 compares the mean power spectra obtained from one set of 50 SCDM mock samples, both in the ideal case (all fields complete) and when the ESP field-to-field incompleteness is introduced and corrected for. It is clear that the incompleteness is correctly taken into account by the weighting scheme. Error bars (not reported for clarity) are also very similar.
THE POWER SPECTRUM OF ESP GALAXIES
The numerical tests performed have given us an estimate of the reliability of our method to reconstruct the true power spectrum, so that we can now apply it to the three galaxy subsamples ESPm523, ESP523 and ESPm633. The final results of the computation are shown in figure 8 and Table 2. The error bars are reported only partially, for ESPm523, to avoid confusion (the errors are similar for the three samples). The three estimates of the power spectrum are consistent with one another. Given the large amplitude of the errors (∼30% of P(k) for k > 0.15 h Mpc⁻¹, 50% for k ∼ 0.1 and 75% for k ∼ 0.065), the small differences in the slopes are not significant. In general, we can safely say that the power spectrum of ESP galaxies follows a power law P(k) ∝ kⁿ with n ∼ −2.2 for k > 0.2 h Mpc⁻¹, and n ∼ −1.6 for k < 0.2 h Mpc⁻¹. In the range 0.065 < k < 0.6 h Mpc⁻¹ there is no meaningful difference between the three estimates, which are therefore independent of the catalogue type (magnitude- or volume-limited) and of the catalogue depth.
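The power-law indices quoted here can be read off with a least-squares fit in log-log space; a minimal sketch (not the estimator used in the paper):

```python
import numpy as np

def local_slope(k, pk, kmin, kmax):
    """Best-fit index n of P(k) ~ k^n over the interval [kmin, kmax]."""
    m = (k >= kmin) & (k <= kmax)
    # straight-line fit in log10-log10 space; the slope is the index n
    n, _ = np.polyfit(np.log10(k[m]), np.log10(pk[m]), 1)
    return n
```

Applied to the tabulated estimates, such a fit would return the quoted n ∼ −2.2 and n ∼ −1.6 on the two sides of k ≈ 0.2 h Mpc⁻¹.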
In figure 8 we also plot, for comparison, redshift- and real-space power spectra computed from the simulations described in section 4 (SCDM and Γ = 0.2 OCDM). Note how the redshift-space spectra compensate for non-linear evolution at large k's, steepening the slope of P(k) over the whole observed range and bringing the global slope closer to the observed one. Although this comparison to models is deliberately limited, one can safely say that the data points (especially below k = 0.1–0.2 h Mpc⁻¹) are in better agreement with the power spectrum of the Γ = 0.2 OCDM model. This model would reproduce this observation without biasing (the normalisation adopted by the simulation is σ₈ = 1).

Figure 8. The final deconvolved estimates of the ESP power spectrum from the three samples. The plot also shows the power spectra computed from the two simulations described in the text, both in redshift- and in real-space. Note how redshift-distortion effects modify the power spectra, increasing the apparent power on large scales and reducing it on small ones.

Table 2. Results of the power spectrum estimates from the three subsamples of the ESP survey. Errors are estimated from 50 mock realisations of the samples, as detailed in the text.

The Fourier transform of the de-convolved estimate is in very good agreement with the direct measure of ξ(s) (Guzzo et al. 2000). This result also shows how the two-point correlation function, for which no correction has been applied in addition to those of standard estimators, is substantially insensitive to the effect of the window function.
k (h Mpc⁻¹) | P(k) ESPm523 | 1σ ESPm523 | P(k) ESPm633 | 1σ ESPm633 | P(k) ESP523 | 1σ ESP523
CONSISTENCY BETWEEN REAL AND FOURIER SPACE
It is interesting to compare the Fourier transform of the ESP power spectrum estimated in this work with the two-point correlation function measured independently from the same sample (Guzzo et al. 2000). This exercise is a further check of the robustness and self-consistency of the estimate of P(k). In addition, it is of specific interest to verify the effect of the survey geometry/window function in real and Fourier space. To simplify the procedure, we have first fitted the observed P(k) with a simple analytical form with two power laws connected by a smooth turnover (e.g. Peacock 1997):
P(k) = (k/k_o)^α / [ 1 + (k/k_c)^(α−n) ] ,   (35)
where k_o is a normalisation factor, k_c gives essentially the turnover scale, n is the large-scale primordial index (here fixed to n = 1), and α gives the slope for k ≫ k_c. We have used this function to reproduce the global shape of both the convolved and de-convolved estimates of P(k) from the ESP523 sample. In terms of selection function, this sample is the closest to one of the volume-limited samples used by Guzzo et al. (2000) to estimate ξ(s) from the same data. Figure 9 shows that this form provides a good description of the ESP power spectrum, with the deconvolved one characterised by k_o = 0.080 h Mpc⁻¹, k_c = 0.062 h Mpc⁻¹, α = −2.2 (note that while the slope α is a stable value, the turnover scale k_c is very poorly constrained, given the limited range covered by the data). Figure 10 shows the Fourier transform of the two fits, compared to the direct estimate of ξ(s) by Guzzo et al. (2000). Two main comments should be made here. First, our "best" estimate of P(k), deconvolved for the ESP window function according to our recipe, reproduces rather well the observed two-point correlation function (solid line). Note how the Fourier transform of the simple direct estimate suffers from a systematic lack of power as a function of scale (dashed line), as we expected from our results on the mock samples. The second, more general comment concerns the stability of the two-point correlation function. One might naively think that the narrowness of the explored volume, which gives rise to the window function in Fourier space, should affect in a similar way the estimate of clustering by the two-point correlation function. Figure 10 shows that this is not the case. In fact, the points shown here have not been subject to any kind of correction (Guzzo et al. 2000), apart from those which are standard in the estimation technique to take into account the survey boundaries.
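Fitting the form of eq. (35) can be done with any standard nonlinear least-squares routine; below is an illustrative sketch using scipy (the starting guesses and the choice to fit in log space, so that all scales are weighted evenly, are ours; n is fixed to 1 as in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def pk_model(k, ko, kc, alpha, n=1.0):
    """Two power laws joined by a smooth turnover, eq. (35) (Peacock 1997 form)."""
    return (k / ko) ** alpha / (1.0 + (k / kc) ** (alpha - n))

def log_pk_model(k, ko, kc, alpha):
    # fitting log10 P(k) weights all scales evenly
    return np.log10(pk_model(k, ko, kc, alpha))

# Example: recover the parameters quoted in the text from noiseless mock data.
k = np.logspace(-2, 0, 60)
y = pk_model(k, 0.080, 0.062, -2.2)
popt, _ = curve_fit(log_pk_model, k, np.log10(y), p0=(0.1, 0.05, -2.0),
                    bounds=([1e-3, 1e-3, -4.0], [1.0, 1.0, -0.5]))
```

With real (noisy) estimates the same call returns the best-fit (k_o, k_c, α), with k_c only weakly constrained, as noted in the text.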
Still, they seem to sample clustering out to the largest available scales in a reasonably unbiased way, essentially unaffected by the survey geometry.
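The comparison in this section relies on the standard spherical Fourier relation between the power spectrum and the correlation function, ξ(r) = (1/2π²) ∫ dk k² P(k) sin(kr)/(kr). The routine below is our own numerical sketch of this standard relation, not the paper's code:

```python
import numpy as np

def xi_from_pk(r, k, pk):
    """xi(r) = (1 / 2 pi^2) * integral dk k^2 P(k) sin(kr)/(kr), trapezoid rule.

    r  : separations, shape (n_r,)
    k  : wavenumber grid (dense, covering the support of P), shape (n_k,)
    pk : P(k) sampled on that grid, shape (n_k,)
    """
    kr = np.outer(r, k)
    integrand = k**2 * pk * np.sinc(kr / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)
    # trapezoid rule along k (written out to avoid NumPy version differences)
    avg = 0.5 * (integrand[:, 1:] + integrand[:, :-1])
    return (avg * np.diff(k)).sum(axis=1) / (2.0 * np.pi**2)
```

Transforming the fitted form of eq. (35) with such a routine is what produces the curves compared to the measured ξ(s) in Figure 10.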
COMPARISON TO OTHER REDSHIFT SURVEYS
In the six panels of figure 11, we compare the power spectrum for the ESPm523 and ESPm633 samples to a variety of results from previous surveys, selected both in the optical and infrared (IRAS) bands. In general, there is a good level of agreement among the different surveys concerning the slope of P(k) over the range sampled by the ESP estimate. Optically-selected surveys show a good agreement also in amplitude, with a possible minor differential biasing effect in the case of CfA2-SSRS2-130 (panel a; da Costa et al. 1994), which is a volume-limited sample containing galaxies brighter than ∼ M* − 1.5. The effect of different biasing values is more evident in the comparison to IRAS-based surveys (IRAS 1.2 Jy, Fisher et al. 1993; QDOT, Feldman et al. 1994; PSCz, Sutherland et al. 1999) in panels e and f. Particularly relevant is the comparison to the results of the Durham/UKST galaxy redshift survey (DUKST, Hoyle et al. 1999) in panel d. This survey is selected from the same parent photometric catalogue as the ESP (the EDSGC) and contains a comparable number of redshifts. However, it is less deep (bJ ∼ 17), while covering a much larger solid angle by measuring redshifts in a sparse-sampling fashion, picking one galaxy in three. This produces a window function which is essentially complementary to that of the ESP survey, with a good sampling of long wavelengths and a poor description of small-scale clustering, which on the contrary is well sampled by the ESP 1-in-1 redshift measurements. The agreement between these two data sets is impressive. This is a further confirmation of the quality of the deconvolution procedure we have applied to the ESP data, given the rather three-dimensional shape of the DUKST volume, which makes the window function practically negligible for this survey.
Significantly noisier is the estimate from the similarly bJ-selected Stromlo-APM redshift survey (Tadros & Efstathiou 1996), most probably because of the very sparse sampling of this survey and the smaller number of galaxies.
Finally, panel b) shows a comparison with the data from the r-selected LCRS. The power spectrum from this survey has a flatter slope with respect to our estimate from the ESP. More generally, it is flatter than practically all the other power spectra shown in the figure. This is somewhat suspicious, as the two-point correlation functions agree rather well for ESP, LCRS, Stromlo-APM and DUKST (Guzzo 1999), and might be an indication that the effect of the window function has not been fully removed from the estimated spectrum.
SUMMARY AND CONCLUSIONS
The main results obtained in this work can be summarised as follows.
• We have developed a technique to properly describe the ESP window function analytically, and then deconvolve it from the measured power spectrum, to obtain an estimate of the galaxy power spectrum. The tests performed on a number of mock catalogues drawn from large N-body simulations show that the technique is able to recover the correct shape of P(k) down to wavenumbers k ≃ 0.065 h Mpc⁻¹. In general, this technique for describing the window function analytically can be applied to any redshift survey composed of circular patches on the sky (e.g. the ongoing 2dF survey). In addition to its mathematical elegance, it has some computational advantages over the traditional method for recovering the survey window function, normally based on the generation of large Monte Carlo Poissonian realisations.

Figure 11. Comparison of the ESP P(k) with results from other surveys, as indicated by the labels (see text for references). Error bars for the ESP points are reported only partially for clarity.
• The final estimates of the ESP power spectrum, extracted from three subsamples of the survey, are in good agreement within the error bars. The bright volume-limited sample does not show a clear difference in amplitude with respect to the apparent-magnitude limited ones. This agrees with the similar behaviour found for the two-point correlation function, i.e. negligible evidence for luminosity segregation even for limiting absolute magnitudes M_bJ ∼ −20 (Guzzo et al. 2000). This is only apparently in contrast with the results of Park et al. (1994), who found evidence for luminosity segregation studying the amplitude of the power spectrum in the CfA2 survey. In fact, that analysis concentrates on a range of luminosities about 1.5 magnitudes brighter than M*, which for the CfA2 survey has a value of −18.8 (Marzke et al. 1994), i.e. nearly one magnitude fainter than for the ESP. This also agrees with the results of Benoist et al. (1996), who studied the correlation function for the SSRS2 sample, finding negligible signs of luminosity segregation for M > M*.

• All three estimates of P(k) show a similar shape, with a well defined power law kⁿ with n ≃ −2.2 for k ≥ 0.2 h Mpc⁻¹, and a smooth bend to a flatter shape (n ≃ −1.6) for smaller k's. The smallest wavenumber where a meaningful reconstruction can be performed (k ≃ 0.065 h Mpc⁻¹) does not allow us to explore the range of scales where other power spectra seem to show a flattening and hints of a turnover. In the framework of CDM models, however, the well-sampled steep slope between 0.08 and 0.3 h Mpc⁻¹ favours a low-Γ model (Γ = 0.2), consistently with the most recent CMB observations of the BOOMERANG/MAXIMA experiments (Jaffe et al. 2000).

• We have verified that the two-point correlation function ξ(s) is much less sensitive than the power spectrum to the effect of a difficult window function such as that of the ESP. In fact, the measured correlation function (without any correction) agrees with the Fourier transform of the power spectrum only after the latter has been cleaned of the convolution by the window function. This is an instructive example of how these two quantities, despite being mathematically equivalent, can be significantly different in their practical estimates and be very differently affected by the peculiarities of data samples.

• When compared to previous estimates from other surveys, the ESP power spectrum is virtually indistinguishable from that of the Durham-UKST survey over the common range of wavenumbers. In particular, between 0.1 and 1 h Mpc⁻¹ our power spectrum has significantly smaller error bars than the DUKST, by virtue of its superior small-scale sampling. The absence of any systematic amplitude difference between these two surveys (both selected from the EDSGC catalogue, but with complementary volume and sampling choices) is an important indirect indication of the quality of the deconvolution procedure applied here, and also of the accuracy of the two independent estimates. In this respect, a combination of the Durham-UKST and ESP surveys possibly provides the current best measure of P(k) for blue-selected galaxies over the full range ∼ 0.03–1 h Mpc⁻¹. It will be very interesting to compare these combined results with the power spectrum of the forthcoming 2dF redshift survey, which is also selected in the same bJ band to virtually the same limiting magnitude as the ESP.

The ESP survey (see Figures 1 and 2) extends over a strip of α × δ = 22° × 1°, plus a nearby area of 5° × 1°, five degrees west of the main strip, in the South Galactic Pole region (22h30m ≤ α ≤ 01h20m, at a mean declination of −40°15′).

Figure 1. The area covered by the ESP survey on the sky consists of a set of 107 circular fields of 16′ radius. As shown in this figure, they are arranged into 2 parallel rows and draw two thin slices over the celestial sphere of about 22° × 1° and 5° × 1° respectively, separated by ∼ 5° (from Vettolani et al. 1997).

Figure 2. The galaxy distribution in the ESP redshift survey.

Figure 5. Behaviour of the integrand of the convolution equation 31, normalized with respect to the convolved power spectrum Pc(k), for some k values.

Figure 6. Deconvolved power spectra from 50 mock ESP catalogues extracted from SCDM (left panels) and OCDM (right panels) simulations. The filled squares and the error bars give the ensemble average and standard deviation of the 50 mock samples. The solid line is the corresponding power spectrum computed from the whole simulation box using all particles. N denotes the average number of particles among the mock catalogues. The three pairs of panels, from top to bottom, refer to the three different kinds of subsamples ESPm523, ESP523, and ESPm633, as in the case of the real data. In panel a) we also plot the power spectrum estimate before applying the deconvolution procedure (open circles).

Figure 7. Test for the effect of redshift incompleteness. The power spectrum has been computed for 50 mock samples extracted from the SCDM simulation, both in the ideal case of a full redshift coverage of a sample such as ESPm523, and in the real situation, i.e. including the field-to-field incompleteness and correcting for it through the weighting scheme.

Figure 9. Fits with a simple phenomenological form of the convolved and de-convolved P(k) from the ESP523 sample.

Figure 10. The Fourier transform of the convolved (dashed line) and de-convolved (solid line) estimates of the power spectrum, compared to the two-point correlation function of the ESP survey (filled circles), estimated for essentially the same volume-limited subsample (Guzzo et al. 2000).

Table 1. Parameters of the samples extracted from the ESP survey: zmax is the maximum redshift, Dmax the maximum comoving distance in h⁻¹ Mpc units, Mlim the absolute magnitude limit for the volume-limited sample (we omit the 5 log h term) and N is the galaxy number.

Sample    zmax   Dmax (h⁻¹ Mpc)   Mlim    N
ESPm523   0.20   523              -       3092
ESPm633   0.25   633              -       3306
ESP523    0.20   523              −20.1   481

ESP523 is a volume-limited sample with Mlim ≤ −20.1 + 5 log h.

ACKNOWLEDGEMENTS

We thank an anonymous referee for suggestions that helped us to improve the paper. We thank Hume Feldman, Huan Lin, and Michael Vogeley for providing us with their power spectrum results in electronic form, and Stefano Borgani for his COBE normalisation routine. LG and EZ thank all their collaborators in the ESP survey team for their contribution to the success of the survey. This work has been partially supported by a CNAA grant.
REFERENCES

Avila G., D'Odorico S., Tarenghi M., Guzzo L., 1989, The Messenger, 55, 62
Baugh C.M., Efstathiou G., 1993, MNRAS, 265, 145
Benoist C., Maurogordato S., da Costa L.N., Cappi A., Schaeffer R., 1996, ApJ, 472, 452
Cappi A. et al., 1998, A&A, 336, 445
Carretti E., 1999, PhD thesis, Univ. Bologna
Carretti E., Messina A., 1999, in Proc. Fifth European SGI/Cray MPP Workshop, http://www.cineca.it/mpp-workshop/fullpapers/carretti/carretti.htm (astro-ph/0005512)
Colless M., 1998, in Colombi S., Mellier Y., Raban B., eds, Proc. Wide Field Surveys in Cosmology, Editions Frontieres, Paris, p. 77
da Costa L.N., Vogeley M.S., Geller M.J., Huchra J.P., Park C., 1994, ApJ, 437, L1
Feldman H.A., Kaiser N., Peacock J.A., 1994, ApJ, 426, 23
Fisher K.B., Davis M., Strauss M.A., Yahil A., Huchra J.P., 1993, ApJ, 402, 42
Guzzo L., 1999, in Aubourg É., Montmerle T., Paul J., Peter P., eds, Proc. XIX Texas Symp. on Relativistic Astrophysics, Nucl. Phys. B (Proc. Suppl.), 80, CD-ROM 09/06 (astro-ph/9911115)
Guzzo L. et al., 2000, A&A, 355, 1
Heydon-Dumbleton N.H., Collins C.A., MacGillivray H.T., 1989, MNRAS, 238, 379
Hoyle F., Baugh C.M., Shanks T., Ratcliffe A., 1999, MNRAS, 309, 659
Jaffe A.H. et al., 2000, submitted to PRL (astro-ph/0007333)
Lin H., Kirshner R.P., Shectman S.A., Landy S.D., Oemler A., Tucker D.L., Schechter P.L., 1996, ApJ, 471, 617
Lucy L.B., 1974, AJ, 79, 745
Margon B., 1998, Phil. Trans. R. Soc. Lond. A, 357, 93 (astro-ph/9805314)
Marzke R.O., Huchra J.P., Geller M.J., 1994, ApJ, 431, 569
Mattig W., 1958, Astron. Nachr., 284, 109
Park C., Vogeley M.S., Geller M.J., Huchra J.P., 1994, ApJ, 431, 569
Peacock J.A., 1997, MNRAS, 284, 885
Peacock J.A., Nicholson D., 1991, MNRAS, 253, 307
Peebles P.J.E., 1980, The Large Scale Structure of the Universe. Princeton University Press, Princeton, NJ
Schechter P., 1976, ApJ, 203, 297
Shectman S.A., Landy S.D., Oemler A., Tucker D.L., Lin H., Kirshner R.P., Schechter P.L., 1996, ApJ, 470, 172
Sutherland W. et al., 1999, MNRAS, 308, 289
Tadros H., Efstathiou G., 1996, MNRAS, 282, 1381
Tegmark M., Hamilton A.J.S., Strauss M.A., Vogeley M.S., Szalay A.S., 1998, ApJ, 499, 555
Vettolani G. et al., 1997, A&A, 325, 954
Vettolani G. et al., 1998, A&AS, 130, 323
Zucca E. et al., 1997, A&A, 326, 477
Has the Brain Maximized its Information Storage Capacity?

Armen Stepanyants ([email protected])
Cold Spring Harbor Laboratory, Bungtown Rd, Cold Spring Harbor, NY 11724
(arXiv:physics/0307065)

Abstract. Learning and memory may rely on the ability of neuronal circuits to reorganize by dendritic spine remodeling. We have looked for geometrical parameters of cortical circuits which maximize the information storage capacity associated with this mechanism. In particular, we calculated optimal volume fractions of various neuropil components. The optimal axonal and dendritic volume fractions are not significantly different from anatomical measurements in the mouse and rat neocortex, and the rat hippocampus. This has led us to propose that the maximization of information storage capacity associated with dendritic spine remodeling may have been an important driving force in the evolution of the cortex.
Introduction
Many important brain functions, such as learning and memory, depend on the plasticity of neuronal circuits. Traditionally, plasticity is thought to rely on the following biological mechanisms: changes in the strengths of existing synaptic connections, formation and elimination of synapses without remodeling of neuronal arbors, and remodeling of dendritic and axonal branches. In order to understand the respective roles of these mechanisms, we began to evaluate their plasticity potentials, i.e. the numbers of different circuits attainable by each mechanism. In particular, we calculated the plasticity potential associated with the reorganization of neuronal circuits by dendritic spine remodeling. We expressed our results in terms of the logarithm of the number of available circuits, or the information storage capacity.

Next, we looked for geometrical parameters of the cortical circuits which maximize the information storage capacity associated with this mechanism. We found optimal axonal and dendritic volume fractions as functions of dendritic and axonal length densities, average dendritic spine length, and density of synapses. We compared these optimal geometrical parameters with the anatomical data and found a reasonable agreement.
Information storage capacity
We start by briefly reviewing the framework behind the calculation of the information storage capacity due to the formation and elimination of synapses, which has been developed previously¹,². Because the majority of excitatory synapses are located on spines³, the reorganization of neuronal circuits can be implemented by retracting some dendritic spines from pre-synaptic axons and extending them towards other axons. Such switching of pre-synaptic partners is only possible if there is a greater number of axons within a spine length of a dendrite than the number of dendritic spines (Fig. 1). We refer to locations in neuropil where an axon is present within a spine length of a dendrite as potential synapses. A potential synapse is a necessary but not sufficient condition for an actual synaptic connection.
If the number of potential synapses is equal to the number of spines (Fig. 1B), then there are no available pre-synaptic partners and spine remodeling cannot contribute to circuit reorganization. If the number of potential synapses is greater than the number of spines (Fig. 1C), then spine remodeling can contribute to circuit reorganization (Fig. 1D).

Figure 1. Spine remodeling as a mechanism of circuit reorganization¹,². A. Spiny dendrite in macaque neocortex visualized by light microscopy. B. A sketch showing the dendritic branch from A together with adjacent axons (thin lines). The number of potential synapses is equal to the number of spines, and spine remodeling cannot contribute to circuit reorganization. C. An alternative scenario, in which spine remodeling can contribute to circuit reorganization. Actual dendritic spines (solid gray) form actual synapses. Potential synapses include both actual synapses and other possible spine locations (dotted contours). The number of potential synapses is much greater than the number of spines. D. New circuit obtained from C through spine remodeling.
To determine which scenario (Fig. 1B or Fig. 1C) better reflects the real brain, we have derived a mathematical expression (making no assumptions about neuronal arbor shapes) to evaluate the ratio of numbers of actual and potential synapses. We call this ratio the filling fraction, f :
f = n_s / ( 2 s ρ_a ρ_d ⟨sin θ⟩ ) .   (1)
Here s is the spine length (measured from the tip of the spine to the midline of the dendritic branch), ρ_a and ρ_d are the axonal and dendritic length densities, and ⟨sin θ⟩ is the mean sine of the angle between axonal and dendritic branches forming potential synapses. This mean sine is equal to π/4 for uniformly distributed axonal or dendritic branches (cortex, hippocampus), and to 1 for axons and dendrites intersecting at right angles (parallel fibers of granule cells and dendrites of Purkinje neurons in cerebellum). The filling fraction calculated according to Eq. (1) for different species and brain areas¹,² is in the range 0.1–0.3. As a result, cortical micro-architecture is depicted better in Figs. 1C,D than in Fig. 1B.
The filling fraction, f, is a measure of the plasticity potential associated with spine remodeling, as it reflects the number of different circuits that can be realized in a given neuropil volume through spine reorganization. The information storage capacity of a unit volume of neuropil due to spine remodeling is defined as the base-two logarithm of the number of different synaptic connectivity patterns. We have shown that the information storage capacity depends on the filling fraction as [1]:
i = n_s log_2 f^(-1).   (2)
Eq. (2) is an approximation of a more general expression for the information storage capacity [1,2], valid for biologically relevant values of the filling fraction, f < 0.4.
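Equations (1) and (2) are simple enough to evaluate directly. The sketch below plugs in illustrative neocortical numbers (spine length ~2 μm, axonal length density ~4 μm/μm³, dendritic length density ~0.4 μm/μm³, synapse density ~1 μm⁻³); these values are order-of-magnitude stand-ins, not measurements taken from this paper.

```python
import math

def filling_fraction(n_s, s, rho_a, rho_d, mean_sin=math.pi / 4):
    """Eq. (1): ratio of actual to potential synapse densities."""
    n_potential = 2 * s * rho_a * rho_d * mean_sin  # potential synapses per unit volume
    return n_s / n_potential

def capacity_per_volume(n_s, f):
    """Eq. (2): bits stored per unit volume via spine remodeling (valid for f < 0.4)."""
    return n_s * math.log2(1.0 / f)

# Illustrative values in micrometer units (assumptions, not data from this paper):
f = filling_fraction(n_s=1.0, s=2.0, rho_a=4.0, rho_d=0.4)
i = capacity_per_volume(n_s=1.0, f=f)
print(f, i)
```

With these inputs f lands near 0.2, inside the 0.1-0.3 range quoted above, and the capacity comes out a little over 2 bits per unit volume.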
Optimal neuropil
A unit volume of neuropil can be broken down into several components,

κ_a + κ_d + κ_syn + κ_rest = 1,   (3)
where κ_a and κ_d are the axonal (not including boutons) and dendritic (without spines) volume fractions, and κ_syn and κ_rest are described in the Methods. However, due to the constraint on the volume of neuropil, Eq. (3), the increase in ρ_{a,d} will be accompanied by a reduction in the synaptic density, n_s. This reduction, in turn, will have an opposite effect on information capacity. The three considered components, i.e. axonal, dendritic, and synaptic, have to be perfectly balanced in order to achieve the maximum information capacity. This balance is realized when axons and dendrites occupy equal fractions of neuropil given by the following function of the filling fraction only (see Methods for details):
κ_a = κ_d = (1 − κ_rest) / (1.87 − ln f).   (4)
This function of the filling fraction is illustrated in Fig. 2 for the special case κ_rest = 0.
Anatomical data
Axonal and dendritic volume fractions obtained with electron microscopy, and the corresponding filling fractions, are presented in Table 1. In comparing these volume fractions with the optimal fractions, Eq. (4), we set κ_rest = 0. This is justified by the fact that κ_rest, which consists mainly of the extracellular space, is significantly reduced in serial section electron microscopy preparation. The anatomical fractions from Table 1 contain a large spread, and are within 20% of the theoretically predicted value (Fig. 2). A more consistent study, which would measure volume fractions and filling fractions from the same brain, is needed in order to further justify the hypothesis that neuropil is optimally designed to store information in patterns of synaptic connectivity.
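Assuming Eq. (4) has the form κ_a = κ_d = (1 − κ_rest)/(1.87 − ln f) (our reading of the garbled display), the comparison with the measured values can be reproduced in a couple of lines, using the filling fractions 0.26 (neocortex) and 0.22 (hippocampus) from Table 1:

```python
import math

def optimal_fraction(f, kappa_rest=0.0):
    """Eq. (4): optimal axonal (= dendritic) volume fraction for filling fraction f."""
    return (1.0 - kappa_rest) / (1.87 - math.log(f))

neocortex = optimal_fraction(0.26)    # compare with measured kappa_a ~ 0.31-0.36
hippocampus = optimal_fraction(0.22)  # compare with measured kappa_a ~ 0.29
print(round(neocortex, 3), round(hippocampus, 3))
```

Both predictions land near 0.3, consistent with the "within 20% of the measured fractions" claim above.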
Conclusion
We calculated volume fractions occupied by axons and dendrites in a neuropil optimally designed to maximize information storage capacity due to spine remodeling. These optimal volume fractions are not significantly different from those measured anatomically. This leads us to suggest that maximizing information storage capacity due to spine remodeling may have been an important driving force in the evolution of the cerebral cortex.
Methods
In this section we derive the expression for the optimal values of the axonal and dendritic volume fractions, Eq. (4) of the main text.
Here κ_syn is the synaptic volume fraction, which includes dendritic spines, axonal boutons, and the part of glia dedicated to synapse maintenance; κ_rest denotes the remaining part of the neuropil, including the rest of glia and the extracellular space. At first glance, it might appear that in order to obtain a large structural synaptic information storage capacity, i, it is sufficient to increase the axonal and/or dendritic length densities, ρ_{a,d}. This change would result in a decrease of the filling fraction, f [according to Eq. (1)], and consequently an increase in the information capacity, i, Eq. (2).
Figure 2: Optimal volume fractions of axons and dendrites as a function of the filling fraction, f (solid line). Circles represent anatomical data from Table 1.
This optimization problem, constrained by the fact that neuropil consists of several particular components, Eq. (3), is equivalent to the problem of finding the maximum of the Lagrange function, Eq. (5).
Table 1: Axonal and dendritic volume fractions, and corresponding filling fractions, for the mouse and rat neocortex and the rat hippocampus. The filling fraction for the CA1 field of the rat hippocampus is based on the CA3 to CA1 projection only [1,2].

                                 Axonal volume     Dendritic volume    Filling
                                 fraction κ_a      fraction κ_d        fraction f
Mouse and rat neocortex          0.31±0.09 [4]     0.24±0.07 [4]       0.26 [1,2]
                                 0.36±0.03 [5]     0.23±0.02 [5]
                                 0.34 [3]          0.35 [3]
Rat hippocampus CA1
(CA3 to CA1 projection)          0.29±0.03 [5]     0.26±0.03 [5]       0.22 [1,2]
I = i + λ (1 − κ_a − κ_d − κ_syn − κ_rest),   (5)

where λ is a positive parameter. We search for the maximum of the function I by varying the axonal and dendritic length densities.

Acknowledgments

I thank Dr. D.B. Chklovskii for numerous discussions and support during the writing of this manuscript.
References

1. Stepanyants, A., Hof, P. R. & Chklovskii, D. B. Geometry and structural plasticity of synaptic connectivity. Neuron 34, 275-288 (2002).
2. Stepanyants, A., Hof, P. R. & Chklovskii, D. B. Information storage capacity of synaptic connectivity patterns. Neurocomputing 44, 661-665 (2002).
3. Braitenberg, V. & Schüz, A. Cortex: Statistics and Geometry of Neuronal Connectivity (Springer, Berlin; New York, 1998).
4. Ikari, K. & Hayashi, M. Aging in the neuropil of cerebral cortex--a quantitative ultrastructural study. Folia Psychiatr Neurol Jpn 35, 477-486 (1981).
5. Chklovskii, D. B., Schikorski, T. & Stevens, C. F. Wiring optimization in cortical circuits. Neuron 34, 341-347 (2002).
EXTREMA OF MULTI-DIMENSIONAL GAUSSIAN PROCESSES OVER RANDOM INTERVALS

Lanpeng Ji and Xiaofan Peng

25 Sep 2020. arXiv:2009.12085. DOI: 10.1017/jpr.2021.37.

Keywords: joint tail asymptotics; Gaussian processes; perturbed random walk; ruin probability; fluid model; fractional Brownian motion; regenerative model. AMS Classification: primary 60G15; secondary 60G70.

Abstract. This paper studies the joint tail asymptotics of extrema of the multi-dimensional Gaussian process over random intervals, defined as

P(u) := P{∩_{i=1}^n (sup_{t∈[0,T_i]} (X_i(t) + c_i t) > a_i u)},

where X_i(t), t ≥ 0, i = 1, 2, · · · , n, are independent centered Gaussian processes with stationary increments, T = (T_1, · · · , T_n) is a regularly varying random vector with positive components, which is independent of the Gaussian processes, and c_i ∈ R, a_i > 0, i = 1, 2, · · · , n. Our result shows that the structure of the asymptotics of P(u) is determined by the signs of the drifts c_i. We also discuss a relevant multi-dimensional regenerative model and derive the corresponding ruin probability.
Introduction
Let X(t), t ≥ 0, be an almost surely (a.s.) continuous centered Gaussian process with stationary increments and X(0) = 0. Motivated by its applications to the hybrid fluid and ruin models, the seminal paper [1] derived the exact tail asymptotics of

P{sup_{t∈[0,T]} X(t) > u},  u → ∞,   (1)
with T being an independent of X regularly varying random variable. Since then the study of the tail asymptotics of supremum on random interval has attracted substantial interest in the literature. We refer to [2,3,4,5,6,7] for various extensions to general (non-centered) Gaussian or Gaussian-related processes. In the aforementioned contributions, various different tail distributions for T have been discussed, and it has been shown that the variability of T influences the form of the asymptotics of (1), leading to qualitatively different structures.
The primary aim of this paper is to analyze the asymptotics of a multi-dimensional counterpart of (1). More precisely, consider a multi-dimensional centered Gaussian process

X(t) = (X_1(t), X_2(t), · · · , X_n(t)),  t ≥ 0,   (2)

with independent coordinates, where each X_i(t), t ≥ 0, has stationary increments, a.s. continuous sample paths and X_i(0) = 0, and let T = (T_1, · · · , T_n) be an independent of X regularly varying random vector with positive components. We are interested in the asymptotics of

P(u) := P{∩_{i=1}^n (sup_{t∈[0,T_i]} (X_i(t) + c_i t) > a_i u)},  u → ∞,   (3)
where c i ∈ R, a i > 0, i = 1, 2, · · · , n.
Extremal analysis of multi-dimensional Gaussian processes has been an active research area in recent years; see [8,9,10,11,12] and references therein. In most of these contributions, the asymptotic behaviour of the probability that X (possibly with trend) enters an upper orthant over a finite-time or infinite-time interval is discussed; this problem is also connected with the conjunction problem for Gaussian processes first studied by Worsley and Friston [13]. Investigations on the joint tail asymptotics of multiple extrema as defined in (3) have been known to be more challenging. The current literature has only focused on the case with deterministic times T_1 = · · · = T_n and some additional assumptions on the correlation structure of the X_i's. In [14,8] large deviation type results are obtained, and more recently in [15,16] exact asymptotics are obtained for correlated two-dimensional Brownian motion. It is worth mentioning that a large deviation result for the multivariate maxima of a discrete Gaussian model is discussed recently in [17].
In order to avoid more technical difficulties, the coordinates of the multi-dimensional process X in (2) are assumed to be independent. The dependence among the extrema in (3) is driven by the structure of the multivariate regularly varying T . Interestingly, we observe in Theorem 3.1 that the form of the asymptotics of (3) is determined by the signs of the drifts c i 's.
Apart from its theoretical interest, the motivation to analyse the asymptotic properties of P(u) is related to numerous applications in modern multi-dimensional risk theory, financial mathematics and fluid queueing networks. For example, consider an insurance company which runs n lines of business. The surplus process of the ith business line can be modelled by a time-changed Gaussian process

R_i(t) = a_i u + c_i Y_i(t) − X_i(Y_i(t)),  t ≥ 0,

where a_i u > 0 is the initial capital (considered as the proportion of u allocated to the ith business line, with Σ_{i=1}^n a_i = 1), c_i > 0 is the net premium rate, X_i(t), t ≥ 0, is the net loss process, and Y_i(t), t ≥ 0, is a positive increasing function modelling the so-called "operational time" for the ith business line. We refer to [18,19] and [5] for detailed discussions on multi-dimensional risk models and time-changed risk models, respectively.
Of interest in risk theory is the study of the probability of ruin of all the business lines within some finite (deterministic) time T > 0, defined by

ϕ(u) := P{∩_{i=1}^n (inf_{t∈[0,T]} R_i(t) < 0)} = P{∩_{i=1}^n (sup_{t∈[0,T]} (X_i(Y_i(t)) + c_i Y_i(t)) > a_i u)}.
If additionally all the operational time processes Y i (t), t ≥ 0 have a.s. continuous sample paths, then we have ϕ(u) = P (u) with T = Y (T ), and thus the derived result can be applied to estimate this ruin probability. Note that the dependence among different business lines is introduced by the dependence among the operational time processes Y i 's. As a simple example we can consider Y i (t) = Θ i t, t ≥ 0, with Θ = (Θ 1 , · · · , Θ n ) being a multivariate regularly varying random vector. Additionally, multi-dimensional time-changed (or subordinate)
Gaussian processes have been recently proved to be good candidates for modelling the log-return processes of multiple assets; see, e.g., [20,21,22]. As the joint distribution of extrema of asset returns is important in finance problems, e.g., [23], we expect the obtained results for (3) might also be interesting in financial mathematics.
As a relevant application, we shall discuss a multi-dimensional regenerative model, which is motivated by its relevance to risk models and fluid queueing models. Essentially, the multi-dimensional regenerative process is a process with a random alternating environment, where an independent multi-dimensional fractional Brownian motion (fBm) with trend is assigned at each environment alternating time. We refer to Section 4 for more detail. By analysing a related multi-dimensional perturbed random walk, we obtain in Theorem 4.1 the ruin probability of the multi-dimensional regenerative model. This generalizes some of the results in [24] and [25] to the multi-dimensional setting. Note in passing that some related stochastic models with random sampling or resetting have been discussed in the recent literature; see, e.g., [26,27,28].
Organization of the rest of the paper: In Section 2 we introduce some notation, recall the definition of multivariate regular variation, and present some preliminary results on the extremes of one-dimensional Gaussian processes. The result for (3) is displayed in Section 3, and the ruin probability of the multi-dimensional regenerative model is discussed in Section 4. The proofs are relegated to Section 5 and Section 6. Some useful results on multivariate regular variation are discussed in the Appendix.
Notation and Preliminaries
We shall use some standard notation which is common when dealing with vectors. All operations on vectors are meant componentwise. For instance, for any given x = (x_1, . . . , x_n) ∈ R^n and y = (y_1, . . . , y_n) ∈ R^n, we write xy = (x_1y_1, · · · , x_ny_n), and write x > y if and only if x_i > y_i for all 1 ≤ i ≤ n. Furthermore, for two positive functions f, h and some u_0 ∈ [−∞, ∞], write f(u) ≲ h(u) or h(u) ≳ f(u) if lim sup_{u→u_0} f(u)/h(u) ≤ 1, write h(u) ∼ f(u) if lim_{u→u_0} f(u)/h(u) = 1, write f(u) = o(h(u)) if lim_{u→u_0} f(u)/h(u) = 0, and write f(u) ≍ h(u) if f(u)/h(u) is bounded from both below and above for all sufficiently large u.
Next, let us recall the definition and some implications of multivariate regular variation. We refer to [29,30,31] for more detailed discussions. Let R̄^n_0 = R̄^n \ {0} with R̄ = R ∪ {−∞, ∞}. An R^n-valued random vector X is said to be regularly varying if there exists a non-null Radon measure ν on the Borel σ-field B(R̄^n_0) with ν(R̄^n_0 \ R^n) = 0 such that

P{x^{−1} X ∈ ·} / P{|X| > x} →_v ν(·),  x → ∞.

Here |·| is any norm in R^n and →_v refers to vague convergence on B(R̄^n_0). It is known that ν necessarily satisfies the homogeneity property ν(sK) = s^{−α} ν(K), s > 0, for some α > 0 and all Borel sets K in B(R̄^n_0). In what follows, we say that such defined X is regularly varying with index α and limiting measure ν. An implication of the homogeneity property of ν is that all rectangle sets of the form [a, b] = {x : a ≤ x ≤ b} in R̄^n_0 are ν-continuity sets. Furthermore, |X| is regularly varying at infinity with index α, i.e., P{|X| > x} ∼ x^{−α} L(x), x → ∞, for some slowly varying function L(x). Some useful results on multivariate regular variation are discussed in the Appendix.
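A toy Monte Carlo illustration of this definition (our construction, not from the paper): take the fully dependent vector T = (R, 2R) with R Pareto(α = 2) and the max-norm, so |T| = 2R. For this example the defining ratio can be computed exactly, P{T_1 > x, T_2 > x} / P{|T| > x} = 1/4 for all x ≥ 2, which the simulation recovers:

```python
import random

random.seed(1)
ALPHA = 2.0
N = 200_000
# Pareto(alpha) on [1, inf) via inverse transform: R = U^(-1/alpha)
R = [random.random() ** (-1.0 / ALPHA) for _ in range(N)]

x = 10.0
norm_exceed = sum(1 for r in R if 2 * r > x)            # event {|T| > x} under the max-norm
rect_exceed = sum(1 for r in R if r > x and 2 * r > x)  # event {T1 > x, T2 > x}
ratio = rect_exceed / norm_exceed
print(ratio)  # close to nu((1, inf] x (1, inf]) = 1/4
```

The homogeneity property can be probed the same way: replacing the rectangle threshold x by sx scales the ratio by s^(−α).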
In what follows, we review some results on the extremes of a one-dimensional Gaussian process with negative drift derived in [32]. Let X(t), t ≥ 0, be an a.s. continuous centered Gaussian process with stationary increments and X(0) = 0, and let c > 0 be some constant. We shall present the exact asymptotics of

ψ(u) := P{sup_{t≥0} (X(t) − ct) > u},  u → ∞.
Below are some assumptions that the variance function σ²(t) = Var(X(t)) might satisfy:
C1: σ is continuous on [0, ∞) and ultimately strictly increasing;
C2: σ is regularly varying at infinity with index H for some H ∈ (0, 1);
C3: σ is regularly varying at 0 with index λ for some λ ∈ (0, 1);
C4: σ² is ultimately twice continuously differentiable and its first derivative σ̇² and second derivative σ̈² are both ultimately monotone.
Note that in the above, σ̇² and σ̈² denote the first and second derivatives of σ², not the squares of the derivatives of σ. In the sequel, provided it exists, we denote by ←σ an asymptotic inverse near infinity or zero of σ; recall that it is (asymptotically uniquely) defined by ←σ(σ(t)) ∼ σ(←σ(t)) ∼ t. It depends on the context whether ←σ is an asymptotic inverse near zero or infinity.
One known example that satisfies assumptions C1-C4 is the fBm {B_H(t), t ≥ 0} with Hurst index H ∈ (0, 1), i.e., an H-self-similar centered Gaussian process with stationary increments and covariance function given by

Cov(B_H(t), B_H(s)) = (1/2)(|t|^{2H} + |s|^{2H} − |t − s|^{2H}),  t, s ∈ R.
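A quick consistency check of this covariance: the variance of an increment computed from it collapses to |t − s|^{2H}, i.e., the increments are stationary. A minimal sketch:

```python
def fbm_cov(t, s, H):
    """Covariance of fractional Brownian motion with Hurst index H."""
    return 0.5 * (abs(t) ** (2 * H) + abs(s) ** (2 * H) - abs(t - s) ** (2 * H))

def increment_var(t, s, H):
    # Var(B_H(t) - B_H(s)) = Var B_H(t) + Var B_H(s) - 2 Cov(B_H(t), B_H(s))
    return fbm_cov(t, t, H) + fbm_cov(s, s, H) - 2 * fbm_cov(t, s, H)

for H in (0.3, 0.5, 0.7):
    for (t, s) in ((1.0, 4.0), (2.5, 0.5)):
        assert abs(increment_var(t, s, H) - abs(t - s) ** (2 * H)) < 1e-9
print("increment variance depends on |t - s| only")
```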
We introduce the following notation:
C H,λ1,λ2 = 2 1−1/λ2 πλ 1 1 H 1/λ2 H 1 − H λ1+H− 1 2 + 1 λ 2 (1−H)
.
For an a.s. continuous centered Gaussian process Z(t), t ≥ 0, with stationary increments and variance function σ²_Z, we define the generalized Pickands constant

H_Z = lim_{T→∞} (1/T) E[exp(sup_{t∈[0,T]} (√2 Z(t) − σ²_Z(t)))],

provided both the expectation and the limit exist. When Z = B_H, the constant H_{B_H} is the well-known Pickands constant; see [33]. For convenience, sometimes we also write H_{σ²_Z} for H_Z. Denote in the following by Ψ(·) the survival function of the N(0,1) distribution. It is known that

Ψ(u) = (1/√(2π)) ∫_u^∞ e^{−x²/2} dx ∼ (1/(√(2π) u)) e^{−u²/2},  u → ∞.   (4)
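The equivalence in (4) is easy to verify numerically, evaluating Ψ exactly through the complementary error function, Ψ(u) = erfc(u/√2)/2:

```python
import math

def Psi(u):
    """Exact N(0,1) survival function."""
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def Psi_asym(u):
    """Right-hand side of (4): e^{-u^2/2} / (sqrt(2 pi) u)."""
    return math.exp(-u * u / 2.0) / (math.sqrt(2.0 * math.pi) * u)

for u in (2.0, 5.0, 10.0):
    print(u, Psi(u) / Psi_asym(u))  # ratio increases towards 1 as u grows
```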
The following result is derived in Proposition 2 of [32] (here we consider the particular trend function φ(t) = ct, t ≥ 0).

Proposition 2.1. Let X(t), t ≥ 0, be an a.s. continuous centered Gaussian process with stationary increments and X(0) = 0. Suppose that C1-C4 hold. We have, as u → ∞:

(i) if σ²(u)/u → ∞, then

ψ(u) ∼ H_{B_H} C_{H,1,H} ((1−H)/H) c^{1−H} (σ(u)/←σ(σ²(u)/u)) Ψ(inf_{t≥0} u(1+t)/σ(ut/c));

(ii) if σ²(u)/u → G ∈ (0, ∞), then

ψ(u) ∼ H_{(2c²/G²)σ²} √(2/π) (c^{1+H}/H) σ(u) Ψ(inf_{t≥0} u(1+t)/σ(ut/c));

(iii) if σ²(u)/u → 0 (here we need regularity of σ and its inverse at 0), then

ψ(u) ∼ H_{B_λ} C_{H,1,λ} ((1−H)/H)^{H/λ} c^{−1−H+2H/λ} (σ(u)/←σ(σ²(u)/u)) Ψ(inf_{t≥0} u(1+t)/σ(ut/c)).
As a special case of Proposition 2.1 we have the following result (see Corollary 1 in [32] or [19]). This will be useful in the proofs below.

Corollary 2.2. If X(t) = B_H(t), t ≥ 0, is the fBm with index H ∈ (0, 1), then, as u → ∞,

P{sup_{t≥0} (B_H(t) − ct) > u} ∼ K_H H_{B_H} u^{H+1/H−2} Ψ(c^H u^{1−H} / (H^H (1−H)^{1−H})),

with constant K_H = (2^{1/2−1/(2H)} √π / √(H(1−H))) (c^H / (H^H (1−H)^{1−H}))^{1/H−1}.
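For H = 1/2 the corollary can be sanity-checked against a closed form: B_{1/2} is standard Brownian motion, H_{B_{1/2}} = 1, and P{sup_{t≥0}(B(t) − ct) > u} = e^{−2cu} exactly. The sketch below uses our parsing of the flattened constant, K_H = (2^{1/2−1/(2H)}√π/√(H(1−H))) (c^H/(H^H(1−H)^{1−H}))^{1/H−1}; with this form the two sides agree up to the accuracy of the Ψ-asymptotics (4):

```python
import math

def Psi(u):
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def corollary_2_2(u, c, H, pickands=1.0):
    """Right-hand side of Corollary 2.2 (pickands = H_{B_H}; equals 1 for H = 1/2)."""
    A = c ** H / (H ** H * (1.0 - H) ** (1.0 - H))
    K = (2.0 ** (0.5 - 0.5 / H) * math.sqrt(math.pi) / math.sqrt(H * (1.0 - H))
         * A ** (1.0 / H - 1.0))
    return K * pickands * u ** (H + 1.0 / H - 2.0) * Psi(A * u ** (1.0 - H))

u, c = 50.0, 1.0
exact = math.exp(-2.0 * c * u)        # Brownian case, known identity
approx = corollary_2_2(u, c, H=0.5)
print(approx / exact)                 # close to 1 for large u
```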
Main results
Without loss of generality, we assume that in (3) there are n_− coordinates with negative drift, n_0 coordinates without drift and n_+ coordinates with positive drift, i.e.,

c_i < 0, i = 1, · · · , n_−;  c_i = 0, i = n_−+1, · · · , n_−+n_0;  c_i > 0, i = n_−+n_0+1, · · · , n,

where 0 ≤ n_−, n_0, n_+ ≤ n are such that n_− + n_0 + n_+ = n. We impose the following assumptions for the standard deviation functions σ_i(t) = √(Var(X_i(t))) of the Gaussian processes X_i(t), i = 1, · · · , n.
Assumption I: For i = 1, · · · , n − , σ i (t) satisfies the assumptions C1-C4 with the parameters involved indexed by i. For i = n − + 1, · · · , n − + n 0 , σ i (t) satisfies the assumptions C1-C3 with the parameters involved indexed by i. For i = n − + n 0 + 1, · · · , n, σ i (t) satisfies the assumptions C1-C2 with the parameters involved indexed by i.
Denote

ξ_i := sup_{t∈[0,1]} B_{H_i}(t),  t_i^* = H_i/(1−H_i).   (5)
Given a Radon measure ν, define

ν̄(K) := E[ν(ξ^{−1/H} K)],  K ∈ B([0, ∞]^n \ {0}),   (6)

where ξ^{−1/H} K = {(ξ_1^{−1/H_1} d_1, · · · , ξ_n^{−1/H_n} d_n) : (d_1, · · · , d_n) ∈ K}. Further, note that for i = 1, · · · , n_− (where c_i < 0), the asymptotic formula, as u → ∞, of

ψ_i(u) = P{sup_{t≥0} (X_i(t) + c_i t) > u}   (7)
is available from Proposition 2.1 under Assumption I.

Theorem 3.1. Suppose that X(t), t ≥ 0, satisfies Assumption I, and that T is an independent of X regularly varying random vector with index α and limiting measure ν. Further assume, without loss of generality, that there are m (≤ n_0) positive constants k_i such that ←σ_i(u) ∼ k_i ←σ_{n_−+1}(u) for i = n_−+1, · · · , n_−+m, and ←σ_i(u) = o(←σ_{n_−+1}(u)) for i = n_−+m+1, · · · , n_−+n_0. We have, with the convention ∏_{i=1}^{0} = 1:

(i) If n_0 > 0, then, as u → ∞,

P(u) ∼ ν̄((k a_0^{1/H_{n_−+1}}, ∞]) P{|T| > ←σ_{n_−+1}(u)} ∏_{i=1}^{n_−} ψ_i(a_i u),

where ν̄ and the ψ_i's are defined in (6) and (7), respectively, and

k a_0^{1/H_{n_−+1}} = (0, · · · , 0, k_{n_−+1} a_{n_−+1}^{1/H_{n_−+1}}, · · · , k_{n_−+m} a_{n_−+m}^{1/H_{n_−+1}}, 0, · · · , 0).

(ii) If n_0 = 0, then, as u → ∞,

P(u) ∼ ν((a^1, ∞]) P{|T| > u} ∏_{i=1}^{n_−} ψ_i(a_i u),

where a^1 = (t_1^*/|c_1|, · · · , t_{n_−}^*/|c_{n_−}|, a_{n_−+1}/c_{n_−+1}, · · · , a_n/c_n).
Remark 3.2. As a special case, we can obtain from Theorem 3.1 some results for the one-dimensional model. Specifically, let c > 0 be some constant; then, as u → ∞,

P{sup_{t∈[0,T]} X(t) > u} ∼ E[(sup_{t∈[0,1]} B_H(t))^{α/H}] P{T > ←σ(u)},   (8)

P{sup_{t∈[0,T]} (X(t) − ct) > u} ∼ (c(1−H)/H)^α P{T > u} ψ(u),   (9)

P{sup_{t∈[0,T]} (X(t) + ct) > u} ∼ c^α P{T > u}.   (10)

Note that (8) is derived in Theorem 2.1 of [1], and (9) is discussed in [5] only for the fBm case. The result in (10) seems to be new.
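Formula (8) can be checked numerically in the Brownian case H = 1/2 with α = 1 (our test setup, not from the paper): then ←σ(u) = u², E[(sup_{t∈[0,1]} B_{1/2}(t))²] = E[N²] = 1 by the reflection principle, and P{sup_{t∈[0,T]} B(t) > u} = E[2Ψ(u/√T)] can be integrated against a Pareto(1) law for T and compared with P{T > u²} = u^{−2}:

```python
import math

def Psi(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def lhs(u, grid=100_000, t_max=1e10):
    """P{sup_{[0,T]} B(t) > u} for T ~ Pareto(1): integral of 2 Psi(u/sqrt(t)) t^{-2} dt, t >= 1."""
    # trapezoidal rule on a logarithmic grid (dt = t dlog t)
    log_hi = math.log(t_max)
    h = log_hi / grid
    total, prev = 0.0, None
    for k in range(grid + 1):
        t = math.exp(k * h)
        val = 2.0 * Psi(u / math.sqrt(t)) / t  # integrand t^{-2} times Jacobian t
        if prev is not None:
            total += 0.5 * (prev + val) * h
        prev = val
    return total

u = 100.0
ratio = lhs(u) / u ** (-2.0)  # (8) predicts ratio -> E[N^2] = 1
print(ratio)
```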
We conclude this section with an interesting example of multi-dimensional subordinate Brownian motion; see, e.g., [21].
Example 3.3. For each i = 0, 1, · · · , n, let {S_i(t), t ≥ 0} be an independent α_i-stable subordinator with α_i ∈ (0, 1), i.e., S_i(t) =_d S_{α_i}(t^{1/α_i}, 1, 0), where S_α(σ, β, d) denotes a stable random variable with stability index α, scale parameter σ, skewness parameter β and drift parameter d. It is known (e.g., Property 1.2.15 in [34]) that for any fixed constant T > 0,

P{S_i(T) > t} ∼ C_{α_i,T} t^{−α_i},  t → ∞,  with C_{α_i,T} = T / (Γ(1−α_i) cos(πα_i/2)).

Assume α_0 < α_i for all i = 1, 2, · · · , n.
Define an n-dimensional subordinator as
Y (t) := (S 0 (t) + S 1 (t), · · · , S 0 (t) + S n (t)), t ≥ 0.
We consider an n-dimensional subordinate Brownian motion with drift defined as

X(t) = (B_1(Y_1(t)) + c_1 Y_1(t), · · · , B_n(Y_n(t)) + c_n Y_n(t)),  t ≥ 0,

where B_i(t), t ≥ 0, i = 1, · · · , n, are independent standard Brownian motions which are independent of Y, and c_i ∈ R.
Define, for any a_i > 0, i = 1, 2, · · · , n, T > 0 and u > 0,

P_B(u) := P{∩_{i=1}^n (sup_{t∈[0,T]} (B_i(Y_i(t)) + c_i Y_i(t)) > a_i u)}.
For illustrative purposes and to avoid further technicality, we only consider the case where all the c_i's have the same sign. As an application of Theorem 3.1 we obtain the asymptotic behaviour of P_B(u), u → ∞, as follows:

(i) If c_i > 0 for all i = 1, · · · , n, then P_B(u) ∼ C_{α_0,T} (max_{i=1}^n (a_i/c_i) u)^{−α_0}.

(ii) If c_i = 0 for all i = 1, · · · , n, then P_B(u) ≍ u^{−2α_0}.

(iii) If c_i < 0 and the density function of S_i(T) is ultimately monotone for all i = 0, 1, · · · , n, then ln P_B(u) ∼ 2 Σ_{i=1}^n (a_i c_i) u.
The proof of the above is displayed in Section 5.
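The mechanism behind case (i) is that the common, heavier-tailed component S_0(T) (tail index α_0 < α_i) dominates the tail of every coordinate of Y(T). A Monte Carlo sketch of this domination, using Pareto stand-ins for the stable marginals (our simplification; the stable tails are themselves regularly varying):

```python
import random

random.seed(7)
N = 200_000
a0, a1 = 0.6, 1.2  # tail indices; the common component is heavier (a0 < a1)
S0 = [random.random() ** (-1.0 / a0) for _ in range(N)]  # Pareto(a0) stand-in for S_0(T)
S1 = [random.random() ** (-1.0 / a1) for _ in range(N)]  # Pareto(a1) stand-in for S_1(T)

x = 200.0
p_sum = sum(1 for u, v in zip(S0, S1) if u + v > x) / N
p_s0 = x ** (-a0)  # exact Pareto(a0) tail
ratio = p_sum / p_s0
print(ratio)  # close to 1: P{S0 + S1 > x} ~ P{S0 > x} as x grows
```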
Ruin probability of a multi-dimensional regenerative model
As the maximum of random processes over a random interval is known to be relevant to regenerative models (e.g., [24,25]), this section focuses on a multi-dimensional regenerative model, which is motivated by its applications in queueing theory and ruin theory. More precisely, there are four elements in this model: two sequences of strictly positive random variables,
{T i : i ≥ 1} and {S i : i ≥ 1},
and two sequences of n-dimensional processes,

{{X^{(i)}(t), t ≥ 0} : i ≥ 1} and {{Y^{(i)}(t), t ≥ 0} : i ≥ 1},

where X^{(i)}(t) = (X_1^{(i)}(t), · · · , X_n^{(i)}(t)) and Y^{(i)}(t) = (Y_1^{(i)}(t), · · · , Y_n^{(i)}(t)). We assume that the above four elements are mutually independent. Here T_i, S_i are two successive times representing the random lengths of the alternating environment (called the T-stage and the S-stage), and we assume a T-stage starts at time 0. The model
grows according to {X (i) (t), t ≥ 0} during the ith T -stage and according to {Y (i) (t), t ≥ 0} during the ith S-stage.
Based on the above we define an alternating renewal process with renewal epochs 0 = V_0 < V_1 < V_2 < V_3 < · · · , with V_i = (T_1 + S_1) + · · · + (T_i + S_i) the ith environment cycle time. Then the resulting n-dimensional process Z(t) = (Z_1(t), · · · , Z_n(t)) is defined as

Z(t) := Z(V_i) + X^{(i+1)}(t − V_i),  if V_i < t ≤ V_i + T_{i+1};
Z(t) := Z(V_i) + X^{(i+1)}(T_{i+1}) + Y^{(i+1)}(t − V_i − T_{i+1}),  if V_i + T_{i+1} < t ≤ V_{i+1}.
Note that this is a multi-dimensional regenerative process with regeneration epochs V i . This is a generalization of the one-dimensional model discussed in [26].
We assume that {{X^{(i)}(t), t ≥ 0} : i ≥ 1} and {{Y^{(i)}(t), t ≥ 0} : i ≥ 1} are independent samples of {X(t), t ≥ 0} and {Y(t), t ≥ 0}, respectively, where

X_j(t) = B_{H_j}(t) + p_j t,  t ≥ 0,  1 ≤ j ≤ n,
Y_j(t) = B̃_{H_j}(t) − q_j t,  t ≥ 0,  1 ≤ j ≤ n,

with all the fBm's B_{H_j}, B̃_{H_j} being mutually independent and p_j, q_j > 0, 1 ≤ j ≤ n. Suppose that (T_i, S_i), i ≥ 1,
are independent samples of (T, S) and that T is regularly varying with index λ > 1. We further assume that
P{S > x} = o(P{T > x}),  p_j E{T} < q_j E{S} < ∞,  1 ≤ j ≤ n.   (11)
For notational simplicity we shall restrict ourselves to the 2-dimensional case. The general n-dimensional problem can be analysed similarly. Thus, for the rest of this section and related proofs in Section 6, all vectors (or multi-dimensional processes) are considered to be two-dimensional ones.
We are interested in the asymptotics, as u → ∞, of the following tail probability:

Q(u) := P{∃ n ≥ 1 : sup_{t∈[V_{n−1},V_n]} Z_1(t) > a_1 u, sup_{s∈[V_{n−1},V_n]} Z_2(s) > a_2 u},

with a_1, a_2 > 0. In the fluid queueing context, Q(u) can be interpreted as the probability that both buffers overflow in some environment cycle. In the insurance context, Q(u) can be interpreted as the probability that in some business cycle the two lines of business of the insurer are both ruined (not necessarily at the same time). Similar one-dimensional models have been discussed in the literature; see, e.g., [25,24,18]. We introduce the following notation:
U^{(n)} = (U_1^{(n)}, U_2^{(n)}) := Z(V_n) − Z(V_{n−1}),  n ≥ 1,  U^{(0)} = 0,   (12)

M^{(n)} = (M_1^{(n)}, M_2^{(n)}) := (sup_{t∈[V_{n−1},V_n)} Z_1(t) − Z_1(V_{n−1}), sup_{s∈[V_{n−1},V_n)} Z_2(s) − Z_2(V_{n−1})),  n ≥ 1.   (13)
Then we have

Q(u) = P{∃ n ≥ 1 : Σ_{i=1}^n U_1^{(i−1)} + M_1^{(n)} > a_1 u, Σ_{i=1}^n U_2^{(i−1)} + M_2^{(n)} > a_2 u}.
Note that U^{(n)}, n ≥ 0, and M^{(n)}, n ≥ 1, are both IID sequences. By the second assumption in (11) we have

E{U^{(1)}} = (p_1 E{T} − q_1 E{S}, p_2 E{T} − q_2 E{S}) =: −c < 0,   (14)
which ensures that the event in the above probability is a rare event for large u, i.e., Q(u) → 0, as u → ∞.
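A small simulation sketch of one cycle increment U^(1), with hypothetical parameters chosen to satisfy (11) (T Pareto with index λ = 2.5, S unit exponential, p = (1, 0.5), q = (2, 1), H = 0.7): since the fBm parts are centered, the empirical mean should match the drift vector in (14):

```python
import random

random.seed(3)
lam = 2.5                      # tail index of T (Pareto on [1, inf), E[T] = lam/(lam-1) = 5/3)
p = (1.0, 0.5)                 # upward drifts in the T-stage
q = (2.0, 1.0)                 # downward drifts in the S-stage
H = 0.7

N = 200_000
means = [0.0, 0.0]
for _ in range(N):
    T = random.random() ** (-1.0 / lam)   # Pareto(lam)
    S = random.expovariate(1.0)           # E[S] = 1
    for j in range(2):
        # U_j^(1) = p_j T + B_H(T) - q_j S + tilde-B_H(S); fBm marginals are N(0, t^{2H})
        noise = random.gauss(0.0, T ** H) + random.gauss(0.0, S ** H)
        means[j] += (p[j] * T - q[j] * S + noise) / N

drift = (p[0] * 5 / 3 - q[0], p[1] * 5 / 3 - q[1])  # Eq. (14): -c = (-1/3, -1/6)
print(means, drift)  # empirical means approximate the negative drift vector
```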
It is noted that our question now becomes an exit problem of a 2-dimensional perturbed random walk. The exit problems of multi-dimensional random walk has been discussed in many papers, e.g., [31]. However, it seems that multi-dimensional perturbed random walk has not been discussed in the existing literature.
Since T is regularly varying with index λ > 1, we have that

T := (p_1 T, p_2 T)   (15)

is regularly varying with index λ and some limiting measure µ (whose form depends on the norm |·| that is chosen). We present next the main result of this section, leaving its proof to Section 6.

Theorem 4.1. We have, as u → ∞,

Q(u) ∼ ∫_0^∞ (max((a_1 + c_1 v)/p_1, (a_2 + c_2 v)/p_2))^{−λ} dv · u P{|T| > u},

where c and T are given by (14) and (15), respectively.
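The constant in Theorem 4.1 is a one-dimensional integral that is easy to evaluate numerically. In the symmetric case a_1 = a_2 = a, c_1 = c_2 = c, p_1 = p_2 = p it reduces to the closed form ∫_0^∞ ((a+cv)/p)^{−λ} dv = p^λ a^{1−λ}/(c(λ−1)), which gives a check on the quadrature (parameter values below are arbitrary):

```python
def theorem_constant(a, c, p, lam, v_max=10_000.0, grid=400_000):
    """Integral of max((a1+c1 v)/p1, (a2+c2 v)/p2)^(-lam) dv via the trapezoidal rule."""
    (a1, a2), (c1, c2), (p1, p2) = a, c, p
    h = v_max / grid
    total, prev = 0.0, None
    for k in range(grid + 1):
        v = k * h
        val = max((a1 + c1 * v) / p1, (a2 + c2 * v) / p2) ** (-lam)
        if prev is not None:
            total += 0.5 * (prev + val) * h
        prev = val
    return total

lam = 2.5
num = theorem_constant((1.0, 1.0), (0.5, 0.5), (2.0, 2.0), lam)
closed = 2.0 ** lam / (0.5 * (lam - 1))  # p^lam a^(1-lam) / (c (lam - 1)) with a = 1
print(num, closed)
```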
Proof of main results

This section is devoted to the proof of Theorem 3.1, followed by a short proof of Example 3.3. In the analysis of [32], the function f_u(t) := u(1+t)/σ(ut/c), t ≥ 0, and its minimum point t_u^* play an important role. It has been discussed therein that t_u^* converges, as u → ∞, to t^* := H/(1−H), which is the unique minimum point of lim_{u→∞} f_u(t)σ(u)/u = (1+t)/(t/c)^H, t ≥ 0. In this sense, t_u^* is asymptotically unique. We have the following corollary of [32], which is useful for the proofs below.
Lemma 5.1. Let X(t), t ≥ 0, be an a.s. continuous centered Gaussian process with stationary increments and X(0) = 0. Suppose that C1-C4 hold. For any fixed 0 < ε < t^*/c, we have, as u → ∞,

P{sup_{t∈[0,(t^*/c+ε)u]} (X(t) − ct) > u} ∼ ψ(u),
with ψ(u) the same as in Proposition 2.1. Furthermore, we have that for any γ > 0,

lim_{u→∞} P{sup_{t∈[0,(t^*/c−ε)u]} (X(t) − ct) > u} / (ψ(u) u^{−γ}) = 0.
Proof of Lemma 5.1: Note that

P{sup_{t∈[0,(t^*/c+ε)u]} (X(t) − ct) > u} = P{sup_{t∈[0,t^*+cε]} X(ut/c)/(1+t) > u}.
The first claim follows from [32], as the main interval which determines the asymptotics is contained in [0, t^* + cε] (see Lemma 7 and the comments in Section 2.1 therein). Similarly, we have

P{sup_{t∈[0,(t^*/c−ε)u]} (X(t) − ct) > u} = P{sup_{t∈[0,t^*−cε]} X(ut/c)/(1+t) > u}.
Since t_u^* is asymptotically unique and lim_{u→∞} t_u^* = t^*, we can show that for all u large,

inf_{t∈[0,t^*−cε]} f_u(t) ≥ ρ f_u(t_u^*) = ρ inf_{t≥0} f_u(t),
for some ρ > 1. Thus, by similar arguments as in the proof of Lemma 7 in [32] using the Borel inequality we conclude the second claim.
The following lemma is crucial for the proof of Theorem 3.1.
Lemma 5.2. Let X_i(t), t ≥ 0, i = 1, 2, · · · , n_0 (< n), be independent centered Gaussian processes with stationary increments, and let T be an independent regularly varying random vector with index α and limiting measure ν. Suppose that all of the σ_i(t), i = 1, 2, · · · , n_0, satisfy the assumptions C1-C3 with the parameters involved indexed by i, and that, further, ←σ_i(u) ∼ k_i ←σ_1(u) for some positive constants k_i, i = 1, 2, · · · , m ≤ n_0, and ←σ_j(u) = o(←σ_1(u)) for all j = m+1, · · · , n_0. Then, for any functions h_i(u), n_0+1 ≤ i ≤ n, increasing to infinity such that h_i(u) = o(←σ_1(u)), and any a_i > 0,

P{∩_{i=1}^{n_0} (sup_{t∈[0,T_i]} X_i(t) > a_i u), ∩_{i=n_0+1}^{n} (T_i > h_i(u))} ∼ ν̄((k a^{1/H}_{m,0}, ∞]) P{|T| > ←σ_1(u)},

where ν̄ is defined in (6) and k a^{1/H}_{m,0} = (k_1 a_1^{1/H_1}, · · · , k_m a_m^{1/H_m}, 0, · · · , 0).

Proof of Lemma 5.2: For notational convenience denote

H(u) := P{∩_{i=1}^{n_0} (sup_{t∈[0,T_i]} X_i(t) > a_i u), ∩_{i=n_0+1}^{n} (T_i > h_i(u))}.
We first give an asymptotic lower bound for H(u). Let G(x) = P{T ≤ x} be the distribution function of T. Note that, for any constants 0 < r < R,

H(u) ≥ P{∩_{i=1}^{n_0} (sup_{t∈[0,T_i]} X_i(t) > a_i u), ∩_{i=1}^m (r←σ_1(u) ≤ T_i ≤ R←σ_1(u)), ∩_{i=m+1}^n (T_i > r←σ_1(u))}
= ∫_{[r,R]^m×(r,∞)^{n−m}} P{∩_{i=1}^{n_0} (sup_{t∈[0,←σ_1(u)t_i]} X_i(t) > a_i u)} dG(←σ_1(u)t_1, · · · , ←σ_1(u)t_n)
= ∫_{[r,R]^m×(r,∞)^{n−m}} ∏_{i=1}^{n_0} P{sup_{s∈[0,1]} X_i^{u,t_i}(s) > a_i u_i(t_i)} dG(←σ_1(u)t_1, · · · , ←σ_1(u)t_n)

holds for sufficiently large u, where

X_i^{u,t_i}(s) := X_i(←σ_1(u)t_i s)/σ_i(←σ_1(u)t_i),  u_i(t_i) := u/σ_i(←σ_1(u)t_i),  s ∈ [0, 1],  (t_1, t_2, · · · , t_{n_0}) ∈ [r, R]^m × (r, ∞)^{n_0−m}.
By Lemma 5.2 in [1], we know that, as u → ∞, the processes X u,ti i (s) converges weakly in C([0, 1]) to B Hi (s), uniformly in t i ∈ (r, ∞), respectively for i = 1, 2, · · · , n 0 . Further, according to the assumptions on σ i (t), Theorem 1.5.2 and Theorem 1.5.6 in [35], we have, as u → ∞, u i (t i ) converges to k Hi i t −Hi i uniformly in t i ∈ [r, R], respectively for i = 1, 2, · · · , m, and u i (t i ) converges to 0 uniformly in t i ∈ [r, ∞), respectively for i = m + 1, · · · , n 0 . Then, by the continuous mapping theorem and recalling ξ i defined in (5) is a continuous random variable (e.g., [36]), we get
H(u) ≳ [r,R] m ×(r,∞) n−m m i=1 P sup s∈[0,1] B Hi (s) > a i k Hi i t −Hi i dG( ← − σ 1 (u)t 1 , · · · , ← − σ 1 (u)t n ) (16) = P ∩ m i=1 ξ 1 H i i T i > k i a 1 H i i ← − σ 1 (u) , ∩ m i=1 (r ← − σ 1 (u) ≤ T i ≤ R ← − σ 1 (u)) , ∩ n i=m+1 (T i > r ← − σ 1 (u)) = J 1 (u) − J 2 (u), where J 1 (u) =: P ∩ m i=1 ξ 1 H i i T i > k i a 1 H i i ← − σ 1 (u) , ∩ n i=m+1 (T i > r ← − σ 1 (u)) , J 2 (u) =: P ∩ m i=1 ξ 1 H i i T i > k i a 1 H i i ← − σ 1 (u) , ∩ n i=m+1 (T i > r ← − σ 1 (u)) , ∪ m i=1 ((T i < r ← − σ 1 (u)) ∪ (T i > R ← − σ 1 (u)))
Putting η = (ξ 1/H1 1 , · · · , ξ 1/Hm m , 1, · · · , 1), then by Lemma 7.2 and the continuity of the limiting measure ν defined therein, we have
lim r→0 lim u→∞ J 1 (u) P {|T | > ← − σ 1 (u)} = ν((ka 1/H m,0 , ∞]).(17)
Furthermore,
J 2 (u) ≤ m i=1 P ξ 1 H i i T i > k i a 1 H i i ← − σ 1 (u), T i < r ← − σ 1 (u) + P {T i > R ← − σ 1 (u)} .
Then, by the fact that |T | is regularly varying with index α, and using the same arguments as in the proof of Theorem 2.1 in [1] (see the asymptotics for the integral I 4 and (5.14) therein), we conclude that
lim r→0,R→∞ lim sup u→∞ J 2 (u) P {|T | > ← − σ 1 (u)} = 0,(18)
which combined with (16) and (17)
yields lim r→0,R→∞ lim inf u→∞ H(u) P {|T | > ← − σ 1 (u)} ≥ ν((ka 1/H m,0 , ∞]).(19)
Next, we give an asymptotic upper bound for H(u). Note
H(u) ≤ P ∩ m i=1 sup t∈[0,Ti] X i (t) > a i u = P ∩ m i=1 sup t∈[0,Ti] X i (t) > a i u , ∩ m i=1 (r ← − σ 1 (u) ≤ T i ≤ R ← − σ 1 (u)) + P ∩ m i=1 sup t∈[0,Ti] X i (t) > a i u , ∪ m i=1 ((T i < r ← − σ 1 (u)) ∪ (T i > R ← − σ 1 (u))) =: J 3 (u) + J 4 (u).
By the same reasoning as that used in the deduction for (16), we can show that
lim r→0,R→∞ lim u→∞ J 3 (u) P {|T | > ← − σ 1 (u)} = ν((ka 1/H m,0 , ∞]).(20)
Moreover,
J 4 (u) ≤ m i=1 P sup t∈[0,Ti] X i (t) > a i u, T i < r ← − σ 1 (u) + P {T i > R ← − σ 1 (u)} .
Thus, by the same arguments as in the proof of Theorem 2.1 in [1] (see the asymptotics for the integrals I 1 , I 2 , I 4 therein), we conclude that
lim r→0,R→∞ lim sup u→∞ J 4 (u) / P {|T | > ← − σ 1 (u)} = 0,
which together with (20) implies that
lim r→0,R→∞ lim sup u→∞
H(u) P {|T | > ← − σ 1 (u)} ≤ ν((ka 1/H m,0 , ∞]).(21)
Notice that by the assumptions on { ← − σ i (u)} m i=1 , we in fact have H 1 = H 2 = · · · = H m . Consequently, combining (19) and (21) we complete the proof.
Proof of Theorem 3.1: We use in the following the convention that ∩ 0 i=1 = Ω, the sample space. We first verify the claim for case (i), n 0 > 0. For arbitrarily small ε > 0, we have
P (u) ≥ P ∩ n− i=1 sup t∈[0,Ti] (X i (t) + c i t) > a i u, T i > (t * i / |c i | + ε)u , ∩ n−+n0 i=n−+1 sup t∈[0,Ti] X i (t) > a i u , ∩ n i=n−+n0+1 sup t∈[0,Ti] (X i (t) + c i t) > a i u, T i > a i + ε c i u ≥ P ∩ n− i=1 sup t∈[0,(t * i /|ci|+ε)u] (X i (t) + c i t) > a i u, T i > (t * i / |c i | + ε)u , ∩ n−+n0 i=n−+1 sup t∈[0,Ti] X i (t) > a i u , ∩ n i=n−+n0+1 X i a i + ε c i u > −εu, T i > a i + ε c i u = Q 1 (u) × Q 2 (u) × Q 3 (u), where Q 1 (u) := P ∩ n− i=1 sup t∈[0,(t * i /|ci|+ε)u] X i (t) + c i t > a i u Q 2 (u) := P ∩ n− i=1 (T i > (t * i / |c i | + ε)u) , ∩ n−+n0 i=n−+1 sup t∈[0,Ti] X i (t) > a i u , ∩ n i=n−+n0+1 T i > a i + ε c i u , Q 3 (u) := n i=n−+n0+1 P N i > −εu σ i ( ai+ε ci u) → 1, u → ∞,
with N i , i = n − + n 0 + 1, · · · , n being standard normally distributed random variables. By Lemma 5.1, we know, as u → ∞,
Q 1 (u) ∼ n− i=1 ψ i (a i u).
Further, according to the assumptions on σ i 's and Lemma 5.2, we get
lim ε→0 lim u→∞ Q 2 (u) P |T | > ← − σ n−+1 (u) = ν((ka 1/Hn − +1 0 , ∞]),
and thus
P (u) ≳ ν((ka 1/Hn − +1 0 , ∞])P |T | > ← − σ n−+1 (u) n− i=1 ψ i (a i u), u → ∞.
Similarly, we can show
P (u) ≤ P ∩ n− i=1 sup t∈[0,∞) X i (t) + c i t > a i u , ∩ n−+n0 i=n−+1 sup t∈[0,Ti] X i (t) > a i u ∼ ν((ka 1/Hn − +1 0 , ∞])P |T | > ← − σ n−+1 (u) n− i=1 ψ i (a i u), u → ∞.
This completes the proof of case (i).
Next we consider case (ii), n 0 = 0. Similarly as in case (i) we have, for any small ε > 0
P (u) ≥ P ∩ n− i=1 sup t∈[0,(t * i /|ci|+ε)u] (X i (t) + c i t) > a i u, T i > (t * i / |c i | + ε)u , ∩ n i=n−+1 X i a i + ε c i u > −εu, T i > a i + ε c i u = Q 1 (u) × Q 3 (u) × Q 4 (u), where Q 4 (u) := P ∩ n− i=1 (T i > (t * i / |c i | + ε)u) , ∩ n i=n−+1 T i > a i + ε c i u . By Lemma 7.1, we know lim ε→0 lim u→∞ Q 4 (u) P {|T | > u} = ν(a 1 , ∞],
and thus
P (u) ≳ ν(a 1 , ∞]P {|T | > u} n− i=1 ψ i (a i u), u → ∞.
For the upper bound, we have for any small ε > 0
P (u) ≤ I 1 (u) + I 2 (u), with I 1 (u) := P ∩ n− i=1 sup t∈[0,Ti] X i (t) + c i t > a i u , ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u , I 2 (u) := P ∩ n− i=1 sup t∈[0,Ti] X i (t) + c i t > a i u , ∪ n− i=1 (T i ≤ (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u .
It follows that
I 1 (u) ≤ P ∩ n− i=1 sup t∈[0,∞) X i (t) + c i t > a i u , ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u = n− i=1 ψ i (a i u)P ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u .
Next, we have for the chosen small ε > 0
P ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u = P ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u, sup t∈[0,Ti] X i (t) ≤ εu +P ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 sup t∈[0,Ti] X i (t) + c i T i > a i u , ∪ n i=n−+1 sup t∈[0,Ti] X i (t) > εu ≤ P ∩ n− i=1 (T i > (t * i / |c i | − ε)u) , ∩ n i=n−+1 (c i T i > (a i − ε)u) + n i=n−+1 P sup t∈[0,Ti] X i (t) > εu .
Furthermore, it follows from Theorem 2.1 in [1] that for any i = n − + 1, · · · , n P sup
t∈[0,Ti] X i (t) > εu ∼ C i (ε)P {T i > ← − σ i (u)} , u → ∞,
with some constant C i (ε) > 0. This implies that
n i=n−+1 P sup t∈[0,Ti] X i (t) > εu = o(P {|T | > u}), u → ∞.
Consequently, applying Lemma 7.1 and letting ε → 0 we can obtain the required asymptotic upper bound, if we can further show
lim u→∞ I 2 (u) n− i=1 ψ i (a i u)P {|T | > u} = 0.(22)
Indeed, we have
I 2 (u) ≤ n− i=1 P ∩ n− j=1 sup t∈[0,Tj] X j (t) + c j t > a j u , T i ≤ (t * i / |c i | − ε)u ≤ n− i=1 n− j=1 j =i ψ j (a j u)P sup t∈[0,(t * i /|ci|−ε)u] X i (t) + c i t > a i u .(23)
Furthermore, by Lemma 5.1 we have that for any γ > 0 lim u→∞ P sup t∈[0,(t * i /|ci|−ε)u] X i (t) + c i t > a i u ψ i (a i u)u −γ = 0, i = 1, 2, · · · , n − , which together with (23) implies (22). This completes the proof.
Proof of Example 3.3:
The proof is based on the following obvious bounds
P L (u) := P {∩ n i=1 ((B i (Y i (T )) + c i Y i (T )) > a i u)} ≤ P B (u) ≤ P ∩ n i=1 sup t∈[0,Yi(T )] (B i (t) + c i t) > a i u =: P U (u).(24)
Since α 0 < min n i=1 α i , by Lemma 7.3 we have that Y (T ) is a multivariate regularly varying random vector with index α 0 and the same limiting measure ν as that of S 0 (T ) := (S 0 (T ), · · · , S 0 (T )) ∈ R n , and further
P {|Y (T )| > x} ∼ P {|S 0 (T )| > x} , x → ∞.
The asymptotics of P U (u) can be obtained by applying Theorem 3.1. Below we focus on P L (u).
First, consider case (i) where c i > 0 for all i = 1, · · · , n. We have
P L (u) = P ∩ n i=1 (B i (1) Y i (T ) + c i Y i (T )) > a i u .
Thus, by Lemma 7.3 we obtain
P L (u) ∼ P {∩ n i=1 (c i S 0 (T ) > a i u)} ∼ C α0,T ( n max i=1 (a i /c i )u) −α0 , u → ∞,
which is the same as the asymptotic upper bound obtained by using (ii) of Theorem 3.1.
Next, consider case (ii) where c i = 0 for all i = 1, · · · , n. We have
P L (u) = P ∩ n i=1 B i (1) Y i (T ) > a i u = 1 2 n P ∩ n i=1 B i (1) 2 Y i (T ) > (a i u) 2 .
Thus, by Lemma 7.2 and Lemma 7.3 we obtain
P L (u) ≍ O(u −2α0 ), u → ∞,
which is the same as the asymptotic upper bound obtained by using (i) of Theorem 3.1.
Finally, consider the case (iii) where c i < 0 for all i = 1, · · · , n. We have
P L (u) ≥ P ∩ n i=1 B i (Y i (T )) + c i Y i (T ) > a i u, Y i (T ) ∈ [a i u/ |c i | − √ u, a i u/ |c i | + √ u] ≥ n i=1 min t∈[aiu/|ci|− √ u,aiu/|ci|+ √ u] P {B 1 (t) + c i t > a i u} P ∩ n i=1 Y i (T ) ∈ [a i u/ |c i | − √ u, a i u/ |c i | + √ u] .
Recalling (4), we derive that
min t∈[aiu/|ci|− √ u,aiu/|ci|+ √ u] P {B 1 (t) + c i t > a i u} = min t∈[ai/|ci|−1/ √ u,ai/|ci|+1/ √ u] P B 1 (1) > (a i − c i t) √ u/ √ t ≳ constant · 1 √ u e 2aiciu+o(u) , u → ∞.
Furthermore,
P ∩ n i=1 Y i (T ) ∈ [a i u/ |c i | − √ u, a i u/ |c i | + √ u] ≥ n i=0 P S i (T ) ∈ [a i u/ |2c i | − √ u/2, a i u/ |2c i | + √ u/2] .(25)
By the assumptions on the density functions of S i (T ), i = 0, 1, · · · , n, and the Monotone Density Theorem (see, e.g., [37]), we know that (25) is asymptotically larger than Cu −β for some constants C, β > 0. Therefore,
ln P L (u) ≳ 2 n i=1 (a i c i )u, u → ∞.
The same asymptotic upper bound can be obtained by the fact that P {sup t>0 (B i (t) + c i t) > a i u} = e 2aiciu for c i < 0. This completes the proof.
Proof of Theorem 4.1
We first show one lemma which is crucial for the proof of Theorem 4.1.
Lemma 6.1. Let U (1) , M (1) and T be given by (12), (13) and (15) respectively. Then, U (1) , M (1) are both regularly varying with the same index λ and limiting measure µ as that of T . Moreover,
P U (1) > x ∼ P M (1) > x ∼ P T > x , x → ∞.
Proof of Lemma 6.1: First note that by self-similarity of fBm's
U (1) = (X (1) 1 (T 1 ) + Y (1) 1 (S 1 ), X(1)2 (T 1 ) + Y (1) 2 (S 1 )) D = ( T + Z 1 + Z 2 + Z 3 ), where Z 1 = (B H1 (1)T H1 , B H2 (1)T H2 ), Z 2 = ( B H1 (1)S H1 , B H2 (1)S H2 ), Z 3 = (−q 1 S, −q 2 S).
Since any two norms on R d are equivalent, by the fact that H i , H i < 1 for i = 1, 2 and (11), we have
max P (T H1 , T H2 ) > x , P (S H1 , S H2 ) > x , P {|Z 3 | > x} = o P T > x , x → ∞.
Thus, the claim for U (1) follows directly by Lemma 7.3.
Next, note that
M (1) D = sup 0≤t≤T +S X 1 (t)I (0≤t<T ) + (X 1 (T ) + Y 1 (t − T ))I (T ≤t<T +S) , sup 0≤t≤T +S X 2 (t)I (0≤t<T ) + (X 2 (T ) + Y 2 (t − T ))I (T ≤t<T +S) =: M .
Then M ≥ (X 1 (T ), X 2 (T )) D = T + Z 1 and
M ≤ sup 0≤t≤T B H1 (t) + p 1 T + sup t≥0 Y 1 (t), sup 0≤t≤T B H2 (t) + p 2 T + sup t≥0 Y 2 (t) D = (ξ 1 T H1 + sup t≥0 Y 1 (t), ξ 2 T H2 + sup t≥0 Y 2 (t)) + T ,
with ξ i defined in (5). By Corollary 2.2, we know P sup t≥0 Y i (t) > x = o(P {T > x}) as x → ∞. Therefore, the claim for M (1) is a direct consequence of Lemma 7.3 and Lemma 7.4. This completes the proof.

Proof of Theorem 4.1: First, note that, for any a, c > 0, by the homogeneous property of µ,
∞ 0 µ((vc + a, ∞])dv ≤ µ((a, ∞]) + ∞ 1 v −λ µ((c + a/v, ∞])dv ≤ µ((a, ∞]) + 1 λ − 1 µ((c, ∞]).(26)
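The chain of inequalities in (26) can be expanded step by step; the following worked version (our own gloss, using only componentwise monotonicity of µ, the homogeneity µ(vB) = v^{−λ}µ(B), and λ > 1) makes each bound explicit:

```latex
% Worked expansion of (26); uses vc + a \ge a and c + a/v \ge c (componentwise),
% homogeneity \mu(vB) = v^{-\lambda}\mu(B), and \lambda > 1.
\begin{aligned}
\int_0^\infty \mu\big((vc+a,\infty]\big)\,dv
 &= \int_0^1 \mu\big((vc+a,\infty]\big)\,dv
  + \int_1^\infty \mu\big((vc+a,\infty]\big)\,dv \\
 &\le \mu\big((a,\infty]\big)
  + \int_1^\infty v^{-\lambda}\,\mu\big((c+a/v,\infty]\big)\,dv \\
 &\le \mu\big((a,\infty]\big)
  + \mu\big((c,\infty]\big)\int_1^\infty v^{-\lambda}\,dv
  \;=\; \mu\big((a,\infty]\big) + \frac{1}{\lambda-1}\,\mu\big((c,\infty]\big).
\end{aligned}
```

In particular the finiteness of the integral used later in the proof requires only λ > 1.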
For simplicity we denote W (n) := n i=1 U (i) . We consider the lower bound, for which we adopt a standard technique of "one big jump" (see [24]). Informally speaking, we choose an event on which W (n−1) + M (n) , n ≥ 1, behaves in a typical way up to some time k for which M (k+1) is large. Let δ, ε be small positive numbers.
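The "one big jump" heuristic invoked here is easy to observe numerically. The following sketch (illustrative parameters of our own choosing, not tied to the model of Theorem 4.1) simulates a heavy-tailed random walk with negative drift and checks that, on the rare paths whose running maximum exceeds a high level u, a single increment of size comparable to u is almost always responsible:

```python
import numpy as np

# "One big jump": for a negative-drift random walk with Pareto-type increments,
# paths whose running maximum exceeds u typically contain one increment of
# size comparable to u.  Illustrative parameters only.
rng = np.random.default_rng(1)
alpha, drift, steps, paths, u = 1.5, 5.0, 200, 20_000, 50.0
xi = rng.pareto(alpha, (paths, steps)) + 1.0    # heavy-tailed jumps, mean 3
S = np.cumsum(xi - drift, axis=1)               # random walk, mean step -2
exceed = S.max(axis=1) > u                      # exceedance of the level u
frac = np.mean(xi[exceed].max(axis=1) > u / 2)  # exceedances with one big jump
print(exceed.sum(), frac)
```

With these parameters the fraction of exceeding paths whose largest single jump is at least u/2 is close to one, in line with the heuristic.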
By the Weak Law of Large Numbers, we can choose large K = K ε,δ so that
P W (n) > −n(1 + ε)c − K1 > 1 − δ, n = 1, 2, · · · .
For any u > 0, we have
Q(u) = P ∃n ≥ 1, W (n−1) + M (n) > au = P M (1) > au + k≥1 P ∩ k n=1 (W (n−1) + M (n) > au), W (k) + M (k+1) > au ≥ P M (1) > au + k≥1 P ∩ k n=1 (W (n−1) + M (n) > au), W (k) > −k(1 + ε)c − K1, M (k+1) > au + k(1 + ε)c + K1 ≥ P M (1) > au + k≥1 1 − δ − P ∪ k n=1 (W (n−1) + M (n) > au) P M (k+1) > au + k(1 + ε)c + K1 ≥ (1 − δ − Q(u)) k≥0 P M (1) > au + k(1 + ε)c + K1 ≥ (1 − δ − Q(u)) 1 + ε ∞ 0 P M (1) > au + vc + K1 dv.
For u sufficiently large such that εu > K, we have
Q(u) ≥ (1 − δ − Q(u)) 1 + ε ∞ 0 P M (1) > (a + ε1)u + vc dv.
Rearranging the above inequality and using a change of variable, we obtain
Q(u) ≥ (1 − δ)u ∞ 0 P M (1) > u(a + ε1 + vc) dv 1 + ε + ∞ 0 P M (1) > (a + ε1)u + vc dv ,(27)
and thus by Lemma 6.1 and Fatou's lemma
lim inf u→∞ Q(u) uP T > u ≥ 1 − δ 1 + ε ∞ 0 µ((a + ε1 + vc, ∞])dv.
Since ε and δ are arbitrary, and by (26) the integration on the right hand side is finite, taking ε → 0, δ → 0 and applying dominated convergence theorem yields
lim inf u→∞ Q(u) uP T > u ≥ ∞ 0 µ((a + vc, ∞])dv.
Next, we consider the asymptotic upper bound. Let y 1 , y 2 > 0 be given. We shall construct an auxiliary
random walk W (n) , n ≥ 0, with W (0) = 0 and W (n) = n i=1 U (i) , n ≥ 1, where U (n) = ( U (n) 1 , U(n)
2 ) is given by
U (n) i = M (n) i , if M (n) i > y 1 ; U (n) i , if −y 2 < U (n) i ≤ M (n) i ≤ y 1 ; −y 2 , if M (n) i ≤ y 1 , U (n) i ≤ −y 2 , i = 1, 2.
Obviously, W (n) ≤ W (n) for any n ≥ 1. Furthermore, one can show that M (n) i ≤ U (n) i + (y 1 + y 2 ). Then,
W (n−1) + M (n) ≤ W (n) + (y 1 + y 2 )1, n ≥ 1.
Thus, for any ε > 0 and sufficiently large u,
Q(u) ≤ P ∃n ≥ 1, W (n) > au − (y 1 + y 2 )1 ≤ P ∃n ≥ 1, W (n) > (a − ε1)u .
Define c y1,y2 = −E U (1) . Since lim y1,y2→∞ c y1,y2 = c, we have that for any y 1 , y 2 large enough c y1,y2 > 0.
It follows from Lemma 6.1 and Lemma 7.4 that for any y 1 , y 2 > 0, U (1) is regularly varying with index λ and limiting measure µ, and P U (1) > u ∼ P T > u as u → ∞. Then, applying Theorem 3.1 and Remark 3.2 of [31] we obtain that
P ∃n ≥ 1, W (n−1) > (a − ε1)u ∼ uP U (1) > u ∞ 0 µ((c y1,y2 v + a − ε1, ∞])dv ∼ uP T > u ∞ 0 µ((c y1,y2 v + a − ε1, ∞])dv.
Consequently, the claimed asymptotic upper bound is obtained by letting ε → 0, y 1 , y 2 → ∞. The proof is complete.
Appendix
This section includes some results on the regularly varying random vectors.
Lemma 7.1. Let T > 0 be a regularly varying random vector with index α and limiting measure ν, and let
x i (u), 1 ≤ i ≤ n be increasing (to infinity) functions such that for some 1 ≤ m ≤ n, x 1 (u) ∼ · · · ∼ x m (u), and
x j (u) = o(x 1 (u)) for all j = m + 1, · · · , n. Then, for any a > 0,
P {∩ n i=1 (T i > a i x i (u))} ∼ P {∩ m i=1 (T i > a i x 1 (u))} ∼ ν([a m,0 , ∞]) P {|T | > x 1 (u)}
holds as u → ∞, with a m,0 = (a 1 , · · · , a m , 0, · · · , 0).
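As a sanity check of the limiting constant in Lemma 7.1, consider the fully dependent toy case T = (T, T) with a scalar Pareto tail and | · | the maximum norm (our own illustrative instance, not from the paper); there ν([a m,0 , ∞]) = max(a 1 , a 2 )^{−α} and the equivalence holds exactly at every finite level:

```python
# Toy check of Lemma 7.1 for the fully dependent case T = (T, T), where
# P{T > t} = t^(-alpha) for t >= 1, and |.| is the max-norm (so |T| = T).
# Illustrative parameters only.
alpha = 2.0
a1, a2 = 0.5, 2.0
x = 1e6  # a large level playing the role of x_1(u)

def tail(t):
    return t ** (-alpha)  # P{T > t}

joint = tail(max(a1, a2) * x)        # P{T > a1*x, T > a2*x}
limiting = max(a1, a2) ** (-alpha)   # nu([a_{m,0}, infty]) in this toy case
ratio = joint / tail(x)              # equals the limiting constant exactly
print(ratio, limiting)
```

Here the dependence is total, so the joint exceedance probability is governed by the larger of the two thresholds, exactly as the limiting measure predicts.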
Proof of Lemma 7.1: Obviously, for any small enough ε > 0 we have that when u is sufficiently large
P {∩ n i=1 (T i > a i x i (u))} ≤ P ∩ m i=1 (T i > (a i − ε)x 1 (u)), ∩ n i=m+1 (T i > 0) ∼ ν([a −ε , ∞]) P {|T | > x 1 (u)} ,
where a −ε = (a 1 −ε, · · · , a m −ε, 0, · · · , 0). Next, for any small enough ε > 0 we have that when u is sufficiently
large P {∩ n i=1 (T i > a i x i (u))} ≥ P ∩ m i=1 (T i > (a i + ε)x 1 (u)), ∩ n i=m+1 (T i > a i (εx 1 (u))) ∼ ν([a ε+ , ∞])P {|T | > x 1 (u)}
with a ε+ = (a 1 + ε, · · · , a m + ε, a m+1 ε, · · · , a n ε). Letting ε → 0, the claim follows by the continuity of ν([a ε± , ∞]) in ε. The proof is complete.
Lemma 7.2. Let T , a i 's, x i (u) ′ s and a m,0 be the same as in Lemma 7.1. Further, consider η = (η 1 , · · · , η n )
to be an independent of T nonnegative random vector such that max 1≤i≤n E η α+δ i < ∞ for some δ > 0.
Then,
P {∩ n i=1 (T i η i > a i x i (u))} ∼ P {∩ m i=1 (T i η i > a i x 1 (u))} ∼ ν([a m,0 , ∞]) P {|T | > x 1 (u)} holds as u → ∞, where ν(K) = E ν(η −1 K) , with η −1 K = {(η −1 1 b 1 , · · · , η −1 n b n ), (b 1 , · · · , b n ) ∈ K} for any K ⊂ B([0, ∞] n \ {0}).
Proof of Lemma 7.2: It follows directly from Lemma 4.6 of [29] (see also Proposition A.1 of [38]) that the second asymptotic equivalence holds. The first claim follows from the same arguments as in Lemma 7.1.

Lemma 7.3. Assume X ∈ R n is regularly varying with index α and limiting measure µ, A is a random n × d matrix independent of the random vector Y ∈ R d . If 0 < E A α+δ < ∞ for some δ > 0, with · some matrix norm and
P {|Y | > x} = o (P {|X| > x}) , x → ∞,(28)
then, X + AY is regularly varying with index α and limiting measure µ, and
P {|X + AY | > x} ∼ P {|X| > x} , x → ∞.
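In the simplest case n = d = 1 with A = 1, the lemma says that adding an independent lighter-tailed Y does not change the tail of a regularly varying X. A quick Monte Carlo illustration (hypothetical parameters, not from the paper):

```python
import numpy as np

# Monte Carlo illustration of Lemma 7.3 with n = d = 1 and A = 1:
# X is Pareto-tailed (regularly varying), Y is exponential (lighter tail),
# so P{X + Y > x} ~ P{X > x} for large x.  Illustrative parameters only.
rng = np.random.default_rng(0)
alpha, size, x = 2.0, 10**6, 50.0
X = rng.pareto(alpha, size) + 1.0   # P{X > t} = t^(-alpha), t >= 1
Y = rng.exponential(1.0, size)      # light-tailed perturbation
ratio = np.mean(X + Y > x) / x ** (-alpha)
print(ratio)  # close to 1 for large x
```

The estimated ratio hovers near one, reflecting that the heavy-tailed component alone drives the exceedance probability.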
Proof of Lemma 7.3: By Lemma 3.12 of [29], it suffices to show that
P {|AY | > x} = o (P {|X| > x}) , x → ∞.(29)
Defining
g(x) = x α+δ/2 α+δ , x ≥ 0, we have P {|AY | > x} ≤ P { A |Y | > x} ≤ g(x) 0 P {|Y | > x/t} P { A ∈ dt} + P { A > g(x)} .(30)
Due to (28), for arbitrary ε > 0,
g(x) 0 P {|Y | > x/t} P { A ∈ dt} ≤ ε g(x) 0 P {|X| > x/t} P { A ∈ dt} ,
holds for large enough x. Furthermore, by Potter's Theorem (see, e.g., Theorem 1.5.6 of [35]), we have
P {|X| > x/t} P {|X| > x} ≤ I (t≤1) + 2t α+δ I (1<t≤g(x)) , t ∈ (0, g(x))
holds for sufficiently large x, and thus by the dominated convergence theorem,
lim x→∞ g(x) 0 P {|Y | > x/t} P {|X| > x} P { A ∈ dt} ≤ lim x→∞ g(x) 0 εP {|X| > x/t} P {|X| > x} P { A ∈ dt} = εE { A α } .(31)
Moreover, Markov inequality implies that
lim x→∞ P { A > g(x)} P {|X| > x} ≤ lim x→∞ E A α+δ g(x) α+δ P {|X| > x} = 0.(32)
Therefore, the claim (29) follows from (30)-(32) and the arbitrariness of ε. This completes the proof.

Lemma 7.4. Assume X, Y ∈ R n are regularly varying with the same index α and the same limiting measure µ. Moreover, if X ≥ Y and P {|X| > x} ∼ P {|Y | > x} as x → ∞, then for any random vector Z satisfying X ≥ Z ≥ Y , Z is regularly varying with index α and limiting measure µ, and P {|Z| > x} ∼ P {|X| > x} as x → ∞.

Proof of Lemma 7.4: We only prove the claim for n = 2; a similar argument can be used to verify the claim for n ≥ 3. For any x > 0, define a measure µ x as µ x (A) =: P x −1 Z ∈ A / P {|X| > x} , A ∈ B(R 2 0 ). We shall show that
µ x v −→ µ, x → ∞.(33)
Given that the above is established, by letting A = {x : |x| > 1} (which is relatively compact and satisfies µ(∂A) = 0), we have µ x (A) → µ(A) = 1 as x → ∞ and thus P {|Z| > x} ∼ P {|X| > x}. Furthermore, by substituting the denominator in the definition of µ x by P {|Z| > x}, we conclude that
P x −1 Z ∈ · P {|Z| > x} v −→ µ(·), x → ∞,
showing that Z is regularly varying with index α and limiting measure µ.
Now it remains to prove (33). To this end, we define a set D consisting of all sets in R 2 0 of the forms
a) : (a 1 , ∞] × [a 2 , ∞], a 1 > 0, a 2 ∈ R, b) : [−∞, a 1 ] × (a 2 , ∞], a 1 ∈ R, a 2 > 0, c) : [−∞, a 1 ) × [−∞, a 2 ], a 1 < 0, a 2 ∈ R, d) : [a 1 , ∞] × [−∞, a 2 ), a 1 ∈ R, a 2 < 0.
Note that every A ∈ D is relatively compact and satisfies µ(∂A) = 0. We first show that
lim x→∞ µ x (A) = µ(A), ∀A ∈ D. (34)
If A = (a 1 , ∞] × (a 2 , ∞] or A = (a 1 , ∞] × [a 2 , ∞] with a i ∈ R and at least one a i > 0, i = 1, 2, or A = R × (a 2 , ∞] with some a 2 > 0, then by the order relations of X, Y , Z, we have for any x > 0
P x −1 Y ∈ A P {|X| > x} ≤ µ x (A) ≤ P x −1 X ∈ A P {|X| > x} .(35)
Letting x → ∞, using the regularity properties as supposed for X and Y , and then appealing to Proposition 3.12(ii) in [39], we verify (34) for case a). If A = [−∞, a 1 ] × (a 2 , ∞] with some a 1 ∈ R, a 2 > 0, then we have
µ x (A) = µ x (R × (a 2 , ∞]) − µ x ((a 1 , ∞] × (a 2 , ∞]),
and thus by the convergence in case a),
lim x→∞ µ x (A) = µ(R × (a 2 , ∞]) − µ((a 1 , ∞] × (a 2 , ∞]) = µ(A),
which validates (34) for case b). If A = [−∞, a 1 ) × [−∞, a 2 ] or A = [−∞, a 1 ) × [−∞, a 2 ) with a i ∈ R and at least one a i < 0, i = 1, 2, or A = R × [−∞, a 2 ) with some a 2 < 0, then we get a similar formula as (35) with the reverse inequalities. If A = [a 1 , ∞] × [−∞, a 2 ) with some a 1 ∈ R, a 2 < 0, then
µ x (A) = µ x (R × [−∞, a 2 )) − µ x ([−∞, a 1 ) × [−∞, a 2 )).
Therefore, similarly as in the proof for the cases a)-b), one can establish (34) for the cases c) and d).
Next, let f be any positive, continuous function on R 2 0 with compact support. We see that the support of f is contained in [a, b] c for some a < 0 < b. Note that
[a, b] c = (b 1 , ∞] × [a 2 , ∞] ∪ [−∞, b 1 ] × (b 2 , ∞] ∪ [−∞, a 1 ) × [−∞, b 2 ] ∪ [a 1 , ∞] × [−∞, a 2 ) =: 4 i=1 A i ,
where A i 's are sets of the form a)-d) respectively, and thus (34) holds for these A i 's. Therefore,
sup x>0 µ x (f ) ≤ sup z∈R 2 0 f (z) · sup x>0 µ x ([a, b] c ) ≤ sup z∈R 2 0 f (z) · 4 i=1 sup x>0 µ x (A i ) < ∞,
which by Proposition 3.16 of [39] implies that {µ x } x>0 is a vaguely relatively compact subset of the metric space consisting of all the nonnegative Radon measures on (R 2 0 , B(R 2 0 )). If µ 0 and µ ′ 0 are two subsequential vague limits of {µ x } x>0 as x → ∞, then by (34) we have µ 0 (A) = µ ′ 0 (A) for any A ∈ D. Since any rectangle in R 2 0 can be obtained from a finite number of sets in D by the operations of union, intersection, difference or complementation, and these rectangles constitute a π-system and generate the σ-field B(R 2 0 ), we get µ 0 = µ ′ 0 on R 2 0 . Consequently, (33) is valid, and thus the proof is complete.
Theorem 4.1. Under the above assumptions on the regenerative model Z(t), t ≥ 0, we have that, as u → ∞,
Q(u) ∼ ∞ 0 µ((vc + a, ∞])dv P {T > u} u.

Remark 4.2. Consider | · | to be the L 1 norm in Theorem 4.1. We have µ([a, ∞]) = ((p 1 + p 2 ) max(a 1 /p 1 , a 2 /p 2 )) −λ , and thus the asymptotics in Theorem 4.1 can be made explicit as u → ∞.

First we give a result in line with Proposition 2.1. Note that in the proof of the main results in [32] the minimum point t * u of the function f u (t) := u(1 + t)/σ(ut/c), t ≥ 0, plays a key role.
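To make the constant in Remark 4.2 concrete, one can evaluate the integral appearing in Theorem 4.1 in a special case. Assuming the stated L^1-norm formula also applies to the open sets (vc + a, ∞], and taking (for illustration only, our own choice) a = c = (p_1, p_2):

```latex
% Illustrative special case a = c = (p_1, p_2), \lambda > 1:
\int_0^\infty \mu\big((vc+a,\infty]\big)\,dv
  = \int_0^\infty \Big((p_1+p_2)\max_{i=1,2}\frac{(1+v)\,p_i}{p_i}\Big)^{-\lambda} dv
  = (p_1+p_2)^{-\lambda}\int_0^\infty (1+v)^{-\lambda}\,dv
  = \frac{(p_1+p_2)^{-\lambda}}{\lambda-1}.
```

In this illustrative case the tail of the supremum would therefore be asymptotically proportional to uP{T > u} with explicit constant (p_1 + p_2)^{−λ}/(λ − 1).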
The supremum of a Gaussian process over a random interval. K Dȩbicki, B Zwart, S Borst, Statistics and Probability letters. 683K. Dȩbicki, B. Zwart, and S. Borst, "The supremum of a Gaussian process over a random interval," Statistics and Probability letters, vol. 68, no. 3, pp. 221-234, 2004.
Asymptotics of supremum distribution of a Gaussian process over a Weibullian time. M Arendarczyk, K Dȩbicki, Bernoulli. 171M. Arendarczyk and K. Dȩbicki, "Asymptotics of supremum distribution of a Gaussian process over a Weibullian time," Bernoulli, vol. 17, no. 1, pp. 194-210, 2011.
Exact asymptotics of supremum of a stationary Gaussian process over a random interval. M Arendarczyk, K Dȩbicki, Statistics and Probability Letters. M. Arendarczyk and K. Dȩbicki, "Exact asymptotics of supremum of a stationary Gaussian process over a random interval," Statistics and Probability Letters, 2011.
Exact asymptotics and limit theorems for supremum of stationary chi-processes over a random interval. Z Tan, E Hashorva, Stochastic Processes and Their Applications. 123Z. Tan and E. Hashorva, "Exact asymptotics and limit theorems for supremum of stationary chi-processes over a random interval," Stochastic Processes and Their Applications, vol. 123, pp. 2983-2998, 2013.
Tail asymptotics of supremum of certain Gaussian processes over threshold dependent random intervals. K Dȩbicki, E Hashorva, L Ji, Extremes. 17, no. 3. K. Dȩbicki, E. Hashorva, and L. Ji, "Tail asymptotics of supremum of certain Gaussian processes over threshold dependent random intervals," Extremes, vol. 17, no. 3, pp. 411-429, 2014.
On the asymptotics of supremum distribution for some iterated processes. M Arendarczyk, Extremes. 20. M. Arendarczyk, "On the asymptotics of supremum distribution for some iterated processes," Extremes, vol. 20, pp. 451-474, 2017.
Sojourns of stationary Gaussian processes over a random interval. K Dȩbicki, X Peng, Preprint. K. Dȩbicki and X. Peng, "Sojourns of stationary Gaussian processes over a random interval," Preprint, https://arxiv.org/pdf/2004.12290.pdf, 2020.
Extremes of multi-dimensional Gaussian processes. K Dȩbicki, K Kosiński, M Mandjes, T Rolski, Stochastic Processes and their Applications. 120. K. Dȩbicki, K. Kosiński, M. Mandjes, and T. Rolski, "Extremes of multi-dimensional Gaussian processes," Stochastic Processes and their Applications, vol. 120, no. 12, pp. 2289-2301, 2010.
Extremes of multi-dimensional Gaussian processes: Exact asymptotics. K Dȩbicki, E Hashorva, L Ji, K Tabiś, Stochastic Processes and Their Applications. 125K. Dȩbicki, E. Hashorva, L. Ji, and K. Tabiś, "Extremes of multi-dimensional Gaussian processes: Exact asymptotics," Stochastic Processes and Their Applications, vol. 125, pp. 4039-4065, 2015.
Geometry of conjunction set of smooth stationary Gaussian fields. J.-M Azais, V Pham, PreprintJ.-M. Azais and V. Pham, "Geometry of conjunction set of smooth stationary Gaussian fields," Preprint, https://arxiv.org/abs/1909.07090v1, 2019.
Conjunction probability of smooth centered Gaussian processes. V Pham, 10.1007/s40306-019-00351-4Acta Math Vietnam. V. Pham, "Conjunction probability of smooth centered Gaussian processes," Acta Math Vietnam, https://doi.org/10.1007/s40306-019-00351-4, 2020.
Extremes of multi-dimensional Gaussian processes. K Dȩbicki, E Hashorva, L Wang, Stochastic Processes and their Applications. K. Dȩbicki, E. Hashorva, and L. Wang, "Extremes of multi-dimensional Gaussian processes," Stochastic Processes and their Applications, 2020.
A test for a conjunction. K Worsley, K Friston, Statistics and Probability Letters. 472K. Worsley and K. Friston, "A test for a conjunction," Statistics and Probability Letters, vol. 47, no. 2, pp. 135-140, 2000.
Rough asymptotics of the probability of simultaneous high extrema of two Gaussian processes: the dual action functional. V Piterbarg, B Stamatovich, Uspekhi Mat. Nauk. 601V. Piterbarg and B. Stamatovich, "Rough asymptotics of the probability of simultaneous high extrema of two Gaussian processes: the dual action functional," Uspekhi Mat. Nauk, vol. 60, no. 1(361), pp. 171-172, 2005.
Exact asymptotics of component-wise extrema of two-dimensional Brownian motion. K Dȩbicki, L Ji, T Rolski, Extremes. K. Dȩbicki, L. Ji, and T. Rolski, "Exact asymptotics of component-wise extrema of two-dimensional Brownian motion," Extremes, https://doi.org/10.1007/s10687-020-00387-y, 2020.
Finite-time ruin probability for correlated Brownian motions. K Dȩbicki, E Hashorva, K Krystecki, Preprint. K. Dȩbicki, E. Hashorva, and K. Krystecki, "Finite-time ruin probability for correlated Brownian motions," Preprint, https://arxiv.org/pdf/2004.14015.pdf, 2020.
Large deviations of bivariate Gaussian extrema. R Van Der Hofstad, H Honnappa, Queueing Systems. 93R. van der Hofstad and H. Honnappa, "Large deviations of bivariate Gaussian extrema," Queueing Systems, vol. 93, pp. 333- 349, 2019.
Ruin probabilities. S Asmussen, H Albrecher, Advanced Series on Statistical Science & Applied Probability. 14World Scientific Publishing Co. Pte. Ltdsecond ed.S. Asmussen and H. Albrecher, Ruin probabilities. Advanced Series on Statistical Science & Applied Probability, 14, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, second ed., 2010.
Ruin problem of a two-dimensional fractional Brownian motion risk process. L Ji, S Robert, Stochastic Models. 34L. Ji and S. Robert, "Ruin problem of a two-dimensional fractional Brownian motion risk process," Stochastic Models, vol. 34, no. 1, pp. 73-97, 2018.
Multivariate subordination, self-decomposability and stability. O E Barndorff-Nielsen, J Pedersen, K.-I Sato, Advances in Applied Probability. 33. O. E. Barndorff-Nielsen, J. Pedersen, and K.-I. Sato, "Multivariate subordination, self-decomposability and stability," Advances in Applied Probability, vol. 33, pp. 160-187, 2001.
Multivariate time changes for Lévy asset models: Characterization and calibration. E Luciano, P Semeraro, Journal of Computational and Applied Mathematics. 233E. Luciano and P. Semeraro, "Multivariate time changes for Lévy asset models: Characterization and calibration," Journal of Computational and Applied Mathematics, vol. 233, pp. 1937-1953, 2010.
The fractional multivariate normal tempered stable process. Y S Kim, Applied Mathematics Letters. 25Y. S. Kim, "The fractional multivariate normal tempered stable process," Applied Mathematics Letters, vol. 25, pp. 2396- 2401, 2012.
Double lookbacks. H He, W P Keirstead, J Rebholz, Mathematical Finance. 83H. He, W. P. Keirstead, and J. Rebholz, "Double lookbacks," Mathematical Finance, vol. 8, no. 3, pp. 201-228, 1998.
Tail asymptotics of the supremum of a regenerative process. Z Palmowski, B Zwart, Journal of Applied Probability. 442Z. Palmowski and B. Zwart, "Tail asymptotics of the supremum of a regenerative process," Journal of Applied Probability, vol. 44, no. 2, pp. 349-365, 2007.
Subexponential asymptotics of hybrid fluid and ruin models. B Zwart, S Borst, K Dȩbicki, The Annals of Applied Probability. 15, no. 1A. B. Zwart, S. Borst, and K. Dȩbicki, "Subexponential asymptotics of hybrid fluid and ruin models," The Annals of Applied Probability, vol. 15, no. 1A, pp. 500-517, 2005.
A storage model with a two-state random environment. O Kella, W Whitt, Operations Research. 40. O. Kella and W. Whitt, "A storage model with a two-state random environment," Operations Research, vol. 40, pp. 257-262, 1992.
A ruin model with a resampled environment. C Constantinescu, G Delsing, M Mandjes, L Rojas Nandayapa, Scandinavian Actuarial Journal. 4C. Constantinescu, G. Delsing, M. Mandjes, and L. Rojas Nandayapa, "A ruin model with a resampled environment," Scandinavian Actuarial Journal, vol. 4, pp. 323-341, 2020.
Kac-Lévy process. N Ratanov, Journal of Theoretical Probability. 33. N. Ratanov, "Kac-Lévy process," Journal of Theoretical Probability, vol. 33, pp. 239-267, 2020.
Regularly varying functions. A H Jessen, T Mikosch, Publications de L'Institut Mathematique. 80. A. H. Jessen and T. Mikosch, "Regularly varying functions," Publications de L'Institut Mathematique, vol. 80, no. 94, pp. 171-192, 2006.
Heavy-Tail Phenomena. Probabilistic and Statistical Modeling. S I Resnick, SpringerLondonS. I. Resnick, Heavy-Tail Phenomena. Probabilistic and Statistical Modeling. London: Springer, 2007.
Functional large deviations for multivariate regularly varying random walks. H Hult, F Lindskog, T Mikosch, G Samorodnitsky, The Annals of Applied Probability. 154H. Hult, F. Lindskog, T. Mikosch, and G. Samorodnitsky, "Functional large deviations for multivariate regularly varying random walks," The Annals of Applied Probability, vol. 15, no. 4, pp. 2651-2680, 2006.
Extremes of Gaussian processes over an infinite horizon. A Dieker, Stochastic Processes and their Applications. 115A. Dieker, "Extremes of Gaussian processes over an infinite horizon," Stochastic Processes and their Applications, vol. 115, no. 2, pp. 207-248, 2005.
V Piterbarg, Asymptotic methods in the theory of Gaussian processes and fields. Providence, RIAmerican Mathematical Society148V. Piterbarg, Asymptotic methods in the theory of Gaussian processes and fields, vol. 148 of Translations of Mathematical Monographs. Providence, RI: American Mathematical Society, 1996.
Stable Non-Gaussian Random Processes. G Samorodnitsky, M S Taqqu. G. Samorodnitsky and M. S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. New York: Chapman & Hall, 1994.
N Bingham, C Goldie, J Teugels, Regular variation. Cambridge university press27N. Bingham, C. Goldie, and J. Teugels, Regular variation, vol. 27. Cambridge university press, 1989.
Smoothness of the law of the supremum of the fractional Brownian motion. N L Zadi, D Nualart, Electronic Communications in Probability. 8N. L. Zadi and D. Nualart, "Smoothness of the law of the supremum of the fractional Brownian motion," Electronic Com- munications in Probability, vol. 8, pp. 102-111, 2003.
Regular variation, subexponentiality and their applications in probability theory. T Mikosch, Lecture Notes for the Workshop "Heavy Tails and Queues". EURANDOM Institute, Eindhoven, The Netherlands. T. Mikosch, Regular variation, subexponentiality and their applications in probability theory. In: Lecture Notes for the Workshop "Heavy Tails and Queues", EURANDOM Institute, Eindhoven, The Netherlands, 1999.
Regular variation of GARCH processes. B Basrak, R A Davis, T Mikosch, Stochastic Processes and their Applications. 99B. Basrak, R. A. Davis, and T. Mikosch, "Regular variation of GARCH processes," Stochastic Processes and their Applica- tions, vol. 99, pp. 95-116, 2002.
Extreme Values, Regular variation, and Point Processes. S I Resnick, SpringerLondonS. I. Resnick, Extreme Values, Regular variation, and Point Processes. London: Springer, 1987.
. Lanpeng Ji, Woodhouse Lane, Leeds LS2 9JTSchool of Mathematics, University of LeedsUnited Kingdom E-mail address: [email protected] Ji, School of Mathematics, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, United Kingdom E-mail address: [email protected]
. Xiaofan Peng, School of Mathematical Sciences, University of Electronic Science and Technology of ChinaChengdu 610054, China E-mail address: [email protected] Peng, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China E-mail address: [email protected]
APPROXIMATING MOVING POINT SOURCES IN HYPERBOLIC PARTIAL DIFFERENTIAL EQUATIONS

Ylva Ljungberg Rydin and Martin Almquist

arXiv:2201.08658, https://arxiv.org/pdf/2201.08658v1.pdf

Abstract. We consider point sources in hyperbolic equations discretized by finite differences. If the source is stationary, appropriate source discretization has been shown to preserve the accuracy of the finite difference method. Moving point sources, however, pose two challenges that do not appear in the stationary case. First, the discrete source must not excite modes that propagate with the source velocity. Second, the discrete source spectrum amplitude must be independent of the source position. We derive a source discretization that meets these requirements and prove design-order convergence of the numerical solution for the one-dimensional advection equation. Numerical experiments indicate design-order convergence also for the acoustic wave equation in two dimensions. The source discretization covers on the order of √N grid points on an N-point grid and is applicable for source trajectories that do not touch domain boundaries.
Introduction
Point sources are frequently used to model, for example, sources of acoustic [7] and elastic [1] waves. Point sources are also used to represent boundaries and interfaces in level set methods [16] and immersed boundary methods [9]. In many applications, the point sources are actually moving. Unless the source velocity is orders of magnitude smaller than the wave speed, accurate solution requires that the source movement is taken into account.
Petersson et al. [10] derived stationary source discretizations for hyperbolic partial differential equations discretized with finite difference methods. They proved that by enforcing both moment conditions and smoothness conditions, design-order convergence is achieved away from the source. Their work builds on the work by Waldén [15], who developed theory for 1D point source discretizations for elliptic and parabolic partial differential equations. This theory was extended to source discretizations in higher dimensions by Tornberg and Engquist [14]. A different class of point source discretizations that works well with level set methods was proposed by Zahedi and Tornberg [16]. This type of point source has compact support in Fourier space. As a result, the source discretization is formally global in physical space. However, the source discretization coefficients decay rapidly away from the source position, which makes it possible to window the source discretization to a finite width and still satisfy a given error tolerance.
The objective of this paper is to extend the work of Petersson et al. [10] to moving sources. Interestingly, although their source discretization is valid for any fixed source position x 0 , a straightforward method-of-lines discretization that uses their source fails to converge when x 0 is time-dependent (see Section 2). To understand why, we repeat the accuracy analysis in Petersson et al. [10] with a nonzero source velocity taken into account. The analysis reveals two problems with the method-oflines approach. First, it may excite modes whose numerical phase velocity equals the source velocity, causing a "numerical sonic boom". Second, the source spectrum amplitude depends on the distance between the source position and the closest grid point. As the source moves, its spectrum fluctuates with a period proportional to h −1 , where h denotes the grid spacing. Our convergence proof relies on a "motionconsistent" source discretization with position-independent spectrum amplitude.
This paper is organized as follows. In Section 2, the need for a new source discretization is motivated by a comparison between different discretizations of the acoustic wave equation in one dimension. In Section 3, we study the exact solution to the one-dimensional advection equation with a moving point source. Section 4 introduces a spatial discretization of the advection equation. In Section 5, the motion-consistent source is introduced, and design-order convergence is proved for the advection equation. Next, in Section 6, we show that the motion-consistent source, which formally has global support in physical space, can be windowed to a width proportional to √ N grid points on an N -point grid. Section 7 describes the numerical implementation of the motion-consistent source. The convergence properties are verified by numerical experiments with the advection equation in Section 8. In Section 9, numerical experiments that indicate design-order convergence for the two-dimensional wave equation with an accelerating source are presented. Section 10 concludes the work.
Motivation
To motivate the need for a new type of point source discretization, we consider the acoustic wave equation on the real line,
(2.1) ρ dv dt + θ x = 0, 1 K dθ dt + v x = g(t)δ(x − x 0 (t)),
where θ is pressure, v is particle velocity, ρ is density, K is bulk modulus, g(t) is the source time function, δ is the Dirac delta distribution, and x 0 (t) is the source trajectory. The wave speed is c = K ρ . In this example we set K = 1 and ρ = 1, which gives c = 1. We impose homogeneous initial data at t = 0. The smoothness of the solution is determined by the smoothness of g and x 0 . We choose the Gaussian source time function
(2.2) g(t) = (1/(σ√(2π))) e^{−(t−t_0)^2/(2σ^2)},
with σ = 1/5 and t_0 = 2. We let the source move with constant velocity v_0 = 0.3 and set (2.3) x_0(t) = 1.6 + v_0 t. To discretize (2.1) we truncate the real line to the interval [0, 4] and impose periodic boundary conditions. We use 400 grid points, which yields the grid spacing h = 0.01. In what follows, we will pay particular attention to the highest mode k that can be represented on the grid: the mode |kh| = π, which we refer to as the π mode. Our discretizations take the form
(2.4) dv dt + ∂ + θ = 0 dθ dt + ∂ − v = g(t)δ x0(t) .
where ∂ ± are finite difference operators and δ x0(t) denotes the discrete approximation of δ(x − x 0 (t)). If a choice of δ x0 produces a convergent method, we say that δ x0 is a motion-consistent discretization of the δ distribution. We will investigate the performance of the following two finite difference (FD) methods: (FD 1) Centered stencil. This method is given by ∂ + = ∂ − = ∂, where ∂ denotes the standard centered fourth order finite difference operator. This method propagates the π mode with phase velocity 0. This implies that there is some wavenumber k * , |k * h| < π, which propagates with the source velocity, as illustrated in Figure 2. This method is therefore vulnerable to the numerical sonic boom, which is discussed in detail in Section 5. (FD 2) Dual-pair stencil. This method, introduced in [4], is given by
∂_± = ∂ ± B,
where ∂ is a centered operator and B is chosen so that ∂_± are upwind/downwind operators. We use the fifth order upwind/downwind operators whose footprints are offset by one compared to the sixth order centered operator. The numerical phase velocity of this method is strictly larger than v_0 = 0.3 (see Figure 2). Hence, this method avoids the numerical sonic boom in the test case studied in this section.
Figure 2. Phase velocity for the 4th order centered stencil (FD 1) and the 5th order dual-pair stencil (FD 2). The sonic boom wavenumber corresponding to source velocity v_0 = 0.3 is indicated by a circle.
The stationary source discretizations in [10,11] are based on discrete δ distributions δ x0 ≈ δ(x − x 0 ) of the form
(2.5) δ^{x_0}_j = (1/h) φ((x_j − x_0)/h),
where the function φ has compact support. These approximations are accurate for any fixed x 0 , but are not designed to be motion-consistent. We will refer to them as motion-inconsistent. For pth order convergence with a stationary source, δ x0 must satisfy p moment conditions. When combined with centered finite differences, δ x0 additionally needs to satisfy p smoothness conditions, which serve to remove the π mode from the spectrum of δ x0 . When combined with the dual-pair stencil, or any other stencil that propagates the π mode with non-zero velocity, no smoothness conditions are required (see e.g. [8] for an example with staggered operators). In this section, we investigate the following three source discretizations:
(Source 1) Motion-inconsistent source with continuous φ. This source discretization from [10] satisfies 4 moment conditions and 4 smoothness conditions. The function φ is continuous but not continuously differentiable. (Source 2) Motion-inconsistent source with continuously differentiable φ. This source discretization from [11] satisfies 2 moment conditions. The function φ is continuously differentiable.
(Source 3) Motion-consistent source. This is the motion-consistent source, presented in Section 5 of this paper, with 4 moment conditions and 4 sonic boom conditions imposed.
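To make (2.5) concrete, here is a minimal sketch (our own illustration — not the specific φ of Sources 1–2): the piecewise-linear hat function satisfies the first two moment conditions for any source position, on the grid used in this section.

```python
import numpy as np

def hat_delta(x, x0, h):
    # Discrete delta of the form (2.5): delta_j = phi((x_j - x0)/h) / h,
    # with the piecewise-linear hat phi(r) = max(0, 1 - |r|).
    r = (x - x0) / h
    return np.maximum(0.0, 1.0 - np.abs(r)) / h

# Grid from this section: 400 points on [0, 4), h = 0.01.
h = 0.01
x = h * np.arange(400)
x0 = 1.637                      # source between grid points (illustrative value)
d = hat_delta(x, x0, h)

# The hat satisfies the first two moment conditions:
assert abs(h * d.sum() - 1.0) < 1e-10        # zeroth moment
assert abs(h * (x * d).sum() - x0) < 1e-10   # first moment
```

With the four moment and four smoothness conditions of Source 1 the support of φ widens accordingly; the hat is only meant to show the mechanics of (2.5).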
The sources are illustrated in Figure 3. Figure 4 shows the solution obtained with the following four combinations of finite difference method and source discretization:
(1) FD 1, source 1. Figure 4(a) shows large artifacts in the solution. This is to some extent expected, since FD 1 is vulnerable to the numerical sonic boom. (2) FD 2, source 1. Figure 4(b) shows artifacts that are smaller than in Figure 4(a), but still clearly visible, even though FD 2 eliminates the numerical sonic boom issue. We conclude that there are additional requirements for motion consistency. (3) FD 2, source 2. One might hypothesize that regularity of φ is important for motion consistency. However, Figure 4(c) shows no significant improvement over Figure 4(b), despite φ being continuously differentiable. This indicates that regularity of φ is not of primary importance. (4) FD 1, source 3. Figure 4(d) shows that the motion-consistent source developed in this paper produces no visible artifacts.
All four spatial discretizations were time-integrated with the classical fourth order Runge-Kutta method. Combination 4, with the motion-consistent source, is the only combination without clearly visible artifacts, and also the only combination that actually converges to the true solution as h → 0. This motivates the need for motion-consistent source discretizations.
Model problem
To derive a provably motion-consistent source discretization, we consider the advection equation with a moving point source,
(3.1) u t + cu x = g(t)δ(x − x 0 (t)), x ∈ [0, L], t > 0,
where u is the solution and c the wave speed. We impose periodic boundary conditions and consider u to be L-periodic in x. We assume that the source trajectory, x_0, and the source time function, g, are sufficiently smooth functions of t. We further assume that g(t) = 0 for t ≤ 0 and for t ≥ t_1 > 0, where t_1 is some finite time. For simplicity we impose the homogeneous initial condition
(3.2) u(x, 0) = 0, x ∈ [0, L].
While the source is active, i.e., for t < t 1 , the exact solution may be discontinuous. We shall not attempt to prove high-order convergence for t < t 1 . Hence, all of the subsequent convergence analysis assumes t ≥ t 1 . We will also restrict the analysis to constant source velocity v 0 < c, which allows us to set x 0 (t) = v 0 t.
Numerical experiments indicate that the convergence rates that we prove carry over to accelerating sources. We will use the inner product
(3.3) (u, v) = (1/L) ∫_0^L ū(x) v(x) dx,
where ū denotes the complex conjugate of u. The Fourier modes, e^{ikx}, where
(3.4) k ∈ 0, ± 2π L , ± 4π L , . . . =: K ∞ ,
are orthonormal in the inner product. The Fourier series representation of an Lperiodic function u is
(3.5) u(x) = k∈K∞ u k e ikx ,
where the Fourier coefficients, u k , are given by
(3.6) u k = e ikx , u = 1 L L 0 e −ikx u(x) dx, k ∈ K ∞ .
The Fourier coefficients of the δ distribution are
(3.7) δ k = 1 L L 0 e −ikx δ(x − x 0 ) dx = 1 L e −ikx0 .
Taking the inner product of the advection equation (3.1) with e ikx leads to a system of ordinary differential equations for the Fourier coefficients of u,
(3.8) d u k dt + ikc u k = g(t) L e −ikx0 , k ∈ K ∞ , with initial conditions u k (0) = 0. The solution is (3.9) u k (t) = 1 L t 0 g(τ )e ik(cτ −x0(τ )−ct) dτ.
In the case of constant source velocity, x 0 (t) = v 0 t, the solution simplifies to
(3.10) u k (t) = 1 L t 0 g(τ )e ik((c−v0)τ −ct) dτ.
Consider t ≥ t 1 so that g(t) = 0. Assuming that v 0 = c and that g has at least n continuous derivatives, applying the integration-by-parts formula n times results in
(3.11) u k (t) = (−1) n L(ik(c − v 0 )) n t 0 g (n) (τ )e ik((c−v0)τ −ct) dτ.
Taking the absolute value shows that the Fourier coefficients decay with |k| according to
(3.12) | u k (t)| ≤ C |k| n ,
for some constant C, which is independent of k. The decay of the Fourier coefficients is a consequence of the smoothness of u. Note however, that if v 0 = c so that the source moves with the wave speed, then u develops the discontinuity known as the sonic boom. In this case, the Fourier coefficients are
(3.13) u k (t) = e −ikct L t 0 g(τ ) dτ.
and there is no decay with |k|.
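As a concrete check of (3.10) (a sketch with our own parameter choices): for a Gaussian g the integral has a closed form, because g is negligible outside [0, t], and summing the Fourier series reproduces the solution obtained by integrating along characteristics, u(x, t) = g((ct − x)/(c − v_0))/(c − v_0), which is not stated explicitly in the text.

```python
import math
import numpy as np

# Assumed parameters: Gaussian g, domain [0, L] wide enough that the
# periodic solution coincides with the free-space one at time t.
L, c, v0, t = 10.0, 1.0, 0.3, 6.0
sigma, t0 = 0.2, 2.0

def g(tau):
    return np.exp(-(tau - t0)**2/(2*sigma**2))/(sigma*math.sqrt(2*math.pi))

# For Gaussian g the integral in (3.10) extends to all tau, giving
#   u_k(t) = (1/L) e^{-ikct} e^{ik(c-v0)t0} e^{-(k(c-v0)sigma)^2/2}.
k = 2*np.pi*np.arange(-150, 151)/L
uk = np.exp(-1j*k*c*t)*np.exp(1j*k*(c - v0)*t0)*np.exp(-(k*(c - v0)*sigma)**2/2)/L

# Summing the series (3.5) matches the method-of-characteristics solution.
x = np.linspace(0.0, L, 401, endpoint=False)
u_series = (np.exp(1j*np.outer(x, k)) @ uk).real
u_exact = g((c*t - x)/(c - v0))/(c - v0)
assert np.max(np.abs(u_series - u_exact)) < 1e-8
```

The k = 0 coefficient also shows that the total mass ∫ u dx equals ∫ g dτ ≈ 1.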
Spatial discretization
The interval [0, L] is discretized by an equidistant grid with M points, x j = (j − 1)h, j = 1, 2, . . . , M , where the grid spacing is h = L/M . For simplicity, we assume that M is odd such that M = 2N + 1, where N is an integer. This is no actual restriction and our results hold also when M is even. We use boldface font to denote grid vectors,
(4.1) u = [u 1 , u 2 , . . . , u M ] .
The discrete inner product is defined as
(4.2) (u, v) h = 1 L M j=1 hu j v j .
The first 2N + 1 Fourier modes, i.e., e ikx , where
(4.3) k ∈ 0, ± 2π L , . . . , ±N 2π L =: K N ,
are orthonormal in the discrete inner product. Thus, any grid vector can be represented by its Fourier expansion,
(4.4) u j = k∈K N u k e ikxj ,
where the Fourier coefficients, u k , are given by
(4.5) u k = e ikx , u h = 1 L j he −ikxj u j , k ∈ K N .
Let δ x0 be a discrete approximation of δ(x−x 0 ) and let ∂ denote a centered finite difference operator. The semidiscrete approximation of the advection equation (3.1) reads
(4.6) du dt + c∂u = g(t)δ x0 .
Inserting the Fourier expansions yields
(4.7) d u k dt + c ∂ k u k = g(t) δ x0 k ,
where ∂ k denotes the symbol of the finite difference operator. For a pth order accurate centered finite difference operator, the symbol can be written as (see [6])
(4.8) ∂ k = ik P (kh),
where P is real-valued and even, and
(4.9) P (kh) = 1 + O((kh) p ).
For notational convenience we will henceforth let the dependence on kh be implied and write simply P instead of P (kh). The O notation in (4.9) means that there exist constants C and κ 0 such that (4.10)
|P(kh) − 1| ≤ C |kh|^p, |kh| ≤ κ_0.
The O notation will be used frequently in subsequent sections. The solution to the semidiscrete problem (4.6) is
(4.11) u k = t 0 g(τ ) δ x0(τ ) k e cik P (τ −t) dτ.
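For concreteness, the symbol of the standard fourth-order centered stencil can be computed and checked against (4.8)–(4.9) (our own worked example, not from the text):

```python
import numpy as np

def P4(kappa):
    # Symbol factor P(kh) of the standard 4th-order centered first-derivative
    # stencil (coefficients 1/12, -2/3, 0, 2/3, -1/12 over h), cf. (4.8):
    # D e^{ikx} = ik P(kh) e^{ikx},  P(kh) = ((4/3) sin(kh) - (1/6) sin(2kh))/(kh).
    return ((4/3)*np.sin(kappa) - (1/6)*np.sin(2*kappa))/kappa

# (4.9) with p = 4: Taylor expansion gives P(kh) = 1 - (kh)^4/30 + O((kh)^6).
for kappa in (0.1, 0.05, 0.025):
    assert abs(P4(kappa) - 1 + kappa**4/30) < kappa**6

# Check the symbol by applying the stencil to a discrete Fourier mode.
M, Ldom = 101, 1.0
h = Ldom/M
x = h*np.arange(M)
k = 2*np.pi*7/Ldom
u = np.exp(1j*k*x)
Du = (np.roll(u, 2)/12 - 2*np.roll(u, 1)/3 + 2*np.roll(u, -1)/3 - np.roll(u, -2)/12)/h
assert np.max(np.abs(Du - 1j*k*P4(k*h)*u)) < 1e-10
```

Note that P is real-valued, even, and decreasing toward P(π) = 0, which is the property exploited by the sonic boom analysis in the next section.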
Motion-consistent source discretization
We propose a discrete δ distribution whose Fourier coefficients satisfy
(5.1) δ x0 k = e −ikx0 L F (kh),
where F is a bounded and sufficiently smooth real-valued even function. We will prove that δ x0 is motion-consistent. Notice that the spectrum of δ x0 is fixed in the sense that the magnitude of δ x0 k is independent of x 0 . It is straightforward to verify that δ x0 satisfies the shift theorem:
(5.2) δ x0 k = e −ikx0 δ 0 k .
The drawback of the motion-consistent source is that it formally has global support in physical space, which would make it impossible to use in a finite, nonperiodic domain. However, we will later show that the elements of δ x0 decay rapidly in magnitude away from x 0 . It is thus possible to apply a window W , of finite width, such that
(5.3) ‖W δ^{x_0} − δ^{x_0}‖_∞ ≤ µ,
for any given error tolerance µ. In practice, we propose to use the windowed source W δ x0 . First, however, we will analyze the periodic problem and show pth order convergence for the global source δ x0 . Section 6 discusses how to choose W so that pth order convergence is retained. By (4.11), the Fourier coefficients of the discrete solution obtained with the motion-consistent source are
(5.4) u k (t) = F (kh) L t 0 g(τ )e −ikx0(τ ) e cik P (τ −t) dτ.
For constant source velocity, we obtain
(5.5) u k (t) = F (kh) L t 0 g(τ )e ik((c P −v0)τ −c P t) dτ.
Assuming that c P = v 0 , integrating by parts n times yields,
(5.6) u k (t) = F (kh) (−1) n L(ik(c P − v 0 )) n t 0 g (n) (τ )e ik((c P −v0)τ −c P t) dτ,
which is a discrete analog of (3.11). Recall that we used (3.11) to show that the Fourier coefficients of the exact solution decay with |k|. Using (5.6) to show similar decay of the discrete Fourier coefficients is an essential part of our convergence proof. The reason why the source discretizations proposed by Petersson et al.
are not motion-consistent is that they do not admit a result similar to (5.6). We elaborate on this fact below.
The Fourier coefficients of a source discretization of the form (2.5) can be written as (5.7) d^{x_0}_k = (e^{−ikx_0}/L) f_{x_0}(kh),
for some function f x0 . Here we have taken the liberty of adapting their notation to the current setting by adding the subscript x 0 , to highlight that f x0 may change with x 0 . The source d x0 does not satisfy the shift theorem, because
(5.8) d^{x_0}_k − e^{−ikx_0} d^0_k = (e^{−ikx_0}/L) (f_{x_0}(kh) − f_0(kh)),
and f x0 = f 0 does not hold, in general. The shift theorem would be satisfied if f x0 = f 0 for any x 0 or, equivalently,
(5.9) d dx 0 f x0 (kh) = 0.
Let us now attempt to derive a result similar to (5.6) for d^{x_0}. With constant source velocity, the Fourier coefficients of the corresponding discrete solution, which we here denote by v_k, are
(5.10) v k (t) = 1 L t 0 f x0 (kh)g(τ )e ik((c P −v0)τ −c P t) dτ.
The next step is to integrate by parts. We then need to consider the time derivative of f x0 (kh). If the time derivative is zero, we can proceed to derive the desired result. By the chain rule, we have
(5.11) d dt f x0 (kh) = v 0 d dx 0 f x0 (kh).
Notice that the time derivative is zero if
(5.12) v 0 = 0 or d dx 0 f x0 (kh) = 0.
That is, we need either a stationary source or a discrete δ distribution that satisfies the shift theorem. Notice that for a stationary source, the requirement on f x0 vanishes.
Since f x0 does not depend on the absolute position, only on the distance to the nearest grid point, we have f x0 = f x0+h . If d/dx 0 f x0 is not identically zero, it follows that
(5.13) d dx 0 f x0 (kh) ∝ h −1 ,
and it appears impossible to bound d/dx 0 f x0 as h → 0. We believe that these oscillations in f x0 are the cause of the numerical artifacts in Figures 4(b) and 4(c).
Moment conditions. A discrete δ distribution d^{x_0} satisfies m moment conditions if (5.14) (x^ν, d^{x_0})_h = x_0^ν, ν = 0, . . . , m − 1. Petersson et al.
showed that the moment conditions (5.14) imply that (assuming m ≥ 1)
(5.15) f x0 (0) = 1, f (ν) x0 (0) = 0, ν = 1, . . . , m − 1.
Since the motion-consistent source is defined in terms of its Fourier coefficients, it is natural to also formulate moment conditions in the Fourier domain. Hence, rather than attempting to satisfy (5.14), we say that δ^{x_0} satisfies m moment conditions if (5.16) F(0) = 1 and (5.17) F^{(ν)}(0) = 0, ν = 1, . . . , m − 1.
5.3. Sonic boom conditions. The numerical sonic boom manifests for wavenumbers in the vicinity of k* such that c P(k*h) = v_0, which means that the finite difference operator propagates the k* mode with the source velocity. Equation (5.6) indicates that we cannot derive a strong enough bound on u_k near k = k* unless F → 0 sufficiently fast as k → k*. To ensure that F → 0 sufficiently fast, we introduce sonic boom conditions that we require F to satisfy. Let k* denote the smallest positive sonic boom wavenumber. We say that F satisfies s sonic boom conditions if F is smooth on (0, k*h) and
(5.18) F (ν) (k * h) = 0, ν = 0, . . . , s − 1, F (kh) = 0, k > k * .
If there are no sonic boom wavenumbers, i.e., no solutions to c P (k * h) = v 0 , then the sonic boom conditions do not imply any conditions on F .
Remark 1. It may not be strictly necessary to require F (kh) = 0 for k > k * , but it is convenient. Wavenumbers k > k * are generally under-resolved and our experience is that removing them from the source spectrum tends to make the discrete solution smoother without causing any loss of accuracy.
The sonic boom conditions may be viewed as a generalization of the smoothness conditions introduced by Petersson et al. for stationary sources. The effect of s smoothness conditions is that (5.19) f (ν) x0 (π) = 0, ν = 0, . . . , s − 1. Note that Petersson et al. considered centered finite difference operators, for which kh = π is the only solution to P (kh) = 0 (see Figure 2). Since the source velocity is zero, this solution corresponds to the sonic boom wavenumber, i.e., k * h = π. The smoothness conditions (5.19) are thus the sonic boom conditions corresponding to v 0 = 0.
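The sonic boom wavenumber can be located numerically. A sketch for the fourth-order centered stencil with c = 1 and v_0 = 0.3 as in Section 2 (the bisection approach and tolerances are our own choices):

```python
import math

def P4(kappa):
    # symbol factor of the 4th-order centered stencil, cf. (4.8)
    return ((4/3)*math.sin(kappa) - (1/6)*math.sin(2*kappa))/kappa

def sonic_boom_kappa(v0, c=1.0, tol=1e-12):
    # Smallest kappa = k*h in (0, pi] with c*P(kappa) = v0, found by bisection.
    # Assumes P is strictly decreasing on (0, pi], as for the stencils here.
    lo, hi = 1e-9, math.pi
    assert c*P4(lo) > v0 > c*P4(hi)          # root bracketed: P(0+) = 1, P(pi) = 0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if c*P4(mid) > v0 else (lo, mid)
    return 0.5*(lo + hi)

kstar_h = sonic_boom_kappa(0.3)              # c = 1, v0 = 0.3 as in Section 2
assert abs(P4(kstar_h) - 0.3) < 1e-10
```

For v_0 = 0 the root is k*h = π, recovering the smoothness conditions of the stationary case; for v_0 = 0.3 the root lies somewhat below π.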
The sonic boom conditions allow us to prove the following lemma, which is essential to the convergence proof. Lemma 1. If F satisfies s sonic boom conditions, then there is a constant C such that
(5.20) |F(kh)| / |c P(kh) − v_0|^s ≤ C, |k| ≤ k*.
Proof. Note that
(5.21) |F(kh)| / |c P − v_0|^s = (1/c^s) |F(kh)| / |P − v_0/c|^s = (1/c^s) |F(kh)| / |G(kh)|^s, where (5.22) G(kh) = P(kh) − v_0/c.
Assume that G has a zero at k * h, of multiplicity α. By Taylor's theorem, there are positive constants ∆κ and C G such that
(5.23) |G(kh)| ≥ C G |kh − k * h| α , k * h − ∆κ ≤ kh ≤ k * h.
Due to the sonic boom conditions and Taylor's theorem, there is a constant C F such that
(5.24) |F (kh)| ≤ C F |kh − k * h| s , k * h − ∆κ ≤ kh ≤ k * h.
We divide the interval [0, k * h) into two subintervals:
I 1 = [0, k * h − ∆κ] and I 2 = [k * h − ∆κ, k * h).
On I 1 , G is nonzero and hence |G| ≥ G min > 0. We also have |F | ≤ F ∞ < ∞, by assumption. It follows that
(5.25) |F(kh)| / |G(kh)|^s ≤ ‖F‖_∞ / G_min^s =: C_1.
On I 2 , using (5.23) and (5.24) yields
(5.26) |F(kh)| / |G(kh)|^s ≤ (C_F |kh − k*h|^s) / (C_G^s |kh − k*h|^{αs}).
We note that for a general multiplicity α, F would actually need to satisfy αs sonic boom conditions. All the finite difference operators considered in this paper are such that P is a strictly decreasing function of |k|, which allows us to assume α = 1.
We obtain
(5.27) |F(kh)| / |G(kh)|^s ≤ (C_F |kh − k*h|^s) / (C_G^s |kh − k*h|^s) = C_F / C_G^s =: C_2.
Combining the two subintervals, we find that (5.20) holds with
(5.28) C = max(C_1/c^s, C_2/c^s).
Practical implementation of moment and sonic boom conditions.
While there are infinitely many choices of F that satisfy m moment conditions and s sonic boom conditions, in our implementation we have opted to define F as
(5.29) F (κ) = Q m+s−1 (κ), 0 ≤ κ ≤ k * h 0, κ > k * h ,
where Q m+s−1 denotes a polynomial of degree m+s−1. This polynomial is uniquely determined by the m moment and s sonic boom conditions. Since F is assumed to be even, (5.29) defines F (κ) for negative κ as well.
The non-zero part of F , given by Q m+s−1 , is displayed for the cases m = s = q, q = 2, 6, 14 in Figure 5. Recall that q moment conditions imply that q−1 derivatives of F are zero at kh = 0 while q sonic boom conditions imply that q − 1 derivatives of F are zero at kh = k * h.
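A sketch of one way to implement (5.29) and (5.1) (the linear solve for Q and all parameter values below are our own choices, not taken from the text): the m + s conditions determine Q_{m+s−1} through a small linear system, and the source coefficients follow by summing the Fourier series.

```python
import math
import numpy as np

def F_coeffs(m, s):
    # Coefficients (ascending powers of t = kappa/(k*h)) of Q_{m+s-1} in (5.29):
    # m moment conditions at t = 0, s sonic boom conditions at t = 1.
    d = m + s
    A = np.zeros((d, d)); b = np.zeros(d); b[0] = 1.0
    for nu in range(m):                       # Q^{(nu)}(0) = (1 if nu == 0 else 0)
        A[nu, nu] = math.factorial(nu)
    for nu in range(s):                       # Q^{(nu)}(1) = 0
        for j in range(nu, d):
            A[m + nu, j] = math.prod(range(j, j - nu, -1)) if nu > 0 else 1.0
    return np.linalg.solve(A, b)

def mc_delta(x, x0, L, kstar_h, m, s):
    # Motion-consistent delta (5.1): delta_j = (1/L) sum_k F(kh) e^{ik(x_j - x0)},
    # with F(kh) = Q(|kh|/(k*h)) for |kh| <= k*h and 0 beyond.
    Mpts = len(x); N = (Mpts - 1)//2; h = L/Mpts
    a = F_coeffs(m, s)
    n = np.arange(-N, N + 1)
    t = np.abs(2*np.pi*n*h/L)/kstar_h
    F = np.where(t <= 1.0, np.polyval(a[::-1], np.minimum(t, 1.0)), 0.0)
    # F is even, so the imaginary parts cancel pairwise:
    return (F*np.cos(np.outer(x - x0, 2*np.pi*n/L))).sum(axis=1)/L

# Example: m = s = 4 and kstar_h = pi (the stationary case v0 = 0).
L = 1.0; Mpts = 101; h = L/Mpts
x = h*np.arange(Mpts); x0 = 0.503
d = mc_delta(x, x0, L, np.pi, 4, 4)
assert abs(h*d.sum() - 1.0) < 1e-10                       # F(0) = 1: unit discrete mass
d_shift = mc_delta(x, x0 + h, L, np.pi, 4, 4)
assert np.allclose(d_shift, np.roll(d, 1), atol=1e-9)     # shift theorem (5.2)
```

For m = s = 4 the solve reproduces the smoothstep-like polynomial 1 − 35t⁴ + 84t⁵ − 70t⁶ + 20t⁷, and moving x_0 by one grid spacing shifts the coefficients by exactly one index, as the shift theorem requires.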
We define the Fourier coefficients of the error as (5.30) ε_k = û_k − u_k, k ∈ K_N, where û_k denotes the coefficients of the semidiscrete solution and u_k those of the exact solution.
We will first bound the Fourier coefficients of the error, and then use that result to bound the error itself. The proofs are quite similar to the proofs in Petersson et al. [10], with some extensions to account for a non-zero source velocity.
Theorem 1. Let g be p + 2 times continuously differentiable and let g(t) = 0 for t ≤ 0 and t ≥ t_1. If F satisfies p moment conditions and p + 1 sonic boom conditions, then, for any t ≥ t_1,
(5.31) ε 0 = 0 and (5.32) | ε k | ≤ C h p |k| , k ∈ K N \ {0},
for some constant C.
Proof. We organize the proof into three cases: k = 0, 2πh L ≤ |kh| ≤ κ 0 , and |kh| > κ 0 . The constant κ 0 is not known at this point but will be specified under Case 2 below. We will frequently use C to denote generic constants that are independent of k and h.
Case 1: k = 0.
Compare the exact solution (3.10) with the discrete solution (5.5) and use that F (0) = 1 and P (0) = 1. It follows that ε 0 = 0.
Case 2: 2πh/L ≤ |kh| ≤ κ_0. Note that the error can be expressed as follows:
(5.33) |ε_k| = |û_k − u_k| = |û_k − F(kh) u_k + (F(kh) − 1) u_k| ≤ |û_k − F(kh) u_k| + |F(kh) − 1| |u_k| =: T_1 + T_2.
Let us first bound T 2 . By (3.12), we have
(5.34) | u k | ≤ C |k| n .
By the assumption of p moment conditions, we have
(5.35) |F (kh) − 1| = O((kh) p ).
It follows that
(5.36) T 2 = |F (kh) − 1| | u k | ≤ O((kh) p ) C |k| n = 1 |k| n O((kh) p ).
Because g is p + 2 times continuously differentiable we may set n = p + 1, which yields (5.37)
T 2 = 1 |k| p+1 O((kh) p ).
For T 1 , we have (5.38)
T 1 = | u k − F (kh) u k | = F (kh)(−1) n L(ik) n t 0 g (n) (τ ) e ik((c P −v0)τ −c P t) (c P − v 0 ) n − e ik((c−v0)τ −ct) (c − v 0 ) n dτ ≤ C |F (kh)| |k| n t 0 g (n) (τ ) e ik((c P −v0)τ −c P t) (c P − v 0 ) n − e ik((c−v0)τ −ct) (c − v 0 ) n dτ ≤ C |k| n t 0 e ik((c P −v0)τ −c P t) (c P − v 0 ) n − e ik((c−v0)τ −ct) (c − v 0 ) n dτ ≤ C |k| n t 0 e ik((c−v0)τ −ct) (c − v 0 ) n e ikc( P −1)(τ −t) (c − v 0 ) n (c P − v 0 ) n − 1 dτ ≤ C |k| n t 0 e ikc( P −1)(τ −t) (c − v 0 ) n (c P − v 0 ) n − 1 dτ.
Next, we need to utilize that P(kh) ≈ 1. Note that
(5.39) (c − v 0 ) n (c P − v 0 ) n = (c − v 0 ) n (c − v 0 + c( P − 1)) n = 1 1 + c c−v0 ( P − 1) n = 1 (1 + z) n ,
where we have defined
(5.40) z := c c − v 0 ( P − 1) = O((kh) p ).
Taylor expanding yields
(5.41) 1 (1 + z) n = 1 − nz + n(n + 1) 2 z 2 + ... = 1 + O(z) = 1 + O((kh) p ),
where we used that n does not depend on k or h. We conclude that
(5.42) (c − v 0 ) n (c P − v 0 ) n = 1 + O((kh) p ).
Substituting (5.42) into (5.38) yields (5.43)
T 1 ≤ C |k| n t 0 e ikc( P −1)(τ −t) (1 + O((kh) p )) − 1 dτ ≤ C |k| n t 0 e ikc( P −1)(τ −t) − 1 + O((kh) p ) dτ.
Noting that (5.44) e iβ − 1 ≤ |β| for any real β, we obtain
(5.45) e ikc( P −1)(τ −t) − 1 ≤ kc( P − 1)(τ − t) = |k| O((kh) p ),
where the estimate τ − t = O(1) is valid because we consider the final time fixed. Using (5.45) in (5.43) leads to
(5.46) T 1 ≤ C |k| n (|k| O((kh) p ) + O((kh) p )) = 1 + |k| −1 |k| n−1 O((kh) p ).
Because we are studying wavenumbers |k| ≥ 2π/L, we have |k| −1 ≤ L/2π = C. It follows that (5.47)
T 1 = 1 |k| n−1 O((kh) p ).
Since g is p + 2 times continuously differentiable we may set n = p + 2, which yields
(5.48) T 1 = 1 |k| p+1 O((kh) p ).
Adding T 1 and T 2 leads to the error estimate
(5.49) | ε k | ≤ T 1 + T 2 = 1 |k| p+1 O((kh) p ) + 1 |k| p+1 O((kh) p ) = 1 |k| p+1 O((kh) p ).
It follows that there exist constants C and κ 0 such that, for |kh| ≤ κ 0 ,
(5.50) | ε k | ≤ C |k| p+1 |kh| p = C h p |k| .
Case 3: |kh| > κ_0. In this case we prove that the error is small by proving that both û_k and u_k are small. By (3.12), we have
(5.51) | u k | ≤ C |k| n = C h p |kh| p |k| n−p ≤ C h p |k| n−p .
Setting n = p + 1 yields the desired estimate
(5.52) | u k | ≤ C h p |k| .
Now consider u k . For |k| ≥ k * , we have F (kh) = 0 and hence u k = 0 according to the solution formula (5.5). For |k| < k * , (5.6) states that
(5.53) u k = F (kh) (−1) n L(ik(c P − v 0 )) n t 0 g (n) (τ )e ik((c P −v0)τ −c P t) dτ.
Taking the absolute value yields
(5.54) | u k | ≤ |F (kh)| L |k| n c P − v 0 n t 0 g (n) (τ ) dτ ≤ C |F (kh)| |k| n c P − v 0 n = Ch n−1 |kh| n−1 |k| |F (kh)| c P − v 0 n ≤ Ch n−1 |k| |F (kh)| c P − v 0 n Setting n = p + 1 yields (5.55) | u k | ≤ C h p |k| |F (kh)| c P − v 0 p+1 .
Because F satisfies p + 1 sonic boom conditions, we may apply Lemma 1 to obtain the estimate
(5.56) | u k | ≤ C h p |k| .
The error satisfies
(5.57) |ε_k| = |û_k − u_k| ≤ |û_k| + |u_k| ≤ C h^p/|k| + C h^p/|k| ≤ C h^p/|k|.
Remark 2. Petersson et al. showed that it is possible to derive an almost equally strong bound on |û_k| with only p smoothness conditions, at the cost of slightly more involved analysis. In theory we could have taken a similar approach, but with the motion-consistent source discretization there is no obvious drawback of requiring more sonic boom conditions. Hence, for simplicity, we here require p + 1 conditions.
Theorem 2. Under the same assumptions as in Theorem 1, the error in the physical domain,
(5.58) ε_j = u_j − u|_{x=x_j},
satisfies
(5.59) ‖ε‖_h ≤ C h^p,
for some constant C.
Proof. The error at grid point x j is
(5.60) ε j = u j − u| x=xj = k∈K N u k e ikxj − k∈K∞ u k e ikxj = k∈K N ( u k − u k )e ikxj − k∈K∞\K N u k e ikxj = k∈K N ε k e ikxj − k∈K∞\K N u k e ikxj .
By the Plancherel theorem,
(5.61) (ε, ε) h = k∈K N | ε k | 2 + k∈K∞\K N | u k | 2 .
Using the bound | u k | ≤ C |k| −(p+1) , which follows from (3.12) with n = p + 1, the second sum in (5.61) can be bounded as
(5.62) k∈K∞\K N | u k | 2 ≤ C k∈K∞\K N 1 |k| 2p+2 = Ch 2p k∈K∞\K N 1 |k| 2 |kh| 2p ≤ Ch 2p k∈K∞\K N 1 |k| 2 = Ch 2p ∞ |m|=N +1 1 2πm L 2 = Ch 2p ∞ |m|=N +1 1 m 2 ≤ Ch 2p ,
where we used that the series converges in the last step. The first sum in (5.61) satisfies
(5.63) k∈K N | ε k | 2 ≤ k∈K N \{0} C h p |k| 2 = Ch 2p N |m|=1 1 2πm L 2 = Ch 2p N |m|=1 1 m 2 ≤ Ch 2p .
We conclude that
(5.64) ‖ε‖_h^2 = (ε, ε)_h ≤ C h^{2p} + C h^{2p} = C h^{2p},
and the final result follows after taking the square root of (5.64).
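Theorem 2 can be illustrated numerically without time stepping, since (5.5) gives the semidiscrete solution in closed form when g is Gaussian. The following sketch (all parameters, the choice m = 4 and s = 5, and the closed-form evaluation are our own choices) compares it with the method-of-characteristics solution u(x, t) = g((ct − x)/(c − v_0))/(c − v_0) and observes a rate close to p = 4:

```python
import math
import numpy as np

L, c, v0, t = 4.0, 1.0, 0.3, 2.0     # x0(t) = v0*t; final time after the source shuts off
sigma, t0 = 0.15, 1.0                # Gaussian g, negligible outside [0, t]

def g(tau):
    return np.exp(-(tau - t0)**2/(2*sigma**2))/(sigma*math.sqrt(2*math.pi))

def P4(kappa):
    # symbol factor of the 4th-order centered stencil, cf. (4.8)
    kappa = np.asarray(kappa, dtype=float)
    safe = np.where(kappa == 0.0, 1.0, kappa)
    return np.where(kappa == 0.0, 1.0, ((4/3)*np.sin(kappa) - (1/6)*np.sin(2*kappa))/safe)

def F_coeffs(m, s):
    # polynomial Q_{m+s-1} of (5.29): m moment conditions at t=0, s sonic boom at t=1
    d = m + s
    A = np.zeros((d, d)); b = np.zeros(d); b[0] = 1.0
    for nu in range(m):
        A[nu, nu] = math.factorial(nu)
    for nu in range(s):
        for j in range(nu, d):
            A[m + nu, j] = math.prod(range(j, j - nu, -1)) if nu > 0 else 1.0
    return np.linalg.solve(A, b)

# sonic boom wavenumber: smallest kappa with c*P(kappa) = v0 (P strictly decreasing)
lo, hi = 1e-8, math.pi
while hi - lo > 1e-13:
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if float(c*P4(mid)) > v0 else (lo, mid)
kstar_h = 0.5*(lo + hi)

def max_error(M, m=4, s=5):
    h = L/M; N = (M - 1)//2
    x = h*np.arange(M)
    k = 2*np.pi*np.arange(-N, N + 1)/L
    a = F_coeffs(m, s)
    tt = np.abs(k*h)/kstar_h
    F = np.polyval(a[::-1], np.minimum(tt, 1.0))   # Q(1) = 0 kills modes beyond k*
    Ph = P4(k*h)
    om = k*(c*Ph - v0)
    # (5.5) with the Gaussian integral evaluated in closed form over all tau:
    uk = F*np.exp(-1j*k*c*Ph*t)*np.exp(1j*om*t0)*np.exp(-(sigma*om)**2/2)/L
    u_num = (np.exp(1j*np.outer(x, k)) @ uk).real
    u_exact = g((c*t - x)/(c - v0))/(c - v0)       # method of characteristics
    return np.max(np.abs(u_num - u_exact))

e1, e2 = max_error(201), max_error(401)
assert math.log2(e1/e2) > 3.3                      # close to the design order p = 4
```

Because the coefficients are evaluated exactly, the measured error isolates the spatial discretization and source representation, mirroring the estimates in the proofs above.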
Windowing the source
For a source discretization to be applicable in a finite domain, it must have compact support in physical space. While the motion-consistent source discretization formally has global support, the magnitude of the source coefficients δ x0 j decays rapidly as the distance |x j − x 0 | increases, see Figure 3. We will show numerically that we can window the source to a finite width, which decreases with grid refinement, without reducing the order of accuracy. The procedure is similar to the truncation of the source in [16].
To analyze the decay rate of δ x0 j we introduce the Fourier interpolant,
(6.1) d x0 (x) = k∈K N δ x0 k e ikx = 1 L k∈K N F (kh)e ik(x−x0) ,
which satisfies d x0 (x j ) = δ x0 j . Since F (κ) is zero for |κ| ≥ π, we may extend the sum in (6.1) to infinity:
(6.2) d x0 (x) = 1 L k∈K∞ F (kh)e ik(x−x0) .
To provide some intuition for the decay rate of d x0 , let us consider the (non-periodic) function
(6.3) D x0 (x) = 1 2πL ∞ −∞ F (kh)e ik(x−x0) dk. The Fourier transform of D x0 is (6.4) F[D x0 ](k) = e −ikx0 L F (kh).
Notice that d x0 k = F[D x0 ](k), which leads us to expect d x0 ≈ D x0 . If F satisfies q moment conditions and q sonic boom conditions (i.e., m = s = q), applying the integration-by-parts formula q times yields
(6.5) D x0 (x) = 1 2πL −h i(x − x 0 ) q ∞ −∞ F (q) (kh)e ik(x−x0) dk.
It follows that
(6.6) |D^{x_0}(x)| ≤ (1/(2πL)) (h^q/|x − x_0|^q) ∫_{−π/h}^{π/h} |F^{(q)}(kh)| dk ≤ (1/(2πL)) (h^q/|x − x_0|^q) (2π/h) C = C h^{q−1}/|x − x_0|^q,
where we used that F (q) (κ) = 0 for |κ| ≥ π and F (q) is assumed to be bounded. Assuming that the same decay rate holds for d x0 leads to the conjecture
(6.7) |d^{x_0}(x)| ≤ C h^{q−1}/|x − x_0|^q,
for some constant C. We will verify that (6.7) holds numerically. Let W denote the rectangular window of width 2ℓ: (6.8) W(ξ) = 1 for |ξ| < ℓ, and W(ξ) = 0 for |ξ| ≥ ℓ.
Assuming that the conjecture (6.7) holds, the error introduced by windowing d^{x_0} is
(6.9) e(x) = |W_\ell(x - x_0) d^{x_0}(x) - d^{x_0}(x)| \le \frac{C h^{q-1}}{\ell^q}.
Note that the error vector arising from the windowing satisfies
(6.10) e_j = |W_\ell(x_j - x_0)\, \delta^{x_0}_j - \delta^{x_0}_j| = |W_\ell(x_j - x_0)\, d^{x_0}(x_j) - d^{x_0}(x_j)| = e(x_j).
Setting \ell = C_\ell h^w, where C_\ell is a constant, yields
(6.11) e_j \le C h^{q - 1 - qw}.
To make e_j of order p, we need
(6.12) q - 1 - qw \ge p \quad \Leftrightarrow \quad w \le \frac{q - 1 - p}{q}.
Given w and p, this implies the following requirement on q:
(6.13) q \ge \frac{p + 1}{1 - w}.
Clearly, the ideal choice w = 1, which would yield a source width proportional to h, is not possible. As an example, setting w = 1/2 yields the condition
(6.14) q \ge 2p + 2.
Note that a width proportional to h^w implies a number of nonzero coefficients proportional to h^{w-1}.
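The requirements above reduce to simple integer arithmetic. The following sketch (the function name is ours, not from the paper) computes the smallest admissible q for a given order p and window exponent w, cf. (6.13) and (6.14):

```python
import math

def required_q(p, w):
    """Smallest integer q satisfying (6.13): q >= (p + 1)/(1 - w).

    p is the desired convergence order; w in [0, 1) is the exponent in the
    window width, which scales like h^w. The number of nonzero source
    coefficients then scales like h^(w - 1), i.e. N^(1 - w) on an N-point grid.
    """
    return math.ceil((p + 1) / (1.0 - w))

# For w = 1/2 the requirement reduces to q >= 2p + 2, cf. (6.14):
for p in (2, 4, 6):
    assert required_q(p, 0.5) == 2 * p + 2
```

Note that choosing a wider window (smaller w) relaxes the requirement on q, at the cost of a larger source footprint.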
6.1. Numerical verification. In this section, the conjectured bound on e_j in terms of q and w, stated in (6.11), is verified by numerical experiments. We construct motion-consistent discrete \delta distributions for various parameters and compute the error vector e by comparing the windowed \delta discretization, W_\ell \delta^{x_0}, with the global version, \delta^{x_0}. We set L = 1 and x_0 = 1/2 + 1/29. In all experiments, the constant C_\ell is chosen such that \ell = 1/2 when h = 1/16. Figure 6 shows the l_2 norm of e for w = 1/4 and w = 3/4, with k^* h = \pi. In all cases, a slightly higher p is observed than the conjecture (6.11) suggests, which indicates that the conjectured bound is valid, although perhaps not sharp.
To further verify the conjectured bound, Tables 1 and 2 show the observed convergence rate p in numerical experiments with various choices of w, for k^* h = \pi and k^* h = \frac{3}{4}\pi. The observed p is computed by taking the average of
(6.15) p_h = \log_2 \frac{e_{2h}}{e_h},
for h \in \{2^{-4}, 2^{-5}, 2^{-6}, \ldots, 2^{-10}\}, where e_h denotes the l_2 norm of the error vector obtained with grid spacing h. Errors smaller than 10^{-9} were excluded to avoid values dominated by floating point errors. The observed rates further indicate that the conjectured bound is valid (but perhaps not sharp).

Table 2. Observed and conjectured (in parentheses) p with k^* h = 0.75\pi.
7. Implementation aspects
There are some design choices to be made when implementing the motion-consistent source discretization. In this section, we summarize the requirements on the source discretization and present the implementation used in the numerical experiments.
Recall that the motion-consistent discrete \delta distribution is defined by its Fourier coefficients:
(7.1) \delta^{x_0}_k = \frac{e^{-ikx_0}}{L} F(kh),
where we have chosen to define F as in (5.29). To obtain convergence order p, \delta^{x_0} needs to satisfy m = p moment conditions and s = p + 1 sonic boom conditions according to Theorem 1. For simplicity, we set m = s = q and require q \ge p + 1.
Additionally, to be able to use a window of width \ell = C_\ell h^w without reducing the convergence rate, we need to satisfy (6.13). In our experiments, we opt for w = 1/2, which corresponds to a source discretization that covers on the order of \sqrt{N} grid points. For w = 1/2, the requirement (6.13) becomes q \ge 2p + 2. To satisfy both the requirements from Theorem 1 and the requirement from the windowing, we set
(7.2) q = \max(p + 1, 2p + 2) = 2p + 2.
In the case of an accelerating source, we define the sonic boom wavenumber k * as the smallest positive solution to cP (kh) = v max , where v max denotes the highest source velocity during the simulation. If cP (kh) = v max does not have a solution in the interval 0 ≤ kh ≤ π, no sonic boom conditions are required. Using a higher velocity than occurs in the simulation (e.g., γv max , γ > 1) to define k * does not reduce the convergence rate of the numerical method. However, using an excessively high velocity will exclude more wavenumbers than necessary and might therefore affect the accuracy for a given h.
The Fourier interpolant d x0 of δ x0 is shown for different combinations of q and k * in Figure 7. Figure 7(a) shows d x0 for different q, with k * h = π. Figure 7(b) shows d x0 for different k * , with q = 14.
Since \delta^{x_0} is defined in terms of its Fourier coefficients, evaluating it on the grid by naively summing over Fourier modes requires on the order of N^2 operations. In our implementation we utilize the fast Fourier transform, which reduces the complexity to N \log(N). When the source is windowed, the source discretization is computed for the whole domain and values outside the window are discarded. In applications where the source is far from boundaries, the user might decide to use a larger window than we have suggested here, to minimize truncation errors from the windowing.
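As an illustration of this FFT-based construction, the sketch below builds the grid values of \delta^{x_0} from the coefficients (7.1) and applies the rectangular window (6.8). The profile function F used here is only an illustrative stand-in (a smooth bump with F(0) = 1 and negligible values at |kh| = \pi), not the paper's definition (5.29); all names are ours.

```python
import numpy as np

def discrete_delta(N, L, x0, F):
    """Grid values of the discrete delta defined by (7.1), obtained with an
    inverse FFT in O(N log N) operations instead of the naive O(N^2) sum."""
    h = L / N
    m = np.fft.fftfreq(N, d=1.0 / N)          # integer mode numbers, FFT order
    k = 2.0 * np.pi * m / L
    dk = np.exp(-1j * k * x0) / L * F(k * h)  # Fourier coefficients (7.1)
    return np.real(np.fft.ifft(dk) * N)       # numpy's ifft carries a 1/N factor

def apply_window(d, N, L, x0, ell):
    """Rectangular window (6.8): zero all entries with |x_j - x0| >= ell,
    with the distance measured periodically."""
    x = np.arange(N) * (L / N)
    dist = np.minimum(np.abs(x - x0), L - np.abs(x - x0))
    return np.where(dist < ell, d, 0.0)

# illustrative stand-in for F: flat near kh = 0, essentially zero at |kh| = pi
F = lambda kappa: np.exp(-(kappa / 2.0) ** 16)

N, L, x0 = 256, 1.0, 0.5 + 1.0 / 29.0
d = discrete_delta(N, L, x0, F)
dw = apply_window(d, N, L, x0, 0.3)
```

A useful check is the discrete zeroth moment: since the grid sum picks out only the k = 0 coefficient, h \sum_j d_j equals F(0) = 1 up to rounding, with or without the window.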
8. Numerical experiments in 1D
In this section we verify the convergence rate of the windowed source discretization when solving the advection equation (3.1) with constant source velocity. We choose the source trajectory x_0(t) = 1 + v_0 t. This problem has the exact solution
(8.1) u(x, t) = \begin{cases} \dfrac{g(\tau)}{c - v_0}, & \tau > 0, \\ 0, & \tau \le 0, \end{cases}
where
(8.2) \tau = \frac{ct - (x - 1)}{c - v_0}.
We select the parameters c = 1 and v_0 = 0.5. To ensure a smooth solution we use a Gaussian source time function
(8.3) g(t) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(t - t_0)^2}{2\sigma^2}},
with t 0 = 1 and σ = 0.15. The error is computed at time t end = 2. The exact solution at t end and the source path are displayed in Figure 8. The test problem is discretized in space on the periodic domain x ∈ [0, 4] with central finite differences of order p = 2, 4, 6 according to (4.6). We use the classical fourth order Runge-Kutta method with time step ∆t = 0.2h for time integration.
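As a quick sanity check of (8.1) and (8.2) (our own sketch, not part of the paper's experiments), one can verify by finite differences that u_t + c u_x vanishes away from the source trajectory:

```python
import math

c, v0 = 1.0, 0.5
t0, sigma = 1.0, 0.15

def g(t):
    # Gaussian source time function (8.3)
    return math.exp(-(t - t0) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def u(x, t):
    # exact solution (8.1)-(8.2) for the source trajectory x0(t) = 1 + v0*t
    tau = (c * t - (x - 1.0)) / (c - v0)
    return g(tau) / (c - v0) if tau > 0 else 0.0

# at (x, t) = (1.5, 1.0) the solution is smooth and nonzero (tau = 1),
# and the PDE residual u_t + c*u_x should vanish up to discretization error
x, t, eps = 1.5, 1.0, 1e-5
ut = (u(x, t + eps) - u(x, t - eps)) / (2.0 * eps)
ux = (u(x + eps, t) - u(x - eps, t)) / (2.0 * eps)
residual = ut + c * ux
```

Ahead of the wave, where \tau \le 0, the solution is identically zero, which the sketch also reproduces.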
In the experiment, the point source discretization has been truncated by the window in (6.8) with w = 1/2, q = 2p + 2 and C_\ell = 4. Our theoretical results indicate that this combination of q and w yields a discretization of convergence order p. The errors and convergence rates are displayed in Table 3. The convergence rate c_r is calculated as
(8.4) c_r = \frac{\log(e_{M_2}/e_{M_1})}{\log(M_2/M_1)},
where e_M is the l_2 norm of the error vector obtained with M grid points. Table 3 shows that the experimental convergence rates agree well with the theoretical results.
Table 3. l_2 errors and convergence rates for the 1D advection equation with v_0 = 0.5 (columns: M, \log_{10}(e_M) and c_r for the 2nd-, 4th- and 6th-order schemes).

9. Numerical experiments in 2D

In this section we solve the acoustic wave equation in two dimensions,
(9.1) \rho \frac{d\bar{v}}{dt} + \nabla \theta = 0, \qquad \frac{1}{K} \frac{d\theta}{dt} + \nabla \cdot \bar{v} = g(t)\, \delta(\bar{x} - \bar{x}_0(t)),
where \bar{x} = [x, y]^T is position, \theta is pressure, and \bar{v} is particle velocity. We consider a square domain with side L = 2.5 and impose characteristic boundary conditions on all boundaries. We select the material parameters K = 1 and \rho = 1, which gives the wave speed c = 1. The source moves with constant speed v_0 = 0.5 along the circle given by
(9.2) \bar{x}_0(t) = \begin{bmatrix} x_0(t) \\ y_0(t) \end{bmatrix} = \begin{bmatrix} 1.25 + 0.2 \sin(5 v_0 t) \\ 1.25 + 0.2 \cos(5 v_0 t) \end{bmatrix}.
We use the Gaussian source time function (8.3) with t_0 = 1 and \sigma = 1/25. The problem is discretized in space with Summation-by-Parts finite-difference operators [3, 13] and the boundary conditions are imposed weakly, using Simultaneous Approximation Terms [2]. The procedure is identical to the centered finite-difference discretization in [12].
We define the two-dimensional discrete \delta distribution as the Cartesian product of two one-dimensional distributions corresponding to the coordinate directions,
(9.3) \delta^{\bar{x}_0} = \delta^{x_0} \otimes \delta^{y_0}.
This tensor-product construction of multidimensional sources was presented for stationary sources by Tornberg and Engquist in [14]. In both spatial directions, the sonic boom wavenumber k^* has been defined by v_{max} = v_0. As in the previous experiment, \delta^{x_0} and \delta^{y_0} have been truncated by the window in (6.8) with w = 1/2, q = 2p + 2 and C_\ell = 4.
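The tensor-product construction (9.3) is a single outer product of the two one-dimensional coefficient vectors. The sketch below (our own illustration; the profile F is a stand-in, not the paper's (5.29)) also checks that the product inherits the discrete zeroth moment:

```python
import numpy as np

def delta_1d(N, L, s0, F):
    """One-dimensional discrete delta from its Fourier coefficients (7.1)."""
    h = L / N
    m = np.fft.fftfreq(N, d=1.0 / N)
    k = 2.0 * np.pi * m / L
    dk = np.exp(-1j * k * s0) / L * F(k * h)
    return np.real(np.fft.ifft(dk) * N)

F = lambda kappa: np.exp(-(kappa / 2.0) ** 16)   # illustrative profile

N, L = 128, 2.5
h = L / N
dx = delta_1d(N, L, 1.25, F)    # delta^{x0} along x
dy = delta_1d(N, L, 1.05, F)    # delta^{y0} along y

# Cartesian product (9.3): an N x N rank-one array
d2 = np.outer(dx, dy)
```

Since each one-dimensional factor sums to 1/h, the two-dimensional source satisfies h^2 \sum_{i,j} (d2)_{ij} = 1, mirroring the one-dimensional zeroth moment.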
For time discretization, we again employ the fourth order Runge-Kutta method, with time step \Delta t = 0.1h. We study the error at time t_{end} = 1. Rather than an analytical solution, we compare against a reference solution, shown in Figure 9, computed with N = 3201 grid points in each coordinate direction. The convergence rate is computed as
(9.4) c_r = \frac{\log(e_{N_2}/e_{N_1})}{\log((N_2 - 1)/(N_1 - 1))},
where e_N is the l_2 norm of the error vector obtained with N points in each spatial direction. The obtained errors and convergence rates are displayed in Table 4. The convergence rates agree well with the theoretical and experimental results for one dimension. This indicates that the tensor product of motion-consistent discrete \delta distributions yields design-order convergence for accelerating sources in multiple dimensions. The accuracy of the finite difference scheme is reduced to order p/2 at boundaries. Therefore, to highlight the accuracy of the source discretization, we choose to end the simulation before the wavefield interacts with the boundaries. If the wavefield were allowed to interact with the boundaries, we would observe convergence of order p/2 + 1 [5]. We stress, however, that there are no problems associated with the wavefield interacting with the boundaries that would not be present in a simulation without a point source. The only requirement is that \bar{x}_0 stays far enough from boundaries that the footprint of the windowed \delta^{\bar{x}_0} does not overlap with the boundary stencils of the Summation-by-Parts finite difference discretization. Since \delta^{\bar{x}_0} was designed for periodic problems, it is only allowed to act on grid points where the centered finite difference operator is applied. As long as the source trajectory does not touch the boundary, this condition is always satisfied for h smaller than some h_0, since the source width decreases with grid refinement. In practice, however, if the source path is very close to the boundary, refining the grid until h < h_0 may of course be infeasible.
Table 4. l_2 errors and convergence rates for the 2D wave equation (columns: N, \log_{10}(e_N) and c_r for the 2nd-, 4th- and 6th-order schemes).
10. Conclusion
We have derived high-order discretizations of moving point sources in hyperbolic equations. Compared to a stationary source discretization, there are additional requirements on the moving source for convergence. First, the source must not excite modes that propagate with the same velocity as the source. Second, the Fourier spectrum amplitude of the discrete \delta distribution needs to be independent of the source position. The convergence properties of the source discretization are verified by numerical experiments with the advection equation in one dimension and with an accelerating source in the acoustic wave equation in two dimensions. The approximation of the moving source covers on the order of \sqrt{N} grid points on an N-point grid and is therefore not applicable if the source trajectory is very close to boundaries or interfaces; we hope to address this in future work.
Figure 1 shows the pressure component of the exact solution at time t = 2.
Figure 1. Exact solution of the acoustic wave equation with a moving point source and a Gaussian source time function.
Figure 3. Discrete \delta distributions \delta^{x_0} for the motion-inconsistent source with continuous \varphi (Source 1), the motion-inconsistent source with continuously differentiable \varphi (Source 2), and the motion-consistent source (Source 3).
Figure 4. Numerical solution to (2.1) with different finite difference operators and source discretizations on the domain x \in [0, 4], with h = 0.01 and x_0 = 1.6 + 0.3t. Panels: (a) centered finite differences, motion-inconsistent source with continuous \varphi; (b) dual-pair finite differences, motion-inconsistent source with continuous \varphi; (c) dual-pair finite differences, motion-inconsistent source with continuously differentiable \varphi; (d) centered finite differences, motion-consistent source.
5.2. Moment conditions. For a stationary source, requiring the discrete \delta distribution to satisfy an appropriate number of moment conditions ensures accuracy for low wavenumbers. The definition of moment conditions used by Petersson et al. states that d^{x_0} satisfies m moment conditions if F^{(\nu)}(0) = 0 for \nu = 1, \ldots, m - 1. A consequence of m moment conditions is that F satisfies
(5.17) F(kh) = 1 + O((kh)^m).
Figure 5. Non-zero part of F(kh) for the case m = s = q.
5.5. Convergence result for the advection equation. The errors in the Fourier coefficients are
Remark 2. Recall that the sonic boom conditions transition to the smoothness conditions of Petersson et al. if v_0 = 0.
Figure 6. Numerical verification of the conjectured relation between p, q and e. Dashed lines indicate the conjectured rates, while solid lines show the experimentally observed errors.
Table 1. Observed and conjectured (in parentheses) p with k^* h = \pi.
Figure 7. The motion-consistent discrete \delta distribution for different values of k^* and q.
Figure 8. Exact solution and source path for the wave equation in 1D.
Figure 9. Pressure component of the reference solution and source trajectory for the 2D wave equation.
5.1. Motion-inconsistent sources. Let d^{x_0} denote the discrete \delta distribution proposed by Petersson et al. for stationary sources. Unlike the motion-consistent discretization, which we designed in Fourier space, d^{x_0} is designed to have minimal support in physical space. Petersson et al. state that the Fourier coefficients take the form
References

[1] K. Aki and P. G. Richards. Quantitative seismology, 2nd edition. University Science Books, Sausalito, CA, USA, 2002.
[2] M. H. Carpenter, D. Gottlieb, and S. Abarbanel. Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes. J. Comput. Phys., 111(2):220-236, 1994. doi:10.1006/jcph.1994.1057.
[3] D. C. Del Rey Fernández, J. E. Hicken, and D. W. Zingg. Review of summation-by-parts operators with simultaneous approximation terms for the numerical solution of partial differential equations. Comput. Fluids, 95:171-196, 2014. doi:10.1016/j.compfluid.2014.02.016.
[4] L. Dovgilovich and I. Sofronov. High-accuracy finite-difference schemes for solving elastodynamic problems in curvilinear coordinates within multiblock approach. Appl. Numer. Math., 93:176-194, July 2015. doi:10.1016/j.apnum.2014.06.005.
[5] B. Gustafsson. The convergence rate for difference approximations to mixed initial boundary value problems. Math. Comp., 29(130):396-406, Apr. 1975. URL https://www.jstor.org/stable/2156498.
[6] B. Gustafsson, H.-O. Kreiss, and J. Oliger. Time dependent problems and difference methods. John Wiley & Sons, Inc., 1995. doi:10.1002/9781118548448.
[7] R. Makarewicz. Is a wind turbine a point source? (L). J. Acoust. Soc. Am., 129:579, 2011. doi:10.1121/1.3514426.
[8] O. O'Reilly, T. Lundquist, E. M. Dunham, and J. Nordström. Energy stable and high-order-accurate finite difference methods on staggered grids. J. Comput. Phys., 346:572-589, 2017. doi:10.1016/j.jcp.2017.06.030.
[9] C. S. Peskin. The immersed boundary method. Acta Numerica, 11:479-517, 2002. doi:10.1017/S0962492902000077.
[10] N. A. Petersson, O. O'Reilly, B. Sjögreen, and S. Bydlon. Discretizing singular point sources in hyperbolic wave propagation problems. J. Comput. Phys., 321:532-555, 2016. doi:10.1016/j.jcp.2016.05.060.
[11] N. A. Petersson and B. Sjögreen. Stable grid refinement and singular source discretization for seismic wave simulations. Commun. Comput. Phys., 8(5):1074-1110, 2010. doi:10.4208/cicp.041109.120210a.
[12] Y. Rydin, K. Mattsson, and J. Werpers. High-fidelity sound propagation in a varying 3D atmosphere. J. Sci. Comput., 77:1278-1302, 2018. doi:10.1007/s10915-018-0751-5.
[13] M. Svärd and J. Nordström. Review of summation-by-parts-operators schemes for initial-boundary-value problems. J. Comput. Phys., 268:17-38, 2014. doi:10.1016/j.jcp.2014.02.031.
[14] A.-K. Tornberg and B. Engquist. Numerical approximations of singular source terms in differential equations. J. Comput. Phys., 200(2):462-488, 2004. doi:10.1016/j.jcp.2004.04.011.
[15] J. Waldén. On the approximation of singular source terms in differential equations. Numer. Meth. Part. D. E., 15:503-520, 1999. doi:10.1002/(SICI)1098-2426(199907)15:4<503::AID-NUM6>3.0.CO;2-Q.
[16] S. Zahedi and A.-K. Tornberg. Delta function approximations in level set methods by distance function extension. J. Comput. Phys., 229(6):2199-2219, 2010. doi:10.1016/j.jcp.2009.11.030.
arXiv:2101.07213v3, doi:10.1103/PhysRevD.105.034028
Heavy-hadron molecular spectrum from light-meson exchange saturation
28 Feb 2022

Fang-Zheng Peng (School of Physics, Beihang University, 100191 Beijing, China), Mario Sánchez Sánchez (Centre d'Études Nucléaires, CNRS/IN2P3, Université de Bordeaux, 33175 Gradignan, France), Mao-Jun Yan (School of Physics, Beihang University, 100191 Beijing, China), and Manuel Pavon Valderrama (School of Physics, Beihang University, 100191 Beijing, China)
(Dated: March 1, 2022)
If known, the spectrum of heavy-hadron molecules will be a key tool to disentangle the nature of the exotic states that are being discovered in experiments. Here we argue that the general features of the molecular spectrum can be deduced from the idea that the short-range interaction between the heavy hadrons is effectively described by scalar and vector meson exchange, i.e. the σ, ω and ρ mesons. By means of a contact-range theory where the couplings are saturated by the aforementioned light mesons we are indeed able to postdict the X(3872) (as a D * D molecule) from the three P c (4312), P c (4440) and P c (4457) pentaquarks (asDΣ c andD * Σ c molecules). We predict a J PC = 1 −− DD 1 molecule at 4240 − 4260 MeV which might support the hypothesis that the Y(4260) is at least partly molecular. The extension of these ideas to the light baryons requires minor modifications, after which we recover approximate SU(4)-Wigner symmetry in the two-nucleon system and approximately reproduce the masses of the deuteron and the virtual state.
1. Introduction: Theoretical predictions of the hadronic spectrum are fundamental for testing our understanding of strong interactions against experiments. SU(3)-flavor symmetry [1, 2], the quark model [3-6] and the theory behind quarkonium [7-10] have provided valuable insights and clear predictions about which hadrons to expect and their approximate masses. With the experimental observation of exotic hadrons (hadrons that are neither three-quark states nor a quark-antiquark pair, or that do not fit into preexisting quark-model predictions [11-14]), new theoretical explanations have appeared, among which molecular hadrons [15, 16] are popular. However, the spectrum of molecular hadrons is not properly understood, with most theoretical applications of this idea being explicitly customized to explain a particular hadron or a few at most.
The present manuscript attempts to overcome this limitation by proposing a more general explanation of the spectrum of hadronic molecules (in the spirit of Refs. [17, 18]). The idea is as follows: first, we will describe the interaction between two heavy hadrons in terms of a contact-range potential. If the range of the binding mechanism between two heavy hadrons is indeed shorter than the size of the molecular state formed by the aforementioned hadrons, a contact-range theory will represent a good description. Second, we will assume that the couplings in the contact-range potential are saturated by light-meson exchange (i.e. \sigma, \rho and \omega) within the saturation procedure of Ref. [19]. Third, for effectively combining the contribution from the saturation of scalar and vector meson exchange, which happen at a different renormalization scale as the masses of these light mesons are different, we will follow a renormalization group equation (RGE). This RGE will tell us what is the importance of scalar and vector meson contributions to saturation relative to each other. Fourth, the couplings derived from saturation are expected to be valid modulo a proportionality constant, which we fix by solving the bound state equation for a molecular candidate, for instance the P_c(4312). Finally, we can derive the predictions of this procedure for other molecular states.

* [email protected]
2. Saturation: We will consider a generic two heavy-hadron system H_1 H_2, with H_i = D^{(*)}, \Sigma_c^{(*)}, etc. for i = 1, 2. Owing to heavy-quark spin symmetry (HQSS) their interaction only depends on the light-quark spins of heavy hadrons H_1 and H_2, which we denote S_{L1} and S_{L2}. If parity and light spin are conserved in each vertex (i.e. the H_1 H_2 \to H_2 H_1 transition does not happen), we can describe the H_1 H_2 system with a contact-range interaction that admits the multipolar expansion:
(1) V_C = C_0 + C_1 \hat{S}_{L1} \cdot \hat{S}_{L2} + \ldots,
plus higher order terms, if present, with no momentum/energy dependence and where \hat{S}_{Li} = \vec{S}_{Li}/S_{Li} is a reduced spin operator for hadron i = 1, 2. Higher multipoles can be built analogously, but in practice the monopolar and dipolar terms are more than enough to effectively describe the short-range interaction in most molecules, as higher multipoles will be suppressed [19]. By using a contact theory we are assuming that pion dynamics and coupled channel effects are perturbative corrections [20, 21] and that the resulting two-body bound state is not compact enough as to resolve the short-distance details of the light-meson exchanges binding it. Finally, this contact-range potential has to be regularized, for which a regulator function depending on a cutoff \Lambda is introduced and where the couplings become functions of this cutoff, i.e. C_J = C_J(\Lambda) for J = 0, 1. Concrete regularization details will be discussed later.
To determine the C_0 and C_1 couplings we assume that they are saturated with scalar- and vector-meson exchange. We begin by writing the Lagrangians for light-meson exchange in a suitable notation in which, instead of the full heavy-hadron fields, we use effective non-relativistic fields with the quantum numbers of the light quarks within the hadrons [19, 22]. This is motivated by the observation that in heavy-hadron interactions the heavy quarks are spectators. For the interaction of a scalar meson with the light-quark degrees of freedom inside a heavy hadron, the Lagrangian reads
(2) L = g_S\, q_L^\dagger \sigma\, q_L,
where g S is a coupling constant, σ the scalar meson field and q L the aforementioned effective non-relativistic light-quark subfield. For the vector mesons the Lagrangian can be written as a multipole expansion
(3) L = L_{E0} + L_{M1} + \ldots = g_V\, q_L^\dagger V^0 q_L + \frac{f_V}{2M}\, q_L^\dagger \epsilon_{ijk} \hat{S}_L^i \partial^j V^k q_L + \ldots,
where the dots indicate higher order multipoles. In this Lagrangian, g_V and f_V are coupling constants, \epsilon_{ijk} the Levi-Civita symbol, V^\mu = (V^0, \vec{V}) is the vector-meson field and M is the characteristic mass scale for this multipolar expansion, which for convenience we will set to be the nucleon mass, M = m_N \simeq 938.9 MeV. For simplicity we are not writing down explicitly the isospin or flavor indices. The number of terms depends on the spin of the light-quark degrees of freedom: for S_L = 0 (\Lambda_c) there is only the electric term, for S_L = 1/2 (D^{(*)}) there is also the magnetic dipole term, for S_L = 1 (\Sigma_c^{(*)}) an electric quadrupole term (which we will ignore), and so on.
In the H 1 H 2 system the potential for the scalar meson is
(4) V_S(\vec{q}) = -\frac{g_{S1}\, g_{S2}}{\vec{q}^{\,2} + m_S^2},
with \vec{q} the exchanged momentum and where g_{Si} refers to the scalar coupling for hadron i = 1, 2, while for the vector mesons we have
(5) V_{E0}(\vec{q}) = +\frac{g_{V1}\, g_{V2}}{\vec{q}^{\,2} + m_V^2},
(6) V_{M1}(\vec{q}) = +\frac{2}{3} \frac{f_{V1}\, f_{V2}}{4M^2}\, \hat{S}_{L1} \cdot \hat{S}_{L2}\, \frac{m_V^2}{\vec{q}^{\,2} + m_V^2} + \ldots,
with g_{Vi}, f_{Vi} the couplings for hadron i = 1, 2 and where the dots indicate terms that vanish for S-wave or contact-range terms for which the range is shorter than vector-meson exchange. Following Ref. [19], the saturation condition for scalar-meson exchange reads
(7) C_0^S(\Lambda \sim m_S) \propto -\frac{g_{S1}\, g_{S2}}{m_S^2},
where we stress that saturation is expected to work for Λ of the same order of magnitude as the mass of the exchanged light-meson (thus Λ ∼ m S ). For vector-meson exchange we have
(8) C_0^V(\Lambda \sim m_V) \propto \frac{g_{V1}\, g_{V2}}{m_V^2} \left( \zeta + \hat{I}_1 \cdot \hat{I}_2 \right),
(9) C_1^V(\Lambda \sim m_V) \propto \frac{f_{V1}\, f_{V2}}{6M^2} \left( \zeta + \hat{I}_1 \cdot \hat{I}_2 \right),
where we have now included isospin explicitly, \zeta = \pm 1 is a sign indicating the contribution from the omega (+1 for DD, \Sigma_c \bar{D}, \Sigma_c \Sigma_c and -1 for D\bar{D}, \Sigma_c D, \Sigma_c \bar{\Sigma}_c) and \hat{I}_i = \vec{I}_i/I_i are normalized isospin operators for the rho contribution (with \vec{I}_i the standard isospin operator and I_i the isospin of hadron i = 1, 2).
3. Renormalization Group Evolution: The saturation of the C_0 coupling receives contributions from two types of light mesons with different masses. To combine the saturation from scalar- and vector-meson exchange into a single coupling, we first have to know the RGE of the couplings, which for non-relativistic contact-range theories is well understood and follows the equation [23, 24]
\frac{d}{d\Lambda} \left[ \frac{C(\Lambda)}{\Lambda^\alpha} \right] = 0, or, equivalently:
(10) \frac{C(\Lambda_1)}{\Lambda_1^\alpha} = \frac{C(\Lambda_2)}{\Lambda_2^\alpha},
with α the anomalous dimension of the coupling. From this we can combine the scalar-and vector-meson contributions as
(11) C^{sat}(m_V) = C^V(m_V) + \left( \frac{m_V}{m_S} \right)^\alpha C^S(m_S).
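The scale invariance of Eq. (10) and the matching of Eq. (11) amount to a one-line rescaling. In the toy sketch below (our own; the mass values m_S ~ 550 MeV and m_V ~ 770 MeV and the coupling numbers are purely illustrative, not the paper's fitted values):

```python
def run_coupling(C, scale_from, scale_to, alpha=1.0):
    """Evolve a contact coupling between cutoff scales so that
    C(Lambda) / Lambda**alpha stays constant, cf. Eq. (10)."""
    return C * (scale_to / scale_from) ** alpha

# combine scalar- and vector-exchange saturation at Lambda = m_V, cf. Eq. (11)
m_S, m_V, alpha = 550.0, 770.0, 1.0        # MeV; illustrative masses
C_S, C_V = -1.0, 0.4                       # toy saturated couplings at their own scales
C_sat = C_V + run_coupling(C_S, m_S, m_V, alpha)
```

With alpha = +1 the scalar contribution is enhanced by the factor m_V/m_S > 1 when carried up to the vector-meson scale, which is exactly the relative weighting that Eq. (11) encodes.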
The anomalous dimension is linked with the behavior of the two-body wavefunction \Psi(R) at distances R^{-1} \sim \Lambda by [23, 24]
|\Psi(R \sim 1/\Lambda)|^2 \sim \Lambda^{-\alpha}
(i.e. \alpha encodes the short-range suppression of the wavefunction). We do not know the exact form of the short-distance wavefunction, but owing to the large mass of the heavy hadrons it is sensible to assume that the semiclassical approximation applies. From including the Langer correction [25] we estimate \Psi(R) \sim R^{1/2}, which implies \alpha = +1. We end up with:
(12) C^{sat}(\Lambda = m_V) = C_0^{sat} + C_1^{sat}\, \hat{S}_{L1} \cdot \hat{S}_{L2} \propto g_{V1}\, g_{V2} \left( \zeta + \hat{T}_{12} \right) \left[ \frac{1}{m_V^2} + \frac{\kappa_{V1}\, \kappa_{V2}}{6M^2}\, \hat{C}_{L12} \right] - \left( \frac{m_V}{m_S} \right)^\alpha \frac{g_{S1}\, g_{S2}}{m_S^2},
where \hat{T}_{12} = \hat{I}_1 \cdot \hat{I}_2, \hat{C}_{L12} = \hat{S}_{L1} \cdot \hat{S}_{L2} and \kappa_{Vi} = f_{Vi}/g_{Vi}. However, we do not know (yet) the proportionality constant. It is worth stressing that the previous relations implicitly assume that the contact-range interaction has indeed been regularized and that the regularization scale (i.e. the cutoff \Lambda) is of the right order of magnitude (\Lambda \sim m_V). In the next lines we will explain our regularization prescription in more detail.
4. Predictions: We now explain how to make predictions with the saturated coupling and overcome the ambiguities in its exact definition. First, we regularize the potential
(13) \langle \vec{p}\,'| V_C |\vec{p}\, \rangle = C^{sat}_{mol}(\Lambda_H)\, f\!\left( \frac{p'}{\Lambda_H} \right) f\!\left( \frac{p}{\Lambda_H} \right),
where C^{sat}_{mol} is the saturated coupling of Eq. (12) particularized for a given molecule, f(x) is a regulator function (for which we choose a Gaussian regulator, f(x) = e^{-x^2}) and \Lambda_H a physical cutoff, i.e. a cutoff which corresponds with the natural hadronic momentum scale. The cutoff \Lambda_H can be either the scale at which saturation is expected to work (from m_S to m_V) or the momentum scale at which we begin to see the internal structure of the heavy hadrons, as these two scales are of the same order of magnitude. We opt for the latter: \Lambda_H = 1 GeV. This contact-range potential can be introduced within a bound state equation to make predictions:
(14) 1 + 2\mu_{mol}\, C^{sat}_{mol}(\Lambda_H) \int_0^\infty \frac{p^2\, dp}{2\pi^2}\, \frac{f(p/\Lambda_H)^2}{p^2 + \gamma_{mol}^2} = 0,
where \mu_{mol} is the reduced mass of the two-hadron system under consideration and \gamma_{mol} the wave number of the bound state, related to the binding energy by B_{mol} = \gamma_{mol}^2/(2\mu_{mol}). The mass of the predicted molecular state will be M_{mol} = M_{thres} - B_{mol}, with M_{thres} = M_1 + M_2 the threshold of the two-hadron system and M_i the mass of hadron i = 1, 2.
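A minimal numerical sketch of Eq. (14) (our own; the reduced mass, binding energy and cutoff values below are illustrative, roughly inspired by an anti-D Sigma_c-like system with Lambda_H = 1 GeV): given a binding energy B, it solves for the coupling C that satisfies the bound-state condition with the Gaussian regulator f(x) = e^{-x^2}.

```python
import numpy as np

def loop_integral(gamma, Lam, n=4000):
    """I(gamma) = int_0^inf p^2 dp / (2 pi^2) * f(p/Lam)^2 / (p^2 + gamma^2),
    with f(x) = exp(-x^2), evaluated by a simple midpoint rule
    (the integrand is negligible beyond a few Lam)."""
    pmax = 6.0 * Lam
    dp = pmax / n
    p = (np.arange(n) + 0.5) * dp
    integrand = p**2 / (2.0 * np.pi**2) * np.exp(-2.0 * (p / Lam)**2) / (p**2 + gamma**2)
    return integrand.sum() * dp

def coupling_from_binding(mu, B, Lam):
    """Coupling C such that Eq. (14) holds, 1 + 2*mu*C*I(gamma) = 0,
    with the bound-state wave number gamma = sqrt(2*mu*B)."""
    gamma = np.sqrt(2.0 * mu * B)
    return -1.0 / (2.0 * mu * loop_integral(gamma, Lam))

# illustrative inputs (MeV): mu ~ reduced mass of an anti-D Sigma_c pair,
# B ~ 10 MeV binding, Lambda_H = 1000 MeV
mu, B, Lam = 1060.0, 10.0, 1000.0
C = coupling_from_binding(mu, B, Lam)   # units of MeV^-2; negative means attraction
```

This is the step that fixes the overall proportionality constant from a reference molecule; deeper binding requires a larger |C|, so the same routine can be inverted to obtain gamma for other molecules once the relative strength is known.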
The proportionality constant between the contact-range coupling and the saturation ansatz can be determined from a known molecular candidate. Actually, the strength of the interaction is dependent on the reduced mass times the coupling, µ mol C sat mol . If we take the P c (4312) as the reference molecule 1 (mol = P c ), we can first determine its coupling from solving Eq. (14) for this system and then define the ratio
  R_mol = (μ_mol C^sat_mol) / (μ_Pc C^sat_Pc) ,   (15)
from which we can determine the interaction strength of a particular molecule relative to the P c (4312). Finally we plug R mol and C sat P c into Eq. (14):
  1 + 2 μ_Pc C^sat_Pc R_mol ∫_0^∞ (p² dp / 2π²) f(p/Λ_H)² / (p² + γ²_mol) = 0 ,   (16)
and compute γ mol and B mol for a particular molecule. For this we need g V , κ V (= f V /g V ) and g S for the heavy hadrons; g V and κ V are determined from the mixing of the neutral vector mesons with the electromagnetic current (i.e. Sakurai's universality and vector meson dominance [26][27][28]), for which we apply the substitution rules
  ρ⁰_μ → (e/2g) A_μ   and   ω_μ → (e/6g) A_μ ,   (17)
to Eq. (3) and match to the light-quark contribution to the electromagnetic Lagrangian (ρ⁰_μ and ω_μ are the neutral rho and omega fields, A_μ the photon field, μ a Lorentz index, e the proton charge and g = m_V/(2 f_π) ≃ 2.9 the universal vector meson coupling constant, with m_V the vector meson mass and f_π ≃ 132 MeV the pion weak decay constant). For g_V we obtain g_V = g (2g) for D^(*)/D^(*)_0(1)/D^(*)_1(2) (Σ^(*)_c); κ_V is proportional to the (light-quark) magnetic moment of the heavy hadrons, which we calculate in the non-relativistic quark model [29], yielding κ_V = (3/2)(μ_u/μ_N) for D^(*) and Σ^(*)_c, and κ_V = (3/2)(μ_u/3μ_N) ((3/2)(2μ_u/μ_N)) for the S^P_L = 1/2⁻ (3/2⁻) P-wave charmed mesons D^(*)_0(1) (D^(*)_1(2)), where μ_N is the nuclear magneton and μ_u ≃ 1.9 μ_N the magnetic moment of a constituent u-quark (for a more detailed account of the choice of the magnetic-like couplings, we refer to Appendix A). For g_S we invoke the linear σ model [30] and the quark model [29], yielding g_S ≃ 3.4. At this point we find it worth mentioning that the light-meson exchange picture is not free of theoretical difficulties, the most important of which is probably the nature and width of the σ. This happens to be a well-known issue for which we briefly review a few of the available solutions in Appendix B.
Here it is enough to comment that when a broad meson is exchanged, it can be effectively approximated by a narrow one by a suitable redefinition of its parameters [32,33]. As we are determining the couplings from phenomenological relations which do not take into account the width of the scalar meson and provide a good description of a few molecular candidates, we consider this redefinition to have already taken place (we refer to Appendix B for further discussion and details).
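The numerical values of the couplings quoted above follow from a handful of one-line relations; a small sketch (all inputs are the values stated in the text):

```python
# Couplings from the phenomenological relations quoted in the text.
m_rho, f_pi = 0.770, 0.132   # GeV
g = m_rho / (2 * f_pi)       # universal vector meson coupling, ~2.9 (Sakurai)
mu_u = 1.9                   # constituent u-quark magnetic moment, in units of mu_N

kappa_V_D      = 1.5 * mu_u        # D/D* and Sigma_c(*): kappa_V = 3/2 (mu_u/mu_N)
kappa_V_jhalf  = 1.5 * mu_u / 3.0  # S_L = 1/2- P-wave charmed mesons
kappa_V_j3half = 1.5 * 2 * mu_u    # S_L = 3/2- P-wave charmed mesons
```

These are the inputs entering the vector-meson part of Eq. (12); the scalar coupling g_S ≃ 3.4 comes separately from the linear σ model and the quark model.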
Regarding uncertainties, the RGE indicates that the values of the saturated couplings are dominated by the scalar meson (see Eq. (11)), which also happens to be the meson for which the theoretical uncertainties are largest. For this reason we will generate error bands from the uncertainty in the scalar meson mass, m_S = 475 ± 75 MeV. Besides, the contact-range theory approximation also entails uncertainties: if the molecule is compact enough, the heavy hadrons will be able to resolve the details of the interaction binding them and the contact-range approximation will cease to be valid. We include a relative uncertainty of γ_mol/m_V (i.e. the ratio of the characteristic molecular momentum scale to the mass of the vector meson) to account for this effect, which we then sum in quadrature with the previous error coming from m_S.
With these couplings we now reproduce the P_c(4312) with Eq. (14), yielding C^sat_Pc = −0.80 fm² for Λ_H = 1 GeV. After this we calculate R_mol and solve Eq. (16) to predict the molecules we show in Table I:
(i) For molecules in the lowest isospin state the general pattern is that the mass decreases with light-spin [19] (i.e. opposite to compact hadrons).
(ii) The origin of the light-spin dependence is the magneticlike vector-meson exchange term in Eqs. (6) and (9).
(iii) For the D̄^(*) Σ^(*)_c system, we reproduce the P_c(4440/4457) [34] and find the spectrum predicted in Refs. [35][36][37][38][39].
(iv) For D^(*) D̄^(*), besides the X and its J^PC = 2^++ partner [20,[40][41][42], the only other configuration close to binding is the J^PC = 0^++ D D̄ system [43,44]. It should be noticed that:
(iv.a) The isoscalar D^(*) D̄^(*) systems can mix with nearby charmonia. This is a factor that we have not included in our model, yet discrepancies between our predictions and candidate states could point towards the existence of such mixing. (iv.b) In particular, the X(3872) is predicted around its experimental mass, though with moderate uncertainties which do not exclude (but do not require either) a charmonium component. Though its nature is still debated [45][46][47][48][49], theoretical works point to the existence of a non-trivial non-molecular component (e.g. Ref. [50] finds a negative effective range in the J^PC = 1^++ D* D̄ system, which is difficult to explain in a purely molecular picture). The previous could be further confirmed if the spectroscopic uncertainties of the present model could be reduced to the point of determining whether underbinding or overbinding exists.
(iv.c) Had we used the X(3872) as the reference molecule, the results of Table I would have been almost identical to the ones we obtain from the P c (4312).
(v) Molecules involving P-wave D_1/D*_2 mesons have larger hyperfine splittings than their D/D* counterparts:
(v.a) For instance, the D^(*)_1(2) D̄^(*)_1(2) molecules only bind in configurations with high light-spin, while the configurations with low light-spin content can even become repulsive.
(v.b) The hyperfine splitting of the Σ_c D̄_1 pentaquarks is predicted to be approximately twice as large as for the Σ_c D̄* one, about 30 MeV instead of 15 MeV.
(vi) Molecules for which rho-and omega-exchange cancel out (Z c (3900) [51]) or involving strangeness (P cs (4459) [52]) require additional discussion and are not listed. We advance that:
(vi.a) For the Z_c [53-56] I^G(J^PC) = 1^+(1^+−) D*D̄ configuration, we predict a virtual state at M = 3849.6 MeV. This agrees with the EFT analysis of Ref. [57], which suggests that if the Z_c is a virtual state its mass would be in the M = (3831 − 3844) MeV range, though with large uncertainties. Meanwhile, in the EFT approach of Ref. [53] the Z_c is located at (3867 − 3871) MeV when a virtual state solution is assumed (fit 1 in Ref. [53]). Recently, Ref. [58] has proposed the inclusion of axial meson (a_1(1260)) exchange to explain the Z_c, in which case we would end up with a mass in the (3856 − 3867) MeV range. However, this depends on the assumption that Eq. (11) holds far away from Λ = m_V, which might very well not be correct without suitable modifications (check the discussion in Appendix B). For comparison, Ref. [18] explains the Z_c states in terms of vector charmonia exchanges.
(vi.b) If the scalar meson couples to q = u, d, s with similar strength [58], this will generate P_cs-like [59][60][61][62] I = 0, J = 1/2, 3/2 Ξ_c D̄* bound states at M = 4466.9 MeV.
(vii) We recall that the results of Table I ignore pion exchanges and coupled channel effects, which are considered to be perturbative corrections. Appendices C and D explicitly check these two assumptions in a few concrete cases, leading in general to corrections that are indeed smaller than the uncertainties shown in Table I. We notice that there might be specific molecules for which these assumptions do not hold.
Finally, we warn that though the formalism is identical to the one used in a typical contact-range effective field theory (EFT), this is not an EFT: here the cutoff is not auxiliary but physical. It is not expected to run freely but is instead a parameter chosen to reproduce the known spectrum.
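The calibrate-and-scale pipeline of Eqs. (14)-(16) can be sketched end to end. Here we calibrate to the P_c(4312) and predict the binding of the J = 5/2 Σ*_c D̄* configuration from its relative strength R_mol = 1.19 (Table I); the hadron masses are assumed RPP-style isospin averages:

```python
import math

def loop(gamma, lam=1.0):
    # Closed form of the regularized loop integral of Eqs. (14) and (16).
    a = 2.0 / lam**2
    return (0.5 * math.sqrt(math.pi / a)
            - 0.5 * math.pi * gamma * math.exp(a * gamma**2)
              * math.erfc(gamma * math.sqrt(a)))

# Step 1: calibrate to Pc(4312) as a Sigma_c Dbar bound state with B = 8.9 MeV.
m_Sc, m_D = 2.4529, 1.8672                  # assumed masses, GeV
mu_Pc = m_Sc * m_D / (m_Sc + m_D)
gamma_Pc = math.sqrt(2 * mu_Pc * 0.0089)
# Eq. (14) then fixes mu_Pc * C_Pc * loop(gamma_Pc) = -pi^2.

# Step 2: Eq. (16) reduces to loop(gamma_mol) = loop(gamma_Pc) / R_mol.
def predict_binding(R, mu_mol):
    target = loop(gamma_Pc) / R
    lo, hi = 1e-4, 1.0                      # bracket in GeV; loop() is decreasing
    for _ in range(200):                    # bisection
        mid = 0.5 * (lo + hi)
        if loop(mid) > target:
            lo = mid
        else:
            hi = mid
    gamma = 0.5 * (lo + hi)
    return gamma**2 / (2 * mu_mol)          # binding energy in GeV

# Example: Sigma_c* Dbar* with J = 5/2 and R_mol = 1.19 (Table I).
m_Scs, m_Ds = 2.5184, 2.0086
B = predict_binding(1.19, m_Scs * m_Ds / (m_Scs + m_Ds)) * 1e3  # about 21 MeV
```

The resulting binding energy is close to the 21.0 MeV listed in Table I for this configuration; the small residual difference traces back to the assumed input masses.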
5. S-to-P-wave charmed meson transitions: Now we want to explore the Y(4260) [65], which has been conjectured to be a J^PC = 1^−− D D̄_1 molecule [66][67][68][69][70][71], though its nature remains unclear [72][73][74][75][76][77].
For the D^(*)_1(2) D̄^(*) two-hadron system the electric dipolar and magnetic quadrupolar D^(*)_1(2) → D^(*) transitions are possible, i.e. there are new H_1 H_2 → H_2 H_1 components in the potential not present in Eq. (1), which we write as
  V_C = C_0 + C_1 σ_L1 · Ŝ_L2 + C′_1 Σ†_L1 · Σ_L2 + C′_2 Q†_L1ij Q_L2ij ,   (18)
where σ_L1 are the Pauli matrices and Ŝ_L2 the spin-3/2 matrices as applied to the light quark within the D^(*)_1(2). The dipolar and quadrupolar pieces are described by the C′_1 and C′_2 couplings, Σ_L/Σ†_L are the spin matrices for the S_L = 1/2 to 3/2 transition (which can be consulted in the appendices of Ref. [21]) and Q_Lij = (σ_Li Σ_Lj + σ_Lj Σ_Li)/2. For saturating C′_1 and C′_2 we consider the Lagrangians

  L_E1 = (f′_V/2M) q†_L Σ_Li (∂_i V_0 − ∂_0 V_i) q′_L + C.C. ,   (19)
  L_M2 = (h′_V/(2M)²) q†_L Q_Lij ∂_i ε_jlm ∂_l V_m q′_L + C.C. ,   (20)
where q_L and q′_L are the non-relativistic light-quark subfields for the S- and P-wave charmed mesons, generating the potentials

  V_E1 = −(f′_V²/4M²) (ω_V² + μ_V²/3)/(q² + μ_V²) Σ†_L1 · Σ_L2 + … ,   (21)
  V_M2 = −(h′_V²/16M⁴) (μ_V⁴/5)/(q² + μ_V²) Q†_L1ij Q_L2ij + … ,   (22)

where μ_V² = m_V² − ω_V² is the effective vector-meson mass for this transition and ω_V = m(D_1) − m(D), m(D*_2) − m(D), m(D_1) − m(D*) or m(D*_2) − m(D*) for D_1 D̄, D*_2 D̄, D_1 D̄* or D*_2 D̄* molecules. The saturated couplings read

  C′^sat_1(m_V) ∝ −(f′_V²/4M²) (m_V/μ_V)^α ((ω_V² + μ_V²/3)/μ_V²) (ζ + Î_1 · Î_2) ,   (23)
  C′^sat_2(m_V) ∝ −(h′_V²/16M⁴) (m_V/μ_V)^α (μ_V²/5) (ζ + Î_1 · Î_2) ,   (24)
which include the RGE correction derived in Eqs. (10)-(12).
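The effective mass μ_V defined above is easy to tabulate; a short sketch (masses in GeV are assumed RPP-style averages):

```python
import math

# Effective vector-meson mass mu_V^2 = m_V^2 - omega_V^2 for the S-to-P-wave
# transitions, with omega_V the relevant charmed meson mass gap.
m_V = 0.770
mass = {'D': 1.8672, 'D*': 2.0086, 'D1': 2.4221, 'D2*': 2.4611}  # assumed values

def mu_V(P, S):
    omega = mass[P] - mass[S]
    return math.sqrt(m_V**2 - omega**2)

mu_D1D  = mu_V('D1', 'D')    # ~0.53 GeV for the D1 Dbar channel
mu_D2Ds = mu_V('D2*', 'D*')  # larger, since the mass gap is smaller
```

Since μ_V < m_V, the (m_V/μ_V)^α factor in Eqs. (23)-(24) enhances the E1/M2 contributions, most strongly for the D_1 D̄ channel where the mass gap ω_V is largest.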
If we define f′_V = κ′_E1 g_V, κ′_E1 can be determined from Eq. (17) and the D^(*)_1(2) → D^(*) γ dipolar moment (extractable from the partial decay widths [78] or the ⟨D^(*)_1(2)|r|D^(*)⟩ matrix elements [79]), yielding κ′_E1 ∼ (2.6 − 3.6). This provides a fairly strong attraction in the J^PC = 1^−− D D̄_1 molecular configuration, for which ⟨Σ†_L1 · Σ_L2⟩ = −1 (while ⟨Q†_L1 · Q_L2⟩ ≡ ⟨Q†_L1ij Q_L2ij⟩ = 0). Analogously, we define h′_V = κ′_M2 g_V, where κ′_M2 can be extracted from κ′_E1 by the relation κ′_M2 = (m_N/m_q) κ′_E1 ≃ 7.4 − 15.4 (see Appendix A). In Table I we list the central value predictions (i.e. κ′_E1 = 3.1 and κ′_M2 = 10.7) for the four S- and P-wave molecules considered here.

TABLE I. Selection of the molecular states predicted from saturation: "System" is the two-hadron system, I(J^P(C)) refers to the isospin, angular momentum, parity and C-parity (if applicable) of the state, R_mol is the relative interaction strength with respect to the P_c(4312) (see Eq. (15)), B_mol the binding energy in MeV (where (...)_V indicates a virtual state), M_mol the mass of the molecule, "Candidate" refers to known resonances that might be identified with the predicted molecule and M_candidate is the candidate's mass ("¹S₀ pole" refers to the virtual state in singlet nucleon-nucleon scattering). The binding energies are calculated from Eqs. (14)-(16), where for light- (heavy-) hadron vertices we use the cutoff Λ_L = 0.5 GeV (Λ_H = 1.0 GeV). The uncertainty in R_mol is obtained from varying m_σ within the 400 − 550 MeV range, while for B_mol we combine the previous uncertainty with a γ_mol/m_V relative error by summing them in quadrature. For the hadron masses we use the isospin averages of the RPP values [31]. The masses for the Λ_c(2765), Λ_c(2940), Σ_c(2800) and X(3872) are taken from the RPP [31] (we notice though that the Λ_c(2765) is not well-established and could even be a Σ_c-type state or a superposition of two states instead, though Ref. [63] considers it to be a Λ_c), for the Y(4260) and Y(4360) we use the recent BESIII measurements [64] and for the P_c(4312/4440/4457) we refer to the original LHCb observation [34].

System | I(J^P(C)) | R_mol | B_mol | M_mol | Candidate | M_candidate
NN | | | | | ¹S₀ pole | 1877.8
ND | 0(1/2⁻) | 1.01 +0.06 −0.05 | (0.7 +0.6 −0.5)_V | 2805.5 +0.5 −0.6 | Λ_c(2765) | 2766.6
ND* | 0(3/2⁻) | 1.19 +0.12 −0.11 | 0.1 +0.9 −0.2 | 2947.4 +0.1 −0.9 | Λ_c(2940) | 2939.6
ND | 1(1/2⁻) | 0.84 +0.02 −0.02 | (4.3 +0.7 −0.6)_V | 2801.9 +0.6 −0.7 | Σ_c(2800) | ∼ 2800
ND* | 1(1/2⁻) | 0.97 +0.03 −0.02 | (1.2 +0.4 −0.4)_V | 2946.3 +0.4 −0.4 | − | −
N D̄* | 0(3/2⁻) | 0.94 +0.01 −0.01 | (1.8 +0.3 −0.2)_V | 2945.7 +0.2 −0.3 | − | −
D D̄ | 0(0^++) | 0.63 +0.08 −0.07 | (1.5 +3.5 −1.5)_V | 3733.0 +1.5 −3.5 | − | −
D* D̄ | 0(1^++) | 0.89 +0.20 −0.16 | 4.1 +11.6 −4.1 | 3871.7 +4.1 −11.6 | X(3872) | 3871.69
D* D̄* | 0(2^++) | 0.93 +0.20 −0.17 | 5.5 +12.6 −5.8 | 4011.6 +5.5 −12.6 | − | −
D_1 D̄ | 0(1^−−) | 1.33 +0.36 −0.31 | 34 +30 −26 | 4255 +26 −30 | Y(4260) | 4218.6
D_2 D̄ | 0(2^−−) | 0.87 +0.15 −0.13 | 2.7 +7.3 −2.8 | 4325.6 +2.8 −7.3 | − | −
D_1 D̄* | 0(1^−−) | 0.56 +0.02 −0.02 | (4.1 +1.1 −1.1)_V | 4425.6 +1.1 −1.1 | Y(4360) | 4382.0
D_2 D̄* | 0(3^−−) | 1.89 +0.60 −0.51 | 90 +90 −73 | 4380 +73 −90 | − | −
D_1 D̄_1 | 0(2^++) | 1.66 +0.47 −0.40 | 58 +57 −45 | 4786 +45 −57 | − | −
D_2 D̄_1 | 0(3^+−) | 1.64 +0.46 −0.39 | 56 +55 −43 | 4829 +43 −55 | − | −
D_2 D̄_1 | 0(3^++) | 2.04 +0.65 −0.58 | 97 +95 −81 | 4788 +81 −95 | − | −
D_2 D̄_2 | 0(3^+−) | 0.83 +0.11 −0.09 | 1.4 +3.6 −1.5 | 4924.7 +1.4 −3.6 | − | −
D_2 D̄_2 | 0(4^++) | 2.06 +0.65 −0.54 | 98 +96 −82 | 4828 +82 −96 | − | −
Σ_c D̄ | 1/2(1/2⁻) | 1.00 | 8.9 | 4311.9 | P_c(4312) | 4311.9
Σ*_c D̄ | 1/2(3/2⁻) | 1.04 | 9.4 +1.7 −1.7 | 4376.0 +1.7 −1.7 | − | −
Σ_c D̄* | 1/2(1/2⁻) | 0.85 +0.06 −0.12 | 2.3 +2.6 −1.9 | 4459.8 +2.3 −2.9 | P_c(4457) | 4457.3
Σ_c D̄* | 1/2(3/2⁻) | 1.13 +0.04 −0.03 | 16.9 +5.1 −4.7 | 4445.2 +4.7 −5.1 | P_c(4440) | 4440.3
Σ*_c D̄* | 1/2(1/2⁻) | 0.82 +0.09 −0.04 | 1.3 +2.7 −1.3 | 4525.4 +1.3 −2.7 | − | −
Σ*_c D̄* | 1/2(3/2⁻) | 0.96 +0.03 −0.04 | 6.4 +2.0 −2.0 | 4520.3 +2.0 −2.0 | − | −
Σ*_c D̄* | 1/2(5/2⁻) | 1.19 +0.06 −0.05 | 21.0 +7.5 −6.8 | 4505.7 +6.8 −7.5 | − | −
Σ_c D̄_1 | | | | | |
6. Light baryons:
For baryons containing only light quarks, the previous ideas can also be applied. However, there is a tweak which has to do with the relative sizes of light baryons in comparison with heavy hadrons: light hadrons are larger than heavy hadrons. For instance, the electromagnetic radii ⟨r²⟩^(1/2)_e.m. of the charged pion and kaon are about 0.66 and 0.56 fm, respectively [31], with the size decreasing once the heavier strange quark is involved. This pattern also applies to the charmed mesons, for which ⟨r²⟩^(1/2)_e.m. ∼ (0.40 − 0.55) fm [80][81][82][83]. For baryons, the electromagnetic radius of the proton is 0.84 fm [31]. Lattice QCD calculations of the electromagnetic form factors of the singly and doubly charmed baryons yield figures of the order of 0.5 and 0.4 fm, respectively [84,85], about half the proton radius, where it is curious to notice that the doubly charmed baryons are about the same size as the charmed mesons and only slightly smaller than the singly charmed ones (from which the hypothesis of using the same cutoff for all heavy hadrons seems a sensible choice).
Of course, the problem is how to take this effect into account. The easiest idea is to use a softer physical cutoff (which effectively amounts to the introduction of a new parameter) in the light baryon sector
  ⟨p′|V_C|p⟩ = C^sat_mol f(p′/Λ_L) f(p/Λ_L) ,   (25)
where Λ_L will be close to Λ_H/2, with C^sat_mol still determined as a ratio of C^sat_Pc. For nucleons g_S = 10.2, (1/3) g_ω = g_ρ = g and κ_ρ = 3.7, κ_ω = −0.1 [86], from which we get
  R_singlet = 1.77 ,   R_triplet = 1.86 ,   (26)
which are really similar, reproducing Wigner's SU(4) symmetry [87][88][89]. For Λ L = 0.5 GeV we predict shallow singlet/triplet bound states
  B_singlet = 0.20 MeV ,   B_triplet = 0.94 MeV ,   (27)
i.e. close to reality, where the singlet/triplet is a virtual/bound state located 0.07/2.22 MeV below threshold. It is intriguing that reproducing the deuteron forces us to choose a Λ_L that basically coincides with the scale at which Wigner's symmetry is expected to manifest [90][91][92][93][94]. Even though our choice of Λ_L follows from a phenomenological argument, in practice it is a new parameter required for the correct description of systems containing a light baryon, and it could as well have been determined from the condition of reproducing the deuteron or the virtual state.
Another system to consider is ∆∆, with g_ρ = g_ω = 3g and κ_ρ = κ_ω = (3/2) μ_u. The most attractive configuration is I = 0 and S = 3 (B_∆∆ = 15 MeV and M_∆∆ = 2405 MeV for M_∆ ≃ 1210 MeV [31]), which might be identified with the hexaquark predicted six decades ago [95], the d*(2380) observed in [96] (which, however, has recently been argued to be a triangle singularity [97]) or the ∆∆ state computed on the lattice [98].
7. Light-heavy systems: Finally, for a two-hadron system consisting of a light and a heavy hadron, we simply use different cutoffs for each hadron
  ⟨p′|V_C|p⟩ = C^sat_mol f(p′/Λ_L) f(p/Λ_H) .   (28)
If we apply this idea to the ND* system, we find a bound state with I(J^P) = 0(3/2⁻) that might correspond to the Λ_c(2940), which has been theorized to be molecular [99][100][101][102][103][104]. We also find it worth mentioning the prediction of a virtual state in the ND system with I(J^P) = 1(1/2⁻), B^V_mol = 4.3 MeV and M = 2806.6 MeV, which might be identified with the Σ_c(2800) (also theorized to be molecular [104,105]). However, owing to the Σ_c(2800) having been observed in the Λ_c π spectrum [106], this interpretation is questionable unless it is only partially molecular or there are other factors increasing the attraction in the I = 1 ND system. We notice though that Ref.
[104] finds a Σ_c(2800) resonance as a pole in ND scattering. In contrast, the N D̄^(*) system (i.e. singly charmed pentaquark candidates) shows less attraction than the ND^(*) one owing to omega exchange becoming repulsive. Yet, the I(J^P) = 0(3/2⁻) N D̄* configuration happens to be close to binding, with a virtual state at B^V_mol = 1.8 MeV.
8. Conclusions:
We propose a description of heavy-(and light-)hadron molecules in terms of an S-wave contact-range potential. The couplings of this potential are determined from light-meson exchange by means of a saturation procedure which incorporates a few RG ideas to effectively combine together the contribution from scalar and vector mesons. In turn the light-meson exchange parameters are set from a series of well-known phenomenological ideas and a cutoff Λ H ∼ 1 GeV is included. This procedure takes the P c (4312) as input, from which it is able to reproduce the other two LHCb pentaquarks, the X(3872) and predict a few new molecular candidates. If applied to the light-sector (with a few modifications), it reproduces Wigner-SU(4) symmetry and the deuteron as a shallow bound state. Of course the question is whether the theoretical ideas contained in this manuscript do really represent a good approximation to the spectrum of molecular states. Future experiments will tell, particularly the discovery of different spin configurations of a given twohadron system, as the hyperfine splittings are very dependent on their origin, which we conjecture to be the magnetic-like couplings of the vector mesons.
Note added. – After the acceptance of this manuscript, the ALICE collaboration has presented the first experimental study of the ND interaction [107]. They extract the isoscalar (I = 0) ND inverse scattering length (f⁻¹₀ in their notation).

Appendix A: Vector meson couplings of higher polarity

Here we briefly explain how we derive the vector meson couplings of higher polarity, beginning with the magnetic M1 κ_V couplings. These are given by κ_V = (3/2) μ_u(j, l)/μ_N with μ_u(j, l) the magnetic moment of a light u-quark with total and orbital angular momentum j and l (which also characterize its parent heavy meson). Within the quark model we expect the light-quark magnetic moment operator to be
  μ̂_q = (e_q/2m_q) (σ_q + l_q) = μ_q (σ_q + l_q) ,   (A1)
where µ q is the magnetic moment of the q = u, d quarks in the quark model (µ u ≃ 1.9 µ N and µ d ≃ −0.9 µ N ), σ q the Pauli matrices as applied to the intrinsic spin of the light-quark and l q the light-quark orbital angular momentum operator. For a light-quark with j and l quantum numbers, its magnetic moment is given by the matrix element
  μ_q(j, l) = ⟨(sl) j j| μ̂_q |(sl) j j⟩ ,   (A2)
with |(sl) j j a state in which a light-quark with spin s = 1 2 and orbital angular momentum l couples to total angular momentum j and third component j. The calculation of this matrix element is trivial, yielding
  μ_q(j = l + 1/2, l) = μ_q (j + 1/2) ,   (A3)
  μ_q(j = l − 1/2, l) = μ_q (j/(j + 1)) (j + 1/2) ,   (A4)
which translates into μ_u(1/2, 1) = μ_u/3 and μ_u(3/2, 1) = 2μ_u for the D_0/D*_1 and D_1/D*_2 P-wave charmed mesons. The E1 and M2 couplings κ′_E1 and κ′_M2, which determine the strength of the P-to-S-wave charmed meson transitions, can be determined in turn from the electric dipolar and magnetic quadrupolar moments of these transitions. A comparison with the E1 and M2 electromagnetic Lagrangians

  L_E1 = ē_q d′_E q†_L Σ_L · (∂_0 A − ∇A_0) q′_L + C.C. ,   (A5)
  L_M2 = Q^q_M q†_L Q_Lij ∂_i B_j q′_L + C.C. ,   (A6)

together with Eq. (17) yields κ′_E1 = 2M (3/2) ē_u d′_E and κ′_M2 = (2M)² (3/2) Q^u_M ,
where in the Lagrangians above q_L, q′_L, Σ_L and Q_Lij are defined as below Eqs. (19) and (20), ē_q = (m_Q e_q − m_q e_Q̄)/(m_q + m_Q) is the effective charge of the qQ̄ system (ē_u = 2/3 for charmed antimesons), Q^q_M is the quadrupolar magnetic moment for the light quark q = u, d, A_μ = (A_0, A) the photon field and B_j = ε_jlm ∂_l A_m the magnetic field.
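The closed-form magnetic moments of Eqs. (A3)-(A4) can be checked with exact rational arithmetic; a small sketch:

```python
from fractions import Fraction

# Light-quark magnetic moment mu_q(j, l) of Eqs. (A3)-(A4), in units of mu_q.
def mu_factor(j, l):
    j, l = Fraction(j), Fraction(l)
    if j == l + Fraction(1, 2):
        return j + Fraction(1, 2)
    if j == l - Fraction(1, 2):
        return (j / (j + 1)) * (j + Fraction(1, 2))
    raise ValueError("j must be l +/- 1/2")

# Reproduces the values quoted below Eq. (A4):
f_S      = mu_factor(Fraction(1, 2), 0)  # S-wave D/D*: 1, hence kappa_V = 3/2 mu_u
f_jhalf  = mu_factor(Fraction(1, 2), 1)  # P-wave, S_L = 1/2: mu_u/3
f_j3half = mu_factor(Fraction(3, 2), 1)  # P-wave, S_L = 3/2: 2 mu_u
```

These three factors are exactly the ones entering the κ_V values used in the main text.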
If we begin with the E1 transitions, ē_q d′_E is the electric dipolar moment of the u-quark in the D^(*)_1(2) → D^(*) transition. The dipolar moment can in turn be obtained in two different ways: (i) from the matrix elements of the dipolar moment operator or (ii) from the D^(*)_1(2) → D^(*) γ decays. In the first case we consider the operator

  d̂^q_E = e_q r ,   (A7)

where we define the dipolar moment in relation to the matrix element

  ⟨D^(*)_1(2)| d̂^q_E |D^(*)⟩ = ē_q d′_E Σ_L .   (A8)
We calculate d′_E from the quark-model wave functions of the S_L = 1/2 or 3/2 charmed mesons, which can be expanded as

  Ψ^S_jm(r) = (u_S(r)/r) Y_00(r̂) |1/2 m⟩ ,   (A9)
  Ψ^P_jm(r) = (u_P(r)/r) Σ_{μ_l μ_s} ⟨1 μ_l 1/2 μ_s|j m⟩ Y_1μ_l(r̂) |1/2 μ_s⟩ ,   (A10)

with μ_l, μ_s the orbital and intrinsic angular momentum projections of the light quark, u_l the reduced wave function, Y_lμ_l a spherical harmonic, while |j m⟩ and ⟨j_1 m_1 j_2 m_2|j m⟩ refer to spin wave functions and Clebsch-Gordan coefficients. After a few manipulations we arrive at
  d′_E = − ⟨P|r|S⟩/√3 ,   (A11)

where

  ⟨P|r|S⟩ = ∫_0^∞ dr u_P(r) r u_S(r) ,   (A12)
with u_S and u_P the l = 0, 1 reduced wave functions. We find that κ′_E1 = −2m_N d′_E (for M = m_N and charmed mesons, i.e. ē_u = −2/3) and from ⟨P|r|S⟩ = 2.367 GeV⁻¹ in Ref. [79] we obtain κ′_E1 = 2.6. In the second case, we use the electromagnetic decays of the D^(*)_1(2) charmed mesons, which are described by the non-relativistic amplitude
  A(D^(*)_1(2) → D^(*) γ) = ē_q d′_E Σ_L · (∂_0 A − ∇A_0) ,   (A13)

from which the D*0_2 → D*0 γ decay width reads

  Γ(D*0_2 → D*0 γ) = (4α/3) (m(D*0)/m(D*0_2)) ē_u² q³ |d′_E|² ,   (A14)
with α the fine structure constant and q the momentum of the photon. If we use this decay width as calculated in Ref. [78] (Γ = 895 keV, q = 410 MeV), we arrive at κ′_E1 = 3.6. For the magnetic quadrupolar moment Q^q_M, it can be obtained from the matrix elements of the M2 operator [108]

  Q̂^q_Mij = (e_q/2m_q) [ (1/2)(σ_qi r_j + σ_qj r_i) + (2/3)(l_qi r_j + l_qj r_i) ] ,   (A15)

with μ_q, σ_q and l_q as defined below Eq. (A1). The matrix element of this operator is proportional to Q^q_M,
  ⟨D^(*)_1(2)| Q̂^q_Mij |D^(*)⟩ = Q^q_M Q_Lij ,   (A16)
from which we obtain

  Q^q_M = −(ē_q/2m_q) ⟨P|r|S⟩/√3 = (ē_q/2m_q) d′_E .   (A17)

If we instead use the decay width of Ref. [78] (where m_u = 0.33 GeV) as below Eq. (A14), we obtain κ′_M2 = 11.1. In this case, these two determinations yield similar results (with the average being κ′_M2 = 10.7), but this agreement is probably fortuitous and the uncertainties should be of the same relative size as for κ′_E1. Indeed, if we simply rewrite κ′_M2 = (m_N/m_u) κ′_E1 and vary m_u independently of κ′_E1, we obtain instead the κ′_M2 = 7.4 − 15.4 window, which is more in line with what is expected from κ′_E1.
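Both determinations of κ′_E1 can be reproduced numerically. Note that in the width formula we make the effective-charge factor ē_u² = (2/3)² explicit, which is our reading of the prefactor (it reproduces the quoted κ′_E1 = 3.6):

```python
import math

m_N = 0.93827  # GeV

# (i) Quark-model determination: kappa'_E1 = 2 m_N <P|r|S>/sqrt(3), Eqs. (A11)-(A12).
PrS = 2.367  # GeV^-1, from Ref. [79]
kappa_E1_qm = 2 * m_N * PrS / math.sqrt(3)       # ~2.6

# (ii) From the D2*0 -> D*0 gamma width, Eqs. (A13)-(A14), with the
# effective-charge factor ebar_u^2 = (2/3)^2 included explicitly (assumption).
alpha, ebar_u2 = 1 / 137.036, (2 / 3) ** 2
Gamma, q = 0.000895, 0.410                       # GeV, Ref. [78]
mD_star, mD2_star = 2.00685, 2.4607              # neutral-meson masses, GeV
dE2 = Gamma / ((4 * alpha / 3) * (mD_star / mD2_star) * ebar_u2 * q ** 3)
kappa_E1_width = 2 * m_N * math.sqrt(dE2)        # ~3.6
```

The spread between the two values (2.6 versus 3.6) is precisely the κ′_E1 ∼ (2.6 − 3.6) window quoted in the main text.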
Appendix B: Difficulties with the light-meson exchange model
Here we consider a few theoretical difficulties with the light-meson exchange picture. The first is the nature of the scalar meson, which is not a pure qq̄ state and contains tetraquark and molecular components as well (e.g. the σ appears as a wide resonance in ππ scattering; check the recent review [109] and references therein). The part of the σ which is expected to manifest at the scale at which we are saturating the couplings (i.e. Λ ∼ m_V) is the qq̄ one, with the tetraquark components playing a more important role at longer distances and manifesting themselves as the two-pion exchange potential. Dealing with this issue actually requires one to also consider the large width of the σ (check the discussion below), yet the previous observation suggests treating the σ that appears in the meson exchange model as a standard meson, though with properties that might differ from the physical σ (a point of view which is, for instance, followed in the meson theory of nuclear forces [32,33]).
The second is the width of the scalar meson, which raises the issue of how this affects its exchange potential. Several solutions exist in the literature, of which we underline the following three: (i) treating the exchanged σ as a narrow effective degree of freedom, where its mass and coupling within light-meson exchange models are not necessarily the ones corresponding to a physical σ [32,33], (ii) the two-pole approximation of Binstock and Bryan [110], in which the integral of the σ propagator over its mass distribution is approximated as the sum of two narrow particles, one lighter and one heavier than the physical σ, and (iii) the treatment by Flambaum and Shuryak [111], in which the previous integral is approximated as the sum of several contributions, of which the two most important ones correspond to the σ pole (equivalent to the exchange of a narrow σ, but with a weaker coupling) and to the exchange of its decay products (two-pion exchange). Here we choose the effective σ solution, which is the simplest and the one originally adopted in the meson theory of nuclear forces. Yet, we notice that the RG equation as applied to saturation actually relates these three solutions, as a decrease in the mass of the sigma increases its effective strength, while the presence of medium-range two-pion exchange effects can in turn be substituted by a stronger sigma exchange. Indeed, this is what actually happens in the meson exchange theory of the nuclear force, where models in which there is no two-pion exchange require a stronger σ coupling [32,33] than models which include it [112].
A third problem is the exchange of heavier light-mesons with the same quantum numbers as the scalar and vector mesons (e.g. the σ can mix with the f 0 (1370), scalar glueballs and other 0 ++ mesons). Again, RG-improved saturation indicates that the heavier light mesons can be included via the formula
  C^sat(m_V) = Σ_M f_sup(m_V/m_M) C^M(m_M) ,   (B1)

where M and m_M denote a given meson and its mass, and f_sup is a suppression (or, if m_M < m_V, enhancement) factor.
For mesons with a mass similar to the saturation scale (i.e. Λ ∼ m_V), the suppression factor is expected to be f_sup(x) = x^α with α = 1, as previously explained below Eq. (11). However, if the mass is dissimilar, this suppression factor should deviate more and more from the previous ansatz. Besides, f_sup(x) = x^α is only valid for light mesons with a mass not too different from that of the vector mesons; eventually the finite size of the hadrons has to be taken into account, which will probably lead to a considerably larger suppression factor. The previous discussion nonetheless suggests that heavier light mesons can actually be accounted for by a redefinition of the effective couplings of the σ, ρ and ω mesons, though the modifications are expected to be small owing to the aforementioned suppression of heavier meson contributions. Owing to the phenomenological nature of the relations we have used to obtain the couplings and the aforementioned suppression, we consider that this redefinition is not necessary.
Appendix C: Pion exchange effects
Here we revisit the assumption that pion exchanges are a perturbative effect for the two-hadron systems we are considering. For this we will explicitly include the one pion exchange (OPE) potential in a few selected molecules and calculate the binding energy shift ∆B OPE mol that it entails. As we will see, ∆B OPE mol lies in general within the binding uncertainties we have previously calculated.
To include the OPE potential we proceed as follows: as we are limiting ourselves to the S-wave approximation, we only consider the spin-spin component of OPE, which is given by
  V_OPE(q) = ζ T̂_12 Ĉ_L12 (g_1 g_2 / 6f_π²) μ_π²/(μ_π² + q²) + … ,   (C1)
with μ_π the effective pion mass, i.e. μ_π² = m_π² − ∆², where m_π ≃ 138 MeV is the pion mass in the isospin symmetric limit and ∆ is the mass difference between the hadrons emitting (or absorbing) the virtual pion at each of the vertices, in case they are different (e.g. the D*D̄ → DD̄* case; check for instance Ref. [20] for a more detailed discussion); the dots have the same meaning as in Eq. (6), and T̂_12, Ĉ_L12 were already defined below Eq. (12). We project this potential into S-waves, yielding
  ⟨p′|V_OPE|p⟩ = ζ T̂_12 Ĉ_L12 (g_1 g_2 / 24f_π²) (μ_π²/(p p′)) log[ (μ_π² + (p + p′)²) / (μ_π² + (p − p′)²) ] .   (C2)
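The S-wave projection in Eq. (C2) can be verified numerically against the Yukawa-like momentum dependence of Eq. (C1); the overall factor ζ T̂_12 Ĉ_L12 g_1 g_2/(6 f_π²) carries through unchanged and is omitted here:

```python
import math

# Check that (1/2) * int_{-1}^{1} dx mu^2/(mu^2 + q^2), with q^2 = p^2 + p'^2 - 2pp'x,
# equals the logarithmic form of Eq. (C2) (up to the common prefactor).
def projected_closed_form(p, pp, mu):
    return (mu**2 / (4 * p * pp)) * math.log((mu**2 + (p + pp)**2)
                                             / (mu**2 + (p - pp)**2))

def projected_numeric(p, pp, mu, n=2000):
    # Simpson's rule over x = cos(theta) in [-1, 1].
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        x = -1.0 + i * h
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        q2 = p**2 + pp**2 - 2 * p * pp * x
        total += w * mu**2 / (mu**2 + q2)
    return 0.5 * total * h / 3.0

p, pp, mu = 0.3, 0.45, 0.138   # GeV: illustrative momenta and the pion mass
```

The 1/(24 f_π²) prefactor in Eq. (C2) is then simply the 1/(6 f_π²) of Eq. (C1) times the 1/4 produced by the angular integral.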
To obtain the molecular potential, we add OPE to the contact-range potential and regularize
  ⟨p′|V_mol|p⟩ = [ C^sat_mol(Λ_H) + ⟨p′|V_OPE|p⟩ ] f(p′/Λ_H) f(p/Λ_H) ,   (C3)
where f(x) is the regulator function (specifically, the Gaussian regulator f(x) = e^(−x²)). This potential is plugged into the bound state equation
  φ(k) + 2μ_mol ∫_0^∞ (p² dp / 2π²) (⟨k|V_mol|p⟩ / (p² + γ²_mol)) φ(p) = 0 ,   (C4)
where, contrary to the purely contact-range case, the previous equation cannot be solved analytically or semi-analytically when OPE is included. The solution is obtained numerically instead by discretizing the bound state equation, after which it becomes a linear system that can be solved by standard means, where γ mol is calculated by finding the zeros of the determinant of the matrix representing the linear system. If we now define
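A minimal sketch of the discretization just described. As a consistency check we use the contact-range potential alone (with illustrative values close to the P_c(4312) calibration), for which the determinant root must agree with the semi-analytic condition of Eq. (14):

```python
import math

LAM, MU, C = 1.0, 1.06, -20.6  # cutoff (GeV), reduced mass (GeV), coupling (GeV^-2)

def simpson_grid(pmax=3.0, n=80):
    h = pmax / n
    pts = [i * h for i in range(n + 1)]
    wts = [(h / 3) * (1 if i in (0, n) else (4 if i % 2 else 2))
           for i in range(n + 1)]
    return pts, wts

PTS, WTS = simpson_grid()
F = [math.exp(-(p / LAM) ** 2) for p in PTS]  # Gaussian regulator f(p/Lambda)

def det_M(gamma):
    """Determinant of the discretized kernel of Eq. (C4) (contact term only)."""
    n = len(PTS)
    A = [[(1.0 if i == j else 0.0)
          + (2 * MU / (2 * math.pi ** 2)) * WTS[j] * PTS[j] ** 2
          * C * F[i] * F[j] / (PTS[j] ** 2 + gamma ** 2)
          for j in range(n)] for i in range(n)]
    det = 1.0
    for k in range(n):                      # Gaussian elimination with pivoting
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            det = -det
        det *= A[k][k]
        for r in range(k + 1, n):
            fac = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= fac * A[k][c]
    return det

def bisect(fun, lo, hi, iters=40):
    flo = fun(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = fun(mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

gamma_det = bisect(det_M, 0.02, 0.5)

def eq14(gamma):
    # Semi-analytic condition of Eq. (14) for the same separable potential.
    a = 2.0 / LAM ** 2
    loop = (0.5 * math.sqrt(math.pi / a)
            - 0.5 * math.pi * gamma * math.exp(a * gamma ** 2)
              * math.erfc(gamma * math.sqrt(a)))
    return 1.0 + (MU * C / math.pi ** 2) * loop

gamma_cf = bisect(eq14, 0.02, 0.5)
```

Once OPE is added via Eq. (C3), the kernel is no longer separable and the determinant search is the generic way of locating γ_mol.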
  ∆B^OPE_mol = B^OPE_mol − B_mol ,   (C5)
for the D*D̄ and D*D̄* systems (g_1 = g_2 = 0.6) we obtain

  ∆B^OPE_mol ≈ +0.0 MeV for the 1^++ D*D̄ system ,   (C6)
  ∆B^OPE_mol = −1.8 MeV for the 2^++ D*D̄* system ,   (C7)
which lies within the uncertainties we already have, and where for the D*D̄ system we have approximated the effective pion mass to zero as m(D*) − m(D) ≈ m_π. For the D̄*Σ_c and D̄*Σ*_c systems (g_1 = 0.84, g_2 = 0.6) we obtain
  ∆B^OPE_mol = −1.6 MeV for the 1/2⁻ D̄*Σ_c system ,   (C8)
  ∆B^OPE_mol = +1.6 MeV for the 3/2⁻ D̄*Σ_c system ,   (C9)
  ∆B^OPE_mol = −1.3 MeV for the 1/2⁻ D̄*Σ*_c system ,   (C10)
  ∆B^OPE_mol = −1.2 MeV for the 3/2⁻ D̄*Σ*_c system ,   (C11)
  ∆B^OPE_mol = +2.5 MeV for the 5/2⁻ D̄*Σ*_c system ,   (C12)
which again lies within the estimated uncertainties of the model. Finally, for the NN system (g 1 = g 2 = 1.29) the effect of OPE happens to be larger
  ∆B^OPE_mol = +1.8 MeV for the ¹S₀ channel ,   (C13)
  ∆B^OPE_mol = +2.5 MeV for the deuteron ,   (C14)
which is about twice the size of the uncertainties we previously estimated for these two systems. This suggests that the present model could be improved in the light baryon sector by explicitly including OPE in the future.
Appendix D: Coupled channel effects
Here we consider a few examples of how coupled channel effects might affect the predictions we have made. The selected systems are (i) Σ_c D̄*-Σ*_c D̄*, (ii) Ξ′_c D̄-Ξ_c D̄* and Ξ_c D̄*-Ξ*_c D̄, (iii) D D̄-D_s D̄_s and (iv) D*D_1-D*D*_1. In general, these effects are smaller than the uncertainties we have already estimated. Yet, there might be exceptions in which coupled channel effects could play an important role.
1. The Σ cD * -Σ * cD * states
We begin with the P_c(4440) and P_c(4457) pentaquark states, which in our molecular model are J = 3/2 and 1/2 ΣcD̄* bound states. It happens that the ΣcD̄* and Σ*cD̄* thresholds are close, with a mass gap of 64.6 MeV, and thus it is sensible to check explicitly whether this coupled channel effect could play a significant role in the description of the P_c(4440/4457). The mechanism by which the two channels mix is the M1 interaction term, i.e. C_1 Ŝ_L1·Ŝ_L2 in Eq. (1). The evaluation of the spin-spin operator for the ΣcD̄*-Σ*cD̄* system yields

Ŝ_L1 · Ŝ_L2 = ( −4/3 , −√2/3 ; −√2/3 , −5/2 ) for J = 1/2 , (D1)

and

Ŝ_L1 · Ŝ_L2 = ( +2/3 , −√5/3 ; −√5/3 , −2 ) for J = 3/2 . (D2)
These masses are close to the central value of the single channel calculation (i.e. the values in parentheses), from which we are driven to the conclusion that coupled channel effects are small in the pentaquark case.
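The effect of a nearby second threshold can likewise be illustrated with a toy two-channel contact model (the couplings C11, C22, C12, the gap Δ and the units below are assumed for illustration only, not the ΣcD̄*-Σ*cD̄* parameters): the pole is the zero of the determinant of the block 2n × 2n discretized system, and switching the off-diagonal coupling on and off exposes the size of the coupled-channel shift.

```python
import numpy as np

# Toy two-channel bound-state problem (assumed couplings, arbitrary units):
# channel 2 opens a gap Delta above channel 1, and a contact C12 mixes them.
mu, Lam, Delta = 1.0, 1.0, 5.0
C11, C22, C12 = -30.0, -10.0, -5.0

n, pmax = 120, 12.0
x, w = np.polynomial.legendre.leggauss(n)
q = 0.5 * pmax * (x + 1.0)
wq = 0.5 * pmax * w
g = np.exp(-(q / Lam) ** 2)
gg = np.outer(g, g)

def energy(c12):
    def det_IM(E):
        # quadrature-weighted propagators of the two channels
        G1 = (wq * q ** 2 / (2.0 * np.pi ** 2)) / (E - q ** 2 / (2.0 * mu))
        G2 = (wq * q ** 2 / (2.0 * np.pi ** 2)) / (E - Delta - q ** 2 / (2.0 * mu))
        M = np.block([[C11 * gg * G1, c12 * gg * G2],
                      [c12 * gg * G1, C22 * gg * G2]])
        return np.linalg.det(np.eye(2 * n) - M)
    a, b = -20.0, -1e-8          # bisection for the determinant zero
    fa = det_IM(a)
    for _ in range(100):
        m = 0.5 * (a + b)
        if det_IM(m) * fa > 0.0:
            a, fa = m, det_IM(m)
        else:
            b = m
    return 0.5 * (a + b)

E_single, E_coupled = energy(0.0), energy(C12)
print(E_single, E_coupled, E_coupled - E_single)
```

Switching on the mixing always pushes the lowest pole further down (level repulsion), so the coupled-channel energy lies below the single-channel one; how much below depends on the gap and on C12.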
2. The Ξ′cD̄-ΞcD̄* and ΞcD̄*-Ξ*cD̄ states

A second example is the P_cs(4459) pentaquark, which in the single channel approximation is considered a ΞcD̄* bound state. Owing to the strange content of the Ξc charmed baryon, it is not clear what its coupling to the scalar meson is. On the one hand, the naive expectation is that the σ does not contain a large s̄s component (if we assume it to be a q̄q state, which is not clear to begin with). If we combine this observation with the OZI rule, the coupling of the σ to the strange quarks within a baryon should be smaller than to the u and d quarks. On the other hand, the OZI rule is known to fail in the 0++ sector [113-116], which implies that the σ should not be constrained by it. From this we might expect a similar coupling for all the u, d and s light quarks. We adopt this second view, which implies g_S = 6.8 for the Ξc. With this choice we obtain degenerate J = 1/2, 3/2 bound states with a mass of
M(Ξ cD * ) = 4466.9 MeV ,(D7)
which is to be compared with the experimental mass M(P cs ) = 4458.8 ± 2.0 +4.7 −1.1 MeV [52]. However, there are two nearby thresholds to be taken into account, the Ξ ′ cD and Ξ * cD for the J = 1 2 and 3 2 cases, respectively. The Ξ c → Ξ ( ′ / * ) c transition can be described by the
Lagrangian L M1 = f V 2M d † L0 ǫ i jk ǫ Li ∂ j V k d L1 + C.C. ,(D8)
where d L0 and d L1 are fields representing the S = 0 and 1 light diquarks within the Ξ c and Ξ ′ c /Ξ * c baryons, respectively, and ǫ L is the polarization vector of the S = 1 light diquark. This Lagrangian can be matched to the electromagnetic one
L M1 = µ(1 → 0) d † L0 ǫ i jk ǫ Li ∂ j A k d L1 + C.C. ,(D9)
with the diquark transition magnetic moment given by µ(1 → 0) = µ(q 1 ) − µ(q 2 ), with q 1 = u, d and q 2 = s in the case at hand. The actual Ξ ′ cD -Ξ cD * and Ξ cD * -Ξ * cD transitions only involve the ρ and ω vector mesons and thus we can ignore the contribution from the strange meson to the magnetic moment when applying vector meson dominance. This leads to κ V = 2.9 for the transition. The saturation of the coupling for the transition yields
C_sat = g_V1 g_V2 (ζ + T_12) κ_V1 κ_V2 / (6M²) ǫ_L1 · σ_L2 , (D10)
where the index i = 1, 2 represents the Ξc/Ξ′c/Ξ*c charmed baryons and the D̄(*) charmed antimeson, respectively. The evaluation of the light spin-spin operator yields | ǫ_L1 · σ_L2 | = 1 for both J = 1/2 and 3/2. From this, we solve the coupled channel bound state equation and obtain
M(ΞcD̄*, J = 1/2) = 4468.4 − i 1.4 (4466.9) MeV , (D11)
M(ΞcD̄*, J = 3/2) = 4464.4 (4466.9) MeV , (D12)
where the value in parentheses is the previous single channel calculation. In this latter case, the J = 3 2 state is close to the experimental single peak solution, but it also compares well with the two peak solution considered in Ref. [52], M 1 = 4454.9 ± 2.7 MeV and M 2 = 4467.8 ± 3.7 MeV, suggesting that the spin of the lower (higher) mass state should be J = 3 2 ( 1 2 ).
3. The DD-D sDs states
Another example is the DD̄ system, for which a bound state has been predicted on the lattice [117] with B_mol = 4.0 +5.0 −3.7 MeV. Here we predict a virtual state instead (which could bind within uncertainties), but we did not include the DD̄-D_sD̄_s coupled channel dynamics of Ref. [117]. Thus it is worth exploring the importance of this channel.
For the coupled channel dynamics, the DD-D sDs transition potential is given by vector meson (K * (890)) exchange
V(HH̄ − H_sH̄_s) = −2√2 [ g_V² / (q² + m_K*²) ] [ 1 + κ_V² m_K*² / (6M²) Ŝ_L1 · Ŝ_L2 ] , (D13)
with H = D, D* and H_s = D_s, D*_s and V = K*, where g_V and κ_V are the standard couplings for this system and the 2√2 factor originates from SU(3)-flavor symmetry. The D_sD̄_s diagonal potential is given by scalar and vector meson (φ(1020)) exchange
V(H_sH̄_s) = − g′_S² / (q² + m_S²) − 2 [ g_V² / (q² + m_φ²) ] [ 1 + κ_V² m_φ² / (6M²) Ŝ_L1 · Ŝ_L2 ] , (D14)
where g′_S refers to the coupling of the σ to the D(*)_s meson and m_φ = 1020 MeV is the mass of the φ(1020). Regarding g′_S and the D(*)_s, we follow the same line of argumentation as for the Ξc in Appendix D 2 and take g′_S = 3.4. In the results quoted below, the superscript V indicates a virtual-state solution and the masses in parentheses represent the prediction of the single channel calculation. This coupled channel effect indeed provides an attractive contribution to the DD̄ system (about half an MeV), but it is still small in comparison with the uncertainties we have for the location of the DD̄ state. The saturated couplings read
C(HH̄ − H_sH̄_s) = −2√2 (m_V/m_K*)^α (g_V²/m_K*²) [ 1 + κ_V² m_K*² / (6M²) Ŝ_L1 · Ŝ_L2 ] , (D15)

C(H_sH̄_s) = − (m_V/m_S)^α (g′_S²/m_S²) − 2 (m_V/m_φ)^α (g_V²/m_φ²) [ 1 + κ_V² m_φ² / (6M²) Ŝ_L1 · Ŝ_L2 ] , (D16)
4. The D*D̄_1-D*D̄*_1 states

Most of the uncertainty in the D_1-D*_1 mass splitting comes from the broad D*_1 charmed meson. As a consequence, if we try to explain the Y(4360) as a D*D̄_1 molecule, it will be difficult not to consider also the possible mixing with the D*D̄*_1 system. Indeed, if both the Y(4260) and Y(4360) were to be DD̄_1 and D*D̄_1 molecules, respectively, the possible mixing of the Y(4360) with the D*D̄*_1 channel might very well explain why it is considerably broader than the Y(4260). In the following lines we will explain how to include the D*D̄_1-D*D̄*_1 coupled channel dynamics.
We begin with the M1 D 1 → D * 1 vector-meson transitions, for which the Lagrangian reads
L M1 = f V 2M q ′′ L † ǫ i jk Σ Li ∂ j V k q ′ L + C.C. ,(D20)
where q ′ L and q ′′ L refer to the light-quark subfield for the S L = 3 2 and S L = 1 2 P-wave charmed mesons respectively. By matching with the electromagnetic Lagrangian L M1 = µ q ( 3 2 → 1 2 ) q ′′ L † ǫ i jk Σ Li ∂ j A k q ′ L + C.C. , (D21)
we obtain κ V = 3 2 µ u ( 3 2 → 1 2 ), where the transition magnetic moment can be extracted from the matrix elements of the magnetic moment operator in Eq. (A1), yielding µ u ( 3 2 → 1 2 ) = µ u / √ 3. The next is the E1 D → D * 1 vector-meson transition, which can be obtained from the matrix elements of
⟨D(*)_{0(1)} | d_q E | D(*)⟩ = e_q d″_E σ_L , (D22)
with d″_E = −⟨P|r|S⟩/3, i.e. 1/√3 smaller than for the D → D_1 family of transitions. This gives us κ″_E1 = κ′_E1/√3. The saturated coupling for the D*D̄_1-D*D̄*_1 transition reads
C sat ∝ C M1 σ L1 · Σ L2 D + C E1 σ L1 · Σ L2 E ,(D23)
where the D and E subscripts stand for "direct" (D*D̄_1 → D*D̄*_1) and "exchange" (D*D̄_1 → D*_1D̄*) terms, the difference being that the sign of the exchange term depends on the C-parity of the system under consideration (for J^PC = 1−− the exchange and direct terms have the same sign). The C_M1 and C_E1 contributions read
C_M1 = +g_V² (ζ + T_12) κ_M1 κ′_M1 / (6M²) , (D24)

C_E1 = −g_V² (ζ + T_12) κ′_E1 κ″_E1 / (4M²) (m_V/μ_V)^α (ω² + μ_V²/3) / μ_V² , (D25)
where for J PC = 1 −− the M1 and E1 terms end up interfering destructively, leading to relatively weak coupled channel dynamics (despite the two thresholds being so close). If we use M = m N , κ M1 = √ 3κ ′ M1 = 2.9 and κ ′ E1 = √ 3κ ′′ E1 ≃ 3.1, we end up with a weakly bound D * D 1 -D * D * 1 state with a mass of 4420±9 MeV. However if we employ κ ′ E1 = √ 3κ ′′ E1 ≃ 3.9 (the value of the E1 coupling that reproduces the Y(4260)) and the higher end value of the M2 coupling for this choice of κ ′ E1 , i.e. κ ′ M2 ≃ 16.7, we will predict a bound state with mass of 4417 +6 −8 MeV.
(6.8) for D ( * ) (Σ ( * ) c ). For the light-meson masses, we take m S = 475 MeV (the value in the middle of the Review of Particle Physics (RPP) range of 400 − 550 MeV [31]) and m V = (m ρ + m ω )/2 (with m ρ = 770 MeV and m ω = 780 MeV).
4 − 415.4, with m_N/m_q the ratio of the nucleon and constituent quark masses in the particular quark-model calculation used to obtain κ′_M2 (see Appendix A for details). This provides a modest (but sizable) attraction in the J^PC = 1−− D*D̄_1 system (where Σ†_L1 · Σ_L2 = −1/6 and Q†_L1 · Q_L2 = −5/4), resulting in a virtual state with B^V_mol = 4.1 MeV for κ′_E1 = 3.1 and κ′_M2 = 10.7. If we consider the uncertainties in the E1 and M2 couplings, the end result ranges from a shallow bound state (B_mol = 1.1 MeV) to a virtual state moderately away from threshold (B^V_mol = 16.4 MeV), while for κ′_E1 = 3.9 (which reproduces the Y(4260)) and κ′_M2 = 24.6, the location of the Y(4360) as recently measured by BESIII [64] would be reproduced. Yet, the previous explanation does not consider the possible D*D̄_1-D*D̄*_1 coupled channel effects (the D_1 and D*_1 charmed mesons have about the same mass), which would make binding more likely and explain the larger width of the Y(4360); see Appendix D 4 for details. Thus, while a pure molecular explanation of the Y(4360) is less natural than for the Y(4260), a molecular component is nonetheless possible and maybe even expectable. Two other interesting configurations are the J^PC = 2−− DD̄*_2 and 3−− D*D̄*_2 systems: the first depends on the coupling C′_2 but not on C′_1 (Σ†_L1 · Σ_L2 = 0 and Q†_L1 · Q_L2 = −1), which means that, if observed, it could be used to determine κ′_M2. For κ′_M2 = 7.4 − 15.4 we predict a binding energy and mass of B_mol = (0.1 − 9.1) MeV and M = (4319.3 − 4327.5) MeV. The second happens to be the most attractive configuration (Σ†_L1 · Σ_L2 = −1 and Q†_L1 · Q_L2 = −1/2), with a state predicted somewhere in the M = (4349 − 4401) MeV window. In
f_0^{-1}(I = 0) ∈ [−0.4, 0.9] fm^{-1}. Our own calculation yields f_0^{-1}(I = 0) = 0.64 +0.19 −0.15 fm^{-1} (indicating the presence of a virtual state), or [0.49, 0.83] fm^{-1}, a range which falls within the ALICE estimation.
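For orientation, in the zero-range approximation the quoted inverse scattering length can be translated into the distance of the virtual-state pole from the DD̄ threshold via E_V = γ_V²/(2μ), identifying γ_V with f_0^{-1}(I = 0). The D-meson mass used below is an assumed average value, so this is a back-of-the-envelope sketch rather than a result quoted in the text.

```python
# Zero-effective-range estimate (our own simplification): distance of the
# virtual-state pole from threshold, E_V = (gamma_V * hbar c)^2 / (2 mu).
hbarc = 197.3269804   # MeV fm
mD = 1867.2           # assumed isospin-averaged D-meson mass in MeV
mu = mD / 2.0         # reduced mass of the DD system
gamma_V = 0.64        # fm^-1, central value of f0^{-1}(I = 0) quoted above

E_V = (gamma_V * hbarc) ** 2 / (2.0 * mu)
print(E_V)            # pole-to-threshold distance in MeV
```

With the central value γ_V = 0.64 fm⁻¹ this places the pole of order 10 MeV below threshold; the uncertainty band on f_0^{-1} translates into a sizable spread.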
ACKNOWLEDGMENTS
We thank Feng-Kun Guo, Enrique Ruiz Arriola and Eulogio Oset for their comments on this manuscript. M.P.V. thanks the IJCLab of Orsay, where part of this work was done, for its hospitality. This work is partly supported by the National Natural Science Foundation of China under Grant No. 11375024, the Fundamental Research Funds for the Central Universities and the Thousand Talents Plan for Young Professionals.

Appendix A: Determination of the M1, E1 and M2 couplings
|D(S_L; J M)⟩ = Σ_{M_L, M_H} Ψ_{S_L M_L}(r) |S_H M_H⟩ ⟨S_L M_L S_H M_H | J M⟩ , (A9)

Ψ_{S_L M_L}(r) = Σ_{μ_l, μ_s} (u_l(r)/r) Y_{l μ_l}(r̂) |s μ_s⟩ ⟨l μ_l s μ_s | S_L M_L⟩ , (A10)

where J, M refer to the total angular momentum of the charmed meson and its third component, S_L, M_L and S_H (= 1/2), M_H to the light- and heavy-quark spin, Ψ_{S_L M_L} to the wave function of the light quark, and l, μ_l and s (= 1/2)
from which we can solve the coupled channel version of the bound state equation, resulting in = 4520.7 − i 0.4 (4520.2) MeV .
and concrete calculations yield

M(DD̄) = 3733.5^V (3733.0^V) MeV , (D17)
M(D_sD̄_s) = 3928.8^V − i 2.4^V (3929.8^V) MeV , (D18)
m(D_1) − m(D*_1) = (10 ± 9) MeV , (D19)
resulting in B_mol = 19 − 55 MeV and a mass of 4235 − 4271 MeV, to be compared with 4218.6 ± 5.2 MeV [64] for the Y(4260) (κ′_E1 ∼ 3.9 would reproduce the mass), suggesting a sizable molecular component. For the magnetic quadrupolar term, we define h
We notice that there is no ideal choice of a reference molecule, as for no exotic state there exists a clear consensus of its molecular nature.
. Y Ne'eman, 10.1016/0029-5582(61)90134-1Nucl.Phys. 26222Y. Ne'eman, Nucl.Phys. 26, 222 (1961).
. M Gell-Mann, 10.1103/PhysRev.125.1067Phys.Rev. 1251067M. Gell-Mann, Phys.Rev. 125, 1067 (1962).
. N Isgur, G Karl, 10.1103/PhysRevD.18.4187Phys. Rev. D. 184187N. Isgur and G. Karl, Phys. Rev. D 18, 4187 (1978).
. N Isgur, G Karl, 10.1103/PhysRevD.19.2653Phys. Rev. D. 19817Phys.Rev.DN. Isgur and G. Karl, Phys. Rev. D 19, 2653 (1979), [Erratum: Phys.Rev.D 23, 817 (1981)].
. N Isgur, G Karl, 10.1103/PhysRevD.20.1191Phys. Rev. D. 201191N. Isgur and G. Karl, Phys. Rev. D 20, 1191 (1979).
. S Godfrey, N Isgur, 10.1103/PhysRevD.32.189Phys. Rev. D. 32189S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985).
. E Eichten, K Gottfried, T Kinoshita, K D Lane, T.-M Yan, 10.1103/PhysRevD.17.3090,10.1103/physrevd.21.313.2Phys. Rev. 17Erratum: Phys. Rev.D21,313(1980)E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, and T.-M. Yan, Phys. Rev. D17, 3090 (1978), [Erratum: Phys. Rev.D21,313(1980)].
. E Eichten, K Gottfried, T Kinoshita, K D Lane, T.-M Yan, 10.1103/PhysRevD.21.203Phys. Rev. 21203E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, and T.-M. Yan, Phys. Rev. D21, 203 (1980).
. N Brambilla, A Pineda, J Soto, A Vairo, 10.1016/S0550-3213(99)00693-8arXiv:hep-ph/9907240Nucl. Phys. 566275hep-phN. Brambilla, A. Pineda, J. Soto, and A. Vairo, Nucl. Phys. B566, 275 (2000), arXiv:hep-ph/9907240 [hep-ph].
. N Brambilla, A Pineda, J Soto, A Vairo, 10.1103/RevModPhys.77.1423arXiv:hep-ph/0410047Rev. Mod. Phys. 771423hep-phN. Brambilla, A. Pineda, J. Soto, and A. Vairo, Rev. Mod. Phys. 77, 1423 (2005), arXiv:hep-ph/0410047 [hep-ph].
. H.-X Chen, W Chen, X Liu, S.-L Zhu, 10.1016/j.physrep.2016.05.004arXiv:1601.02092Phys. Rept. 639hep-phH.-X. Chen, W. Chen, X. Liu, and S.-L. Zhu, Phys. Rept. 639, 1 (2016), arXiv:1601.02092 [hep-ph].
. A Hosaka, T Iijima, K Miyabayashi, Y Sakai, S Yasui, 10.1093/ptep/ptw045arXiv:1603.09229PTEP. 2016hep-phA. Hosaka, T. Iijima, K. Miyabayashi, Y. Sakai, and S. Yasui, PTEP 2016, 062C01 (2016), arXiv:1603.09229 [hep-ph].
. R F Lebed, R E Mitchell, E S Swanson, 10.1016/j.ppnp.2016.11.003arXiv:1610.04528Prog. Part. Nucl. Phys. 93143hep-phR. F. Lebed, R. E. Mitchell, and E. S. Swanson, Prog. Part. Nucl. Phys. 93, 143 (2017), arXiv:1610.04528 [hep-ph].
. F.-K Guo, C Hanhart, U.-G Meißner, Q Wang, Q Zhao, B.-S Zou, 10.1103/RevModPhys.90.015004arXiv:1705.00141Rev. Mod. Phys. 9015004hep-phF.-K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao, and B.-S. Zou, Rev. Mod. Phys. 90, 015004 (2018), arXiv:1705.00141 [hep-ph].
. M Voloshin, L Okun, JETP Lett. 23333M. Voloshin and L. Okun, JETP Lett. 23, 333 (1976).
. A De Rujula, H Georgi, S Glashow, 10.1103/PhysRevLett.38.317Phys.Rev.Lett. 38317A. De Rujula, H. Georgi, and S. Glashow, Phys.Rev.Lett. 38, 317 (1977).
. M Karliner, J L Rosner, 10.1103/PhysRevLett.115.122001arXiv:1506.06386Phys. Rev. Lett. 115122001hep-phM. Karliner and J. L. Rosner, Phys. Rev. Lett. 115, 122001 (2015), arXiv:1506.06386 [hep-ph].
. X.-K Dong, F.-K Guo, B.-S Zou, 10.13725/j.cnki.pip.2021.02.001arXiv:2101.01021Progr. Phys. 4165hep-phX.-K. Dong, F.-K. Guo, and B.-S. Zou, Progr. Phys. 41, 65 (2021), arXiv:2101.01021 [hep-ph].
. F.-Z Peng, M.-Z Liu, M Sánchez, M Pavon Sánchez, Valderrama, 10.1103/PhysRevD.102.114020arXiv:2004.05658Phys. Rev. D. 102114020hep-ph[19] F.-Z. Peng, M.-Z. Liu, M. Sánchez Sánchez, and M. Pavon Valderrama, Phys. Rev. D 102, 114020 (2020), arXiv:2004.05658 [hep-ph].
. M P Valderrama, 10.1103/PhysRevD.85.114037arXiv:1204.2400Phys. Rev. 85114037hep-phM. P. Valderrama, Phys. Rev. D85, 114037 (2012), arXiv:1204.2400 [hep-ph].
. J.-X Lu, L.-S Geng, M P Valderrama, 10.1103/PhysRevD.99.074026arXiv:1706.02588Phys. Rev. D. 9974026hep-phJ.-X. Lu, L.-S. Geng, and M. P. Valderrama, Phys. Rev. D 99, 074026 (2019), arXiv:1706.02588 [hep-ph].
. M , Pavon Valderrama, 10.1140/epja/s10050-020-00099-8arXiv:1906.06491Eur. Phys. J. A. 56hep-phM. Pavon Valderrama, Eur. Phys. J. A 56, 109 (2020), arXiv:1906.06491 [hep-ph].
. M , Pavón Valderrama, D R Phillips, 10.1103/PhysRevLett.114.082502arXiv:1407.0437Phys. Rev. Lett. 11482502nucl-thM. Pavón Valderrama and D. R. Phillips, Phys. Rev. Lett. 114, 082502 (2015), arXiv:1407.0437 [nucl-th].
. M P Valderrama, 10.1142/S021830131641007XarXiv:1604.01332Int. J. Mod. Phys. E. 251641007nucl-thM. P. Valderrama, Int. J. Mod. Phys. E 25, 1641007 (2016), arXiv:1604.01332 [nucl-th].
. R E Langer, 10.1103/PhysRev.51.669Phys. Rev. 51669R. E. Langer, Phys. Rev. 51, 669 (1937).
. J J Sakurai, 10.1016/0003-4916(60)90126-3Annals Phys. 111J. J. Sakurai, Annals Phys. 11, 1 (1960).
. K Kawarabayashi, M Suzuki, 10.1103/PhysRevLett.16.255Phys. Rev. Lett. 16255K. Kawarabayashi and M. Suzuki, Phys. Rev. Lett. 16, 255 (1966).
. Fayyazuddin Riazuddin, 10.1103/PhysRev.147.1071Phys. Rev. 1471071Riazuddin and Fayyazuddin, Phys. Rev. 147, 1071 (1966).
. D O Riska, G E Brown, 10.1016/S0375-9474(00)00362-6arXiv:nucl-th/0005049Nucl. Phys. 679nucl-thD. O. Riska and G. E. Brown, Nucl. Phys. A679, 577 (2001), arXiv:nucl-th/0005049 [nucl-th].
. M Gell-Mann, M Levy, Nuovo Cim, 10.1007/BF0285973816705M. Gell-Mann and M. Levy, Nuovo Cim. 16, 705 (1960).
. P Zyla, Particle Data Group10.1093/ptep/ptaa104PTEP. 2020P. Zyla et al. (Particle Data Group), PTEP 2020, 083C01 (2020).
. R Machleidt, K Holinde, C Elster, Phys. Rept. 1491R. Machleidt, K. Holinde, and C. Elster, Phys. Rept. 149, 1 (1987).
. R Machleidt, Adv. Nucl. Phys. 19189R. Machleidt, Adv. Nucl. Phys. 19, 189 (1989).
. R Aaij, LHCb10.1103/PhysRevLett.122.222001arXiv:1904.03947Phys. Rev. Lett. 122222001hep-exR. Aaij et al. (LHCb), Phys. Rev. Lett. 122, 222001 (2019), arXiv:1904.03947 [hep-ex].
. M.-Z Liu, Y.-W Pan, F.-Z Peng, M Sánchez, L.-S Geng, A Hosaka, M Pavon Valderrama, 10.1103/PhysRevLett.122.242001arXiv:1903.11560Phys. Rev. Lett. 122242001hep-phM.-Z. Liu, Y.-W. Pan, F.-Z. Peng, M. Sánchez Sánchez, L.-S. Geng, A. Hosaka, and M. Pavon Valder- rama, Phys. Rev. Lett. 122, 242001 (2019), arXiv:1903.11560 [hep-ph].
. M , Pavon Valderrama, 10.1103/PhysRevD.100.094028arXiv:1907.05294Phys. Rev. D. 10094028hep-phM. Pavon Valderrama, Phys. Rev. D 100, 094028 (2019), arXiv:1907.05294 [hep-ph].
. C Xiao, J Nieves, E Oset, 10.1103/PhysRevD.100.014021arXiv:1904.01296Phys. Rev. D. 10014021hep-phC. Xiao, J. Nieves, and E. Oset, Phys. Rev. D 100, 014021 (2019), arXiv:1904.01296 [hep-ph].
. M.-L Du, V Baru, F.-K Guo, C Hanhart, U.-G Meißner, J A Oller, Q Wang, 10.1103/PhysRevLett.124.072001arXiv:1910.11846Phys. Rev. Lett. 12472001hep-phM.-L. Du, V. Baru, F.-K. Guo, C. Han- hart, U.-G. Meißner, J. A. Oller, and Q. Wang, Phys. Rev. Lett. 124, 072001 (2020), arXiv:1910.11846 [hep-ph].
. M.-Z Liu, T.-W Wu, M Sánchez, M P Valderrama, L.-S Geng, J.-J Xie, 10.1103/PhysRevD.103.054004arXiv:1907.06093Phys. Rev. D. 10354004hep-phM.-Z. Liu, T.-W. Wu, M. Sánchez Sánchez, M. P. Valderrama, L.-S. Geng, and J.-J. Xie, Phys. Rev. D 103, 054004 (2021), arXiv:1907.06093 [hep-ph].
. M.-Z Liu, T.-W Wu, M Valderrama, J.-J Xie, L.-S Geng, 10.1103/PhysRevD.99.094018arXiv:1902.03044Phys. Rev. D. 9994018hep-phM.-Z. Liu, T.-W. Wu, M. Pavon Valderrama, J.-J. Xie, and L.-S. Geng, Phys. Rev. D 99, 094018 (2019), arXiv:1902.03044 [hep-ph].
. N A Tornqvist, 10.1007/BF01413192arXiv:hep-ph/9310247Z.Phys. 61hep-phN. A. Tornqvist, Z.Phys. C61, 525 (1994), arXiv:hep-ph/9310247 [hep-ph].
. J Nieves, M P Valderrama, 10.1103/PhysRevD.86.056004arXiv:1204.2790Phys. Rev. 8656004hep-phJ. Nieves and M. P. Valderrama, Phys. Rev. D86, 056004 (2012), arXiv:1204.2790 [hep-ph].
. D Gamermann, E Oset, D Strottman, M Vicente Vacas, 10.1103/PhysRevD.76.074016arXiv:hep-ph/0612179Phys.Rev. 7674016hep-phD. Gamermann, E. Oset, D. Strottman, and M. Vicente Vacas, Phys.Rev. D76, 074016 (2007), arXiv:hep-ph/0612179 [hep-ph].
. C Xiao, E Oset, 10.1140/epja/i2013-13052-5arXiv:1211.1862Eur. Phys. J. A. 4952hep-phC. Xiao and E. Oset, Eur. Phys. J. A 49, 52 (2013), arXiv:1211.1862 [hep-ph].
. E S Swanson, 10.1016/j.physletb.2004.07.059arXiv:hep-ph/0406080Phys. Lett. B. 598197E. S. Swanson, Phys. Lett. B 598, 197 (2004), arXiv:hep-ph/0406080.
. Y Dong, A Faessler, T Gutsche, V E Lyubovitskij, 10.1088/0954-3899/38/1/015001arXiv:0909.0380J. Phys. G. 3815001hep-phY. Dong, A. Faessler, T. Gutsche, and V. E. Lyubovitskij, J. Phys. G 38, 015001 (2011), arXiv:0909.0380 [hep-ph].
. F.-K Guo, C Hanhart, Y S Kalashnikova, U.-G Meißner, A V Nefediev, 10.1016/j.physletb.2015.02.013arXiv:1410.6712Phys. Lett. B. 742394hep-phF.-K. Guo, C. Hanhart, Y. S. Kalashnikova, U.-G. Meißner, and A. V. Nefediev, Phys. Lett. B 742, 394 (2015), arXiv:1410.6712 [hep-ph].
. A Esposito, E G Ferreiro, A Pilloni, A D Polosa, C A Salgado, 10.1140/epjc/s10052-021-09425-warXiv:2006.15044Eur. Phys. J. C. 81669hep-phA. Esposito, E. G. Ferreiro, A. Pilloni, A. D. Polosa, and C. A. Salgado, Eur. Phys. J. C 81, 669 (2021), arXiv:2006.15044 [hep-ph].
. E Braaten, L.-P He, K Ingles, J Jiang, 10.1103/PhysRevD.103.L071901arXiv:2012.13499Phys. Rev. D. 10371901hep-phE. Braaten, L.-P. He, K. Ingles, and J. Jiang, Phys. Rev. D 103, L071901 (2021), arXiv:2012.13499 [hep-ph].
. A Esposito, L Maiani, A Pilloni, A D Polosa, V Riquer, arXiv:2108.11413hep-phA. Esposito, L. Maiani, A. Pilloni, A. D. Polosa, and V. Ri- quer, (2021), arXiv:2108.11413 [hep-ph].
. M Ablikim, BESIII10.1103/PhysRevLett.126.102001arXiv:2011.07855Phys. Rev. Lett. 126102001hep-exM. Ablikim et al. (BESIII), Phys. Rev. Lett. 126, 102001 (2021), arXiv:2011.07855 [hep-ex].
. R Aaij, LHCb10.1016/j.scib.2021.02.030arXiv:2012.10380Sci. Bull. 661391hep-exR. Aaij et al. (LHCb), Sci. Bull. 66, 1391 (2021), arXiv:2012.10380 [hep-ex].
. Z Yang, X Cao, F.-K Guo, J Nieves, M P Valderrama, 10.1103/PhysRevD.103.074029arXiv:2011.08725Phys. Rev. D. 10374029hep-phZ. Yang, X. Cao, F.-K. Guo, J. Nieves, and M. P. Valderrama, Phys. Rev. D 103, 074029 (2021), arXiv:2011.08725 [hep-ph].
. L. Meng, B. Wang, and S.-L. Zhu, Phys. Rev. D 102, 111502 (2020), arXiv:2011.08656 [hep-ph].
. Z.-F Sun, C.-W Xiao, arXiv:2011.09404hep-phZ.-F. Sun and C.-W. Xiao, (2020), arXiv:2011.09404 [hep-ph].
. N Ikeno, R Molina, E Oset, 10.1016/j.physletb.2021.136120arXiv:2011.13425Phys. Lett. B. 814136120hep-phN. Ikeno, R. Molina, and E. Oset, Phys. Lett. B 814, 136120 (2021), arXiv:2011.13425 [hep-ph].
. M Albaladejo, F.-K Guo, C Hidalgo-Duque, J Nieves, 10.1016/j.physletb.2016.02.025arXiv:1512.03638Phys. Lett. B. 755337hep-phM. Albaladejo, F.-K. Guo, C. Hidalgo-Duque, and J. Nieves, Phys. Lett. B 755, 337 (2016), arXiv:1512.03638 [hep-ph].
. M.-J Yan, F.-Z Peng, M Sánchez, M P Sánchez, Valderrama, arXiv:2102.13058hep-phM.-J. Yan, F.-Z. Peng, M. Sánchez Sánchez, and M. P. Valder- rama, (2021), arXiv:2102.13058 [hep-ph].
. H.-X Chen, W Chen, X Liu, X.-H Liu, 10.1140/epjc/s10052-021-09196-4arXiv:2011.01079Eur. Phys. J. C. 81409hep-phH.-X. Chen, W. Chen, X. Liu, and X.-H. Liu, Eur. Phys. J. C 81, 409 (2021), arXiv:2011.01079 [hep-ph].
. F.-Z Peng, M.-J Yan, M Sánchez, M P Sánchez, Valderrama, 10.1140/epjc/s10052-021-09416-xarXiv:2011.01915Eur. Phys. J. C. 81666hep-phF.-Z. Peng, M.-J. Yan, M. Sánchez Sánchez, and M. P. Valderrama, Eur. Phys. J. C 81, 666 (2021), arXiv:2011.01915 [hep-ph].
. R Chen, 10.1103/PhysRevD.103.054007arXiv:2011.07214Phys. Rev. D. 10354007hep-phR. Chen, Phys. Rev. D 103, 054007 (2021), arXiv:2011.07214 [hep-ph].
. M.-Z. Liu, Y.-W. Pan, and L.-S. Geng, Phys. Rev. D 103, 034003 (2021), arXiv:2011.07935 [hep-ph].
K Tanida, Belle10.1142/9789811219313_0028arXiv:1908.0623518th International Conference on Hadron Spectroscopy and Structure. hep-exK. Tanida et al. (Belle), in 18th International Conference on Hadron Spectroscopy and Structure (2020) pp. 183-187, arXiv:1908.06235 [hep-ex].
. M Ablikim, BESIII10.1103/PhysRevD.102.031101arXiv:2003.03705Phys. Rev. D. 10231101hep-exM. Ablikim et al. (BESIII), Phys. Rev. D 102, 031101 (2020), arXiv:2003.03705 [hep-ex].
. B Aubert, BaBar10.1103/PhysRevLett.95.142001arXiv:hep-ex/0506081Phys. Rev. Lett. 95142001B. Aubert et al. (BaBar), Phys. Rev. Lett. 95, 142001 (2005), arXiv:hep-ex/0506081.
. X Liu, X.-Q Zeng, X.-Q Li, 10.1103/PhysRevD.72.054023arXiv:hep-ph/0507177Phys. Rev. D. 7254023X. Liu, X.-Q. Zeng, and X.-Q. Li, Phys. Rev. D 72, 054023 (2005), arXiv:hep-ph/0507177.
. G.-J Ding, 10.1103/PhysRevD.79.014001arXiv:0809.4818Phys. Rev. D. 7914001hep-phG.-J. Ding, Phys. Rev. D 79, 014001 (2009), arXiv:0809.4818 [hep-ph].
. Q Wang, C Hanhart, Q Zhao, 10.1103/PhysRevLett.111.132003arXiv:1303.6355Phys. Rev. Lett. 111132003hep-phQ. Wang, C. Hanhart, and Q. Zhao, Phys. Rev. Lett. 111, 132003 (2013), arXiv:1303.6355 [hep-ph].
. Q Wang, M Cleven, F.-K Guo, C Hanhart, U.-G Meißner, X.-G Wu, Q Zhao, 10.1103/PhysRevD.89.034001arXiv:1309.4303Phys. Rev. D. 8934001hep-phQ. Wang, M. Cleven, F.-K. Guo, C. Hanhart, U.-G. Meißner, X.-G. Wu, and Q. Zhao, Phys. Rev. D 89, 034001 (2014), arXiv:1309.4303 [hep-ph].
. M Cleven, Q Wang, F.-K Guo, C Hanhart, U.-G Meißner, Q Zhao, 10.1103/PhysRevD.90.074039arXiv:1310.2190Phys. Rev. D. 9074039hep-phM. Cleven, Q. Wang, F.-K. Guo, C. Hanhart, U.-G. Meißner, and Q. Zhao, Phys. Rev. D 90, 074039 (2014), arXiv:1310.2190 [hep-ph].
. Y.-H Chen, L.-Y Dai, F.-K Guo, B Kubis, 10.1103/PhysRevD.99.074016arXiv:1902.10957Phys. Rev. D. 9974016hep-phY.-H. Chen, L.-Y. Dai, F.-K. Guo, and B. Kubis, Phys. Rev. D 99, 074016 (2019), arXiv:1902.10957 [hep-ph].
. S.-L Zhu, 10.1016/j.physletb.2005.08.068arXiv:hep-ph/0507025Phys. Lett. B. 625212S.-L. Zhu, Phys. Lett. B 625, 212 (2005), arXiv:hep-ph/0507025.
. F J Llanes-Estrada, 10.1103/PhysRevD.72.031503arXiv:hep-ph/0507035Phys. Rev. D. 7231503F. J. Llanes-Estrada, Phys. Rev. D 72, 031503 (2005), arXiv:hep-ph/0507035.
. L Maiani, V Riquer, F Piccinini, A Polosa, 10.1103/PhysRevD.72.031502arXiv:hep-ph/0507062Phys. Rev. D. 7231502L. Maiani, V. Riquer, F. Piccinini, and A. Polosa, Phys. Rev. D 72, 031502 (2005), arXiv:hep-ph/0507062.
. S Dubynskiy, M Voloshin, 10.1016/j.physletb.2008.07.086arXiv:0803.2224Phys. Lett. B. 666344hep-phS. Dubynskiy and M. Voloshin, Phys. Lett. B 666, 344 (2008), arXiv:0803.2224 [hep-ph].
. A Torres, K Khemchandani, D Gamermann, E Oset, 10.1103/PhysRevD.80.094012arXiv:0906.5333Phys. Rev. D. 8094012nucl-thA. Martinez Torres, K. Khemchandani, D. Gamer- mann, and E. Oset, Phys. Rev. D 80, 094012 (2009), arXiv:0906.5333 [nucl-th].
. X Li, M B Voloshin, 10.1142/S0217732314500606arXiv:1309.1681Mod. Phys. Lett. A. 291450060hep-phX. Li and M. B. Voloshin, Mod. Phys. Lett. A 29, 1450060 (2014), arXiv:1309.1681 [hep-ph].
. F. E. Close and E. S. Swanson, Phys. Rev. D 72, 094004 (2005), arXiv:hep-ph/0505206 [hep-ph].
. [79] S. Godfrey, Phys. Rev. D 72, 054029 (2005), arXiv:hep-ph/0508078 [hep-ph].
. C.-W Hwang, 10.1007/s100520200904arXiv:hep-ph/0112237Eur. Phys. J. C. 23585C.-W. Hwang, Eur. Phys. J. C 23, 585 (2002), arXiv:hep-ph/0112237.
. D Becirevic, E Chang, A Le Yaouanc, 10.1103/PhysRevD.80.034504arXiv:0905.3352Phys. Rev. D. 8034504hep-latD. Becirevic, E. Chang, and A. Le Yaouanc, Phys. Rev. D 80, 034504 (2009), arXiv:0905.3352 [hep-lat].
. D Becirevic, E Chang, L Oliver, J.-C Raynal, A Le Yaouanc, 10.1103/PhysRevD.84.054507arXiv:1103.4024Phys. Rev. D. 8454507hep-phD. Becirevic, E. Chang, L. Oliver, J.-C. Raynal, and A. Le Yaouanc, Phys. Rev. D 84, 054507 (2011), arXiv:1103.4024 [hep-ph].
. K U Can, G Erkol, M Oka, A Ozpineci, T T Takahashi, 10.1016/j.physletb.2012.12.050arXiv:1210.0869Phys. Lett. 719103hep-latK. U. Can, G. Erkol, M. Oka, A. Ozpineci, and T. T. Takahashi, Phys. Lett. B719, 103 (2013), arXiv:1210.0869 [hep-lat].
. K U Can, G Erkol, B Isildak, M Oka, T T Takahashi, 10.1007/JHEP05(2014)125arXiv:1310.5915JHEP. 05125hep-latK. U. Can, G. Erkol, B. Isildak, M. Oka, and T. T. Takahashi, JHEP 05, 125 (2014), arXiv:1310.5915 [hep-lat].
. K U Can, G Erkol, B Isildak, M Oka, T T Takahashi, 10.1016/j.physletb.2013.09.024arXiv:1306.0731Phys. Lett. 726703hep-latK. U. Can, G. Erkol, B. Isildak, M. Oka, and T. T. Takahashi, Phys. Lett. B726, 703 (2013), arXiv:1306.0731 [hep-lat].
. A Calle Cordon, E. Ruiz Arriola, 10.1103/PhysRevC.81.044002arXiv:0905.4933Phys. Rev. C. 8144002nucl-thA. Calle Cordon and E. Ruiz Arriola, Phys. Rev. C 81, 044002 (2010), arXiv:0905.4933 [nucl-th].
. E Wigner, 10.1103/PhysRev.51.106Phys. Rev. 51106E. Wigner, Phys. Rev. 51, 106 (1937).
. T Mehen, I W Stewart, M B Wise, 10.1103/PhysRevLett.83.931arXiv:hep-ph/9902370Phys. Rev. Lett. 83931T. Mehen, I. W. Stewart, and M. B. Wise, Phys. Rev. Lett. 83, 931 (1999), arXiv:hep-ph/9902370.
. J.-W Chen, D Lee, T Schäfer, 10.1103/PhysRevLett.93.242302arXiv:nucl-th/0408043Phys. Rev. Lett. 93242302J.-W. Chen, D. Lee, and T. Schäfer, Phys. Rev. Lett. 93, 242302 (2004), arXiv:nucl-th/0408043.
. A. Calle Cordon and E. Ruiz Arriola, Phys. Rev. C 78, 054002 (2008), arXiv:0807.2918 [nucl-th].
. A Calle Cordon, E. Ruiz Arriola, 10.1103/PhysRevC.80.014002arXiv:0904.0421Phys. Rev. C. 8014002nucl-thA. Calle Cordon and E. Ruiz Arriola, Phys. Rev. C 80, 014002 (2009), arXiv:0904.0421 [nucl-th].
. V S Timoteo, S Szpigel, E. Ruiz Arriola, 10.1103/PhysRevC.86.034002arXiv:1108.1162Phys. Rev. C. 8634002nucl-thV. S. Timoteo, S. Szpigel, and E. Ruiz Arriola, Phys. Rev. C 86, 034002 (2012), arXiv:1108.1162 [nucl-th].
. E Ruiz Arriola, 10.3390/sym8060042842E. Ruiz Arriola, Symmetry 8, 42 (2016).
. D Lee, 10.1103/PhysRevLett.127.062501arXiv:2010.09420Phys. Rev. Lett. 12762501nucl-thD. Lee et al., Phys. Rev. Lett. 127, 062501 (2021), arXiv:2010.09420 [nucl-th].
. F Dyson, N H Xuong, 10.1103/PhysRevLett.13.815Phys. Rev. Lett. 13815F. Dyson and N. H. Xuong, Phys. Rev. Lett. 13, 815 (1964).
WASA-at-COSY). P Adlarson, 10.1103/PhysRevLett.106.242302arXiv:1104.0123Phys. Rev. Lett. 106242302nucl-exP. Adlarson et al. (WASA-at-COSY), Phys. Rev. Lett. 106, 242302 (2011), arXiv:1104.0123 [nucl-ex].
. R Molina, N Ikeno, E Oset, arXiv:2102.05575nucl-thR. Molina, N. Ikeno, and E. Oset, (2021), arXiv:2102.05575 [nucl-th].
. S Gongyo, HAL QCDK Sasaki, HAL QCDT Miyamoto, HAL QCDS Aoki, HAL QCDT Doi, HAL QCDT Hatsuda, HAL QCDY Ikeda, HAL QCDT Inoue, HAL QCDN Ishii, HAL QCD10.1016/j.physletb.2020.135935arXiv:2006.00856Phys. Lett. B. 811135935hep-latS. Gongyo, K. Sasaki, T. Miyamoto, S. Aoki, T. Doi, T. Hatsuda, Y. Ikeda, T. Inoue, and N. Ishii (HAL QCD), Phys. Lett. B 811, 135935 (2020), arXiv:2006.00856 [hep-lat].
. X.-G He, X.-Q Li, X Liu, X.-Q Zeng, 10.1140/epjc/s10052-007-0347-yarXiv:hep-ph/0606015Eur. Phys. J. C. 51883X.-G. He, X.-Q. Li, X. Liu, and X.-Q. Zeng, Eur. Phys. J. C 51, 883 (2007), arXiv:hep-ph/0606015.
. J He, Y.-T Ye, Z.-F Sun, X Liu, 10.1103/PhysRevD.82.114029arXiv:1008.1500Phys. Rev. D. 82114029hep-phJ. He, Y.-T. Ye, Z.-F. Sun, and X. Liu, Phys. Rev. D 82, 114029 (2010), arXiv:1008.1500 [hep-ph].
. J.-R Zhang, 10.1103/PhysRevD.89.096006arXiv:1212.5325Phys. Rev. D. 8996006hep-phJ.-R. Zhang, Phys. Rev. D 89, 096006 (2014), arXiv:1212.5325 [hep-ph].
. P G Ortega, D R Entem, F Fernandez, 10.1016/j.physletb.2012.12.025arXiv:1210.2633Phys. Lett. B. 7181381hep-phP. G. Ortega, D. R. Entem, and F. Fernandez, Phys. Lett. B 718, 1381 (2013), arXiv:1210.2633 [hep-ph].
. B. Wang, L. Meng, and S.-L. Zhu, Phys. Rev. D 101, 094035 (2020), arXiv:2003.05688 [hep-ph].
. S Sakai, F.-K Guo, B Kubis, 10.1016/j.physletb.2020.135623arXiv:2004.09824Phys. Lett. B. 808135623hep-ph[104] S. Sakai, F.-K. Guo, and B. Ku- bis, Phys. Lett. B 808, 135623 (2020), arXiv:2004.09824 [hep-ph].
. J Haidenbauer, G Krein, U.-G Meissner, L Tolos, 10.1140/epja/i2011-11018-3arXiv:1008.3794Eur. Phys. J. A. 47nucl-thJ. Haidenbauer, G. Krein, U.-G. Meissner, and L. Tolos, Eur. Phys. J. A 47, 18 (2011), arXiv:1008.3794 [nucl-th].
. R Mizuk, Belle10.1103/PhysRevLett.94.122002arXiv:hep-ex/0412069Phys. Rev. Lett. 94122002R. Mizuk et al. (Belle), Phys. Rev. Lett. 94, 122002 (2005), arXiv:hep-ex/0412069.
. S Acharya, ALICEarXiv:2201.05352nucl-exS. Acharya et al. (ALICE), (2022), arXiv:2201.05352 [nucl-ex].
. R Raab, 10.1080/00268977500101151Molecular Physics. 291323R. Raab, Molecular Physics 29, 1323 (1975).
. J Pelaez, 10.1016/j.physrep.2016.09.001arXiv:1510.00653Phys. Rept. 658hep-phJ. Pelaez, Phys. Rept. 658, 1 (2016), arXiv:1510.00653 [hep-ph].
. J Binstock, R Bryan, 10.1103/PhysRevD.4.1341Phys. Rev. D. 41341J. Binstock and R. Bryan, Phys. Rev. D 4, 1341 (1971).
. V V Flambaum, E V Shuryak, 10.1103/PhysRevC.76.065206arXiv:nucl-th/0702038Phys. Rev. C. 7665206V. V. Flambaum and E. V. Shuryak, Phys. Rev. C 76, 065206 (2007), arXiv:nucl-th/0702038.
. V G J Stoks, R A M Klomp, C P F Terheggen, J J De Swart, 10.1103/PhysRevC.49.2950arXiv:nucl-th/9406039Phys. Rev. C. 492950V. G. J. Stoks, R. A. M. Klomp, C. P. F. Ter- heggen, and J. J. de Swart, Phys. Rev. C 49, 2950 (1994), arXiv:nucl-th/9406039.
. P Geiger, N Isgur, 10.1103/PhysRevD.47.5050Phys. Rev. D. 475050P. Geiger and N. Isgur, Phys. Rev. D 47, 5050 (1993).
. H J Lipkin, B.-S Zou, 10.1103/PhysRevD.53.6693Phys. Rev. D. 536693H. J. Lipkin and B.-s. Zou, Phys. Rev. D 53, 6693 (1996).
. N Isgur, H B Thacker, 10.1103/PhysRevD.64.094507arXiv:hep-lat/0005006Phys. Rev. D. 6494507N. Isgur and H. B. Thacker, Phys. Rev. D 64, 094507 (2001), arXiv:hep-lat/0005006.
. U.-G Meissner, J A Oller, 10.1016/S0375-9474(00)00367-5arXiv:hep-ph/0005253Nucl. Phys. A. 679671U.-G. Meissner and J. A. Oller, Nucl. Phys. A 679, 671 (2001), arXiv:hep-ph/0005253.
. S Prelovsek, S Collins, D Mohler, M Padmanath, S Piemonte, 10.1007/JHEP06(2021)035arXiv:2011.02542JHEP. 0635hep-latS. Prelovsek, S. Collins, D. Mohler, M. Pad- manath, and S. Piemonte, JHEP 06, 035 (2021), arXiv:2011.02542 [hep-lat].
| [] |
[
"Efficient Density Ratio-Guided Subsampling of Conditional GANs, With Conditioning on a Class or a Continuous Variable",
"Efficient Density Ratio-Guided Subsampling of Conditional GANs, With Conditioning on a Class or a Continuous Variable"
] | [
"Xin Ding xin.ding@stat \nThe University of British Columbia\nCanada\n",
"Yongwei Wang yongweiw@ece \nThe University of British Columbia\nCanada\n",
"Z Jane Wang \nThe University of British Columbia\nCanada\n",
"William J Welch \nThe University of British Columbia\nCanada\n"
] | [
"The University of British Columbia\nCanada",
"The University of British Columbia\nCanada",
"The University of British Columbia\nCanada",
"The University of British Columbia\nCanada"
] | [] | Recently, subsampling or refining images generated from unconditional generative adversarial networks (GANs) has been actively studied to improve the overall image quality. | null | [
"https://arxiv.org/pdf/2103.11166v3.pdf"
] | 244,117,308 | 2103.11166 | 6ca83f9aa4ee00645ff335541189e14dd69612db |
Efficient Density Ratio-Guided Subsampling of Conditional GANs, With Conditioning on a Class or a Continuous Variable
November 18, 2021
Xin Ding xin.ding@stat
The University of British Columbia
Canada
Yongwei Wang yongweiw@ece
The University of British Columbia
Canada
Z Jane Wang
The University of British Columbia
Canada
William J Welch
The University of British Columbia
Canada
Efficient Density Ratio-Guided Subsampling of Conditional GANs, With Conditioning on a Class or a Continuous Variable
November 18, 2021
Recently, subsampling or refining images generated from unconditional generative adversarial networks (GANs) has been actively studied to improve the overall image quality.
Introduction
Generative adversarial networks (GANs) [11] are popular generative models for image synthesis, aiming to estimate the marginal distribution of images. As an extension and an essential family of GANs, conditional GANs (cGANs) [22] intend to estimate the image distribution given some conditions. These conditions are usually categorical variables such as class labels, and cGANs with class labels are also known as class-conditional GANs [28, 25, 2, 32]. Recently, [8, 7] propose a new conditional GAN framework, continuous conditional GANs (CcGANs), which take continuous, scalar variables (termed regression labels) as conditions. Recent advances in unconditional GANs [15, 16] and cGANs [2, 8, 7] generally enable these models to generate high-quality images. Nevertheless, low-quality images still appear frequently even with such advanced GAN models during image generation. We would like to clarify that the image quality discussed in this paper is three-fold [25, 32, 6, 8, 7]: (1) visual quality, (2) diversity, and (3) label consistency. Label consistency applies to cGANs only and is defined as the consistency of generated images with respect to the conditioning label. Notably, CcGANs [8, 7] often suffer from low label consistency because they are designed to sacrifice label consistency for better visual quality and higher diversity.
To enhance the image quality of unconditional GANs, increasing attention has been paid to improve the sampling strategy of pre-trained unconditional GANs via subsampling or refining. Refining methods (e.g., Collab [21]) aim to refine visually unrealistic fake images to improve visual quality. For example, Collab [21] refines an intermediate hidden map of a generator to remove artifacts in generated images by using information from a trained discriminator. Subsampling methods (e.g., DRS [1], DRE-F-SP+RS [9], DDLS [4]) remove visually unrealistic images to improve visual quality and adjust the likelihood of generated images to increase diversity. To accomplish subsampling, discriminator rejection sampling (DRS) [1] accepts or rejects a fake image by rejection sampling (RS). DRS requires an accurate density ratio estimation (DRE). However, since the DRE step in DRS relies on the assumption of optimality of the discriminator, DRS may not perform well if the discriminator is far from optimal. DRS is also inapplicable to some GANs such as MMD-GAN [20].
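The optimality assumption behind DRS can be stated concretely: for an optimal discriminator D, the density ratio equals D(x)/(1 − D(x)), i.e., the exponential of the discriminator logit. A minimal numpy sketch of this standard identity (the function name is ours; DRS itself adds numerical stabilization and an acceptance step not shown here):

```python
import numpy as np

def drs_density_ratio(d_logit):
    """If D is optimal, p_r(x) / p_g(x) = D(x) / (1 - D(x)).
    With D(x) = sigmoid(d_logit), this ratio simplifies to exp(d_logit),
    which is the quantity DRS turns into an acceptance probability.
    When D is far from optimal, this identity (and hence DRS) degrades,
    which motivates learned density ratio estimation instead."""
    return np.exp(d_logit)
```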
Figure 1: The efficiency and effectiveness comparisons between the proposed cDR-RS and four baseline methods in sampling 100,000 and 60,000 fake images from BigGAN [2] (a) and CcGAN [8, 7] (b), respectively. The dashed red lines denote Intra-FID [25] and Label Score [8, 7] of sampling from cGANs without subsampling or refining in (a) and (b), respectively. Label Score is a metric to evaluate the discrepancy between the actual and conditioning labels of fake images. cDR-RS achieves state-of-the-art performances in sampling from cGANs while requiring only reasonable computational resources.

[9] improves DRS by proposing density ratio estimation in the feature space with Softplus loss (DRE-F-SP), which does not require an optimal discriminator and is applicable to various GANs. Then, with RS as the sampler, [9] introduces DRE-F-SP+RS to subsample GANs. Besides these density ratio-based methods, discriminator driven latent sampling (DDLS) [4] proposes to accept or reject samples via an energy-based model defined in the latent space of the generator in a GAN. The methods discussed above, however, are not designed for cGANs.
An intuitive approach to better sample from cGANs is applying unconditional methods described above for each distinct class or regression label. Unfortunately, this approach may be inefficient or even impractical, especially when many distinct classes or regression labels exist. For example, if we apply DRE-F-SP+RS [9] to sample from BigGAN trained on ImageNet-100 [3] (a subset of ImageNet [5] with 100 classes), we need to fit 100 density ratio models separately. As visualized in Fig. 1(a), this is often time-consuming (84.4 hours) and it requires a large storage space (39.97 GB). Another noticeable example is DDLS [4], which spends 218 hours subsampling BigGAN trained on ImageNet-100. Furthermore, as shown in Fig. 1(b) and Section 4.2, the unconditional approach is usually ineffective in sampling from CcGANs because it is not designed to solve the label inconsistency problem suffered by CcGANs. Moreover, although DRE-F-SP+RS may perform well in subsampling class-conditional GANs (e.g., Fig. 1(a)), it is not suitable for subsampling CcGANs for two reasons: it lacks a suitable feature extraction method for regression datasets, and it cannot sample from CcGANs conditional on labels that are unseen in the training phase. The ineffectiveness or inefficiency of applying unconditional sampling methods to cGANs is demonstrated in our empirical study in Section 4.
Recently, [26] proposes a rejection sampling scheme, called GOLD, to subsample the auxiliary classifier GAN (ACGAN) [28], a special class-conditional GAN. GOLD is based on the gap in log-densities that measures the discrepancy between the actual image distribution and the fake image distribution of given samples. [26] shows that, empirically, GOLD can improve the performance of ACGAN in the class-conditional image synthesis. However, this method does not apply to general class-conditional GANs (e.g., BigGAN [2]), and it is not designed for CcGANs.
To improve the overall image quality of class-conditional GANs and CcGANs effectively and efficiently, we propose the conditional density ratio-guided rejection sampling (cDR-RS). Our contributions can be summarized as follows:
• In Section 3.1, we propose a DRE scheme to estimate an image's density ratio conditional on a class or regression label. We first introduce a new feature extraction method for images with regression labels, when the feature extraction mechanism in DRE-F-SP [9] is not applicable. Then, we propose a novel conditional Softplus (cSP) loss, which enables us to estimate density ratios conditional on different labels by fitting only one density ratio model. This density ratio model takes as input both the high-level features and the class/regression label of an image and outputs the density ratio conditional on the given label.
• To analyze the proposed cSP loss theoretically, we derive in Section 3.2 the error bound of a density ratio model trained with the proposed cSP loss.
• In Section 3.3, we propose a novel rejection sampling scheme to subsample cGANs. A filtering scheme is also proposed in Section 3.4 for CcGANs to increase fake images' label consistency without losing diversity. By only tuning one hyper-parameter of this filtering scheme, we can easily control the trade-off between label consistency and diversity according to users' needs.
• In Section 4, extensive experiments on five benchmark datasets and different GAN architectures convincingly demonstrate the state-of-the-art performances of the proposed subsampling scheme over baseline methods.

Conditional GANs

The estimated density function p_g(x|y) is the density of the fake conditional image distribution induced by the generator network. The conditioning variable y is often categorical, such as a class label, and cGANs with class labels as conditions are also known as class-conditional GANs. Class-conditional GANs have been widely studied [28, 25, 2, 32]. State-of-the-art class-conditional GANs (e.g., BigGAN [2]) can generate photo-realistic images for a given class. However, GANs conditional on regression labels have been rarely studied due to two problems. First, very few (even zero) real images exist for some regression labels. Second, since regression labels are continuous and infinitely many, they cannot be embedded by one-hot encoding like class labels. To solve these two problems, [8, 7] propose the CcGAN framework, which introduces novel empirical cGAN losses and label input mechanisms. The novel empirical cGAN losses, consisting of the hard vicinal discriminator loss (HVDL), the soft vicinal discriminator loss (SVDL), and a new generator loss, are developed to solve the first problem. The second problem is solved by a naive label input (NLI) mechanism and an improved label input (ILI) mechanism. The effectiveness of CcGAN has been demonstrated on diverse datasets.
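As background not restated in this excerpt, the conditional GAN framework discussed above trains G and D with the standard minimax objective of [22], written here in its textbook form:

```latex
\min_G \max_D \;
\mathbb{E}_{(x,y) \sim p_r(x,y)} \big[ \log D(x \mid y) \big]
+ \mathbb{E}_{z \sim p_z,\; y \sim p(y)} \big[ \log \big( 1 - D(G(z \mid y) \mid y) \big) \big]
```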
Subsampling GANs by DRE-F-SP+RS
Among existing sampling methods for unconditional GANs, DRE-F-SP+RS proposed by [9] achieves state-of-the-art sampling performance. This subsampling framework consists of two components: a density ratio estimation method termed DRE-F-SP and a rejection sampling scheme. DRE-F-SP aims to estimate the density ratio function r^*(x) := p_r(x)/p_g(x) based on N^r real images x^r_1, x^r_2, ..., x^r_{N^r} ~ p_r(x) and N^g fake images x^g_1, x^g_2, ..., x^g_{N^g} ~ p_g(x), where p_r(x) and p_g(x) are the density functions of the real and fake image distributions, respectively. Based on the estimated density ratios, to push p_g towards p_r, rejection sampling (RS) is used to sample from the trained GAN model. Empirical studies in [9] show that DRE-F-SP+RS substantially outperforms existing sampling methods (e.g., DRS [1]) in subsampling different types of GANs.
As the key component of DRE-F-SP+RS, DRE-F-SP first trains a specially designed ResNet-34 [13] on a set of real images with class labels under the cross-entropy loss. The network architecture of this ResNet-34 is adjusted to ensure that the dimension of one hidden map h equals that of the input image x. Thus, this ResNet-34 defines a mapping of an image x to a high-level feature h, i.e., h = φ(x). Then, DRE-F-SP estimates the density ratio of an image in the feature space defined by φ instead of in the pixel space. Specifically, DRE-F-SP models the actual density ratio function in the feature space by a 5-layer multilayer perceptron (MLP-5). [9] also proposes a novel loss function, the Softplus (SP) loss, to train this MLP-5. Finally, composing φ(x) with the MLP-5 yields an estimate of r^*(x).
Proposed Method
As discussed in Section 1, due to the ineffectiveness or inefficiency, the unconditional subsampling or refining methods (e.g., DRS [1], Collab [21], DDLS [4] and DRE-F-SP+RS [9]) may be impractical for sampling from cGANs. Moreover, the only existing conditional subsampling method [26] is designed for ACGAN [28] and cannot be applied to other cGANs. Motivated by these limitations, in this section, we propose an effective and efficient dual-functional subsampling method, which is suitable for both class-conditional GANs and CcGANs regardless of the GAN architectures and the number of distinct class or regression labels.
Conditional density ratio estimation in feature space with conditional Softplus loss
In this section, we introduce cDRE-F-cSP, a novel conditional density ratio estimation (cDRE) method. Assume we have N^r real image-label pairs (x^r_1, y^r_1), ..., (x^r_{N^r}, y^r_{N^r}) and N^g fake image-label pairs (x^g_1, y^g_1), ..., (x^g_{N^g}, y^g_{N^g}), where x^r_i and x^g_i can be seen as samples drawn from p_r(x|y^r_i) and p_g(x|y^g_i), respectively. Based on these samples, we aim to estimate r^*(x|y) := p_r(x|y)/p_g(x|y).
Like DRE-F-SP [9], we conduct cDRE in a feature space learned by a pre-trained neural network φ. DRE-F-SP [9] trains a specially designed ResNet-34 on some real images with class labels to extract high-level features for DRE. This mechanism also applies to class-conditional GANs; however, it is inapplicable to CcGANs since regression datasets may not have class labels. Therefore, when subsampling CcGANs, we develop a specially designed sparse autoencoder (AE) to extract features, whose architecture is visualized in Fig. 2. The encoder with ReLU [10] as the final layer is treated as φ to extract sparse high-level features from images. The bottleneck dimension of the sparse autoencoder equals the dimension of the flattened input image. The decoding process is trained to reconstruct the input image and predict the regression label of the input image. The training loss of this sparse AE is the summation of three loss components: (1) the mean square error (MSE) between the input image and the reconstructed image; (2) the MSE between the actual regression label and the predicted regression label; (3) the product of a positive constant λ and the mean of all elements in h, where λ controls the sparsity and is set to 10^-3 in our experiment. The sparsity regularizer and the extra branch to predict regression labels are both used to avoid overfitting. The additional predictor is also applied in the filtering scheme proposed in Section 3.4.

Figure 2: The architecture of the specially designed sparse autoencoder for feature extraction. Since the encoder has ReLU as the last layer, the feature vector h is sparse. This AE also has an extra branch to predict the regression label of x.
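The three-term AE objective above can be sketched directly; this is an illustrative numpy version (the function name and flattened arrays are our simplification — in the paper each term is computed on mini-batches inside a deep-learning framework):

```python
import numpy as np

def sparse_ae_loss(x, x_rec, y, y_pred, h, lam=1e-3):
    """Training loss of the sparse AE: (1) reconstruction MSE,
    (2) regression-label MSE, and (3) lam times the mean of the
    bottleneck features h (non-negative because of the ReLU encoder,
    so this term pushes h towards sparsity)."""
    rec = np.mean((np.asarray(x) - np.asarray(x_rec)) ** 2)
    reg = np.mean((np.asarray(y) - np.asarray(y_pred)) ** 2)
    sparsity = np.mean(h)
    return rec + reg + lam * sparsity
```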
Next, we propose a formulation of the actual conditional density ratio function in the feature space:

$$\psi^*(h|y) := \frac{q_r(h|y)}{q_g(h|y)} = \frac{q_r(h|y)\cdot\partial h/\partial x}{q_g(h|y)\cdot\partial h/\partial x} = \frac{p_r(x|y)}{p_g(x|y)} = r^*(x|y), \qquad (1)$$
where q_r(h|y) and q_g(h|y) are respectively the density functions of the real and fake conditional distributions of high-level features. Based on Eq. (1) and the pre-trained neural network φ(x), we model the conditional density ratio function ψ^*(h|y) in the feature space by a 5-layer MLP (MLP-5), denoted by ψ(h|y), with both h and its label y as input. To train ψ(h|y), we propose the conditional Softplus (cSP) loss as follows:

$$\mathcal{L}_c(\psi) = \mathbb{E}_{(h,y)\sim q_g(h,y)}\big[\sigma(\psi(h|y))\psi(h|y) - \eta(\psi(h|y))\big] - \mathbb{E}_{(h,y)\sim q_r(h,y)}\big[\sigma(\psi(h|y))\big], \qquad (2)$$

where q_g(h, y) = q_g(h|y)p(y), q_r(h, y) = q_r(h|y)p(y), and p(y) is the distribution of labels. This new loss extends the SP loss [9] to the conditional setting. The empirical approximation to Eq. (2) is

$$\widehat{\mathcal{L}}_c(\psi) = \frac{1}{N^g}\sum_{i=1}^{N^g}\big[\sigma(\psi(h^g_i|y^g_i))\psi(h^g_i|y^g_i) - \eta(\psi(h^g_i|y^g_i))\big] - \frac{1}{N^r}\sum_{i=1}^{N^r}\sigma(\psi(h^r_i|y^r_i)). \qquad (3)$$
Similar to DRE-F-SP, to prevent ψ(h|y) from overfitting the training data, a natural constraint applied to ψ(h|y) is

$$\mathbb{E}_{h\sim q_g(h|y)}\big[\psi(h|y)\big] = 1. \qquad (4)$$

If Eq. (4) holds, then

$$\mathbb{E}_{(h,y)\sim q_g(h,y)}\big[\psi(h|y)\big] = 1. \qquad (5)$$
Algorithm 1: The optimization algorithm for the density ratio model training in cDRE-F-cSP.
Data: N^r real image-label pairs {(x^r_1, y^r_1), ..., (x^r_{N^r}, y^r_{N^r})}, a generator G, a pre-trained ResNet-34 or encoder φ(x) for feature extraction, a 5-layer MLP ψ(h|y), and a preset hyperparameter λ.
Result: a trained conditional density ratio model r̂(x|y) = ψ(φ(x)|y) = ψ(h|y).
1 Initialize ψ;
2 for k = 1 to K do
3   Sample a mini-batch of m real image-label pairs {(x^r_(1), y^r_(1)), ..., (x^r_(m), y^r_(m))} from {(x^r_1, y^r_1), ..., (x^r_{N^r}, y^r_{N^r})};
4   Sample a mini-batch of m fake image-label pairs {(x^g_(1), y^g_(1)), ..., (x^g_(m), y^g_(m))} from G;
5   Update ψ via SGD or a variant with the gradient of Eq. (7), i.e., of L̂_c(ψ) + λQ̂_c(ψ);
6 end
An empirical approximation to Eq. (5) is

$$\frac{1}{N^g}\sum_{i=1}^{N^g}\psi(h^g_i|y^g_i) = 1. \qquad (6)$$

Therefore, in practice, we minimize the penalized version of Eq. (3) as follows:

$$\min_\psi \Big\{\widehat{\mathcal{L}}_c(\psi) + \lambda \widehat{Q}_c(\psi)\Big\}, \qquad (7)$$

where

$$\widehat{Q}_c(\psi) = \Big(\frac{1}{N^g}\sum_{i=1}^{N^g}\psi(h^g_i|y^g_i) - 1\Big)^2. \qquad (8)$$
The procedure in Alg. 1 implements cDRE-F-cSP in practice. In our experiments, λ is set to 10^-2 or 10^-3 empirically.
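For concreteness, the penalized empirical objective L̂_c(ψ) + λQ̂_c(ψ) can be computed from the model outputs alone. The sketch below takes σ as the logistic sigmoid and η as the softplus η(t) = log(1 + e^t) — their formal definitions are not restated in this excerpt, so treat that as our reading. Arrays psi_fake and psi_real hold ψ(h_i^g|y_i^g) and ψ(h_i^r|y_i^r):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def softplus(t):
    # numerically stable log(1 + exp(t))
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0.0)

def penalized_csp_loss(psi_fake, psi_real, lam=1e-2):
    """Empirical cSP loss (Eq. 3) plus lam times the penalty (Eq. 8)."""
    psi_fake = np.asarray(psi_fake, dtype=float)
    psi_real = np.asarray(psi_real, dtype=float)
    loss = np.mean(sigmoid(psi_fake) * psi_fake - softplus(psi_fake)) \
         - np.mean(sigmoid(psi_real))
    penalty = (np.mean(psi_fake) - 1.0) ** 2
    return loss + lam * penalty
```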
Error bound
In this section, we derive the error bound of a density ratio model ψ(h|y) trained with the empirical cSP loss L̂_c(ψ). For simplicity, we ignore the penalty term in this analysis.

Firstly, we introduce some notation. Let Ψ = {ψ : h ↦ ψ(h|y)} denote the hypothesis space of the density ratio model ψ(h|y). We also define ψ̂ and ψ̄ as follows: ψ̂ = arg min_{ψ∈Ψ} L̂_c(ψ) and ψ̄ = arg min_{ψ∈Ψ} L_c(ψ). Please note that the hypothesis space Ψ may not cover the actual density ratio function ψ^*; therefore, L_c(ψ̄) − L_c(ψ^*) ≥ 0. Denote by α all learnable parameters of ψ and assume α lies in a parameter space A. Denote σ(ψ(h|y))ψ(h|y) − η(ψ(h|y)) by g(h|y; α). Let R̂_{q_r(h,y),N^r}(Ψ) denote the empirical Rademacher complexity [27] of Ψ, defined based on independent feature-label pairs {(h^r_1, y^r_1), ..., (h^r_{N^r}, y^r_{N^r})} from q_r(h, y). Then, we derive the error bound of the conditional density ratio estimate ψ̂ under Eq. (2) as follows:

Theorem 1. If (i) N^g is large enough, (ii) A is compact, (iii) every g(h|y; α) is continuous in α, (iv) for every g(h|y; α) there exists a function g_u(h|y) not depending on α such that |g(h|y; α)| ≤ g_u(h|y), and (v) E_{(h,y)∼q_g(h,y)} g_u(h|y) < ∞, then ∀δ ∈ (0, 1), with probability at least 1 − δ,

$$\mathcal{L}_c(\hat\psi) - \mathcal{L}_c(\psi^*) \le \frac{1}{N^g} + \widehat{\mathfrak{R}}_{q_r(h,y),N^r}(\Psi) + 2\sqrt{\frac{4}{N^r}\log\frac{2}{\delta}} + \mathcal{L}_c(\bar\psi) - \mathcal{L}_c(\psi^*). \qquad (9)$$
Proof. The proof is in Supp. S.2.
Remark 1. The term R̂_{q_r(h,y),N^r}(Ψ) on the right of Eq. (9) implies that we should not use an overly complicated density ratio model. This supports our proposed cDRE-F-cSP because we only need a small neural network (e.g., a shallow MLP) to model the density ratio function in the feature space.
cDR-RS: Conditional density ratio-guided rejection sampling for conditional GANs
Based on the cDRE method proposed in Section 3.1, we develop a rejection sampling scheme, termed cDR-RS, to subsample conditional GANs. The workflow can be summarized in Fig. 3 and Alg. 2. This rejection sampling scheme is conducted for each distinct label y of interest. For example, on ImageNet-100 [3], we train only one density ratio model ψ(h|y), based on which we repeat the rejection sampling scheme 100 times for 100 classes respectively.
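The per-label loop of Fig. 3/Alg. 2 amounts to standard rejection sampling with the learned conditional density ratio as the acceptance score. A toy, framework-free sketch (function names and the burn-in size are ours, and ψ(φ(x)|y) is abstracted as cond_ratio):

```python
import numpy as np

def cdr_rs(sample_fake, cond_ratio, y, n_keep, n_burn_in=5000, seed=0):
    """Subsample fake images conditional on a label y.

    sample_fake(y):  draws one fake image from G conditional on y
    cond_ratio(x, y): psi(phi(x)|y), the estimated p_r(x|y) / p_g(x|y)
    """
    rng = np.random.default_rng(seed)
    # Burn-in phase: estimate M, the max conditional density ratio at y.
    M = max(cond_ratio(sample_fake(y), y) for _ in range(n_burn_in))
    kept = []
    while len(kept) < n_keep:
        x = sample_fake(y)
        r = cond_ratio(x, y)
        M = max(M, r)                # grow M if a larger ratio appears
        if rng.uniform() <= r / M:   # accept x with probability r / M
            kept.append(x)
    return kept
```

Accepted samples are distributed closer to p_r(x|y) than raw generator draws, since high-ratio (under-represented) images are kept more often.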
A filtering scheme to increase label consistency in subsampling CcGANs
To solve the problem of insufficient data, CcGANs [8, 7] use images with labels in a vicinity of a conditioning regression label y to estimate p_r(x|y). Consequently, the actual labels of some images sampled from p_g(x|y) may be far from y (aka label inconsistency). Unfortunately, the current subsampling scheme alone cannot fix this problem (see Table 2). An intuitive solution to this issue is to filter out the fake images whose actual labels differ from y, but this filtering may substantially decrease the diversity of fake images.

Algorithm 2: Subsampling fake images with label y by cDR-RS.
Data: a generator G, a trained ResNet-34 or encoder φ(x) for feature extraction, a trained conditional density ratio model ψ(h|y).
Result: images = {N filtered fake images with label y}
1 Generate N burn-in fake images from G conditional on label y;
2 Estimate the density ratios of these N fake images conditional on y by evaluating ψ(φ(x)|y);
3 M ← max{N estimated density ratios};
4 images ← ∅;
5 while |images| < N do
6   x ← get a fake image with label y from G;
7   r̂ ← ψ(φ(x)|y); M ← max{M, r̂};
8   with probability r̂/M, images ← images ∪ {x};
9 end
To increase label consistency without losing diversity, we propose a filtering scheme for cDR-RS, inspired by the finding that, in the CcGAN sampling, we may obtain images whose actual label equals y from both p_g(x|y) and p_g(x|y′), where y′ ≠ y. We take the subsampling at y to illustrate the procedure of this filtering scheme. We first replace p_g(x|y) with p^{y,ζ}_g(x) as the density of the proposal distribution, where p^{y,ζ}_g(x) stands for the density function of the distribution of fake images with predicted labels in Y^ζ_y := [y − ζ, y + ζ] and ζ is a hyper-parameter. The predicted labels, assumed to be close to the fake images' actual labels, are obtained from the composition of the encoder and predictor in Fig. 2. Afterwards, the density ratio model r̂(x|y) is used to model p_r(x|y)/p^{y,ζ}_g(x), and it is trained on fake images with predicted labels in Y^ζ_y. In the sampling phase, before conducting the rejection sampling process (Fig. 3 and Alg. 2), we filter out fake images with predicted labels outside of Y^ζ_y. Please note that ζ controls the trade-off between label consistency and diversity. As shown in Fig. 6, a smaller ζ often leads to higher label consistency but lower diversity, while a larger ζ often leads to lower label consistency but higher diversity. We can adjust ζ according to our needs, but a good ζ should improve label consistency without decreasing diversity. In our experiment, ζ is set based on a vicinity parameter m_κ of CcGANs [8, 7]. Please refer to Supp. S.3 for a rule of thumb to select ζ.
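The filtering step itself is a one-liner once predicted labels are available from the encoder+predictor branch; a sketch with names of our choosing:

```python
import numpy as np

def filter_by_predicted_label(fake_images, y_pred, y, zeta):
    """Keep only fake images whose predicted regression label lies in
    the vicinity [y - zeta, y + zeta]; the survivors form the proposal
    pool on which the conditional density ratio model is trained and
    the rejection sampling of Alg. 2 is run."""
    y_pred = np.asarray(y_pred, dtype=float)
    keep = np.abs(y_pred - y) <= zeta
    return [img for img, k in zip(fake_images, keep) if k]
```

Shrinking zeta tightens the pool (higher label consistency, lower diversity); enlarging it does the opposite, mirroring the trade-off described above.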
Experiments
In this section, we empirically evaluate the efficiency and the effectiveness of the proposed cDR-RS scheme in subsampling class-conditional GANs and CcGANs. We compare cDR-RS with four state-of-the-art sampling methods, including GOLD [26] and DRE-F-SP+RS [9]. Image quality is measured by FID, Intra-FID [25], and Inception Score (IS) [29]. Intra-FID is an overall image quality metric, which computes FID separately for each class and reports the average FID score. A lower Intra-FID or FID score indicates better image quality, and vice versa. Oppositely, a larger Inception Score implies better image quality.

Experimental results: We quantitatively compare the image quality of fake images sampled from class-conditional GANs with different candidate methods. For the CIFAR-10 experiment, we draw 10,000 fake images for each class by each sampling method. For the CIFAR-100 and ImageNet-100 experiments, we sample 1000 fake images for each class. Quantitative results are summarized in Table 1. We can see that, in all settings, the proposed cDR-RS and DRE-F-SP+RS perform comparably, and they substantially outperform the other candidate methods by a large margin in terms of all three metrics. We also show in Fig. 4 some example fake images for the "indigo bunting" class sampled by Baseline, DRE-F-SP+RS, and cDR-RS, with real images as reference. Both DRE-F-SP+RS and cDR-RS can effectively remove some fake images with unrecognizable birds (marked by red rectangles). Although cDR-RS and DRE-F-SP+RS have similar effectiveness, their efficiency differs significantly. Our efficiency analysis on ImageNet-100, visualized in Fig. 1(a), shows that cDR-RS requires only 1.19% of the storage usage and 77% of the implementation time consumed by DRE-F-SP+RS. Furthermore, since cDRE-F-cSP is sample-based and does not rely on any properties of cGANs, cDR-RS is applicable to various cGAN architectures, making it more flexible than GOLD, Collab, DRS, and DDLS.
Figure 4: Some example images for the "indigo bunting" class at 128 × 128 resolution in the ImageNet-100 experiment. The first row includes six real images. From the second row to the bottom, we show some example fake images generated from Baseline, DRE-F-SP+RS [9], and the proposed cDR-RS, respectively. We often observe some fake images sampled by Baseline without recognizable birds (e.g., two images with red rectangles), while both DRE-F-SP+RS and cDR-RS can effectively remove such images.
Sampling from CcGANs
Experimental setup: Besides class-conditional GANs, we evaluate the performances of candidate methods in sampling from CcGANs. We experiment on the UTKFace [33] and RC-49 [8, 7] datasets, which are two benchmark datasets for CcGANs. UTKFace is an RGB human face image dataset with ages as regression labels. We use the pre-processed UTKFace dataset [8], consisting of 14,760 RGB images at 64 × 64 resolution with ages in [1, 60]. The number of images in UTKFace ranges from 50 to 1051 for different ages, and all images are used for training. RC-49 consists of 44,051 RGB images at 64 × 64 resolution for 49 types of chairs. Each type of chair contains 899 images labeled by 899 distinct yaw rotation angles from 0.1° to 89.9° with a step size of 0.1°. We first choose yaw angles with an odd number as the last digit for training. Then, for each chosen angle, we randomly select 25 images to construct a training set. The final training set of RC-49 includes 11,250 images and 450 distinct angles. All 44,051 RC-49 images are used in the experiment evaluation.
We follow the official implementation of CcGAN (SVDL+ILI) in [8, 7]. GOLD is not taken as a baseline method because of its incompatibility in subsampling CcGANs. DRE-F-SP+RS is tested on UTKFace but excluded from the RC-49 experiment because it cannot subsample images that are generated conditional on angles unseen in the training phase.

Table 1: Effectiveness analysis of different methods in sampling various class-conditional GANs in terms of Intra-FID, FID, and IS. For the experiments on CIFAR-10 and CIFAR-100, we report the quality of 100,000 fake images (10,000/1000 per class) sampled by each method. For the experiment on ImageNet-100, we evaluate the quality of 100,000 fake images (1000 per class) generated by each method. The numbers in the parentheses stand for the standard deviations of FIDs computed within each distinct class. The quality of real images from each dataset is also provided as a reference, where Intra-FID and FID are computed between training and test samples while IS is computed on test samples only. Please note that the test samples of ImageNet-100 are insufficient for computing a reliable Intra-FID. In all settings, cDR-RS is either better than or comparable to DRE-F-SP+RS [9], and these two methods substantially outperform the others.

Intra-FID is taken as the overall image quality metric, which computes FID separately at each evaluation angle and reports the average score (along with the standard deviation). Diversity measures the diversity of fake images generated conditional on a given label in terms of some categorical properties of the fake images (i.e., races for UTKFace and chair types for RC-49). Label Score evaluates label consistency, measuring the discrepancy between the actual and conditioning labels of fake images. Quantitatively, we prefer smaller Intra-FID, NIQE, and Label Score indices but larger Diversity values. Please see Supp. S.8 and S.9 for detailed definitions.

We show in Fig. 5 some example fake images for age 24 sampled by Baseline, DRE-F-SP+RS, and cDR-RS (Filter), with real images as reference. Fig. 5 shows the effectiveness of cDR-RS (Filter) and the failure of DRE-F-SP+RS. Furthermore, an efficiency analysis is also conducted on UTKFace and visualized in Fig. 1. This efficiency analysis shows that cDR-RS requires only reasonable computational resources but leads to substantial performance improvements. Moreover, in Table 2, we also show the performance of cDR-RS without the filtering scheme, i.e., cDR-RS (No Filter). We can see that, without filtering, cDR-RS cannot effectively solve the label inconsistency problem and even makes it worse. This observation validates the effectiveness of the proposed filtering scheme. In addition, we conduct an ablation study on UTKFace to analyze the effect of ζ in the filtering scheme; this analysis is visualized in Fig. 6.

Figure 6: The ablation study of ζ in the filtering scheme on UTKFace. The dashed red and blue lines denote the Label Score and Diversity reported in Table 2, respectively. The vertical grey line specifies the ζ used in Table 2. We can see both Label Score and Diversity increase as ζ increases. If we prefer higher label consistency, we can decrease ζ. Oppositely, if we prefer higher image diversity, we can increase ζ.
Conclusion
In this work, we have presented a novel conditional subsampling scheme to improve the image quality of fake images from cGANs. First, we propose novel conditional extensions of density ratio estimation (cDRE) in the feature space and the Softplus loss function (cSP). Then, we learn the conditional density ratio model through an MLP network. Also, we derive the error bound of a conditional density ratio model trained with the proposed cSP loss. A novel filtering scheme is also proposed in subsampling CcGANs to improve the label consistency. Finally, we validate the effectiveness of the proposed subsampling framework with extensive experiments in sampling multiple conditional GAN models on four benchmark datasets with diverse evaluation metrics.
S.2 Proof of Theorem 1

Theorem 1 provides an error bound for conditional density ratio models trained with the cSP loss. The term R̂_{q_r(h,y),N^r}(Ψ) on the right of Eq. (9) implies that a more complicated density ratio model has a larger error bound, i.e., a worse generalization performance. Thus, we should not use an overly complicated density ratio model. This supports our proposed cDRE-F-cSP because we only need a small neural network (e.g., a shallow MLP) to model the density ratio function in the feature space. Our proof of Theorem 1 follows the proof of Theorem 3 in [9].
Proof. Following [9], we decompose $L_c(\hat\psi) - L_c(\psi^*)$ as follows, where $\hat\psi$ minimizes the empirical loss $\hat L_c$ over $\Psi$ and $\tilde\psi = \arg\min_{\psi\in\Psi} L_c(\psi)$:

$$
\begin{aligned}
L_c(\hat\psi) - L_c(\psi^*)
={}& \big[L_c(\hat\psi) - \hat L_c(\hat\psi)\big] + \big[\hat L_c(\hat\psi) - \hat L_c(\tilde\psi)\big]
   + \big[\hat L_c(\tilde\psi) - L_c(\tilde\psi)\big] + \big[L_c(\tilde\psi) - L_c(\psi^*)\big] \\
\le{}& \big[L_c(\hat\psi) - \hat L_c(\hat\psi)\big] + \big[\hat L_c(\tilde\psi) - L_c(\tilde\psi)\big]
   + \big[L_c(\tilde\psi) - L_c(\psi^*)\big]
   \qquad \text{(since } \hat L_c(\hat\psi) - \hat L_c(\tilde\psi) \le 0\text{)} \\
\le{}& 2\sup_{\psi\in\Psi}\big|\hat L_c(\psi) - L_c(\psi)\big| + \big[L_c(\tilde\psi) - L_c(\psi^*)\big].
\end{aligned}
\tag{S.10}
$$
The second term in Eq. (S.10) is a constant that represents an inevitable approximation error. The first term can be bounded as follows:

$$
\begin{aligned}
\sup_{\psi\in\Psi}\big|\hat L_c(\psi) - L_c(\psi)\big|
\le{}& \sup_{\psi\in\Psi}\Big| \mathbb{E}_{(h,y)\sim q_g(h,y)}\big[\sigma(\psi(h|y))\psi(h|y) - \eta(\psi(h|y))\big] \\
& \qquad - \frac{1}{N_g}\sum_{i=1}^{N_g}\big[\sigma(\psi(h_i^g|y_i^g))\psi(h_i^g|y_i^g) - \eta(\psi(h_i^g|y_i^g))\big] \Big| \\
&+ \sup_{\psi\in\Psi}\Big| \mathbb{E}_{(h,y)\sim q_r(h,y)}\big[\sigma(\psi(h|y))\big] - \frac{1}{N_r}\sum_{i=1}^{N_r}\sigma(\psi(h_i^r|y_i^r)) \Big|.
\end{aligned}
\tag{S.11}
$$
Because of assumptions (i)-(iv) and the uniform law of large numbers [30], for every $\epsilon > 0$,

$$
\lim_{N_g\to\infty} P\Bigg( \sup_{\psi\in\Psi}\Big| \mathbb{E}_{(h,y)\sim q_g(h,y)}\big[\sigma(\psi(h|y))\psi(h|y) - \eta(\psi(h|y))\big] - \frac{1}{N_g}\sum_{i=1}^{N_g}\big[\sigma(\psi(h_i^g|y_i^g))\psi(h_i^g|y_i^g) - \eta(\psi(h_i^g|y_i^g))\big] \Big| > \epsilon \Bigg) = 0.
$$

Based on this limit, we can derive an upper bound on the first term of Eq. (S.11) as follows. Since we can generate infinitely many fake images from a trained cGAN, $N_g$ can be made arbitrarily large. Let $\epsilon = 1/(2N_g)$; then for every $\delta_1 \in (0,1)$, with probability at least $1-\delta_1$,

$$
\sup_{\psi\in\Psi}\Big| \mathbb{E}_{(h,y)\sim q_g(h,y)}\big[\sigma(\psi(h|y))\psi(h|y) - \eta(\psi(h|y))\big] - \frac{1}{N_g}\sum_{i=1}^{N_g}\big[\sigma(\psi(h_i^g|y_i^g))\psi(h_i^g|y_i^g) - \eta(\psi(h_i^g|y_i^g))\big] \Big| \le \frac{1}{2N_g}.
\tag{S.12}
$$

The second term of Eq. (S.11) can be bounded based on Lemma 1 and Theorem 2 (the Rademacher bound [19]) in [9] as follows: for every $\delta_2 \in (0,1)$, with probability at least $1-\delta_2$,

$$
\begin{aligned}
\Big| \mathbb{E}_{(h,y)\sim q_r(h,y)}\big[\sigma(\psi(h|y))\big] - \frac{1}{N_r}\sum_{i=1}^{N_r}\sigma(\psi(h_i^r|y_i^r)) \Big|
&\le 2\,\mathcal{R}_{q_r(h,y),N_r}(\sigma \circ \Psi) + \sqrt{\frac{4}{N_r}\log\frac{2}{\delta_2}} \\
&\le \frac{1}{2}\,\mathcal{R}_{q_r(h,y),N_r}(\Psi) + \sqrt{\frac{4}{N_r}\log\frac{2}{\delta_2}}.
\end{aligned}
\tag{S.13}
$$
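The final inequality in (S.13) uses the smoothness of $\sigma$. As a sketch (assuming, consistent with the Softplus-loss construction in [9], that $\sigma$ is the logistic sigmoid), its derivative satisfies $\sigma'(z) = \sigma(z)(1-\sigma(z)) \le 1/4$, so $\sigma$ is $(1/4)$-Lipschitz and Talagrand's contraction lemma yields:

```latex
\mathcal{R}_{q_r(h,y),N_r}(\sigma \circ \Psi)
  \le \tfrac{1}{4}\,\mathcal{R}_{q_r(h,y),N_r}(\Psi)
\quad\Longrightarrow\quad
2\,\mathcal{R}_{q_r(h,y),N_r}(\sigma \circ \Psi)
  \le \tfrac{1}{2}\,\mathcal{R}_{q_r(h,y),N_r}(\Psi).
```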
Let $\delta = \max\{\delta_1, \delta_2\}$; then, based on Eq. (S.12) and (S.13), we can derive Eq. (9).

Here, $y_{[l]}$ is the $l$-th smallest normalized distinct real label and $N^r_{uy}$ is the number of normalized distinct labels in the training set. [8,7] set $\kappa$ as a multiple of $\kappa_{base}$ (i.e., $\kappa = m_\kappa \kappa_{base}$), where the multiplier $m_\kappa$ stands for 50% of the minimum number of neighboring labels used for estimating $p_r(x|y)$ given a label $y$. For example, $m_\kappa = 1$ implies using 2 neighboring labels (one on the left and one on the right). In [8,7], $m_\kappa$ is often set as 1 or 2. For the soft vicinity, $\nu = 1/\kappa^2$ often works well.
S.3 A rule of thumb to select ζ in the filtering scheme
Based on the above rule of thumb, we propose a scheme to select the hyperparameter ζ of the filtering scheme in cDR-RS. Specifically, we let ζ = 3 × m_κ κ_base; in other words, we let the vicinity Y^ζ_y = [y − ζ, y + ζ] be three times wider than the hard vicinity in CcGANs. Our experiments in Section 4.2 show that this rule of thumb lets cDR-RS effectively increase label consistency without losing diversity.
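As a minimal sketch of this rule, assuming (as the fragment above suggests but does not fully spell out) that κ_base is the maximum gap between adjacent sorted normalized distinct training labels, ζ could be computed as:

```python
# Hedged sketch of the zeta rule of thumb: zeta = 3 * m_kappa * kappa_base.
# Assumption: kappa_base is the maximum gap between adjacent sorted
# normalized distinct labels in the training set.

def kappa_base(distinct_labels):
    """Maximum gap between adjacent sorted normalized distinct labels."""
    ys = sorted(set(distinct_labels))
    return max(b - a for a, b in zip(ys, ys[1:]))

def select_zeta(distinct_labels, m_kappa=1, widen=3):
    """Rule of thumb from the text: a vicinity 'widen' times the hard vicinity."""
    return widen * m_kappa * kappa_base(distinct_labels)

# Example: ages 1..60 normalized to (0, 1]; gaps are all 1/60,
# so kappa_base is about 0.0167 and zeta about 0.05 for m_kappa = 1.
ages = [a / 60 for a in range(1, 61)]
zeta = select_zeta(ages, m_kappa=1)
```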
As we discussed in Section 3.4, the filtering scheme is inspired by a finding that we may sample images with actual labels equal to y from both p g (x|y) and p g (x|y ), where y = y.
Unfortunately, in practice, it is difficult to know the actual label of a given fake image. Therefore, we treat the predicted labels of fake images as their actual labels. The predicted labels may also deviate from the actual labels, but such deviation is assumed to be insignificant, because supervised learning (i.e., training a regression CNN to predict labels) is usually considered less difficult than generative modeling (i.e., the CcGAN training). Although the prediction error may be small, it still exists and will not be zero. Therefore, when designing the proposal distribution in the rejection sampling, instead of using fake images with predicted labels exactly equal to y (i.e., setting ζ = 0), we consider fake images with predicted labels in a vicinity of y, i.e., Y^ζ_y. We expect that the actual labels of fake images in Y^ζ_y are equal or close to y. Furthermore, if we let ζ = 0, the training and sampling time may be very long. On UTKFace, we conduct an ablation study to show the effect of ζ on training and sampling; the results are summarized in Fig. S.3.7. We can see that a smaller ζ substantially increases the training and sampling time, which may make cDR-RS less efficient. Therefore, in practice, we usually use a relatively large ζ, e.g., 3 × m_κ κ_base.
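The filtering-plus-rejection loop described above can be sketched as follows. This is an illustrative sketch only; `generate_fake`, `predict_label`, and `density_ratio` are hypothetical placeholders for the trained generator, the regression CNN, and the learned conditional density ratio model.

```python
import random

# Hedged sketch of the cDR-RS sampling loop for a target label y: keep only
# fake images whose *predicted* labels fall in the vicinity [y - zeta, y + zeta]
# (the filtering scheme), then rejection-sample the survivors with probability
# r(x|y) / M, where r is the conditional density ratio and M >= max r.

def cdr_rs_sample(y, zeta, n_keep, generate_fake, predict_label, density_ratio,
                  n_burn_in=1000):
    # Estimate M on burn-in samples, as described for the workflow in Fig. 3.
    burn_in = [generate_fake(y) for _ in range(n_burn_in)]
    M = max(density_ratio(x, y) for x in burn_in)

    accepted = []
    while len(accepted) < n_keep:
        x = generate_fake(y)
        if abs(predict_label(x) - y) > zeta:            # filtering scheme
            continue
        if random.random() <= density_ratio(x, y) / M:  # rejection step
            accepted.append(x)
    return accepted
```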
S.4 Resources for Implementing cGANs and Sampling Methods
To implement ACGAN, we refer to https://github.com/sangwoomo/GOLD.

To implement SNGAN, we refer to https://github.com/christiancosgrove/pytorch-spectral-normalization-gan and https://github.com/pfnet-research/sngan_projection.

To implement BigGAN, we refer to https://github.com/ajbrock/BigGAN-PyTorch.

To implement CcGANs, we refer to https://github.com/UBCDingXin/improved_CcGAN.

To implement GOLD, we refer to https://github.com/sangwoomo/GOLD.

To implement Collab, we refer to https://github.com/YuejiangLIU/pytorch-collaborative-gan-sampling.

To implement DRS and DRE-F-SP+RS, we refer to https://github.com/UBCDingXin/DDRE_Sampling_GANs.

To implement DDLS, we refer to https://github.com/JHpark1677/CGAN-DDLS and https://github.com/Daniil-Selikhanovych/ebm-wgan/blob/master/notebook/EBM_GAN.ipynb.
S.5 More Details of Experiments on CIFAR-10
In the CIFAR-10 experiment, we first train ACGAN, SNGAN, and BigGAN with the following setups. We train ACGAN for 100,000 iterations with batch size 512. We train SNGAN for 50,000 iterations with batch size 512. BigGAN is trained for 39,000 iterations with batch size 512. We use the architecture described in Figure 15 of [2] for BigGAN. To implement Collab, following [21], we conduct discriminator shaping for 5000 and 2500 iterations for SNGAN and BigGAN, respectively. We do the refinement 30 times in a middle layer of SNGAN and 20 times in a middle layer of BigGAN. The step size of refinement is set as 0.1 for both SNGAN and BigGAN.
To implement DDLS, we run the Langevin dynamics procedure for SNGAN and BigGAN within each class with step size 10 −4 up to 1000 iterations.
To implement DRE-F-SP+RS, we first train the specially designed ResNet-34 on the training set for 350 epochs with the SGD optimizer, initial learning rate 0.1 (decayed at epoch 150 and 250 with factor 0.1), weight decay 10 −4 , and batch size 256. Ten MLP-5 models for modeling the density ratio function within each class are trained on the training set with the Adam optimizer [17], initial learning rate 10 −4 (decayed at epoch 100 and 250), batch size 256, 400 epochs, and λ = 10 −2 . The network architecture of MLP-5 is shown in Table S.5.3.
To implement cDR-RS, we use the specially designed ResNet-34 from the implementation of DRE-F-SP+RS to extract features from images. The MLP-5 used to model the conditional density ratio function is shown in Table S.5.4. It is trained with the Adam optimizer [17], initial learning rate 10^{-4} (decayed at epoch 80 and 150), batch size 256, 200 epochs, and λ = 10^{-2}. For a more accurate evaluation, we do not use the Inception-V3 [31] that was pre-trained on ImageNet [5] to compute Intra-FID, FID, and IS. Instead, following [9], we train Inception-V3 from scratch on CIFAR-10 to evaluate fake images.
Please refer to our codes for more detailed setups such as network architectures and hyperparameter settings.

In the CIFAR-100 experiment, we only test candidate sampling methods on BigGAN, because both ACGAN and SNGAN are unstable on CIFAR-100 and suffer from the mode collapse problem [12]. BigGAN is trained for 38,000 iterations with batch size 512 on the training set of CIFAR-100. We use the architecture described in Figure 15 of [2] for BigGAN. We also adopt DiffAugment [34] (a data augmentation method for GAN training with limited data) to improve the performance of BigGAN; the strongest data augmentation policy, "color,translation,cutout", is used for DiffAugment. To implement Collab, following [21], we conduct discriminator shaping for 2500 iterations for BigGAN. We do the refinement 30 times in a middle layer of the generator network of BigGAN. The step size of refinement is set as 0.1.
To implement DDLS, we run the Langevin dynamics procedure for BigGAN within each class with step size 10 −4 up to 1000 iterations.
To implement DRE-F-SP+RS, we first train the specially designed ResNet-34 on the training set for 350 epochs with the SGD optimizer, initial learning rate 0.1 (decayed at epoch 150 and 250 with factor 0.1), weight decay 10 −4 , and batch size 256. Ten MLP-5 models for modeling the density ratio function within each class are trained on the training set with the Adam optimizer [17], initial learning rate 10 −4 (decayed at epoch 100 and 250), batch size 256, 400 epochs, and λ = 10 −2 . The network architecture of MLP-5 is shown in Table S.5.3.
To implement cDR-RS, we use the specially designed ResNet-34 from the implementation of DRE-F-SP+RS to extract features from images. The MLP-5 used to model the conditional density ratio function is shown in Table S.5.4. It is trained with the Adam optimizer [17], initial learning rate 10^{-4} (decayed at epoch 80 and 150), batch size 256, 200 epochs, and λ = 10^{-2}. For a more accurate evaluation, we do not use the Inception-V3 [31] that was pre-trained on ImageNet [5] to compute Intra-FID, FID, and IS. Instead, following [9], we train Inception-V3 from scratch on CIFAR-100 to evaluate fake images.
Please refer to our codes for more detailed setups such as network architectures and hyperparameter settings.
S.7 More Details of Experiments on ImageNet-100

S.7.1 Setups of training, sampling, and evaluation
In the ImageNet-100 dataset, we implement BigGAN with the BigGAN-deep architecture described in Figure 16 of [2]. BigGAN is trained for 96,000 iterations with batch size 1024.
We also adopt DiffAugment [34] (a data augmentation method for the GAN training with limited data) to improve the performance of BigGAN. The strongest data augmentation policy, "color,translation,cutout", is used for DiffAugment.
To implement Collab, following [21], we conduct discriminator shaping for 3000 iterations for BigGAN. We do the refinement 16 times in a middle layer of the generator network of BigGAN. The step size of refinement is set as 0.5.
To implement DRS, we fine-tune the discriminator of Big-GAN for 5 epochs with batch size 128.
To implement DDLS, we run the Langevin dynamics procedure for BigGAN within each class with step size 10 −4 up to 1000 iterations.
To implement DRE-F-SP+RS, we first train the specially designed ResNet-34 on the training set for 350 epochs with the SGD optimizer, initial learning rate 0.1 (decayed at epoch 150 and 250 with factor 0.1), weight decay 10 −4 , and batch size 256. Ten MLP-5 models for modeling the density ratio function within each class are trained on the training set with the Adam optimizer [17], initial learning rate 10 −4 (decayed at epoch 100 and 250), batch size 256, 400 epochs, and λ = 10 −2 . The network architecture of MLP-5 is similar to Table S.5.3. Please note that, when implementing DRE-F-SP+RS, we use more epochs than cDR-RS does (400 epochs vs 200 epochs) to ensure that all density ratio models are well-trained.
To implement cDR-RS, we use the specially designed ResNet-34 from the implementation of DRE-F-SP+RS to extract features from images. The MLP-5 used to model the conditional density ratio function is similar to Table S.5.4. It is trained with the Adam optimizer [17], initial learning rate 10^{-4} (decayed at epoch 80 and 150), batch size 128, 200 epochs, and λ = 10^{-2}. For a more accurate evaluation, we fine-tune on the training set of ImageNet-100 [3] an Inception-V3 network that was pre-trained on ImageNet [5]. We fine-tune the Inception-V3 with the SGD optimizer, 50 epochs, batch size 128, and initial learning rate 10^{-4}. The fine-tuned Inception-V3 is then used to compute Intra-FID, FID, and IS.
S.7.2 More details of the efficiency analysis
In order to compare the efficiency of candidate sampling methods, we summarize their storage usage and training and sampling time in Table S.7.5, based on which we plot Fig. 1. Some pie charts are also plotted in Fig. S.7.8 to show more detailed storage usage and training time for DRE-F-SP+RS and cDR-RS. Please note that the storage usage here is the overall storage usage consumed by all models (e.g., the discriminator network in DRS and the density ratio models in cDR-RS) in each sampling method, except the generator network of the cGANs. For DRE-F-SP+RS, the 100 MLP-5 models take a lot of disk space (almost 40 GB), making it less efficient for subsampling class-conditional GANs with many classes. We use two NVIDIA V100 GPUs (32GB) for this analysis.

To implement Collab, we conduct discriminator shaping for 3000 iterations for CcGAN. We do the refinement 16 times in a middle layer of the generator network of CcGAN. The step size of refinement is set as 0.5.
To implement DRS, we fine-tune the discriminator of CcGAN for 2000 iterations with batch size 128.
To implement DDLS, we run the Langevin dynamics procedure for CcGAN for each age with step size 10 −4 up to 400 iterations. More iterations will make the sampling process too time-consuming.
To implement DRE-F-SP+RS, we first train the specially designed sparse AE on the training set for 200 epochs with the SGD optimizer, initial learning rate 0.01 (decayed every 50 epochs with factor 0.1), weight decay 10^{-4}, batch size 256, and sparsity parameter λ = 10^{-3}. The network architecture of this sparse AE is shown in Table S.8.6, Table S.8.7, and Table S.8.8. Sixty MLP-5 models for modeling the density ratio function at each age are trained on the training set with the Adam optimizer [17], initial learning rate 10^{-4} (decayed at epoch 100 and 250), batch size 256, 400 epochs, and λ = 10^{-2}. The network architecture of MLP-5 is similar to Table S.5.3. Similar to the above experiments, when implementing DRE-F-SP+RS, we use more epochs than cDR-RS does to let all density ratio models converge. Nevertheless, in this experiment, many density ratio models still do not converge, which may be one reason for the failure of DRE-F-SP+RS in subsampling CcGANs.
To implement cDR-RS, we use the specially designed sparse AE in the implementation of DRE-F-SP+RS to extract features from images. The MLP-5 to model the conditional density ratio function is similar to Table S.5.4. It is trained with the Adam optimizer [17], initial learning rate 10 −4 (decayed at epoch 80 and 150), batch size 256, 200 epochs, and λ = 10 −2 . The ζ in the filtering scheme of cDR-RS is set to be 0.1.
The models for the computation of Intra-FID, NIQE, Diversity, and Label Score are consistent with those used by [8,7]. We quote and rephrase the definitions of these metrics in [8,7] as follows.
• Intra-FID [25]: "We take Intra-FID as the overall score to evaluate the quality of fake images and we prefer the small Intra-FID score. At each evaluation angle, we compute the FID [14] between real images and 1000 fake images in terms of the bottleneck feature of the pre-trained AE."
• NIQE [23]: "NIQE is used to evaluate the visual quality of fake images with the real images as the reference and we prefer the small NIQE score."
• Diversity: "Diversity is used to evaluate the intra-label diversity and the larger the better." In UTKFace, there are 6 races. At each age, we ask a pre-trained classification-oriented ResNet-34 to predict the races of the 1000 fake images, and an entropy is computed based on these predicted races. The Diversity score is the average of the entropies computed over all ages.
• Label Score: "Label Score is used to evaluate the label consistency and the smaller the better." We ask the pre-trained regression-oriented ResNet-34 to predict the ages of all fake images and the predicted ages are then compared with the conditioning ages. The Label Score is defined as the average absolute distance between the predicted ages and conditioning ages over all fake images, which is equivalent to the Mean Absolute Error (MAE).
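As an illustrative sketch (not the evaluation code of the paper), the Label Score is simply the mean absolute error between predicted and conditioning labels:

```python
# Hedged sketch: Label Score as the mean absolute error (MAE) between the
# labels predicted for fake images and the labels they were conditioned on.

def label_score(predicted, conditioning):
    """Average absolute distance between predicted and conditioning labels."""
    assert len(predicted) == len(conditioning)
    return sum(abs(p - c) for p, c in zip(predicted, conditioning)) / len(predicted)

# Fake images conditioned on age 24 whose predicted ages drift upward:
# label_score([24, 26, 30], [24, 24, 24]) averages the errors 0, 2, and 6.
```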
Please refer to the official implementation of CcGANs at https://github.com/UBCDingXin/improved_CcGAN for more details.
S.8.2 Why the Diversity score is improved?
It may be counter-intuitive that the Diversity score increases even though cDR-RS may reject some generated images. Fig. S.8.9 illustrates why. In this figure, we visualize the distributions over 5 races of 1000 fake images sampled from Baseline and cDR-RS for age 36, respectively. To make the illustration clearer, we increase ζ to 0.183 so that the Label Scores of Baseline and cDR-RS are comparable and the improvement caused by cDR-RS focuses on Diversity. Please note that, in the evaluation of the UTKFace experiment, Diversity is defined as the entropy of the races of fake images predicted by a pre-trained classification CNN (races are taken as class labels). Therefore, a more balanced race distribution implies higher Diversity. From Fig. S.8.9, we can see that, after applying cDR-RS, the frequency of Race 1 decreases while the frequencies of the other races increase. Therefore, the Diversity score is improved.

Figure S.8.9: The distributions of fake images sampled from Baseline and cDR-RS, respectively, at age 36 over 5 races in the UTKFace experiment. After subsampling by cDR-RS, the distribution of fake images over the 5 races is more balanced, resulting in higher diversity.
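The entropy computation behind this observation can be sketched as follows; the lists of predicted races are hypothetical, chosen only to illustrate that a more balanced distribution yields higher entropy.

```python
from collections import Counter
from math import log

# Hedged illustration of why a more balanced race distribution raises the
# Diversity score: Diversity is the entropy of the predicted categories,
# and entropy is maximized by the uniform distribution.

def diversity(predicted_classes):
    """Entropy of the empirical distribution of predicted classes."""
    n = len(predicted_classes)
    probs = [c / n for c in Counter(predicted_classes).values()]
    return -sum(p * log(p) for p in probs)

skewed   = [1] * 80 + [2] * 5 + [3] * 5 + [4] * 5 + [5] * 5   # Race 1 dominates
balanced = [1] * 16 + [2] * 16 + [3] * 16 + [4] * 16 + [5] * 16
# diversity(balanced) > diversity(skewed)
```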
S.8.3 More details of the efficiency analysis
Similar to the ImageNet-100 experiment, we analyze the efficiency of all candidate methods and summarize the results in Table S.8.9 and Fig. S.8.10. Please note that the storage usage here is the overall storage usage consumed by all models (e.g., the discriminator network in DRS and the density ratio models in cDR-RS) in each sampling method, except the generator network of the cGANs. The 60 MLP-5 models take a lot of disk space for DRE-F-SP+RS. We use one NVIDIA V100 GPU (16GB) for this analysis.
S.9 More Details of Experiments on RC-49
Similar to the UTKFace experiment, we follow [8, 7] to implement CcGANs (SVDL+ILI) with the SNGAN architecture. The detailed setups can be found in [8,7] or our codes.
To implement Collab, we conduct discriminator shaping for 3000 iterations for CcGAN. We do the refinement 16 times in a middle layer of the generator network of CcGAN. The step size of refinement is set as 0.5.
To implement DRS, we fine-tune the discriminator of CcGAN for 1000 iterations with batch size 128.
DDLS is not implemented on RC-49 due to a too long sampling time.
DRE-F-SP+RS is not applicable to this scenario, where we need to generate images conditional on labels that are unseen in the training phase.
To implement cDR-RS, we first train the specially designed sparse AE on the training set for 200 epochs with the SGD optimizer, initial learning rate 0.01 (decayed every 50 epochs with factor 0.1), weight decay 10^{-4}, batch size 256, and sparsity parameter λ = 10^{-3}. The MLP-5 used to model the conditional density ratio function is similar to Table S.5.4. It is trained with the Adam optimizer [17], initial learning rate 10^{-4} (decayed at epoch 80 and 150), batch size 256, 200 epochs, and λ = 10^{-3}. The ζ in the filtering scheme of cDR-RS is set to 0.13.
In the testing phase, we generate 179,800 fake images (200 per angle) from each candidate method over 899 distinct labels. This experiment adopts four evaluation metrics: (i) Intra-FID [25], an overall image quality metric; (ii) the Naturalness Image Quality Evaluator (NIQE) [23], which evaluates the visual quality of fake images (note again that visual quality is only one aspect of image quality); (iii) Diversity, which measures the diversity of fake images; and (iv) Label Score (LS), which evaluates label consistency.
Specifically, the four metrics are computed as follows. (i) For the Intra-FID index, at each of the 899 angles (0.1° to 89.9°), we compute the FID [14] between 49 real images and 200 fake images in terms of the bottleneck feature of the pre-trained autoencoder. The Intra-FID score is the average FID over all 899 evaluation angles. (ii) For the NIQE index, we first fit an NIQE model to the 49 real rendered chair images at each of the 899 angles, which gives 899 NIQE models. We then compute an average NIQE score for each evaluated angle using the NIQE model at that angle. Finally, we report the average of the 899 average NIQE scores over the 899 yaw angles. The block size and the sharpness threshold are set to 8 and 0.1, respectively, in this experiment. We employ the built-in NIQE library in MATLAB. (iii) For the Diversity index, at each evaluation angle, we first use a pre-trained classification-oriented ResNet-34 to predict the chair types (49 types in total) of the 200 fake images. Then, an entropy value is computed based on the chair-type predictions at this angle. Finally, the Diversity index is defined as the average of the entropies at all 899 angles. (iv) For the Label Score index, at each evaluation angle, we first ask a pre-trained regression-oriented ResNet-34 to predict the yaw angles of all fake image samples, and the predicted angles are then compared with the assigned angles. The Label Score is defined as the average absolute distance between the predicted angles and assigned angles over all fake images.
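As a minimal illustration of the FID computations above (not the multivariate evaluation code, which requires a matrix square root of the covariances), the Fréchet distance between two Gaussians fitted to scalar features reduces to a closed form:

```python
from math import sqrt

# Hedged sketch: FID between two Gaussians fitted to 1-D feature sets.
# The paper computes FID on multivariate bottleneck features; the scalar case
# avoids the matrix square root:
#   (mu1 - mu2)^2 + sigma1^2 + sigma2^2 - 2 * sigma1 * sigma2.

def fid_1d(real_feats, fake_feats):
    def stats(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        return mu, sqrt(var)
    m1, s1 = stats(real_feats)
    m2, s2 = stats(fake_feats)
    return (m1 - m2) ** 2 + s1 ** 2 + s2 ** 2 - 2 * s1 * s2

# Identical feature sets give an FID of 0.
```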
Figure 2: The architecture of the sparse autoencoder (AE) for feature extraction in subsampling CcGANs. The bottleneck dimension equals the size of the flattened input image x.

Figure 3: The workflow of cDR-RS has two sequential modules: cDRE-F-cSP and rejection sampling. M in the acceptance probability p equals max_h{q_r(h)/q_g(h)}, which can be estimated by evaluating ψ(h|y) on some burn-in samples before subsampling.
Fig. 3 may still accept these fake images if they have good visual quality or can contribute to the diversity increase (see
Figure 5: Some example images for age 24 at 64×64 resolution in the UTKFace experiment. The first row includes ten real images. From the second row to the bottom, we show some example fake images generated from Baseline, DRE-F-SP+RS [9], and the proposed cDR-RS, respectively. By comparing Rows 2 and 4, we can see cDR-RS can effectively improve the visual quality. By contrast, DRE-F-SP+RS worsens the visual quality.
Figure 6: The effects of ζ in the filtering scheme of cDR-RS on the relation between Diversity and Label Score of CcGAN-generated samples in the UTKFace experiment. The dotted blue and red lines stand for the Diversity and Label Score of Baseline in Table 2, respectively. The vertical grey line specifies the ζ used in Table 2.
Figure S.3.7: The effect of ζ in the filtering scheme on the implementation time of cDR-RS. A larger ζ implies a shorter implementation time. The implementation time here consists of the training time of MLP-5 and the sampling time. The training time of the sparse AE for feature extraction is not influenced by ζ and is excluded from this analysis.
fc→ 2048, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 1024, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 512, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 256, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 128, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 1, ReLU

Table S.5.4: The 5-layer MLP for cDRE in feature space for CIFAR-10 and CIFAR-100. The embedded class label is appended to the extracted feature h.
Input: extracted feature h ∈ R^3072 and embedded class label y ∈ R^C, where C = 10 for CIFAR-10 and C = 100 for CIFAR-100
Concatenate [h, y] ∈ R^3082
fc→ 2048, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 1024, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 512, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 256, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 128, GN (8 groups), ReLU, Dropout(p = 0.5)
fc→ 1, ReLU

S.6 More Details of Experiments on CIFAR-100
, Collab [21], DRS [1], DDLS [4], and DRE-F-SP+RS [9]. For sampling methods proposed for unconditional GANs, we implement them for each distinct class or regression label. For a fair comparison, extensive experiments are conducted on multiple benchmark datasets and cGAN architectures, and diverse evaluation metrics are utilized to demonstrate the superiority of our cDR-RS method.

4.1 Sampling from class-conditional GANs

Experimental setup: For class-conditional GANs, we conduct experiments on three image datasets: CIFAR-10 [18], CIFAR-100 [18], and ImageNet-100 [3]. ImageNet-100 [3], a subset of ImageNet [5], has 128,503 RGB images at 128 × 128 resolution from 100 classes. In our experiment, we randomly split ImageNet-100 into a training set and a test set, where 10,000 images are for testing (on average 100 images per class) and the rest are for training. On CIFAR-10, we train three class-conditional GANs: ACGAN [28], SNGAN [24], and BigGAN [2]. On CIFAR-100 and ImageNet-100, we only implement BigGAN because the training of ACGAN and SNGAN is unstable. When sampling from ACGAN, we test four candidate methods, i.e., Baseline, GOLD [26], DRE-F-SP+RS [9], and cDR-RS. When sampling from the other GANs, we test six candidates, i.e., Baseline, Collab [21], DRS [1], DDLS [4], DRE-F-SP+RS [9], and cDR-RS. Please note that Collab, DRS, and DDLS are inapplicable to ACGAN, and Baseline refers to sampling without subsampling or refining. Please refer to Supp. S.5, S.6, and S.7 for detailed setups.

Evaluation metrics: We evaluate the quality of fake images by Fréchet Inception Distance (FID) [14]
Quantitative results: We quantitatively compare the performances of different candidate methods in sampling from CcGANs. For the UTKFace experiment, we sample 1000 fake images for each age by each method. For the RC-49 experiment, we sample 200 fake images for each of 899 distinct angles by each method (449 angles are unseen in the training set). The comparison results are shown in Table 2. We can see Collab does not take effect in sampling from CcGANs. DRS can improve NIQE in both experiments, but it fails to improve Diversity and Label Score simultaneously. DDLS improves visual quality in the UTKFace experiment, but it sacrifices Diversity for a slightly lower Label Score. The sampling time (11.24 hours) spent by DDLS is also much longer than others, and such a long sampling time makes DDLS infeasible in the RC-49 experiment. DRE-F-SP+RS performs worst among all methods, and it does not apply to RC-49. The proposed cDR-RS with the filtering scheme, denoted by cDR-RS (Filter), outperforms all other candidate methods on both datasets. It can effectively improve Label Score. Meanwhile, it also improves NIQE and Diversity scores.
Table 2: Effectiveness analysis of different sampling methods with CcGANs (SVDL+ILI) and two regression datasets in terms of Intra-FID, NIQE, Diversity, and Label Score. Values in the parentheses represent the standard deviation of evaluation scores reported at each distinct regression label. The proposed cDR-RS substantially outperforms other candidate methods on both datasets.
Method                      Intra-FID ↓     NIQE ↓          Diversity ↑     Label Score ↓
-- UTKFace --
Baseline                    0.457 (0.171)   1.722 (0.172)   1.303 (0.170)   7.403 (5.956)
Collab [21]                 0.457 (0.168)   1.722 (0.170)   1.298 (0.180)   7.420 (6.030)
DRS [1]                     0.455 (0.169)   1.707 (0.176)   1.282 (0.185)   7.487 (6.105)
DDLS [4]                    0.445 (0.166)   1.712 (0.172)   1.288 (0.180)   7.202 (5.888)
DRE-F-SP+RS [9]             0.588 (0.657)   1.724 (0.173)   1.293 (0.184)   7.462 (6.046)
Ours: cDR-RS (No filter)    0.443 (0.183)   1.703 (0.168)   1.348 (0.139)   7.528 (6.125)
Ours: cDR-RS (Filter)       0.430 (0.176)   1.708 (0.169)   1.307 (0.208)   6.317 (5.026)
-- RC-49 --
Baseline                    0.389 (0.096)   1.759 (0.168)   2.950 (0.067)   1.938 (1.484)
Collab [21]                 0.389 (0.095)   1.762 (0.174)   2.952 (0.067)   1.939 (1.487)
DRS [1]                     0.360 (0.097)   1.751 (0.149)   2.853 (0.132)   1.855 (1.404)
Ours: cDR-RS (Filter)       0.334 (0.095)   1.756 (0.163)   3.048 (0.090)   1.114 (0.885)
Table S.5.3: The 5-layer MLP for DRE in feature space for CIFAR-10 and CIFAR-100.
Input: extracted feature h ∈ R^3072
S.8 More Details of Experiments on UTKFace

S.8.1 Setups of training, sampling, and evaluation

We follow [8,7] to implement CcGANs (SVDL+ILI) with the SNGAN architecture. The detailed setups can be found in [8,7] or our codes.
Table S.7.5: Efficiency analysis of different sampling methods on ImageNet-100 based on two NVIDIA V100 GPUs. For DRE-F-SP+RS and cDR-RS, the training time includes the time spent on the ResNet-34 training and the MLP-5 network training.

Methods         Total storage usage (GB)   Total training time (hours)   Total sampling time (hours)   Total implementation time (hours)
Collab          0.13                       5.58                          5.23                          10.81
DRS             0.13                       1.47                          0.75                          2.22
DDLS            0.13                       0                             218.05                        218.05
DRE-F-SP+RS     38.94                      84.41                         2.43                          86.84
cDR-RS          0.75                       65.69                         1.18                          66.87

Table S.8.6: The architecture of the encoder in the sparse autoencoder for extracting features from 64 × 64 RGB images. In convolutional (Conv) operations, ch denotes the number of channels, and k/s/p denote the kernel size, stride, and amount of padding, respectively.
Table S.8.8: The architecture of the label prediction branch in the sparse autoencoder for 64 × 64 images.
Input: extracted sparse features h ∈ R^12288 from Table S.8.6.
fc→ 1024, BN, ReLU
fc→ 512, BN, ReLU
fc→ 256, BN, ReLU
fc→ 1, ReLU
Output: the predicted label ŷ
Table S.8.9: Efficiency analysis of different sampling methods on UTKFace based on one NVIDIA V100 GPU. For DRE-F-SP+RS and cDR-RS, the training time includes the time spent on the sparse AE training and the MLP-5 network training.

Methods         Total storage usage (MB)   Total training time (hours)   Total sampling time (hours)   Total implementation time (hours)
Collab          82.8                       1.05                          0.27                          1.32
DRS             82.8                       0.16                          0.13                          0.29
DDLS            82.8                       0                             11.24                         11.24
DRE-F-SP+RS     6,671                      1.92                          5.69                          7.61
cDR-RS          303                        1.99                          0.49                          2.48
Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, and Augustus Odena. Discriminator rejection sampling. In International Conference on Learning Representations, 2019.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.

Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In International Conference on Learning Representations, 2018.

Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, and Jinwoo Shin. Mining GOLD samples for conditional GANs. Advances in Neural Information Processing Systems, 32:6170-6181, 2019.

Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2018.

Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In International Conference on Machine Learning, pages 2642-2651, 2017.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 2234-2242, 2016.

Xiaoxia Shi. Some useful asymptotic theory. (Link).

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016.

Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In International Conference on Machine Learning, pages 7354-7363, 2019.

Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5810-5818, 2017.
13 (a) Storage Usage (GB) for DRE-F-SP+RS (b) Storage Usage (GB) for cDR-RS. Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, Song Han, arXiv:2006.10738arXiv preprintDifferentiable augmentation for data-efficient GAN trainingShengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient GAN training. arXiv preprint arXiv:2006.10738, 2020. 13 (a) Storage Usage (GB) for DRE-F-SP+RS (b) Storage Usage (GB) for cDR-RS
Training Time (hours) for DRE-F-SP+RS (d) Training Time (hours) for cDR-RS. Training Time (hours) for DRE-F-SP+RS (d) Training Time (hours) for cDR-RS
Total Implementation Time (hours) for DRE-F-SP+RS (f) Total Implementation Time (hours) for cDR-RS. Total Implementation Time (hours) for DRE-F-SP+RS (f) Total Implementation Time (hours) for cDR-RS
S Figure, Pie charts for the efficiency analysis of DRE-F-SP+RS and cDR-RS on ImageNet-100. Input: an RGB image x ∈ R 64×64×3. Figure S.7.8: Pie charts for the efficiency analysis of DRE-F-SP+RS and cDR-RS on ImageNet-100. Input: an RGB image x ∈ R 64×64×3 .
BN, ReLU Conv (ch → 64, k4/s2/p1), BN, ReLU Conv (ch → 128, k3/s1/p1), BN, ReLU Conv (ch → 128, k4/s2/p1), BN, ReLU Conv (ch → 256, k3/s1/p1), BN, ReLU Conv (ch → 256, k4/s2/p1), BN, ReLU Conv (ch → 256, k3/s1/p1). ; Conv, Relu Bn, Conv, k3/s1/p1). 64ch → 64, k4/s2/p1. ReLU Flatten: 64 × 4 × 4 × 4 → 4096 fc→ 12288, ReLU Output: extracted sparse features h ∈ R 12288Conv (ch → 64, k4/s2/p1), BN, ReLU Conv (ch → 64, k3/s1/p1), BN, ReLU Conv (ch → 64, k4/s2/p1), BN, ReLU Conv (ch → 128, k3/s1/p1), BN, ReLU Conv (ch → 128, k4/s2/p1), BN, ReLU Conv (ch → 256, k3/s1/p1), BN, ReLU Conv (ch → 256, k4/s2/p1), BN, ReLU Conv (ch → 256, k3/s1/p1), BN, ReLU Flatten: 64 × 4 × 4 × 4 → 4096 fc→ 12288, ReLU Output: extracted sparse features h ∈ R 12288
7: The architecture of the decoder in the sparse autoencoder for reconstructing 64 × 64 input images from extracted features. In transposed-convolutional (ConvT) operations, ch denotes the number of channels, k/s/p denote kernel size, stride and number of padding, respectively. S Table, Input: extracted sparse features h ∈ R 12288Table S.8.7: The architecture of the decoder in the sparse autoencoder for reconstructing 64 × 64 input images from extracted features. In transposed-convolutional (ConvT) opera- tions, ch denotes the number of channels, k/s/p denote kernel size, stride and number of padding, respectively. Input: extracted sparse features h ∈ R 12288
BN, ReLU ConvT (ch → 128, k4/s2/p1), BN, ReLU ConvT (ch → 64, k3/s1/p1), BN, ReLU ConvT (ch → 64, k4/s2/p1), BN, ReLU ConvT (ch → 64, k3/s1/p1), BN, ReLU ConvT (ch → 64, k4/s2/p1), BN, ReLU ConvT (ch → 64, k3/s1/p1), BN, ReLU ConvT (ch → 3, k1/s1/p0). ; Convt, Relu Bn, Convt, Tanh Output: a reconstructed image x ∈ R 64×64×3 (a) Storage Usage (MB) for DRE-F-SP+RS (b) Storage Usage (MB) for cDR-RS. 128k3/s1/p1)ConvT (ch → 256, k4/s2/p1), BN, ReLU ConvT (ch → 128, k3/s1/p1), BN, ReLU ConvT (ch → 128, k4/s2/p1), BN, ReLU ConvT (ch → 64, k3/s1/p1), BN, ReLU ConvT (ch → 64, k4/s2/p1), BN, ReLU ConvT (ch → 64, k3/s1/p1), BN, ReLU ConvT (ch → 64, k4/s2/p1), BN, ReLU ConvT (ch → 64, k3/s1/p1), BN, ReLU ConvT (ch → 3, k1/s1/p0), Tanh Output: a reconstructed image x ∈ R 64×64×3 (a) Storage Usage (MB) for DRE-F-SP+RS (b) Storage Usage (MB) for cDR-RS
Training Time (hours) for DRE-F-SP+RS (d) Training Time (hours) for cDR-RS. Training Time (hours) for DRE-F-SP+RS (d) Training Time (hours) for cDR-RS
Total Implementation Time (hours) for DRE-F-SP+RS (f) Total Implementation Time (hours) for cDR-RS. Total Implementation Time (hours) for DRE-F-SP+RS (f) Total Implementation Time (hours) for cDR-RS
10: Pie charts for the efficiency analysis of DRE-F-SP+RS and cDR-RS on UTKFace. S Figure, Figure S.8.10: Pie charts for the efficiency analysis of DRE-F-SP+RS and cDR-RS on UTKFace.
| [
"https://github.com/pfnet-research/sngan_",
"https://github.com/YuejiangLIU/",
"https://github.com/UBCDingXin/DDRE_",
"https://github.com/Daniil-Selikhanovych/",
"https://github.com/UBCDingXin/improved_"
] |
[
"Blueprinting Nematic Glass: Systematically Constructing and Combining Active Points of Curvature for Emergent Morphology"
] | [
"C D Modes \nCavendish Laboratory\nUniversity of Cambridge\n19 JJ Thomson AvenueCB3 0HECambridgeU.K\n",
"M Warner \nCavendish Laboratory\nUniversity of Cambridge\n19 JJ Thomson AvenueCB3 0HECambridgeU.K\n"
] | [
"Cavendish Laboratory\nUniversity of Cambridge\n19 JJ Thomson AvenueCB3 0HECambridgeU.K",
"Cavendish Laboratory\nUniversity of Cambridge\n19 JJ Thomson AvenueCB3 0HECambridgeU.K"
] | [] | Much recent progress has been made in the study of nematic solids, both glassy and elastomeric, particularly in the realm of stress-free, defect-driven deformation in thin sheets of material. In this paper we consider a subset of texture domains in nematic glasses that are simple to synthesize, and explore the ways that these simple domains may be compatibly combined to yield analogs of the traditional smooth disclination defect textures seen in standard liquid crystals. We calculate the deformation properties of these constructed textures, and show that, subject to the compatibility constraints of the construction, these textures may be further combined to achieve shape blueprinting of 3-D structures from flat sheets. | 10.1103/physreve.84.021711 | [
"https://arxiv.org/pdf/1105.1116v1.pdf"
] | 35,053,125 | 1105.1116 | 000b270eb9277f782dba11c613d51824bbf95fc5 |
Blueprinting Nematic Glass: Systematically Constructing and Combining Active Points of Curvature for Emergent Morphology
5 May 2011
C D Modes
Cavendish Laboratory
University of Cambridge
19 JJ Thomson Avenue, CB3 0HE, Cambridge, U.K.
M Warner
Cavendish Laboratory
University of Cambridge
19 JJ Thomson Avenue, CB3 0HE, Cambridge, U.K.
5 May 2011 (Dated: January 20, 2013)
PACS numbers: 61.30.Jf, 46.32.+x, 46.70.De, 46.70.Hg
Much recent progress has been made in the study of nematic solids, both glassy and elastomeric, particularly in the realm of stress-free, defect-driven deformation in thin sheets of material. In this paper we consider a subset of texture domains in nematic glasses that are simple to synthesize, and explore the ways that these simple domains may be compatibly combined to yield analogs of the traditional smooth disclination defect textures seen in standard liquid crystals. We calculate the deformation properties of these constructed textures, and show that, subject to the compatibility constraints of the construction, these textures may be further combined to achieve shape blueprinting of 3-D structures from flat sheets.
I. INTRODUCTION
The prospect of pre-programming a desired shape transformation in a material that may be remotely activated at a later time has understandably been the source of much recent interest, from engineering sheets with activated folds [1] to attempts at understanding how nature accomplishes its vast spectrum of morphologies [2,3]. The primary difficulty with such programming on an initially flat sheet lies in moving to target shapes beyond simple folding or crumpling, characterized by d-cones and developable geometry [4]. By requiring non-developable results one must be prepared to deal either with a high stretch or compressional cost, or find a way to pre-program a change in the material's local metric geometry. This latter option may be attempted, as it often occurs in nature, through differential growth rates inside the material itself [5], however, such a system is irreversible, difficult to program in advance, difficult to activate controllably, and hence not amenable to device applications. Other approaches to the problem include the use of gels [6] or fluid membranes [7] but similar issues appear in these cases, coupled with the disadvantage of fluid membranes and gels being less robust materials than desirable for use in shape-programmable devices. This paper will propose a new avenue to blueprinting for the purpose of broad morphology control in a thin solid sheet that circumvents these difficulties and results in stress-free final states by taking advantage of materials with local orientational order.
Liquid crystalline solids undergo macroscopic elongations and contractions in response to heat, light, pH, and other stimuli that change the molecular order. Most studied are nematic glasses [8] and elastomers [9]. Both have spontaneous deformation gradients, λ_ij = ∂x_i/∂x⁰_j, transforming reference-space points x⁰ to target points x, that are of the form

λ = (λ − λ^{−ν}) n n + λ^{−ν} δ    (1)
that is, an extension/contraction λ along the director n and a contraction/extension λ^{−ν} perpendicular to n, for λ > 1 / λ < 1 respectively. By analogy to the elastic case, ν is what we call an opto-thermal Poisson ratio that relates the perpendicular to the parallel response. Thus λ is a uniaxial distortion when it is spontaneous and not associated with any subsequent stresses that distort the body away from the new natural state. For glasses ν ∈ (1/2, 2) [8] while elastomers (rubbers) have ν = 1/2. Rubbers have spectacular stimuli responses λ ∈ (0.5, 4), that is, up to 400% opto-thermal strains. Their directors are mobile in a fluid-like way; in fact the rotation of n, in placing the longer dimension of the solid along the direction of imposed elongation, allows shape change without energy cost [10,11]. Glasses, on the other hand, have shear moduli comparable to their compressional resistance (∼10^9 Pa) and their opto-thermal elongations/contractions are up to ±4%, that is λ ∈ (0.96, 1.04). The loss in dramatic elastic sensitivity to stimuli is compensated by the fact that their directors are anchored to the polymer matrix, at most convecting with the matrix as it is distorted. This allows for a feasible patterning of the director field at the initial time of cross-linking and the subsequent guarantee that the chosen pattern will not be erased by the soft elasticity present in the rubber.
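The action of Eq. (1) can be made concrete with a short numerical sketch. The values λ = 0.96, ν = 1 below are hypothetical, chosen from the glassy ranges quoted above, and the function names are ours, not the paper's:

```python
import math

def deformation_gradient(lam, nu, phi):
    """2x2 spontaneous deformation gradient of Eq. (1):
    lambda_ij = (lam - lam**-nu) n_i n_j + lam**-nu delta_ij,
    for an in-plane director n at angle phi."""
    n = (math.cos(phi), math.sin(phi))
    a, b = lam - lam**-nu, lam**-nu
    return [[a*n[0]*n[0] + b, a*n[0]*n[1]],
            [a*n[1]*n[0],     a*n[1]*n[1] + b]]

def apply(F, v):
    """Matrix-vector product F v for the 2x2 gradient above."""
    return (F[0][0]*v[0] + F[0][1]*v[1], F[1][0]*v[0] + F[1][1]*v[1])

# Illustrative glassy values (hypothetical): 4% contraction along n, nu = 1.
lam, nu, phi = 0.96, 1.0, 0.3
F = deformation_gradient(lam, nu, phi)
n = (math.cos(phi), math.sin(phi))          # director
p = (-math.sin(phi), math.cos(phi))         # director perpendicular
stretch_par  = math.hypot(*apply(F, n))     # factor lam along n
stretch_perp = math.hypot(*apply(F, p))     # factor lam**-nu perpendicular to n
```

A unit vector along n is scaled by λ and one perpendicular to n by λ^{−ν}, exactly the uniaxial distortion described in the text.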
It is in the spirit of this 'written-in' patterning that we have previously addressed the many routes to a cantilever-style actuator [12] along with an in-depth treatment of such actuators whose properties are facilitated by the patterning of splay-bend or twist textures through the thickness of the material [13]. Widening the net to allow for nematic director spatial variation across the surface of a thin sheet opens the door to emergent Gaussian curvature that manifests in a controllable way, growing conical shapes from +1 disclination defects [14,15]. In this paper we will continue the approach to a realizable blueprint for actively switching a flat sheet of nematic glass from its nascent, developable state to a curved or faceted and potentially complex shape. Due to the extreme smallness of the only inherent length scales in the problem -those governed by the Frank energies, of order 10s of nm -and the relative smallness of the practical length scales -such as how thin a sheet may feasibly be manufactured, of order microns -any such blueprinted sheet could be envisaged as useful in applications from remotely operable peristaltic pumps in microfluidic circuits to macroscopic shape adaptation.
In lieu of considering the full realm of all possible two-dimensional nematic director fields, we choose to concentrate instead on those textures that are locally simplest to pattern and hence most amenable to application. The textures we allow, therefore, are either those with locally constant director field or those with a locally circular field; the use of masking in the preparation stages allows for regions of these types of textures to be joined with themselves, and with one another, to the desired effect. Notice that, although the topological charge in the +1 director field is intimately related to the appearance of conical curvature in those textures, the rest of the zoo of the traditional types of disclination charges in 2D nematics is disallowed by this restriction, as the smoothly varying realizations of these defects that minimize the Frank energy are highly non-trivial to manifest in a controllable way for patterning, particularly in groups of more than one. Furthermore, a simple symmetry argument shows that the emergent shape behavior of the smooth Frank-minimizing defects must exhibit more than simple point-like sources of Gaussian curvature. Consider such a Frank-minimizing defect field of disclination charge m. In the one-constant approximation this field may be defined simply by [16]:
φ = mθ + δ    (2)
where φ is the director direction, θ is the polar angle, and δ is an arbitrary phase. If we now locally rotate the director at each point by an amount ∆φ, the texture becomes φ + ∆φ = mθ + δ. So long as m ≠ 1, this form may be recast as a global, solid-body rotation:

φ − ∆φ/(m − 1) = m [θ − ∆φ/(m − 1)] + δ    (3)
implying that local and global rotations for these defected textures are equivalent. In particular, a local rotation of π/2 is also equivalent to interchanging the roles of heating and cooling, as the director and perpendicular directions are swapped (see remarks following Eq. 1). Thus any shapes emergent from these textures must be the same up to solid-body rotations upon either heating or cooling from the flat state. As a consequence, simple point charges of Gaussian curvature, which produce rotationally distinct results upon heating and cooling, are disallowed, and more complicated morphologies must result from smooth Frank-minimizing defects with m ≠ 1. Indeed, by considering the metric tensor for the distorted space, g = λ^T λ, one may show that the Gaussian curvature is indeed distributed. In the coordinates of the reference space, it is:

κ = m(m − 1) [λ^{2(1+ν)} − 1] / (r^2 λ^2) · cos[2(m − 1)θ]    (4)
for a Frank-minimizing defect of charge m [17].
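Both the rotation argument and Eq. (4) are easy to check numerically. The sketch below uses our own naming and a sign convention chosen for the check; it confirms that for m ≠ 1 a uniform local rotation reproduces a solid-body rotation, and that κ vanishes both for m = 1 (where curvature concentrates at a point instead of being distributed) and for λ = 1 (no strain):

```python
import math

def director(m, theta, delta=0.0):
    """Frank-minimising defect texture of Eq. (2): phi(theta) = m*theta + delta."""
    return m*theta + delta

def kappa(m, lam, nu, r, theta):
    """Distributed Gaussian curvature of the strained charge-m texture, Eq. (4),
    in reference-space polar coordinates (r, theta)."""
    return (m*(m - 1)*(lam**(2*(1 + nu)) - 1)
            / (r*r*lam*lam) * math.cos(2*(m - 1)*theta))

# Symmetry argument: for m != 1, rotating every director locally by dphi is
# the same texture seen after a solid-body rotation by beta = dphi/(1 - m).
m, delta, dphi = -0.5, 0.2, 0.7
beta = dphi/(1.0 - m)
mismatch = max(
    abs((director(m, th, delta) + dphi)            # local rotation by dphi
        - (director(m, th - beta, delta) + beta))  # solid-body rotation by beta
    for th in [2*math.pi*k/12 for k in range(12)])
```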
In their stead, we will construct "piece-wise constant" stand-ins that play the same role, and in so doing demonstrate that our restricted set of director patterns is enough to allow for the kind of active material blueprinting we seek.
We will take a constructionist view to the understanding of this class of textures and the way they can be combined with one another, first exploring possible component pieces and then synthesizing simple textures, and finally complicated combinations from them. Hence, the basic building blocks of the larger, complete textures will be addressed in Section II, and an examination of the point-defected textures that can be constructed from them follows in Section III. Section IV presents a guide to the intuition for the purposes of designing a switchable shape blueprint from multiple such constructed defects and then considers some examples of practically and theoretically relevant nematic glass textures that can be constructed through the combination of these point defects. We conclude and discuss in Section V.
II. ELEMENTAL BUILDING BLOCKS FOR POINT DEFECTS
Following this constructionist philosophy, we begin by considering the constituent pieces from which we will compose more interesting textures. Since we expect to be able to fit these pieces together, initially, around a point to create a point-defected structure, we consider wedges of material with wedge angle, θ. Under spontaneous strain, the wedges we consider will deform in a self-similar way with respect to their director pattern, possibly allowing for a change in θ.
A. Slices of Cone/Anti-cone Textures
FIG. 1: Three representative textured wedges whose angular extent varies with imposed spontaneous strain. The nematic director lies along the lines shown. On the left, the nematic director lies along concentric circles and the texture is simply cut from one considered in previous work. In the middle, the wedge contains a line of rank-1 connection of the nematic director and on the right the director is trivially aligned normal to the line bisecting the wedge angle. All three cases may also occur with a director field perpendicular to that shown: radial director lines on the left, rank-1 connected with the cusp pointing away from the wedge tip and the director normal to the wedge boundaries in the middle, and with the director aligned along the wedge angle's bisector on the right.

The first such wedge we consider is simply a slice taken from a pattern of concentric circles (Fig. 1, left). The deformation and strain-response of a complete 2π texture of such a "wedge" is well-characterized by the adoption of conical or anti-conical shapes [14], where in the conical case the Gaussian curvature is related to the resulting cone's opening angle by K = 2π(1 − sin φ_c). Since a point charge of Gaussian curvature can be thought of as an angular deficit or surplus around that point, we may expect that a partial wedge of this texture must exhibit a change in its wedge angle upon spontaneous strain. Indeed, consider an arc of the material a distance r from the wedge tip such that the director always lies along a tangent. Initially, the length of this arc is simply rθ. Because this arc always coincides with the director, after strain its new natural state will have a length of s′ = λrθ. Meanwhile, the radii to this arc from the tip of the wedge coincide with the perpendicular to the director, and hence, the new distance from wedge tip to arc is r′ = λ^{−ν} r. The new wedge angle is thus:
θ′ = s′/r′ = λ^{1+ν} θ    (5)
This is consistent with the conclusions drawn about a full texture of concentric circles [14], as taking θ = 2π here leads to an angular deficit, and hence Gaussian curvature, of 2π(1 − λ^{1+ν}) as required. Note also that this wedge texture may be replaced by one in which the director lies everywhere along radii emanating from the wedge tip and the director perpendiculars lie along concentric circles, with no change in the conclusions other than a reversal of the effect of the strain, i.e. θ′ = λ^{−1−ν} θ.
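Equation (5) and its radial counterpart can be checked in a few lines; λ = 0.96, ν = 1 are hypothetical illustrative values and the function name is ours:

```python
import math

def circular_wedge_angle(theta, lam, nu):
    """Eq. (5): a wedge of concentric circular director lines maps its
    wedge angle theta to lam**(1 + nu) * theta under spontaneous strain."""
    return lam**(1 + nu)*theta

# Hypothetical glassy values: 4% contraction along the director, nu = 1.
lam, nu = 0.96, 1.0
# A full 2*pi of this texture concentrates K = 2*pi*(1 - lam**(1+nu)) at the tip,
deficit = 2*math.pi - circular_wedge_angle(2*math.pi, lam, nu)
# i.e. a cone with opening angle sin(phi_c) = lam**(1+nu).
phi_c = math.asin(lam**(1 + nu))
# The radial (perpendicular) texture responds with the inverse factor:
surplus_angle = lam**-(1 + nu)*(2*math.pi)
```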
B. Rank-1 Connected Wedges
Next, we consider a wedge adorned with the director pattern shown in Figure 1, middle. This "rank-1 connected" wedge is characterized by two regions of simple parallel director fields joined across the wedge-bisecting line (dashed line in Fig. 1). Note that, in order for the resultant strains to be compatible, the angle at which each of the two separate regions meets the bisecting line must be the same [18]. This condition is known as rank-1 connectedness (Fig. 2a). Note also that if the wedge angle is θ, then by rank-1 connectedness the angle between the director and the bisecting line is θ/2.

FIG. 2: Rank-1 connection. A disc of material is adorned with a rank-1 connected director pattern in (a), with the director field meeting the boundary of rank-1 connection at angle α_0. Upon the application of spontaneous strain (b), the angle across this boundary must change to α_1, and the disc deforms to a stylized heart shape.

How does spontaneous strain affect such an object? Consider a right triangle formed with the nematic director lying along one side, the director perpendicular lying along another, and the hypotenuse lying along the wedge-bisecting line. The angle between the hypotenuse and the director side is, as just stated, θ/2. Prior to the imposition of spontaneous strain, let the length of the director side be p, and that of the director-perpendicular side, q. After strain, these sides will deform simply to lengths of λp and λ^{−ν} q, respectively. Therefore the new half-wedge angle is related to the original by:
tan(θ′/2) = λ^{−1−ν} tan(θ/2)    (6)
θ′ = 2 tan^{−1}[λ^{−1−ν} tan(θ/2)]    (7)
As in the case of the wedge textured with concentric circles, the spontaneous strain gives rise to a change in the angular extent of the wedge (Fig. 2b). An initial flat state composed of 2π radians worth of such wedges would hence develop an angular deficit or surplus after spontaneous strain and exhibit the conical (or anti-conical, respectively) behavior associated with a point charge of Gaussian curvature. Note that a rank-1 connected wedge patterned with a director field perpendicular to that considered, such that the director is normal to the wedge boundaries, will distort its wedge angle in the opposite sense with respect to λ:

θ′ = 2 tan^{−1}[λ^{1+ν} tan(θ/2)]    (8)
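Equations (7) and (8) can be exercised numerically; the wedge angle π/3 and the material values λ = 0.96, ν = 1 below are illustrative assumptions, and the function name and flag are ours:

```python
import math

def rank1_wedge_angle(theta, lam, nu, director_normal_to_edges=False):
    """Eqs. (7)/(8): strained wedge angle of a rank-1 connected wedge.
    The flag selects the perpendicular texture (director normal to the wedge
    boundaries), which flips the sign of the exponent of lam."""
    p = (1 + nu) if director_normal_to_edges else -(1 + nu)
    return 2*math.atan(lam**p*math.tan(theta/2))

# Hypothetical values: a pi/3 wedge under 4% contraction along the director.
lam, nu, theta = 0.96, 1.0, math.pi/3
opened = rank1_wedge_angle(theta, lam, nu)                                 # Eq. (7)
closed = rank1_wedge_angle(theta, lam, nu, director_normal_to_edges=True)  # Eq. (8)
```

For λ < 1 the first orientation widens the wedge and the second narrows it, the opposite senses described in the text.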
C. Triangle Wedges
Finally, consider the trivial, or "triangle", wedge pictured on the right of Figure 1, adorned entirely with one region of a simply parallel director field. In this case, it is easy to see intuitively that the wedge angle must change under spontaneous strain, as, for example, the director lines shorten and the perpendicular lines elongate. In order to quantify this intuition, consider again a right triangle, analogously to the middle case, but this time with vertex at the wedge tip, one side along the wedge-bisecting line (which coincides with the director perpendicular), one side along the director and the hypotenuse along the edge of the wedge. The angle between the director-perpendicular side and the hypotenuse is now θ/2 and the argument goes through as in the case of the rank-1 connected wedges with the roles of λ and λ^{−ν} swapped. Hence, we have:

tan(θ′/2) = λ^{1+ν} tan(θ/2)    (9)
θ′ = 2 tan^{−1}[λ^{1+ν} tan(θ/2)]    (10)
Note that another trivial wedge exists as a perpendicular version of the triangle wedge, where the wedge-bisecting line coincides with the director. In this case, the roles of λ and λ^{−ν} are swapped again and the relation between θ and θ′ becomes identical to that given by Eq. 7.
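The triangle-wedge relations (9)/(10) and their perpendicular version can be sketched in the same way; again the parameter values are hypothetical and the naming is ours:

```python
import math

def triangle_wedge_angle(theta, lam, nu, director_along_bisector=False):
    """Eqs. (9)/(10): strained wedge angle of a trivial 'triangle' wedge.
    With the director along the bisector the exponent flips sign, matching Eq. (7)."""
    p = -(1 + nu) if director_along_bisector else (1 + nu)
    return 2*math.atan(lam**p*math.tan(theta/2))

# Illustrative values: a right-angle wedge under 4% contraction, nu = 1.
lam, nu, theta = 0.96, 1.0, math.pi/2
narrowed = triangle_wedge_angle(theta, lam, nu)        # lam < 1: wedge closes
widened  = triangle_wedge_angle(theta, lam, nu, True)  # perpendicular version opens
```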
III. CONSTRUCTING POINT DEFECTS FROM MATERIAL WEDGES
With these wedge-shaped building blocks in hand and an understanding of how they deform under the imposition of spontaneous strain it is a relatively straightforward matter to put together enough wedges to reach an angular extent of exactly 2π at the shared tip in the unstrained state and hence construct a complete texture which exhibits a geometric (curvature) point defect under spontaneous strain. There are, however, constraints -namely, two wedges may only be stitched together if the nematic director field is the same on both sides of the boundary, or, if the boundary lies along a line of rank-1 connectedness in the director field. Hence, a wedge adorned with concentric circles may not be joined directly to a radial wedge, nor a triangle wedge with angular extent π/2 to a triangle wedge with angular extent π/4, but a rank-1 connected wedge may be joined to another or a triangle wedge to one decorated with concentric circles.
We proceed by categorizing these constructed textures according to their corresponding disclination defect charge. Note that, because many of the final textures will include at least one rank-1 connected border, the concept of a disclination defect is somewhat ambiguous -when the angle of the director field changes discontinuously by an (apparent) amount α, it may instead be considered to have changed by an amount α − π, or indeed, α + nπ for any integer n. In order to sidestep this ambiguity we will always consider the angle change across such a discontinuous boundary to be either the smallest positive or largest negative value available. We will consider each of these two cases separately. We assume that the particular choice of angle change is made consistently for textures with more than one such discontinuous boundary.
A. m < 0 and Polygonal m = 1 Defects

FIG. 3: Nematic glass texture corresponding to a disclination defect of charge −1/2, up to jump-angle ambiguities. The texture is composed of three rank-1 connected wedges, each with an internal connection angle π/3. Lines of rank-1 connection are illustrated as dashed, and thick gray lines mark wedge boundaries.
As noted above, stitching rank-1 connected wedges to one another is permitted by our constraints, and so one of the most straightforward available 2π constructions is simply to take n congruent rank-1 connected wedges, each of angular extent 2π/n, and stitch them together. The resulting complete texture qualitatively resembles a disclination defect of charge 1 − n/2 for n ≥ 3 and indeed, by consistently choosing to assign the negative-valued angle change across the rank-1 connected boundaries, this is precisely the disclination charge of the texture (see Fig. 3 for an m = −1/2 example). The spontaneous-strain-induced deformations of such a texture may be directly calculated from the behavior of the constituent wedges. Conveniently, all these wedges are congruent for this texture and we immediately arrive at a total angular deficit, and hence Gaussian curvature, of:

∆θ_tot = K_n = 2π − 2n tan^{−1}[λ^{−1−ν} tan(π/n)]    (11)
concentrated at the point in the middle of the texture where all of the wedge tips meet. Notice, on the other hand, that had we chosen to assign the positive-valued angle change at all the discontinuous boundaries, we could have concluded that all these textures, regardless of n, have disclination charge +1. This is intuitive as well, as consideration of the field perpendicular to the director field described above gives a texture of concentric regular polygons, qualitatively very reminiscent of the concentric circles of a traditional +1 disclination defect. From our constructivist point of view, this new texture could have been composed from scratch by stitching together n triangle wedges of angular extent 2π/n. Unsurprisingly, the total angular deficit produced by this structure, as can be seen consistently either by combining n triangle wedges or by swapping the roles of the director and perpendicular in the texture considered above, is given by:

∆θ_tot = K_n = 2π − 2n tan^{−1}[λ^{1+ν} tan(π/n)]    (12)

Furthermore, as n tends to ∞ we identically recover a +1 disclination defect texture from our concentric polygons. As required, lim_{n→∞} K_n = 2π(1 − λ^{1+ν}) [14]. Likewise, the same limit taken for the texture composed of congruent rank-1 connected wedges recovers a radial +1 disclination defect texture, and the limiting value of the Gaussian curvature also matches as appropriate.
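The deficits (11) and (12) and the n → ∞ limit are straightforward to verify numerically; λ = 0.96, ν = 1 are hypothetical illustrative values and the function names are ours:

```python
import math

def K_rank1(n, lam, nu):
    """Eq. (11): angular deficit from n congruent rank-1 connected wedges."""
    return 2*math.pi - 2*n*math.atan(lam**-(1 + nu)*math.tan(math.pi/n))

def K_polygon(n, lam, nu):
    """Eq. (12): angular deficit from n trivial wedges (concentric regular polygons)."""
    return 2*math.pi - 2*n*math.atan(lam**(1 + nu)*math.tan(math.pi/n))

lam, nu = 0.96, 1.0          # illustrative, hypothetical values
K4 = K_polygon(4, lam, nu)   # square pyramid: a positive deficit (a cone) for lam < 1
# n -> infinity recovers the smooth +1 defect result K = 2*pi*(1 - lam**(1+nu)):
smooth = 2*math.pi*(1 - lam**(1 + nu))
gap = abs(K_polygon(10**4, lam, nu) - smooth)
```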
A concrete example, n = 4 (the square representation of an m = 1 defect, Fig. 4(a)), serves to show that all such polygonal +1 defects must give 3-D structures that relax into circular cones, because the bend energy is convex. Ignoring the cost of bend, we expect the n = 4 sided defect to rise into a square pyramidal cone, Fig. 4(b). The Gaussian curvature localised at the vertex is K_4 = 2π[1 − (4/π) tan^{−1}(λ^{1+ν})] from Eq. (12). On heating there is a contraction along lines such as AC = L → A′C′ = λL and elongation along OB = L → O′B′ = λ^{−ν}L. Considering the triangle O′OB′ (where OB′ = A′C′), the pyramidal opening angle is φ_p = sin^{−1}(OB′/O′B′) = sin^{−1}(λ^{1+ν}). We note the line length O′C′ = (1/√2)(λ^2 + λ^{−2ν})^{1/2} OC = (λ^2 + λ^{−2ν})^{1/2} L, since the line OC is at an angle π/4 with respect to n (rotate λ by π/4). Note that the n = ∞, i.e. circular, form of the m = 1 defect gives circular cones of the same opening angle, φ_∞ = sin^{−1}(λ^{1+ν}).
However the pyramidal cone has its bend localised into 4 creases emanating from the vertex O′. Since the bend energy density is a quadratic function of the curvature, this convexity dictates that the energy be reduced by delocalising the curvature over the whole surface of the cone, that is by forming a circular cone, Fig. 4(c). The integral lines of the director are not concentric circles centred on the tip, but are cusped lines. Lengths from the tip to the integral curves include L(λ^2 + λ^{−2ν})^{1/2} to cusps at points like C′, and Lλ^{−ν} to points A′, B′, etc. Comparing the Gaussian curvature K_4 (which does not change on relaxation from the pyramid) with that of a circular cone of opening angle φ_c, i.e. with K_c = 2π(1 − sin φ_c), we have for the opening angle of the relaxed circular cone φ_c = sin^{−1}[(4/π) tan^{−1}(λ^{1+ν})], which is indeed flat, φ_c = π/2, when λ = 1.
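The pyramid-to-cone relaxation argument amounts to matching Gaussian curvatures, which a short sketch confirms (material values again hypothetical):

```python
import math

# Illustrative (hypothetical) material values.
lam, nu = 0.96, 1.0

# Square-pyramid realisation of the m = +1 defect (n = 4 in Eq. (12)):
K4 = 2*math.pi*(1 - (4/math.pi)*math.atan(lam**(1 + nu)))
phi_p = math.asin(lam**(1 + nu))   # pyramid opening angle, sin(phi_p) = lam**(1+nu)

# Bend energy is convex, so the creases delocalise: the pyramid relaxes to a
# circular cone carrying the same Gaussian curvature at its tip.
phi_c = math.asin((4/math.pi)*math.atan(lam**(1 + nu)))
Kc = 2*math.pi*(1 - math.sin(phi_c))
```

By construction K_4 = K_c, and at λ = 1 the relaxed cone angle argument (4/π) tan^{−1}(λ^{1+ν}) equals 1, i.e. the sheet stays flat.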
B. m = +1/2 Defects
FIG. 5: Nematic glass textures corresponding to a disclination defect of charge +1/2, up to jump-angle ambiguities. The texture on the left, a hemi-stadium, is composed of a cone-textured 'wedge' subtending an angle π and a trivial 'triangle' wedge subtending the remaining π. The texture on the right is composed of two rank-1 connected wedges subtending π/2 each, and a trivial wedge subtending the remaining π radians. In both cases lines of rank-1 connection are denoted by a dashed line and wedge boundaries by thick gray lines.
By making use of wedges textured with concentric-circle director patterns, we may also construct a version of a +1/2 charged disclination, reminiscent of a half-stadium, with the available constituent wedges (Fig. 5, left). Here the region of the texture adorned with constant director field aligned perpendicular to the joining boundary, subtending π radians, does not change its angular extent upon being strained. The other half of the texture, composed of concentric half-circles, undergoes an angular change of π(1 − λ^{1+ν}), and hence this is the total curvature generated at the defect point of the texture:

∆θ_tot = K = π(1 − λ^{1+ν})    (13)
leading to a cone opening angle in the final, deformed state for this texture of sin φ_c = (1/2)(1 + λ^{1+ν}). More discrete, piece-wise constant versions of this texture may be constructed as well, in much the same way as the concentric regular polygon analogs of a smooth +1 texture discussed previously (Fig. 5, right). In this case, we again start with a π-radian region of constant director field aligned normal to the joining boundary. Instead of joining across the boundary with a semicircular pattern, we join to a semi-polygonal pattern, obtained by slicing an even-sided polygonal +1 in half through its defect, normal to a pair of its polygonal sides. An even-sided polygon is required to ensure that opposite component wedges may have their director field aligned parallel, and hence join smoothly with the other, constant field region. Since, by construction, we may cut such a texture into pieces we have already dealt with, the curvature is simply half that of a full concentric-polygonal texture:

∆θ_tot = K_n = π − n tan^{−1}[λ^{1+ν} tan(π/n)]    (14)
where n is the number of sides of a complete polygon, not just the number present on the polygonal side of the texture. As pointed out above, n must be even as well.
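Equations (13) and (14), together with the half-stadium cone angle, can be checked with a short sketch (illustrative, hypothetical values; our own function names):

```python
import math

def K_half_stadium(lam, nu):
    """Eq. (13): curvature of the smooth hemi-stadium +1/2 texture."""
    return math.pi*(1 - lam**(1 + nu))

def K_half_polygon(n, lam, nu):
    """Eq. (14): semi-polygonal +1/2 analogue; n is the (even) number of sides
    of the complete polygon from which the half texture is cut."""
    assert n % 2 == 0, "an even-sided polygon is required"
    return math.pi - n*math.atan(lam**(1 + nu)*math.tan(math.pi/n))

lam, nu = 0.96, 1.0                  # illustrative, hypothetical values
sin_phi_c = 0.5*(1 + lam**(1 + nu))  # cone angle of the strained hemi-stadium
# Large even n approaches the smooth hemi-stadium result:
gap = abs(K_half_polygon(1000, lam, nu) - K_half_stadium(lam, nu))
```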
C. m > 1 Defects
We have demonstrated that our simple basis set of wedges allows for the construction of many of the possible disclination point defects, namely all those with charge ≤ 1. The rest of the possible disclinated director fields, with charge ≥ 3/2, may not be realized with our piecewise-constant components. In order to understand why this is so, consider the feasibility of promoting our 2D nematic textures to a 2D smectic-A state. For the smectic-A phase that develops to sit in a local minimum of the free energy, it is a necessary condition that the smectic layers be able to adopt a constant inter-layer spacing [16]. Because each of our component wedges either has a locally constant director field or is decorated with regions of concentric circles, they are compatible with these requirements and are thus, in principle, compatible with smectic layers. On the other hand, smectic textures are incompatible with disclination charges greater than one [19], which necessarily lead to a divergent layer-compression energy. Hence construction of these higher disclination defects with these simple wedge components is disallowed.
D. Charge-Free Bending Defects
Thus far, we have constructed complete textures from our simple constituent wedges designed to generate an angular surplus or deficit, and hence Gaussian curvature, by creating disclination defects.

FIG. 6: Nematic glass texture corresponding to a geometric point defect without associated topological disclination charge, resulting from a stepwise "smoothing" of a discontinuous bend in the nematic director direction. The texture is composed of three rank-1 connected wedges: one with connection angle θ and two each with connection angle π/4 − θ/2. The remaining π radians are accounted for by a pair of π/2 trivial regions. Lines of rank-1 connection are denoted by a dashed line and wedge boundaries by thick gray lines.

As it happens, point-sources of curvature may arise in other ways. These new point-sources arise from a step-wise smoothing of the discontinuous director-field bending associated with a line of rank-one connection. We have chosen to name these textures "charge-free bending defects" (Fig. 6), where here 'charge' refers to disclination charge and 'defect' to a curvature defect. By calculating the angular change that results from each of the five wedges, we arrive at a formula for the curvature. For an initial rank-one line whose director lines meet the boundary at an angle θ:
Δθ_tot = K = π − 2 tan⁻¹[λ^{−1−ν} tan θ] − 4 tan⁻¹[λ^{1+ν} tan(π/4 − θ/2)].    (15)
Note that, despite the complicated form taken, the limiting value for λ = 1 remains appropriately K = 0. Furthermore, the overall strength of the curvature generated by one of these charge-free bending defects is somewhat less than that seen in the disclinated director fields treated earlier, as there are counterbalancing terms in the strain present. Finally, it is worth pointing out that the limit θ → 0 recovers a discrete +1/2 disclination defect, as the bend becomes so strong that the initial rank-one boundary disappears altogether. Accordingly, a different choice of angle-change accounting across that boundary for the 'charge-free' case leads to a disclination charge of +1/2.
E. Generalizations and Exotica
FIG. 7: Nematic glass textures corresponding to more exotic combinations of the fundamental wedges, each leading to a geometric curvature defect after spontaneous strain is imposed. First, radially-textured wedges join rank-1 connected zones. Second, a variant of the "half-stadium" representation of a +1/2 charge disclination with closed director lines. Third, a variant of the −1/2 disclination with unequal wedge angles. In all cases, lines of rank-1 connection are denoted by a dashed line and wedge boundaries by thick gray lines.
Beyond simple reconstruction of topological charges or bend-smoothing, there is a plethora of variants supported by the available building-block wedges. One might use the heretofore unused radial version of the concentric circular arc texture to bridge the gap between smaller rank-one connected wedges (Fig. 7, left). One can distort a +1/2 by increasing or reducing the region covered by concentric arcs and plug up the difference with rank-one connected wedges instead of a trivial constant piece as in the hemistadium +1/2 (Fig. 7, middle). Or one might consider playing with the relative sizes of the regions in one of the piece-wise constant, negatively charged disclination textures (Fig. 7, right). This last one turns out to be of particular use in the blueprinting scheme that follows in the next section.
IV. BLUEPRINTING WITH COMBINATIONS OF POINT DEFECTS: TEXTURE AND SHAPE
Having demonstrated all the ways in which our piecewise-constant building-block wedges may be combined to produce curvature effects, we are now in a position to consider higher-level combinations, that is, combining multiple such points of curvature to achieve a desired shape. Matching multiple point defects together in a single texture using the building blocks at hand is simply a matter of allowing finite polygonal patches in the texture in addition to the infinite wedges discussed earlier. These finite polygons are again restricted to be adorned by piece-wise constant director fields, and again must be linked to one another, and to any wedges, by rank-one connected boundaries. In this case, the grouping of polygonal vertices plays the same role as wedge tips in generating curvature. As such, there is an enormous range of possible morphologies that can emerge from the union of several such points of curvature, corresponding to the myriad ways of tiling the plane with (potentially irregular) polygons and infinitely extended regions.
In order to guide the potential design of blueprints for these nematic glass sheets, it is worth noting that, due to the restriction of the allowed director patterns on the constituent pieces, treating the integral lines of the director field as the contour lines of a topographic map of the target shape is always a stress-free solution. This is because the simple piece-wise textures chosen may always conform to any imposed spontaneous strain by adding a z component to the distance between director-field integral lines. More abstractly, this is a manifestation of the fact that, by construction, our textures are allowable 2D smectics, and 2D smectics may be represented as multiply-leaved height functions through their phase field [20]. As discussed in Section III on comparing pyramidal or cone-like outcomes for the concentric polygon texture, the primary reason a texture may not adopt this contour-line-like solution is the desire to minimize the bending energy once the metric is satisfied and stress is eliminated. In a texture with many defect points, however, it becomes impossible to simply freely choose a bending minimum relative to one defect without imposing costly stretch at another. In this case, the final shape will more closely resemble the topographic map, as the overall minimum energy will require balancing (relatively cheap) bend costs against (relatively expensive) stretch. The more defects present, the more closely the final shape will hew to the topographic realization, as there is ever more of a potential stretch price to bear.
A simple example of this principle is a texture in which the plane is simply tiled by regular squares, each one containing a concentric-square pattern and a polygonal +1 defect at the center. The vertices of this tiling correspond to −1 defects. If each of the individual +1 defects became a smooth cone, as would be the case if they were isolated, then stretch penalties would proliferate all along their boundaries. Instead, they retain a pyramid shape, and the overall texture becomes an array of pyramids. Interestingly, the bend energy still has a role to play here: these pyramids may grow up out of the plane or down from it, and the minimization of the bend energy leads to an anti-ferromagnetic Ising-model interaction on the up/down-ness of the pyramids. The final shape is thus a pyramidal square egg-crate. Of course, if we had chosen to tile the plane with hexagons instead, then the anti-ferromagnetic Ising-model interaction is frustrated and a multitude of degenerate ground states ensues.
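The up/down assignment described above can be sketched as a tiny brute-force anti-ferromagnetic Ising computation (our own illustration; the 3×3 lattice and all names are assumptions). On a square tiling the energy-minimizing pattern is one of the two checkerboards, i.e. the alternating egg-crate:

```python
from itertools import product

def af_energy(spins, rows, cols):
    """Anti-ferromagnetic Ising energy E = +sum over nearest-neighbor pairs of
    s_i * s_j on a square lattice with open boundaries; adjacent pyramids
    prefer opposite (up/down) orientations."""
    e = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                e += spins[r * cols + c] * spins[r * cols + c + 1]
            if r + 1 < rows:
                e += spins[r * cols + c] * spins[(r + 1) * cols + c]
    return e

rows = cols = 3
# Exhaustive search over all 2^9 up/down assignments:
best = min(product((-1, 1), repeat=rows * cols),
           key=lambda s: af_energy(s, rows, cols))
checkerboard = tuple((-1) ** (r + c) for r in range(rows) for c in range(cols))
```

On the square lattice the ground state is doubly degenerate (the checkerboard and its global flip); a triangular adjacency, as for hexagonal tiles, would instead be frustrated.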
A. True Blueprinting: An Emergent Pyramid
In the spirit of the egg-crate morphology discussed above, we wish to present a simple example of the manner in which the director field may be used to blueprint a sheet of nematic glass in order to realize a desired shape. In addition, we wish to demonstrate that a nontrivial blueprinted object is achievable with only a small number of simple masking steps in the preparation stage. Consider a texture of concentric squares with a piece-wise constant ±1/2 pair situated some distance from the central defect along one of the lines of rank-one connection (Fig. 8, left and blow-up detail). Such a texture is easy to prepare, requiring only four steps and masking boundaries that are simple straight lines, or one with a small zig-zag that seeds the ±1/2 pair. Ignoring the effect of bend-energy minimization in the thin sheet, this texture will produce a classic pyramid rising above a flat plane that has been weakly crumpled into a shallow spiraling moat (Fig. 8, right). As can be seen in close-up detail (Fig. 8, right blow-up), the half-charge disclination dipole produces the interior terminus of the spiraling moat. Accounting for the effect of the bend energy will lead to some smoothing of the creases near the pyramid tip, and a gradual fading of the moat into a conical skirt far from the defect dipole. Both of these effects can be dampened by the inclusion of more defect dipoles, for example at the other three corners of the pyramid, which in this case does not increase the complication of the preparation; in fact it is simpler, as only one mask boundary need be used.

FIG. 8: A simple example of a blueprinted shape, a pyramid flanked by a square crumple pattern. The director-field blueprint is shown on the left: it is distinguished from the simple cone-producing concentric-square texture by a ±1/2 disclination pair, seen in blow-up. The presence of these extra defects manifests as a terminated trough on one corner of the pyramid that subsequently spirals around repeatedly (contour blow-up, right-hand side; darker shading corresponds to lower elevation).
V. DISCUSSION
We have shown how a thin sheet of nematic glass may be prepared from constituent texture regions that are simple to understand and to produce, and that work together to create an actively switchable, pre-programmable shape change, including the development of multiple points of Gaussian curvature in concert. Such an actively transformable sheet is theoretically realizable with features at any length scale above that dominated by the Frank energies (tens of nm), and is already possible at the micron scale, leading to a host of possible applications. It is our fervent hope that this new tool inspires clever new device designs that fulfill the strong potential of nematic glasses.
FIG. 4: (a) A square representation of a m = 1 defect. (b) Ignoring bend energy, on cooling the defect rises to being a pyramidal cone. (c) Relieving the bend energy of the creased edges of the pyramid yields a circular cone, where the integral lines of n are the cusped trajectories shown.
The authors would like to thank Dick Broer and Carlos Sanchez for stimulating discussions. C.D.M. and M.W. acknowledge support from the EPSRC-GB.
[1] E. Hawkes, B. An, N. M. Benbernou, et al. Proc. Natl. Acad. Sci., 107, 12441 (2010).
[2] J. Dervaux and M. Ben Amar. Phys. Rev. Lett., 101, 068101 (2008).
[3] H. Y. Liang and L. Mahadevan. Proc. Natl. Acad. Sci., 106, 22049 (2009).
[4] T. Witten. Rev. Mod. Phys., 79, 643 (2007).
[5] M. M. Müller, M. Ben Amar, and J. Guven. Phys. Rev. Lett., 101, 156104 (2008).
[6] Y. Klein, E. Efrati, and E. Sharon. Science, 315, 1116 (2007).
[7] N. Uchida. Phys. Rev. E, 66, 040902 (2002).
[8] C. L. van Oosten, K. D. Harris, C. W. M. Bastiaansen, and D. J. Broer. Euro. Phys. J. E, 23, 329 (2007).
[9] M. Warner and E. M. Terentjev. Liquid Crystal Elastomers. Oxford University Press, Oxford, revised paperback edition (2007).
[10] H. Finkelmann, I. Kundler, E. M. Terentjev, and M. Warner. J. de Phys. II, 7, 1059 (1997).
[11] A. DeSimone. Ferroelectrics, 222, 275 (1999).
[12] M. Warner, C. D. Modes, and D. Corbett. Proc. R. Soc. A, 466, 2975 (2010).
[13] C. D. Modes, M. Warner, C. L. van Oosten, and D. Corbett. Phys. Rev. E, 82, 041111 (2010).
[14] C. D. Modes, K. Bhattacharya, and M. Warner. Phys. Rev. E, 81, 060701(R) (2010).
[15] C. D. Modes, K. Bhattacharya, and M. Warner. Proc. R. Soc. A, 467, 1121 (2011).
[16] P. G. de Gennes and J. Prost. The Physics of Liquid Crystals. Clarendon Press, Oxford, 2nd ed. (1993).
[17] C. D. Modes, B. Guilfoyle, and M. Warner. In preparation.
[18] K. Bhattacharya. Microstructure of Martensite. Oxford University Press, Oxford (2007).
[19] N. D. Mermin. Rev. Mod. Phys., 51, 591 (1979).
[20] B. G. Chen, G. P. Alexander, and R. D. Kamien. Proc. Natl. Acad. Sci. U. S. A., 106, 15577 (2009).
A Framework and Method for Online Inverse Reinforcement Learning

Saurabh Arora (Department of Computer Science, University of Georgia, Athens, GA 30605)
Prashant Doshi, [email protected] (School of Computing, University of Georgia, Athens, GA 30605)
Bikramjit Banerjee, [email protected] (University of Southern Mississippi, Hattiesburg, MS 39406)

DOI: 10.1007/s10458-020-09485-4 | arXiv: 1805.07871 | https://arxiv.org/pdf/1805.07871v1.pdf

Abstract: Inverse reinforcement learning (IRL) is the problem of learning the preferences of an agent from the observations of its behavior on a task. While this problem has been well investigated, the related problem of online IRL, where the observations are incrementally accrued yet the demands of the application often prohibit a full rerun of an IRL method, has received relatively less attention. We introduce the first formal framework for online IRL, called incremental IRL (I2RL), and a new method that advances maximum entropy IRL with hidden variables to this setting. Our formal analysis shows that the new method has a monotonically improving performance with more demonstration data, as well as probabilistically bounded error, both under full and partial observability. Experiments in a simulated robotic application of penetrating a continuous patrol under occlusion show the relatively improved performance and speedup of the new method and validate the utility of online IRL.

Preprint. Work in progress.
Introduction
Inverse reinforcement learning (IRL) [11,15] refers to the problem of ascertaining an agent's preferences from observations of its behavior on a task. It inverts RL with its focus on learning the reward function given information about optimal action trajectories. IRL lends itself naturally to a robot learning from demonstrations by a human teacher (often called the expert) in controlled environments, and therefore finds application in robot learning from demonstration [2] and imitation learning [12].
Previous methods for IRL [1,3,7,8,13,19] typically operate on large batches of observations and yield an estimate of the expert's reward function in a one-shot manner. These methods fill the need of applications that predominantly center on imitation learning. Here, the task being performed is observed and must be replicated subsequently. However, newer applications of IRL are motivating the need for continuous learning from streaming data or data in mini-batches. Consider, for example, the task of forecasting a person's goals in an everyday setting from observing his activities using a body camera [14]. Alternately, a robotic learner observing continuous patrols from a vantage point is tasked with penetrating the patrolling and reaching a goal location speedily and without being spotted [4]. Both these applications offer streaming observations, no episodic tasks, and would benefit from progressively learning and assessing the other agent's preferences.
In this paper, we present the first formal framework to facilitate investigations into online IRL. The framework, labeled as incremental IRL (I2RL), establishes the key components of this problem and rigorously defines the notion of an incremental variant of IRL. Next, we focus on methods for online IRL. Very few methods exist [10,14] that are suited for online IRL, and we cast these in the formal context provided by I2RL. More importantly, we introduce a new method that generalizes recent advances in maximum entropy IRL with hidden training data to an online setting. Key theoretical properties of this new method are also established. In particular, we show that the performance of the method improves monotonically with more data and that we may probabilistically bound the error in estimating the true reward after some data both under full observability and when some data is hidden.
Our experiments evaluate the benefit of online IRL on the previously introduced robotic application of penetrating continuous patrols under occlusion [4]. We comprehensively demonstrate that the proposed incremental method achieves a learning performance that is similar to that of the previously introduced batch method. More importantly, it does so in significantly less time thereby suffering from far fewer timeouts and a significantly improved success rate. Consequently, this paper makes important initial contributions toward the nascent problem of online IRL by offering both a formal framework, I2RL, and a new general method that performs well.
Background on IRL
Informally, IRL refers to both the problem and the method by which an agent learns the preferences of another agent that explain the latter's observed behavior [15]. Usually considered an "expert" in the task that it is performing, the observed agent, say E, is modeled as executing the optimal policy of a standard MDP defined as ⟨S_E, A_E, T_E, R_E⟩. The learning agent L is assumed to perfectly know the parameters of the MDP except the reward function. Consequently, the learner's task may be viewed as finding a reward function under which the expert's observed behavior is optimal.
This problem in general is ill-posed because for any given behavior there are infinitely many reward functions that align with the behavior. Ng and Russell [11] first formalized this task as a linear program in which the reward function that maximizes the difference in value between the expert's policy and the next-best policy is sought. Abbeel and Ng [1] present an algorithm that allows the expert E to provide task demonstrations instead of its policy. The reward function is modeled as a linear combination of K binary features, φ_k : S_E × A_E → [0, 1], k ∈ {1, 2, ..., K}, each of which maps a state from the set of states S_E and an action from the set of E's actions A_E to a value in [0, 1]. Note that non-binary feature functions can always be converted into binary feature functions, although there will be more of them. Throughout this article, we assume that these features are known to or selected by the learner. The reward function for expert E is then defined as R_E(s, a) = θ^T φ(s, a) = Σ_{k=1}^{K} θ_k · φ_k(s, a), where the θ_k are the weights in vector θ; let R = ℝ^{|S_E × A_E|} be the continuous space of the reward functions. The learner's task is reduced to finding a vector of weights that complete the reward function, and subsequently the MDP, such that the demonstrated behavior is optimal.
To assist in finding the weights, feature expectations are calculated for the expert's demonstration and compared to those of possible trajectories [19]. A demonstration is provided as one or more trajectories, each a sequence of length-T state-action pairs, (⟨s, a⟩_1, ⟨s, a⟩_2, ..., ⟨s, a⟩_T), corresponding to an observation of the expert's behavior across T time steps. Feature expectations of the expert are discounted averages over all observed trajectories:

φ̂_k = (1/|𝒳|) Σ_{X∈𝒳} Σ_{t=1}^{T} γ^t φ_k(⟨s, a⟩_t),

where X is a trajectory in the set of all observed trajectories, 𝒳, and γ ∈ (0, 1) is a discount factor. Given a set of reward weights, the expert's MDP is completed and solved optimally to produce π*_E. The difference φ̂ − φ^{π*_E} provides a gradient with respect to the reward weights for a numerical solver.
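The empirical feature expectation above can be sketched directly (our illustration; the trajectory encoding and the goal-indicator feature are assumptions):

```python
def empirical_feature_expectations(trajectories, features, gamma=0.9):
    """phi_hat_k = (1/|X|) * sum_{X} sum_{t=1..T} gamma^t * phi_k(<s,a>_t).
    `trajectories` is a list of [(state, action), ...] lists; `features` is a
    list of feature functions phi_k(state, action) -> [0, 1]."""
    phi_hat = [0.0] * len(features)
    for traj in trajectories:
        for t, (s, a) in enumerate(traj, start=1):
            for k, phi_k in enumerate(features):
                phi_hat[k] += gamma ** t * phi_k(s, a)
    return [v / len(trajectories) for v in phi_hat]

# Single trajectory reaching 'goal' at t = 2, with a goal-indicator feature:
reach_goal = lambda s, a: 1.0 if s == 'goal' else 0.0
phi_hat = empirical_feature_expectations(
    [[('start', 'east'), ('goal', 'stay')]], [reach_goal], gamma=0.9)
```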
Maximum Entropy IRL
While expected to be valid in some contexts, the max-margin approach of Abbeel and Ng [1] introduces a bias into the learned reward function in general. To address this, Ziebart et al. [19] find the distribution with maximum entropy over all trajectories that is constrained to match the observed feature expectations:

max_Δ  − Σ_{X∈𝕏} Pr(X) log Pr(X)
subject to  Σ_{X∈𝕏} Pr(X) = 1
            Σ_{X∈𝕏} Pr(X) Σ_{t=1}^{T} γ^t φ_k(⟨s, a⟩_t) = φ̂_k,  ∀k    (1)
Here, Δ is the space of all distributions over the space 𝕏 of all trajectories. We denote the analytical feature expectation on the left-hand side of the second constraint above as E_𝕏[φ_k]. Equivalently, as the distribution is parameterized by the learned weights θ, E_𝕏[φ_k] represents the feature expectations of policy π*_E computed using the learned reward function. The benefit is that the distribution Pr(X; θ) makes no further assumptions beyond those needed to match its constraints and is maximally noncommittal to any one trajectory. As such, it is most generalizable by being the least wrong most often of all alternative distributions. A disadvantage of this approach is that it becomes intractable for long trajectories because the set of trajectories grows exponentially with length. In this regard, another formulation defines the maximum entropy distribution over policies [7], the size of which is also large but fixed.
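For intuition, here is a toy sketch (all names and numbers are illustrative assumptions) of fitting the maximum entropy distribution Pr(X; θ) ∝ exp(θ f(X)) over a small, explicitly enumerated trajectory set by gradient ascent on the dual, so that the model's expected feature matches a target φ̂:

```python
import math

def fit_maxent(traj_features, target, lr=0.5, iters=2000):
    """Gradient ascent on the dual of program (1): nudge theta along
    (target - E_theta[f]) until the model expectation matches phi-hat."""
    theta = 0.0
    for _ in range(iters):
        weights = [math.exp(theta * f) for f in traj_features]
        z = sum(weights)
        model_expect = sum(w * f for w, f in zip(weights, traj_features)) / z
        theta += lr * (target - model_expect)
    return theta

# Three enumerated trajectories with scalar feature values 0, 0.5, 1:
feats = [0.0, 0.5, 1.0]
theta = fit_maxent(feats, target=0.7)
weights = [math.exp(theta * f) for f in feats]
model_expect = sum(w * f for w, f in zip(weights, feats)) / sum(weights)
```

The resulting distribution is the flattest one consistent with the constraint, which is the sense in which it is "maximally noncommittal."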
IRL under Occlusion
Our motivating application involves a subject robot that must observe other mobile robots from a fixed vantage point. Its sensors allow it a limited observation area; within this area it can observe the other robots fully, while outside this area it cannot observe them at all. Previous methods [4,5] denote this special case of partial observability, where certain states are either fully observable or fully hidden, as occlusion. Subsequently, the trajectories gathered by the learner exhibit missing data associated with time steps where the expert robot is in one of the occluded states. The empirical feature expectation of the expert, φ̂_k, will therefore exclude the occluded states (and actions in those states).
To ensure that the feature expectation constraint in IRL accounts for the missing data, Bogert and Doshi [4], while maximizing entropy over policies [7], limit the calculation of feature expectations for policies to observable states only. A recent approach [6] improves on this method by taking an expectation over the missing data conditioned on the observations. Completing the missing data in this way allows the use of all states in the constraint, and with it the Lagrangian dual's gradient as well. The nonlinear program in (1) is modified to account for the hidden data and its expectation.
Let Y be the observed portion of a trajectory, Z one way of completing the hidden portion of this trajectory, 𝒵 the set of all possible Z, and X = Y ∪ Z. Now we may treat Z as a latent variable and take the expectation to arrive at a new definition for the expert's feature expectations:
φ̂_k^{Z|Y} ≜ (1/|𝒴|) Σ_{Y∈𝒴} Σ_{Z∈𝒵} Pr(Z|Y; θ) Σ_{t=1}^{T} γ^t φ_k(⟨s, a⟩_t)    (2)
where ⟨s, a⟩_t ∈ Y ∪ Z, 𝒴 is the set of all observed Y, and 𝕏 is the set of all complete trajectories. The program in (1) is modified by replacing φ̂_k with φ̂_k^{Z|Y}, as we show below. Notice that in the case of no occlusion 𝒵 is empty and X = Y. Therefore φ̂_k^{Z|Y} = φ̂_k and this method reduces to (1). Thus, this method generalizes the previous maximum entropy IRL method.

max_Δ  − Σ_{X∈𝕏} Pr(X) log Pr(X)
subject to  Σ_{X∈𝕏} Pr(X) = 1
            Σ_{X∈𝕏} Pr(X) Σ_{t=1}^{T} γ^t φ_k(⟨s, a⟩_t) = φ̂_k^{Z|Y},  ∀k    (3)
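The conditional expectation of Eq. (2) can be sketched for a toy occlusion case (our illustration; the completion distribution Pr(Z|Y) and the feature are assumed):

```python
def occluded_feature_expectation(observed, completions, phi, gamma=0.9):
    """Eq. (2): average over observed portions Y of the expectation, under
    Pr(Z|Y), of the discounted feature sum along the completed X = Y + Z.
    `completions[i]` is a list of (probability, Z) pairs for observed[i]."""
    total = 0.0
    for Y, z_dist in zip(observed, completions):
        for prob, Z in z_dist:
            X = Y + Z
            total += prob * sum(gamma ** t * phi(s, a)
                                for t, (s, a) in enumerate(X, start=1))
    return total / len(observed)

# One observed step, then one occluded step that is 'goal' with probability 0.5:
phi_goal = lambda s, a: 1.0 if s == 'goal' else 0.0
expect = occluded_feature_expectation(
    observed=[[('start', 'east')]],
    completions=[[(0.5, [('goal', 'stay')]), (0.5, [('pit', 'stay')])]],
    phi=phi_goal)
```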
However, the program in (3) becomes nonconvex due to the presence of Pr(Z|Y). As such, finding its optima by Lagrangian relaxation is not trivial. Wang et al. [18] suggest a log-linear approximation to obtain the maximizing Pr(X) and cast the problem of finding the reward weights as likelihood maximization that can be solved within the schema of expectation-maximization [9]. An application of this approach to the problem of IRL under occlusion yields the following two steps, with more details in [6]:

E-step: This step involves calculating Eq. 2 to arrive at φ̂_k^{Z|Y,(t)}, a conditional expectation of the K feature functions using the parameter θ^(t) from the previous iteration. We may initialize the parameter vector randomly.

M-step: In this step, the modified program (3) is optimized by utilizing φ̂_k^{Z|Y,(t)} from the E-step above as the expert's constant feature expectations to obtain θ^(t+1). Normalized exponentiated gradient descent [16] solves the program.
As EM may converge to local minima, this process is repeated with random initial θ and the solution with the maximum entropy is chosen as the final one.
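The random-restart EM loop can be sketched generically (the e_step, m_step, and entropy callbacks are placeholders for the computations above, not an API from the paper):

```python
import random

def latent_maxent_em(e_step, m_step, entropy, dim, restarts=5, iters=50, seed=0):
    """Run EM from several random initializations of theta and keep the
    solution whose distribution has maximum entropy, since EM may converge
    to a local optimum."""
    rng = random.Random(seed)
    best_theta, best_entropy = None, float('-inf')
    for _ in range(restarts):
        theta = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        for _ in range(iters):
            phi_hat = e_step(theta)   # conditional feature expectations, Eq. (2)
            theta = m_step(phi_hat)   # solve program (3) with phi_hat held fixed
        h = entropy(theta)
        if h > best_entropy:
            best_theta, best_entropy = theta, h
    return best_theta

# Degenerate demo: the E-step always reports expectation 0.5 and the M-step
# echoes it back, so every restart converges to theta = [0.5].
demo_theta = latent_maxent_em(
    e_step=lambda th: [0.5],
    m_step=lambda ph: list(ph),
    entropy=lambda th: -abs(th[0] - 0.5),
    dim=1)
```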
Incremental IRL (I2RL)
We present our framework labeled I2RL in order to realize IRL in an online setting. Then, we present an initial set of methods cast into the framework of I2RL. In addition to including previous techniques for online IRL, we introduce a new method that generalizes the maximum entropy IRL involving latent variables.
Framework
Expanding on the notation and definitions given previously in Section 2, we introduce some new notation, which will help us in defining I2RL rigorously.

DEFINITION 1 (SET OF FIXED-LENGTH TRAJECTORIES). The set of all trajectories of finite length T from an MDP attributed to the expert E is defined as
𝕏^T = {X | X = (⟨s, a⟩_1, ⟨s, a⟩_2, ..., ⟨s, a⟩_T), ∀s ∈ S_E, ∀a ∈ A_E}.
Let ℕ⁺ be a bounded set of natural numbers. Then, the set of all trajectories is 𝕏 = 𝕏^1 ∪ 𝕏^2 ∪ ... ∪ 𝕏^{|ℕ⁺|}.
Recall that a demonstration is some finite set of trajectories of varying lengths, X = {X^T | X^T ∈ 𝕏^T, T ∈ ℕ⁺}, and it includes the empty set. 1 Subsequently, we may define the set of demonstrations.

DEFINITION 2 (SET OF DEMONSTRATIONS). The set of demonstrations is the set of all subsets of the spaces of trajectories of varying lengths. Therefore, it is the power set
2^𝕏 = 2^{𝕏^1 ∪ 𝕏^2 ∪ ... ∪ 𝕏^{|ℕ⁺|}}.
In the context of the definitions above, traditional IRL attributes an MDP without the reward function to the expert, and usually involves determining an estimate of the expert's reward function, R̂_E ∈ R, which best explains the observed demonstration, X ∈ 2^𝕏. As such, we may view IRL as a function:
ζ(MDP_{/R_E}, X) = R̂_E.
To establish the definition of I2RL, we must first define a session of I2RL. Let R̂_E^0 be an initial estimate of the expert's reward function.

DEFINITION 3 (SESSION). A session of I2RL represents the following function:
ζ_i(MDP_{/R_E}, X_i, R̂_E^{i−1}) = R̂_E^i.

In this i-th session, where i > 0, I2RL takes as input the expert's MDP sans the reward function, the demonstration observed since the previous session, X_i ∈ 2^𝕏, and the reward function estimate learned from the previous session. It yields a revised estimate of the expert's rewards from this session of IRL.
Note that we may replace the reward function estimates with some statistic that is sufficient to compute it (e.g., θ).
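Definition 3 maps naturally onto a small stateful interface; the following sketch (class and method names are our own illustration, not from the paper) carries the sufficient statistic between sessions:

```python
class I2RLSession:
    """Carries state between sessions of Definition 3: a sufficient statistic
    for the reward estimate (here, the weight vector theta) plus bookkeeping."""

    def __init__(self, theta0):
        self.theta = list(theta0)
        self.num_trajectories = 0

    def update(self, demonstration, learn):
        """One session zeta_i: fold in the demonstration X_i observed since the
        last session. `learn(theta, X_i)` stands in for the chosen IRL update."""
        self.theta = learn(self.theta, demonstration)
        self.num_trajectories += len(demonstration)
        return self.theta

# Toy update rule that just counts trajectories seen so far:
count_up = lambda theta, X: [theta[0] + len(X)]
session = I2RLSession([0.0])
theta1 = session.update([[('s', 'a')]], count_up)
theta2 = session.update([[('s', 'a')], [('s', 'b')]], count_up)
```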
We may let the sessions run infinitely. Alternately, we may establish stopping criteria for I2RL, which would allow us to automatically terminate the sessions once a criterion is satisfied. Let LL(R̂_E^i | X_{1:i}) be the log likelihood of the demonstrations received up to the i-th session given the current estimate of the expert's reward function. We may view this likelihood as a measure of how well the learned reward function explains the observed data.

DEFINITION 4 (STOPPING CRITERION #1). Terminate the sessions of I2RL when |LL(R̂_E^i | X_{1:i}) − LL(R̂_E^{i−1} | X_{1:i−1})| ≤ ε, where ε is a very small positive number and is given.
Definition 4 reflects the fact that additional sessions are not impacting the learning significantly. On the other hand, a more effective stopping criterion is possible if we know the expert's true policy. We utilize the inverse learning error [8] in this criterion, which gives the loss of value if L uses the learned policy on the task instead of the expert's:
ILE(π*_E, π_E^L) = ||V^{π*_E} − V^{π_E^L}||_1.
Here, V^{π*_E} is the optimal value function of E's MDP and V^{π_E^L} is the value function due to utilizing the learned policy π_E^L in E's MDP. Notice that when the learned reward function results in an optimal policy identical to E's optimal policy, π*_E = π_E^L, the ILE will be zero; it increases monotonically as the two policies increasingly diverge in value.
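Computing ILE is a one-liner once both value functions are available (state values in the demo are illustrative):

```python
def ile(v_expert, v_learned):
    """ILE(pi*_E, pi^L_E) = || V^{pi*_E} - V^{pi^L_E} ||_1, an L1 distance
    between the two value functions, summed over states."""
    assert len(v_expert) == len(v_learned)
    return sum(abs(a - b) for a, b in zip(v_expert, v_learned))

# Identical policies give zero error; diverging values accumulate in the L1 norm:
err = ile([1.0, 2.0, 0.5], [1.0, 1.5, 0.0])
```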
DEFINITION 5 (STOPPING CRITERION #2). Terminate the sessions of I2RL when ILE(π*_E, π_E^{i−1}) − ILE(π*_E, π_E^i) ≤ ε, where ε is a very small positive error and is given. Here, π_E^i is the optimal policy obtained from solving the expert's MDP with the reward function R̂_E^i learned in session i.
Utilizing Defs. 3, 4, and 5, we formally define I2RL next.
DEFINITION 6 (I2RL). Incremental IRL (I2RL) is a sequence of learning sessions
{ζ_1(MDP_{/R_E}, X_1, R̂_E^0), ζ_2(MDP_{/R_E}, X_2, R̂_E^1), ζ_3(MDP_{/R_E}, X_3, R̂_E^2), ...},
which continue infinitely, or until either stopping criterion #1 or #2 is met, depending on which one is chosen a priori.
Methods
Our goal is to facilitate a portfolio of methods each with its own appealing properties under the framework of I2RL. This would enable online IRL in various applications. An early method for online IRL [10] modifies Ng and Russell's linear program [11] to take as input a single trajectory (instead of a policy) and replaces the linear program with an incremental update of the reward function. We may easily present this method within the framework of I2RL. A session of this method
ζ i (M DP /R E , X i ,R i−1 E )
is realized as follows: Each X i is a single state-action pair s, a ; the initial reward function is R̂ 0 E = 1/√|S E | (component-wise), whereas for i > 0, R̂ i E = R̂ i−1 E + α · v i , where v i is the difference in expected value of the observed action a at state s and the (predicted) optimal action obtained by solving the MDP with the reward function R̂ i−1 E , and α is the learning rate. While no explicit stopping criterion is specified, the incremental method terminates when it runs out of observed state-action pairs. Jin et al. [10] provide the algorithm for this method as well as error bounds.
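A sketch of this session update (our own minimal rendering; the value difference v_i is assumed precomputed by solving the MDP under the current estimate, which we do not reproduce here):

```python
import numpy as np

def initial_reward(n_states):
    """R^0_E: each component set to 1/sqrt(|S_E|), so the vector has unit L2 norm."""
    return np.full(n_states, 1.0 / np.sqrt(n_states))

def session_update(r_hat, v_i, alpha=0.1):
    """One session of the online update of Jin et al. [10]:
    R^i_E = R^{i-1}_E + alpha * v_i, where v_i is the value difference between
    the observed action and the predicted optimal action (supplied by the caller)."""
    return r_hat + alpha * np.asarray(v_i)
```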
A recent method by Rhinehart and Kitani [14] performs online IRL for activity forecasting. Casting this method to the framework of I2RL, a session of this method is ζ i (M DP /R E , X i , θ i−1 ), which yields θ i . Input observations for the session, X i , are all the activity trajectories observed since the end of previous goal until the next goal is reached. The session IRL finds the reward weights θ i that minimize the margin φ π * E −φ using gradient descent. Here, the expert's policy π * E is obtained by using soft value iteration for solving the complete MDP that includes a reward function estimate obtained using previous weights θ i−1 . No explicit stopping criterion is utilized for the online learning thereby emphasizing its continuity.
Incremental Latent MaxEnt
We present a new method for online IRL under the I2RL framework, which modifies the latent maximum entropy (LME) optimization of Section 2.2. It offers the capability to perform online IRL in contexts where portions of the observed trajectory may be occluded.
For differentiation, we refer to the original method as the batch version. Recall the k th feature expectation of the expert computed in Eq. 2 as part of the E-step. If X i = Y i ∪ Z i is a trajectory in session i composed of the observed portion Y i and the hidden portion Z i ,φ Z| Y ,1:i k is the expectation computed using all trajectories obtained so far, we may rewrite Eq. 2 for feature k as:
$$
\begin{aligned}
\hat{\phi}^{Z|Y,1:i}_{k} \;\triangleq\; & \frac{1}{|Y_{1:i}|}\sum_{Y \in Y_{1:i}}\sum_{Z \in \mathbb{Z}} \Pr(Z \mid Y;\theta)\sum_{t=1}^{T}\gamma^{t}\,\phi_{k}(\langle s,a\rangle_{t}) \\
=\; & \frac{1}{|Y_{1:i}|}\left(\sum_{Y \in Y_{1:i-1}}\sum_{Z \in \mathbb{Z}} \Pr(Z \mid Y;\theta)\sum_{t=1}^{T}\gamma^{t}\,\phi_{k}(\langle s,a\rangle_{t}) + \sum_{Y \in Y_{i}}\sum_{Z \in \mathbb{Z}} \Pr(Z \mid Y;\theta)\sum_{t=1}^{T}\gamma^{t}\,\phi_{k}(\langle s,a\rangle_{t})\right) \\
=\; & \frac{1}{|Y_{1:i}|}\left(|Y_{1:i-1}|\,\hat{\phi}^{Z|Y,1:i-1}_{k} + |Y_{i}|\,\hat{\phi}^{Z|Y,i}_{k}\right) \qquad \text{(by definition)} \\
=\; & \frac{|Y_{1:i-1}|}{|Y_{1:i-1}|+|Y_{i}|}\,\hat{\phi}^{Z|Y,1:i-1}_{k} + \frac{|Y_{i}|}{|Y_{1:i-1}|+|Y_{i}|}\,\hat{\phi}^{Z|Y,i}_{k} \qquad (4)
\end{aligned}
$$
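Equation (4) is a sample-size-weighted running average, which is what makes the expert's feature expectation a sufficient statistic across sessions. A minimal sketch (function and argument names are ours):

```python
def merge_feature_expectations(phi_prev, n_prev, phi_sess, n_sess):
    """Combine the feature expectation accumulated over n_prev earlier
    trajectories with the current session's estimate over n_sess trajectories,
    exactly as in Eq. (4): a weighted average by trajectory counts."""
    total = n_prev + n_sess
    return (n_prev / total) * phi_prev + (n_sess / total) * phi_sess
```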
A session of our incremental LME takes as input the expert's MDP sans the reward function, the current session's trajectories, the number of trajectories observed until the start of this session, the expert's empirical feature expectation and reward weights from previous session. More concisely, each session is denoted by,
ζ i (M DP /R E , Y i , |Y 1:i−1 |,φ Z| Y ,1:i−1 , θ i−1 )
. In each session, the feature expectations using that session's observed trajectories are computed, and the output feature expectations are obtained by including these as shown above in Eq. 4. Of course, each session may involve several iterations of the E-and M-steps until the converged reward weights θ i is obtained thereby giving the corresponding reward function estimate. Here, the expert's feature expectation is a sufficient statistic to compute the reward function. We refer to this method as LME I2RL.
Wang et al. [17] shows that if the distribution over the trajectories in (3) is log linear, then the reward function that maximizes the entropy of the trajectory distribution also maximizes the log likelihood of the observed portions of the trajectories. Given this linkage with log likelihood, the stopping criterion #1 as given in Def. 4 is utilized. In other words, the sessions will terminate when,
|LL(θ i |Y 1:i ) − LL(θ i−1 |Y 1:i−1 )| ≤ ε,
where θ i fully parameterizes the reward function estimate from the i th session and ε is a given acceptable difference. The algorithm for this method is presented in the supplementary file.
Incremental LME admits some significant convergence guarantees with a confidence of meeting the desired error-specification on the demonstration likelihood. We state these results with discussions but defer the detailed proofs to the supplementary file. LEMMA 1 (MONOTONICITY). Incremental LME increases the demonstration likelihood monotonically with each new session, LL(θ i |Y 1:i ) − LL(θ i−1 |Y 1:i−1 ) ≥ 0, when |Y 1:i−1 | ≫ |Y i |, yielding a feasible solution to I2RL with missing training data.
While Lemma 1 suggests that the log likelihood of the demonstration can only improve from session to session after a significant amount of training data has been accumulated, a stronger result illuminates the confidence with which it approaches, over a series of sessions, the log likelihood of the expert's true weights θ E . To establish this convergence result, we first focus on the full observability setting. For a desired bound ε on the log-likelihood loss (difference in likelihood w.r.t expert's θ E and that w.r.t learned θ i ) for session i in this setting, the confidence is bounded as follows: THEOREM 1 (CONFIDENCE FOR INCREMENTAL MAX-ENTROPY IRL). Given X 1:i as the (fully observed) demonstration till session i, θ E ∈ [0, 1] K is the expert's weights, and θ i is the solution of session i for incremental max-entropy IRL, we have
LL(θ E |X 1:i ) − LL(θ i |X 1:i ) ≤ ε with probability at least 1 − δ, where δ = 2K exp{−|X 1:i | ε 2 (1 − γ) 2 /(2K 2 )}.
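To make Theorem 1's bound concrete, the following sketch (ours) evaluates δ for a given demonstration size, and inverts the expression for the smallest demonstration size achieving a target failure probability:

```python
import math

def failure_prob(n_pairs, eps, gamma, K):
    """delta in Theorem 1 for a fully observed demonstration of n_pairs state-action pairs:
    delta = 2K * exp(-n * eps^2 * (1 - gamma)^2 / (2 K^2))."""
    return 2 * K * math.exp(-n_pairs * eps**2 * (1 - gamma)**2 / (2 * K**2))

def pairs_needed(eps, gamma, K, delta):
    """Smallest |X_{1:i}| whose failure probability is at most delta
    (the bound above solved for n_pairs)."""
    return math.ceil(2 * K**2 * math.log(2 * K / delta) / (eps**2 * (1 - gamma)**2))
```

Note how the required demonstration size grows as the discount γ approaches 1 or as the number of features K grows.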
As a step toward relaxing the assumption of full observability made above, we first consider the error in approximating the feature expectations of the unobserved portions of the data, accumulated from the first to the current session of I2RL. Notice thatφ Z| Y ,1:i k given by Eq. 4, is an approximation of the full-observability expectationφ 1:i k , estimated by sampling the hidden Z from P r(Z|Y, θ i−1 ) [6]. The following lemma relates the error due to this sampling-based approximation, i.e., φ1:i k −φ Z| Y ,1:i k , to the difference between feature expectations for learned policy and that estimated for the expert's true policy. LEMMA 2 (CONSTRAINT BOUNDS FOR INCREMENTAL LME ). Suppose X 1:i has portions of trajectories in Z 1:i = {Z|(Y, Z) ∈ X 1:i } occluded from the learner. Let ε sampling be a bound on the
error || φ 1:i k − φ Z|Y,1:i k || 1
, k ∈ {1, 2 . . . K} after N samples for approximation. Then, with probability at least 1 − (δ + δ sampling ), the following holds:
|| E X [φ k ] − φ Z|Y,1:i k || 1 ≤ ε/(2K) + ε sampling , k ∈ {1, 2 . . . K}
where ε, δ are as defined in Theorem 1, and δ sampling = 2K exp(−2(1 − γ) 2 (ε sampling ) 2 N ).
Theorem 1 combined with Lemma 2 now allows us to completely relax the assumption of full observability as follows: THEOREM 2 (CONFIDENCE FOR INCREMENTAL LME). Let Y 1:i = {Y |(Y, Z) ∈ X 1:i } be the observed portions of the demonstration until session i, ε, ε sampling are inputs as defined in Lemma 2, and θ i is the solution of session i for incremental LME, then we have
LL(θ E |Y 1:i ) − LL(θ i |Y 1:i ) ≤ ε latent
with a confidence at least 1 − δ latent , where ε latent = ε + 2Kε sampling , and δ latent = δ + δ sampling .
Given required inputs and fixed ε latent , Theorem 2 computes a confidence 1 − δ latent . The amount of required demonstration, |Y 1:i |, can be decided based on the desired magnitude of the confidence.
Experiments
We evaluate the benefit of online IRL on the perimeter patrol domain, introduced by Bogert and Doshi [4], and simulated in ROS using data and files made publicly available. It involves a robotic learner observing two guards continuously patrol a hallway as shown in Fig. 1(left). The learner is tasked with reaching the cell marked 'X' (Fig. 1(right)) without being spotted by any of the patrollers. Each guard can see up to 3 grid cells in front. This domain differs from the usual applications of IRL toward imitation learning. In particular, the learner must solve its own distinct decision-making problem (modeled as another MDP) that is reliant on knowing how the guards patrol, which can be estimated from inferring each guard's preferences. The guards utilized two types of binary state-action features leading to a total of six: does the current action in the current state make the guard change its grid cell? And, is the robot's current cell (x, y) in a given region of the grid, which is divided into 5 regions? The true weight vector θ E for these feature functions is .57, 0, 0, 0, .43, 0 .
Figure 1: The map and corresponding MDP state space for each patroller [4]. Shaded squares are the turn-around states and the red 'X' is L's goal state. L is unaware of where each patroller turns around or their navigation capabilities.
Notice that the learner's vantage point limits its observability. Therefore, this domain requires IRL under occlusion. Among the methods discussed in Section 2, LME allows IRL when portions of the trajectory are hidden, as mentioned previously.
To establish the benefit of I2RL, we compare the performances of both batch and incremental variants of this method.
Efficacy of the methods was compared using the following metrics: learned behavior accuracy (LBA), which is the percentage of overlap between the learned policy's actions and demonstrated policy's actions; ILE, which was defined previously in Section 3.1; and success rate, which is the percentage of runs where L reaches the goal state undetected. Note that when the learned behavior accuracy is high, we expect ILE to be low. However, as MDPs admit multiple optimal policies, a low ILE need not translate into a high behavior accuracy. As such, these two metrics are not strictly correlated.
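The LBA metric described above can be computed as a simple overlap percentage; a sketch with our own naming, representing each policy as a state-to-action mapping:

```python
def learned_behavior_accuracy(learned_policy, expert_policy):
    """Percentage of states on which the learned policy picks the same
    action as the demonstrated (expert) policy."""
    states = list(expert_policy)
    matches = sum(learned_policy[s] == expert_policy[s] for s in states)
    return 100.0 * matches / len(states)
```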
We report the LBA, ILE, and the time duration in seconds of the inverse learning for both batch and incremental LME in Figs. 2(a) and 2(b); the former under a 30% degree of observability and the latter under 70%. Each data point is averaged over 100 trials for a fixed degree of observability and a fixed number of state-action pairs in the demonstration X . While the entire demonstration is given as input to the batch variant, the X i for each session has one trajectory. As such, the incremental learning stops when there are no more trajectories remaining to be processed.

Figure 2(c): Success rates and timeouts under 30%, 70%, and full observability. The success rate obtained by a random baseline is shown as well; this baseline does not perform IRL and picks a random time to start moving to the goal state.

To better understand any differentiations in performance, we introduce a third variant that implements each session as ζ i (M DP /R E , Y i , |Y 1:i−1 |, φ Z| Y ,1:i−1 ). Notice that this incremental variant does not utilize the previous session's reward weights, instead initializing them randomly in each session; we label it as incremental LME with random weights.
As the size of demonstration increases, both batch and incremental variants exhibit similar quality of learning although initially the incremental performs slightly worse. Importantly, incremental LME achieves these learning accuracies in significantly less time compared to batch, with the speed up ratio increasing to four as |X | grows.
Is there a benefit due to the reduced learning time? We show the success rates of the learner when each of the three methods are utilized for IRL in Fig. 2(c). Incremental LME begins to demonstrate comparatively better success rates under 30% observability itself, which further improves when the observability is at 70%. While the batch LME's success rate does not exceed 40%, the incremental variant succeeds in reaching the goal location undetected in about 65% of the runs under full observability (the last data point). A deeper analysis to understand these differences reveals that batch LME suffers from a large percentage of timeouts -more than 50% for low observability, which drops down to 10% for full observability. A timeout occurs when IRL fails to converge to a reward estimate in the given amount of time for each run. On the other hand, incremental LME suffers from very few timeouts. Of course, other factors play a role in success as well.
Concluding Remarks
This paper makes an important contribution toward the nascent problem of online IRL by offering the first formal framework, I2RL, to help analyze the class of methods for online IRL. We presented a new method within the I2RL framework that generalizes recent advances in maximum entropy IRL to online settings. Our comprehensive experiments show that the new I2RL method indeed improves over the previous state-of-the-art in time-limited domains, by approximately reproducing its accuracy but in significantly less time. In particular, we have shown that given practical constraints on computation time for an online IRL application, the new method suffers fewer timeouts and is thus able to solve the problem with a higher success rate. In addition to experimental validation, we have also established key theoretical properties of the new method, ensuring the desired monotonic progress within a pre-computable confidence of convergence. Future avenues for investigation include understanding how I2RL can address some of the challenges related to scalability to a larger number of experts as well as the challenge of accommodating unknown dynamics of the experts.
DEFINITION 4 (STOPPING CRITERION #1). Terminate the sessions of I2RL when |LL(θ i |Y 1:i ) − LL(θ i−1 |Y 1:i−1 )| ≤ ε, where ε is a given acceptable difference.

Figure 2: Various metrics for comparing the performances of batch and incremental LME on Bogert and Doshi's [4] perimeter patrolling domain. (a) Learned behavior accuracy, ILE, and learning duration under a 30% degree of observability. (b) Learned behavior accuracy, ILE, and learning duration under a 70% degree of observability. Axes show observability (%) and timeout threshold (secs). Our experiments were run on a Ubuntu 16 LTS system with an Intel i5 2.8GHz CPU core and 8GB RAM.
Repeated trajectories in a demonstration can usually be excluded for many methods without impacting the learning.
References

[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Twenty-First International Conference on Machine Learning (ICML), pages 1-8, 2004.
[2] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.
[3] M. Babes-Vroman, V. Marivate, K. Subramanian, and M. Littman. Apprenticeship learning about multiple intentions. In 28th International Conference on Machine Learning (ICML), pages 897-904, 2011.
[4] K. Bogert and P. Doshi. Multi-robot inverse reinforcement learning under occlusion with state transition estimation. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 1837-1838, 2015.
[5] K. Bogert and P. Doshi. Toward estimating others' transition models under occlusion for multi-robot IRL. In 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 1867-1873, 2015.
[6] K. Bogert, J. F.-S. Lin, P. Doshi, and D. Kulic. Expectation-maximization for inverse reinforcement learning with hidden data. In 2016 International Conference on Autonomous Agents and Multiagent Systems, pages 1034-1042, 2016.
[7] A. Boularias, O. Krömer, and J. Peters. Structured apprenticeship learning. In European Conference on Machine Learning and Knowledge Discovery in Databases, Part II, pages 227-242, 2012.
[8] J. Choi and K.-E. Kim. Inverse reinforcement learning in partially observable environments. Journal of Machine Learning Research, 12:691-730, 2011.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39:1-38, 1977.
[10] Z.-J. Jin, H. Qian, S.-Y. Chen, and M.-L. Zhu. Convergence analysis of an incremental approach to online inverse reinforcement learning. Journal of Zhejiang University - Science C, 12(1):17-24, 2010.
[11] A. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Seventeenth International Conference on Machine Learning, pages 663-670, 2000.
[12] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, and J. Peters. An algorithmic perspective on imitation learning. Foundations and Trends in Robotics, 7(1-2):1-179, 2018.
[13] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 2586-2591, 2007.
[14] N. Rhinehart and K. M. Kitani. First-person activity forecasting with online inverse reinforcement learning. In International Conference on Computer Vision (ICCV), 2017.
[15] S. Russell. Learning agents for uncertain environments (extended abstract). In Eleventh Annual Conference on Computational Learning Theory, pages 101-103, 1998.
[16] J. Steinhardt and P. Liang. Adaptivity and optimism: An improved exponentiated gradient algorithm. In 31st International Conference on Machine Learning, pages 1593-1601, 2014.
[17] S. Wang, R. Rosenfeld, Y. Zhao, and D. Schuurmans. The latent maximum entropy principle. In IEEE International Symposium on Information Theory, 2002.
[18] S. Wang, D. Schuurmans, and Y. Zhao. The latent maximum entropy principle. ACM Transactions on Knowledge Discovery from Data, 6(8), 2012.
[19] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In 23rd National Conference on Artificial Intelligence - Volume 3, pages 1433-1438, 2008.
Particle filter efficiency under limited communication

Deborshee Sen
Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK

arXiv:1904.09623; doi: 10.1093/biomet/asac015

Keywords: α-sequential Monte Carlo; bootstrap particle filter; central limit theorem; distributed algorithms; mixing; stability

Abstract. Sequential Monte Carlo methods are typically not straightforward to implement on parallel architectures. This is because standard resampling schemes involve communication between all particles. The α-sequential Monte Carlo method was proposed recently as a potential solution to this which limits communication between particles. This limited communication is controlled through a sequence of stochastic matrices known as α-matrices. We study the influence of the communication structure on the convergence and stability properties of the resulting algorithms. In particular, we quantitatively show that the mixing properties of the α-matrices play an important role in the stability properties of the algorithm. Moreover, we prove that one can ensure good mixing properties by using randomized communication structures where each particle only communicates with a few neighboring particles. The resulting algorithms converge at the usual Monte Carlo rate. This leads to efficient versions of distributed sequential Monte Carlo.
Introduction
Hidden Markov models (Rabiner and Juang, 1986), also known as state-space models (Durbin and Koopman, 2012), constitute a large class of numerical methods frequently used in statistics and signal processing. Examples of application areas include ecology (Michelot et al., 2016), finance (Nystrup et al., 2017), medical physics (Ingle et al., 2015), natural language processing (Kang et al., 2018), oceanology (Grecian et al., 2018), and sociology (Qiao et al., 2017). A hidden Markov model with measurable state space (X, X ) and observation space (Y, Y) is a process {(X t , Y t )} t≥0 , where {X t } t≥0 is a Markov chain on X, and each observation Y t , valued in Y, is conditionally independent of the rest of the process given X t . Let π 0 and {K t } t≥1 be respectively a probability distribution and a sequence of Markov kernels on (X, X ), and let {g t } t≥0 be a sequence of Markov kernels acting from (X, X ) to (Y, Y), with g t (x, ·) admitting a strictly positive density -denoted similarly by g t (x, y) -with respect to some dominating σ-finite measure for every t ≥ 0, which we shall assume to be the Lebesgue measure for convenience. The hidden Markov model specified by π 0 , {K t } t≥1 and {g t } t≥0 is
X 0 ∼ π 0 (·), X t | (X t−1 = x t−1 ) ∼ K t (x t−1 , ·) (t ≥ 1), Y t | (X t = x t ) ∼ g t (x t , ·) (t ≥ 0).(1)
In the sequel, we fix a sequence of observations y = {y t } t≥0 and use g t (x) to denote g t (x, y t ) for t ≥ 0. The functions {g t (·)} t≥0 are known as potential functions and the kernels {K t } t≥1 are known as latent transition kernels. Let M(X) and P(X) denote the set of measures and probability measures on (X, X ), respectively, and let B(X) denote the set of all real-valued measurable functions on (X, X ) which are bounded by one in absolute value. For a measure π ∈ M(X) and a function ϕ ∈ B(X), we define π(ϕ) = X ϕ(x)π(dx), and for a Markov kernel K on (X, X ), we define Kϕ(x) = X ϕ(x )K(x, dx ). We use the notation Y s:t for s ≤ t to denote (Y s , . . . , Y t ).
We focus our attention on the predictive distribution in this article, which is the distribution of X T | Y 0:(T −1) for T ≥ 1. The analysis developed can be straightforwardly extended to the filtering distribution, which is the distribution of X T | Y 0:T . We denote the predictive distribution by π T {X T | Y 0:(T −1) } for T ≥ 1. Integrals of functions ϕ ∈ B(X) with respect to the predictive distribution can be written as
π T (ϕ) = 1 Z T X T +1 π 0 (dx 0 ) T t=1 K t (x t−1 , dx t ) T −1 t=0 g t (x t ) ϕ(x T ),(2)
where Z T is the normalisation constant, which is the marginal likelihood of the observations Y 0:(T −1) given by Z T = X T +1 π 0 (dx 0 ) T t=1 K t (x t−1 , dx t ) T −1 t=0 g t (x t ); we also define Z 0 = 1. For the purpose of analysis, it is useful to also consider the unnormalised measure γ T defined as

γ T (ϕ) = Z T × π T (ϕ). (3)
Unfortunately, these integrals cannot be evaluated analytically except for linear Gaussian models, and Monte Carlo methods must be used instead. The bootstrap particle filter algorithm (Gordon et al., 1993) is commonly used for inference in hidden Markov models. It starts by generating N ≥ 1 independent and identically distributed samples, termed particles, X 0 = {X i 0 } N i=1 from the distribution π 0 . Given particles X t−1 = {X i t−1 } N i=1 , it performs multinomial resampling according to (unnormalised) weights {g t−1 (X i t−1 )} N i=1 , before propagating the particles via the Markov kernel K t . At each time t ≥ 0, the bootstrap particle filter provides a particle approximation of the predictive distribution π t and the normalisation constant Z t .
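As a reference point for the algorithms that follow, a bootstrap particle filter for a toy Gaussian random-walk model might look as follows (the model, function names, and parameters are ours, for illustration only):

```python
import numpy as np

def bootstrap_pf(obs, n_particles, rng, sigma_x=1.0, sigma_y=1.0):
    """Bootstrap particle filter for X_t = X_{t-1} + N(0, sigma_x^2),
    Y_t = X_t + N(0, sigma_y^2), X_0 ~ N(0, sigma_x^2).
    Returns predictive-mean estimates and the log normalising-constant estimate."""
    x = rng.normal(0.0, sigma_x, n_particles)              # particles from pi_0
    log_z, pred_means = 0.0, []
    for y_t in obs:
        pred_means.append(x.mean())                        # estimate of the predictive mean
        w = np.exp(-0.5 * (y_t - x) ** 2 / sigma_y ** 2)   # potentials g_t(x)
        log_z += np.log(w.mean())                          # update normalising-constant estimate
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())  # multinomial resampling
        x = x[idx] + rng.normal(0.0, sigma_x, n_particles)         # propagate via K_t
    return np.array(pred_means), log_z
```

The resampling line is the step that requires communication between all N particles; limiting it is exactly what the α-sequential Monte Carlo algorithm of the next section addresses.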
Parallel and distributed algorithms have become increasingly relevant as parallel computing architectures have become the norm rather than the exception. While there has been significant research devoted to distributed Markov chain Monte Carlo algorithms (Ahn et al., 2014; Scott et al., 2016; Li et al., 2017; Heng and Jacob, 2019; Ou et al., 2021), the same has generally not been true for particle filtering. The resampling step of particle filters makes it difficult to parallelize. Two parallel implementations of the resampling step were proposed by Bolic et al. (2005), and alternative schemes were investigated by Miao et al. (2011); Murray (2012); Murray et al. (2016). Vergé et al. (2015) provided algorithms involving resampling at two hierarchical levels, and Del Moral et al. (2017) proved convergence and a central limit theorem. Míguez (2014); Míguez and Vázquez (2016) provided proofs of convergence for distributed particle filters relying on techniques developed in Bolic et al. (2005); however, these assumed that a certain notion of weight degeneracy does not occur. We prove in this article that weight degeneracy can be avoided by suitably choosing network architectures of distributed particle filters. Heine et al. (2020) designed stable-in-time distributed sequential Monte Carlo algorithms with limited interactions; however, these converge at a slower rate than the standard Monte Carlo rate.
The α-sequential Monte Carlo algorithm was proposed recently as a general method for distributed sequential Monte Carlo. This is a generalisation of the bootstrap particle filter that can be implemented on parallel architectures. This is achieved by allowing particles to interact with only a small subset of other particles in the resampling step, and is formalized through a sequence of stochastic matrices. These are referred to as connectivity matrices in the sequel since they describe how particles are connected to each other. It has been shown that certain "local exchange" communication structures do not lead to stable algorithms (Heine and Whiteley, 2017), and sophisticated adaptive mechanisms have been designed for ensuring stability (Heine et al., 2020). However, a general understanding of the influence of the communication structure on the stability properties of the algorithm is lacking.
In this article, we relate the stability properties of the α-sequential Monte Carlo algorithm to the connectivity and mixing properties of the communication structures described by the connectivity matrices. In particular, we show that it is possible to design α-sequential Monte Carlo algorithms with time-uniform convergence at the standard Monte Carlo rate of N −1/2 without the degree of the interaction graph growing with the number of particles N . Computer code for numerical experiments in this article can be found online at https://github.com/deborsheesen/alphaSMC.
2 α-Sequential Monte Carlo
Algorithm description
The α-sequential Monte Carlo algorithm with N ≥ 1 particles relies on a sequence of (possibly random) matrices {α t } t≥0 , where each α t = (α ij t ) N i,j=1 ∈ R N,N is a stochastic matrix; for any time index t ≥ 0 and particle index i = 1, . . . , N , we have N j=1 α ij t = 1. The α-sequential Monte Carlo algorithm simulates a sequence {X t ; t ≥ 0}, where, for each time index t ≥ 0, we have X t = {X i t ; i = 1, . . . , N }, and X i t ∈ X is the location of the i-th particle at time index t ≥ 0. The particle approximation π N t of π t produced by the α-sequential Monte Carlo algorithm is given by
π N t = Σ N i=1 W̄ i t δ X i t , where W̄ t = (W̄ 1 t , . . . , W̄ N t ) ∈ P(N ) denotes the vector of normalised weights, with P(N ) = {x ∈ R N + : Σ N i=1 x i = 1} being the N -dimensional probability simplex. We have also defined W̄ i t = W i t /( Σ N j=1 W j t ) as the normalised weights. The unnormalised weights W t = (W 1 t , . . . , W N t ) ∈ R N + are recursively defined as follows. At time index t = 0, the weights are all initialized to one, that is, W i 0 = 1 (i = 1, . . . , N ). For t ≥ 1, the weights are recursively defined as
W i t = N j=1 α ij t−1 W j t−1 g t−1 (X j t−1 ) (i = 1, . . . , N ).(4)
The α-sequential Monte Carlo algorithm also produces a particle approximation of the unnormalised measure γ t and the normalisation constant Z t as
γ N t = (1/N ) N i=1 W i t δ X i t and Z N t = γ N t (1) = (1/N ) N i=1 W i t . The particle equivalent of equation (3) is γ N t = Z N t × π N t ,
which states that the estimate of the unnormalised measure can be decomposed into the product of estimates for the normalised measure and the normalisation constant; this is the same as for the bootstrap particle filter.
The particles are initialised as follows. At time index t = 0, particles X i 0 ∈ X are simulated as being independent and identically distributed from the initial distribution π 0 . We define F t−1 to be the σ-algebra generated by all the particles up to and including time (t − 1), that is, X 0:(t−1) , and all the connectivity matrices up to and including time (t − 1), that is, α 0:(t−1) . We also define the notations E t (·) = E(· | F t ) and var t (·) = var(· | F t ) for convenience, which are the conditional mean and variance conditioned upon the state of the system up to and including time t; these will typically be used in the context of events happening after time t. At time index t ≥ 1 and conditionally upon F t−1 , the particles
{X i t } N i=1 are simulated independently, with P( X i t ∈ dx | F t−1 ) = (1/ W i t ) Σ N j=1 α ij t−1 W j t−1 g t−1 (X j t−1 ) K t (X j t−1 , dx).
The α-sequential Monte Carlo algorithm is summarised in Algorithm 1. Throughout this text, we assume that the connectivity matrices {α t } t≥0 can all be generated at the start of the algorithm. In other words, we do not consider adaptive schemes for constructing the connectivity matrices, as for example is explored by Liu and Chen (1995); Whiteley et al. (2016); Lee and Whiteley (2016). Zhang et al. (2020) have implemented a distributed resampling technique using a message passing interface for a scheme that is similar to the local exchange scheme analysed by Heine and Whiteley (2017), and have reported computational gains from doing so.
Basic Properties
The predictive probability distributions {π t } t≥0 defined by the state-space model (2) satisfy π t = F t π t−1 , where the mapping F t : P(X) → P(X) associates to any probability measure π ∈ P(X) the probability measure F t π that acts on functions ϕ ∈ B(X) as
(F_t π)(ϕ) = π(g_{t−1} K_t ϕ) / π(g_{t−1}), t ≥ 1.
For two time indices 0 ≤ s ≤ t, set F_{s,t} = F_t ∘ · · · ∘ F_{s+1}, with the convention that F_{t,t} is the identity mapping, so that we have π_t = F_{s,t} π_s. Similarly, the unnormalised measures {γ_t}_{t≥0} satisfy γ_t(ϕ) = γ_s(Q_{s,t} ϕ), where Q_{s,t} = Q_{s+1} ∘ · · · ∘ Q_t and the operator Q_t acts on a test function ϕ ∈ B(X) as Q_t ϕ = g_{t−1} K_t ϕ, t ≥ 1.
Input: connectivity matrices {α_t}_{t≥0}, potential functions {g_t}_{t≥0}, Markov kernels {K_t}_{t≥1}, and initial distribution π_0.
1: for i = 1, . . . , N do
2:     Set W^i_0 = 1.
3:     Sample X^i_0 ∼ π_0 independently.
4: end for
5: for t ≥ 1 do
6:     for i = 1, . . . , N do
7:         Set W^i_t = Σ_{j=1}^N α^{ij}_{t−1} W^j_{t−1} g_{t−1}(X^j_{t−1}).
8:         Sample X^i_t | F_{t−1} ∼ (1/W^i_t) Σ_{j=1}^N α^{ij}_{t−1} W^j_{t−1} g_{t−1}(X^j_{t−1}) K_t(X^j_{t−1}, ·) independently.
9:     end for
10: end for
Output: weighted particle system {(X^i_t, W^i_t) : i = 1, . . . , N, t ≥ 1}.
Algorithm 1: The α-sequential Monte Carlo algorithm.
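As a concrete illustration, Algorithm 1 can be sketched in a few lines of vectorised Python. This is a minimal sketch, not the authors' implementation: the function names, the inverse-CDF ancestor sampling, and the constant-potential toy model in the usage example are all our own choices.

```python
import numpy as np

def alpha_smc(alpha, g, K_sample, pi0_sample, T, N, rng):
    """Minimal sketch of Algorithm 1 (alpha-sequential Monte Carlo).

    alpha(t)       -> (N, N) row-stochastic connectivity matrix alpha_t
    g(t, X)        -> potential g_t evaluated at the N particles in X
    K_sample(t, X) -> one draw per particle from K_t(X[j], .)
    pi0_sample(n)  -> n i.i.d. draws from pi_0
    Returns the final particles, unnormalised weights, and the estimates
    Z_t^N = (1/N) sum_i W_t^i for t = 1, ..., T.
    """
    X = pi0_sample(N)                         # X_0^i ~ pi_0
    W = np.ones(N)                            # W_0^i = 1
    Z_hat = []
    for t in range(1, T + 1):
        a = alpha(t - 1)
        wg = W * g(t - 1, X)                  # W_{t-1}^j g_{t-1}(X_{t-1}^j)
        W = a @ wg                            # weight recursion, equation (4)
        # particle i picks ancestor j with probability a[i,j] * wg[j] / W[i],
        # then moves through the kernel K_t (inverse-CDF sampling per row)
        probs = (a * wg[np.newaxis, :]) / W[:, np.newaxis]
        cum = np.cumsum(probs, axis=1)
        anc = np.minimum((cum < rng.random((N, 1))).sum(axis=1), N - 1)
        X = K_sample(t, X[anc])
        Z_hat.append(W.mean())                # estimate of Z_t
    return X, W, np.array(Z_hat)

# Toy usage (our own sanity check): with a constant potential g_t = c and any
# row-stochastic alpha, equation (4) gives W_t^i = c^t exactly, so Z_t^N = c^t.
rng = np.random.default_rng(1)
N, T, c = 64, 3, 0.5
X, W, Z_hat = alpha_smc(
    alpha=lambda t: np.full((N, N), 1.0 / N),          # bootstrap connectivity
    g=lambda t, x: np.full(x.shape[0], c),
    K_sample=lambda t, x: x + rng.standard_normal(x.shape[0]),
    pi0_sample=lambda n: rng.standard_normal(n),
    T=T, N=N, rng=rng,
)
```

With a constant potential, the recovered Z_t^N = c^t is deterministic whatever the connectivity matrix, which makes this a convenient check that the weight recursion is wired correctly.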
As noted in Whiteley et al. (2016), suppose that the connectivity matrices {α_t}_{t≥0} keep (almost surely) the uniform distribution on {1, . . . , N} invariant, that is, 1α_t = 1, where 1 = (1, . . . , 1) ∈ R^N is the N-dimensional vector of ones; then the estimates γ^N_t and Z^N_t are unbiased. Indeed, the definition (4) of the weights shows that for any test function ϕ ∈ B(X),
E_{t−1}{γ^N_t(ϕ)} = (1/N) Σ_{i=1}^N E_{t−1}{W^i_t ϕ(X^i_t)} = (1/N) Σ_{j=1}^N W^j_{t−1} Q_t ϕ(X^j_{t−1}) = γ^N_{t−1}(Q_t ϕ).    (5)
Consequently, iterating equation (5) shows that the particle approximation γ N t (ϕ) is unbiased:
E{γ^N_t(ϕ)} = E{γ^N_0(Q_{0,t} ϕ)} = γ_0(Q_{0,t} ϕ) = γ_t(ϕ). Since Z^N_t = γ^N_t(1), it also follows that E(Z^N_t) = Z_t.
This lack-of-bias property allows the α-sequential Monte Carlo approach to be straightforwardly leveraged within other Monte Carlo schemes such as the pseudo-marginal Monte Carlo approach (Andrieu and Roberts, 2009), particle Markov chain Monte Carlo methods (Andrieu et al., 2010), and advanced sequential Monte Carlo methods (Chopin et al., 2013).
3 Time-uniform stability of α-sequential Monte Carlo
3.1 Mixing of connectivity matrices
In this section, we assume that there exists a fixed bi-stochastic matrix α ∈ R N,N + such that α t = α for all t ≥ 0. Under the assumption that the uniform distribution on {1, . . . , N } is the unique invariant distribution of α, we relate the stability properties of the α-sequential Monte Carlo algorithm to the mixing properties of the connectivity matrix α. We define the mixing constant λ(α) of the connectivity matrix α as
λ(α) = sup_{v ∈ B_0^1} ‖αv‖ < 1,    (6)

where ‖·‖ denotes the Euclidean norm and B_0^1 = {v ∈ R^N : ‖v‖ = 1 and ⟨v, 1⟩ = 0} is the compact set of unit vectors that are orthogonal to the vector 1. The quantity λ(α) ≥ 0 is the smallest constant such that for any vector v ∈ R^N, we have

‖α^k v − (Σ_{i=1}^N v_i/N) 1‖ ≤ λ(α)^k ‖v − (Σ_{i=1}^N v_i/N) 1‖.    (7)
If the Markov transition matrix α is reversible with respect to the uniform distribution on {1, . . . , N }, that is, α is symmetric, the quantity λ(α) equals the absolute value of the second largest (in absolute value) eigenvalue of α: λ(α) = max k∈{2,...,N } |λ k |, where 1 = λ 1 ≥ · · · ≥ λ N > −1 is the spectrum of α. In other words, in the reversible case, λ(α) can also be expressed as one minus the absolute spectral gap of the matrix α. In the case where w ∈ R N is a probability vector, that is, w ∈ P(N ), equation (7) can be reformulated as
‖αw‖² ≤ {1 − λ²(α)}/N + λ²(α) ‖w‖².    (8)
This is the key inequality that we will use to establish the stability properties of the α-sequential Monte Carlo algorithm.
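To make the mixing constant concrete, here is a small numerical illustration (our own construction, not from the paper): for a symmetric bi-stochastic matrix, λ(α) is the largest eigenvalue magnitude after discarding the Perron eigenvalue 1, so it can be computed directly. The fully connected bootstrap matrix 11ᵀ/N has λ(α) = 0, a circulant local-exchange matrix mixes poorly, and inequality (8) can be checked on a random probability vector.

```python
import numpy as np

def mixing_constant(alpha):
    """lambda(alpha) of equation (6) for a symmetric bi-stochastic matrix:
    the largest |eigenvalue| after discarding the Perron eigenvalue 1."""
    return np.sort(np.abs(np.linalg.eigvalsh(alpha)))[-2]

N, m = 100, 2
# bootstrap connectivity: full interaction, one step mixes completely
bootstrap = np.full((N, N), 1.0 / N)
# local-exchange connectivity: particle i averages over its 2m+1 neighbours
local = np.zeros((N, N))
for i in range(N):
    for k in range(-m, m + 1):
        local[i, (i + k) % N] = 1.0 / (2 * m + 1)

lam_boot = mixing_constant(bootstrap)    # = 0 up to round-off
lam_local = mixing_constant(local)       # close to 1: poor mixing

# numerical check of inequality (8) on a random probability vector w
rng = np.random.default_rng(0)
w = rng.random(N)
w /= w.sum()
lhs = np.sum((local @ w) ** 2)
rhs = (1 - lam_local ** 2) / N + lam_local ** 2 * np.sum(w ** 2)
```

The local-exchange matrix is the kind of poorly mixing connectivity discussed at the end of Section 5.2: its λ(α) tends to one as N grows with m fixed.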
3.2 Stability
To measure the discrepancy between two (possibly random) probability measures µ and ν, consider the norm
|||µ − ν|||² = sup{ E[{µ(ϕ) − ν(ϕ)}²] : ϕ ∈ B(X) }.
We assume in this section that the potential functions {g t } t≥0 and latent transition kernels {K t } t≥1 of the state-space model (1) are uniformly bounded in time; this is standard when studying the stability properties of particle filters (Del Moral and Guionnet, 2001;Whiteley et al., 2016). In other words, we make the following Assumption 1.
Assumption 1. There exist constants κ K > 1 and κ g > 1 such that
κ_K^{−1} ≤ K_t ≤ κ_K (t ≥ 1) and κ_g^{−1} ≤ g_t ≤ κ_g (t ≥ 0).
The main result of this section is that under Assumption 1 and as soon as the absolute spectral gap of the matrix α is large enough, the discrepancy between the particle approximation π N t and its limiting value π t can be uniformly bounded in time:
sup_{t≥0} |||π^N_t − π_t||| ≤ Cst × N^{−1/2}.
In other words, the particle approximation π N t converges to the true predictive distribution π t at the usual Monte Carlo rate, and this convergence can be controlled uniformly in time. This is formalised in Theorem 1, which is proved in Appendix A.
Theorem 1 (Uniform stability). Suppose that the state-space model (1) satisfies Assumption 1. Consider the α-sequential Monte Carlo algorithm with N particles and a constant bi-stochastic connectivity matrix α ∈ R N,N + such that
• the uniform distribution on {1, . . . , N } is the unique invariant distribution of α, and
• the mixing constant λ(α) defined in equation (6) satisfies λ(α) < κ_g^{−2}.
Then the following uniform bound for the N-particle approximations π^N_t holds:
N × |||π^N_t − π_t|||² ≤ {D/(1 − ρ)} × [κ^4_g {1 − λ²(α)}] / [1 − κ^4_g λ²(α)]    (9)

for constants D > 0, κ_g > 1 and 0 < ρ < 1 that depend only on the state-space model (1).
The bootstrap particle filter corresponds to the case where λ(α) = 0, and in that case one obtains that N × |||π^N_t − π_t|||² ≤ D κ^4_g/(1 − ρ) = C_bootstrap.
In the case of fast-mixing connectivity matrices, that is, λ(α) ≪ 1, expanding the right-hand side of equation (9) in powers of λ(α) yields that

N × |||π^N_t − π_t|||² ≤ C_bootstrap + Const × λ²(α) + O{λ⁴(α)}.    (10)
In other words, when compared to the bootstrap particle filter, the use of a connectivity matrix α with limited communication incurs a cost of leading order λ 2 (α). In Section 5.2, we discuss another situation leading to similar conclusions.
4 Randomized connectivity matrices

4.1 Setting and basic properties
We extend the analysis of the previous section to randomized connectivity structures and obtain a central limit theorem. To this end, let M N be the set of all N × N symmetric stochastic matrices, and consider a distribution υ N on M N such that
α ∼ υ_N ⟹ P α P^{−1} ∼ υ_N for any N × N permutation matrix P.    (11)
The operation P αP −1 corresponds to permuting the nodes of the graph associated with the matrix α. For α ∈ M N , let S(α) be the set of all permutations of the nodes of the graph associated with α. The distribution υ N is uniform over the set S(α) for every α. Moreover, the mixing constant (6) is the same for every matrix in the set S(α), and this is therefore a generalisation of the setting considered in Section 3. Common examples of this framework include the bootstrap particle filter (which corresponds to υ N placing mass one on the matrix 11 T /N ) and importance sampling (which corresponds to υ N placing mass one on the identity matrix). We shall consider another such setting in Section 5.1 with limited connections. We study the asymptotic behaviour of the α-sequential Monte Carlo algorithm under this setting. In order to keep the analysis simple, we assume in this section that the potential functions {g t } t≥0 are uniformly bounded: there exists a constant κ g > 0 such that, for any time index t ≥ 0 and x ∈ X, we have 0 < g t (x) ≤ κ g ; we note that this is weaker than Assumption 1. We also make the following assumption.
Assumption 2. The distribution υ_N is such that for α ∼ υ_N, E(α^{ij} α^{ik}) = O(N^{−2}) for all distinct indices i, j, k. In particular, there exists 0 ≤ c₃ < ∞ such that E(α^{ij} α^{ik}) ≤ c₃ N^{−2} for N large.
Assumption 2 is clearly satisfied for the bootstrap particle filter and for sequential importance sampling.
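A quick numerical check of this invariance (our own toy example, not from the paper): conjugating a symmetric bi-stochastic matrix α by a permutation matrix P relabels the nodes of the associated graph, preserves bi-stochasticity and symmetry, and leaves the mixing constant (6) unchanged.

```python
import numpy as np

def mixing_constant(alpha):
    # lambda(alpha) for a symmetric bi-stochastic matrix: second-largest
    # eigenvalue magnitude
    return np.sort(np.abs(np.linalg.eigvalsh(alpha)))[-2]

rng = np.random.default_rng(3)
N, m = 60, 2
# symmetric bi-stochastic base matrix (a local-exchange circulant, as an example)
alpha = np.zeros((N, N))
for i in range(N):
    for k in range(-m, m + 1):
        alpha[i, (i + k) % N] = 1.0 / (2 * m + 1)

perm = rng.permutation(N)
P = np.eye(N)[perm]              # permutation matrix; P^{-1} = P^T
alpha_perm = P @ alpha @ P.T     # P alpha P^{-1}: relabels the graph nodes
```

Since the eigenvalues are invariant under orthogonal conjugation, every matrix in the set S(α) shares the same mixing constant, as claimed in the text.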
We prove consistency and a central limit theorem for the normalised measures π N t and unnormalised measures γ N t under this setting. In the proof of the central limit theorem, we will need to consider a further sequence of unnormalised measures defined as
µ^N_t = (1/N) Σ_{i=1}^N (W^i_t)² δ_{X^i_t}. Define an operator Q̃_t that acts on a test function ϕ ∈ B(X) as Q̃_t ϕ = g²_{t−1} K_t ϕ.
We show in Section 4.2 that under Assumption 2, as N → ∞, the unnormalised measures µ N t converge to the measure µ t defined as µ 0 = π 0 , and, for a test function ϕ ∈ B(X),
µ_t(ϕ) = µ_{t−1}(Q̃_t ϕ) × lim_{N→∞} [ E{(α^{11})²} + N E{(α^{12})²} ] + Z²_t π_t(ϕ) × lim_{N→∞} [ N² E(α^{12} α^{13}) + 2N E(α^{11} α^{12}) ];    (12)
we have implicitly assumed that the limits on the right hand side of equation (12) exist. This is true for the bootstrap particle filter and sequential importance sampling, and more generally is true for the settings we consider in Section 5. Moreover, Assumption 2 and Proposition 1 of Appendix B.1 ensure that the right hand side of the previous equation is finite. We shall exploit equation (12) to study the asymptotic behaviour of α-sequential Monte Carlo with sparse connections in Section 5.2.
4.2 Consistency and central limit theorem
We first establish that the particle approximations π^N_t and γ^N_t are consistent. We next show a central limit theorem for the particle approximations π^N_t and γ^N_t, which is proved in Appendix B.3.
Theorem 3 (Central limit theorem). Assume that the potential functions satisfy 0 < g_t(x) ≤ κ_g, and suppose also that Assumption 2 holds. For any bounded test function ϕ ∈ B(X), the renormalised quantities N^{1/2}{γ^N_t(ϕ) − γ_t(ϕ)} and N^{1/2}{π^N_t(ϕ) − π_t(ϕ)} converge in law to centred Gaussian distributions with variances V^γ_t(ϕ) and V^π_t(ϕ)
, respectively, where the variances satisfy the following recursions:
V^γ_t(ϕ) = V^γ_{t−1}(Q_t ϕ) + µ_t(ϕ²) − Z²_t π_t(ϕ)²,
V^π_t(ϕ) = V^π_{t−1}(Q_t ϕ̄_t)/π_{t−1}(g_{t−1})² + µ_t(ϕ̄_t²)/Z²_t,    (13)

where ϕ̄_t = ϕ − π_t(ϕ).
Theorem 3 provides a way to quantify the trade-off (relative to the bootstrap particle filter) in using α-sequential Monte Carlo under different settings, as measured by its asymptotic variance. It is worth stressing that the terms µ_t(ϕ²) and µ_t(ϕ̄_t²) depend on the choice of the α-matrices used. We discuss this in more detail in Section 5. In particular, we consider a setting in which particles are connected to a few other particles at each time and study the effect of the number of connections on the asymptotic variances.
5 Statistical tradeoffs
5.1 Permutations of a random walk on a d-regular graph
We describe and analyse a version of α-sequential Monte Carlo with sparse connections that falls into the setting considered in Section 4.1. Consider an undirected d-regular graph G N with N vertices. Let A be the stochastic matrix corresponding to a random walk on G N . In other words, A ij = d −1 if nodes i and j have a vertex connecting them, and zero otherwise. Let P permute be the uniform distribution over the set of all permutations of {1, . . . , N }, and let υ N be a distribution over M N specified by P AP −1 for P ∼ P permute . The operation P AP −1 re-indexes A by the permutation σ of the indices. We consider the case where the graph G N corresponds to a random d-regular graph without self connections (that is, no node is connected to itself). In this case,
lim_{N→∞} E(α^{ii}) = 0, lim_{N→∞} N E{(α^{ij})²} = 1/d, lim_{N→∞} 2N E(α^{ii} α^{ij}) = 0, and lim_{N→∞} N² E(α^{ij} α^{ik}) = (d − 1)/d. By equation (12), this implies

µ_t(ϕ) = (1/d) µ_{t−1}(Q̃_t ϕ) + {(d − 1)/d} Z²_t π_t(ϕ).    (14)
This is used to analyze the asymptotic variances (13) of α-sequential Monte Carlo in the next section and compare them to those of the bootstrap particle filter.
5.2 Cost under sparse connections
We leverage the central limit theorem to analyze the influence of the number of connections d on the performance of the α-sequential Monte Carlo algorithm. Iterating equation (14) immediately shows that µ_t(ϕ) = Z²_t π_t(ϕ) + Σ_{k=1}^t β_{t,k}(ϕ)/d^k for some coefficients {β_{t,k}(ϕ)}_{k=1}^t that depend on the test function ϕ and the state-space model (1), but not on the connectivity d. It then follows from Theorem 3 that the asymptotic variance can be expanded as
V^γ_t(ϕ) = Z²_0 var_{π_0}(Q_{0,t} ϕ) + Z²_1 var_{π_1}(Q_{1,t} ϕ) + · · · + Z²_t var_{π_t}(ϕ) + Σ_{k=1}^t β̃_{t,k}(ϕ)/d^k

for some coefficients {β̃_{t,k}(ϕ)}_{k=1}^t that depend on the test function ϕ and the state-space model (1), but not on the connectivity d; here var_{π_s} denotes the variance under π_s. Not surprisingly, since the limit d → ∞ corresponds to the bootstrap particle filter, the sum of the first t + 1 terms on the right-hand side of the previous equation equals exactly the asymptotic variance obtained from a standard bootstrap particle filter (Chopin, 2004). In other words,

V^γ_t(ϕ) = V^bootstrap_t(ϕ) + Σ_{k=1}^t β̃_{t,k}(ϕ)/d^k ≈ V^bootstrap_t(ϕ) + β̃_{t,1}(ϕ)/d,    (15)
where V^bootstrap_t(ϕ) denotes the asymptotic variance of the bootstrap particle filter. It is interesting to note that, in general, the first coefficient β̃_{t,1}(ϕ) can be either positive or negative. In other words, there are situations where the estimates obtained from α-sequential Monte Carlo are statistically more efficient than those obtained from the bootstrap particle filter: β̃_{t,1}(ϕ) < 0. At a heuristic level, this may be explained as follows. When using α-sequential Monte Carlo, the propagation of information between particles is typically worse than for the bootstrap particle filter. For example, if the distribution π_t is more concentrated than the initial distribution π_0, it is typically the case that the distributional estimates obtained from α-sequential Monte Carlo with a low value of d will have thicker tails than those obtained from the bootstrap particle filter (Figure 1). In these situations, the α-sequential Monte Carlo estimates of tail events of π_t can have lower variance than those obtained from the bootstrap particle filter.
As a concrete example, one can show that when π 0 is a standard real Gaussian distribution, g 0 (x) = 0.1 + 100 × I(|x| < 0.1), ϕ(x) = I(|x| > 1), and K 1 (x, dy) = δ x (dy), the α-sequential Monte Carlo estimates of γ 1 (ϕ) = π 0 (g 0 ϕ) with d = 2 have an asymptotic variance that is roughly half as large as the one obtained from the bootstrap particle filter.
In most realistic scenarios where particle filters are routinely used (for example, tracking of partially observed and/or noisy dynamical systems), however, we have observed that α-sequential Monte Carlo estimates have a higher variance than the estimates obtained from the bootstrap particle filter. Equation (15) shows that there is typically a cost of order O(d^{−1}) additional variance for using α-sequential Monte Carlo instead of the bootstrap particle filter. This is demonstrated numerically in Figure 5 of Section 6.3.
Figure 1: Dynamics: X_{t+1} = βX_t + (1 − β²)^{1/2} ξ_t with β = 0.9 and ξ_t ∼ Normal(0, 1). Potentials: g_t(x) = 0.1 + 10 × I(|x − 2| < 0.1) for t ≥ 0. We plot the accuracy of the estimation of π_{T=6}(ϕ) with ϕ(x) = x². Red: particle densities generated by α-sequential Monte Carlo with N = 10⁴ particles. Green: particle densities generated by a bootstrap particle filter.

Connecting back to the setting considered in Section 3 (which considers a fixed α matrix), this result is in the same spirit as the bound (10), which showed that there is a cost of order λ(α)² (when controlling N × |||π^N_t − π_t|||²) when α-sequential Monte Carlo is used instead of the bootstrap particle filter. To see the connection, consider the connectivity matrix α ∈ R^{N,N}_+ to be equal to the Markov transition matrix of the random walk on an undirected graph G_N that is chosen uniformly at random among all the d-regular graphs on N vertices. Any such connectivity matrix α is bi-stochastic, so λ(α) equals one minus the absolute spectral gap of α: the Alon-Friedman theorem (Alon, 1986; Friedman, 2008) states that
λ(α) →_pr 2√(d − 1)/d as N → ∞.    (16)
In other words, for such graphs and for a fixed connectivity d ≥ 2, the mixing constant λ(α) does not deteriorate as N → ∞; this is demonstrated numerically in Section 6.1. If the connectivity matrix α were chosen this way, for large N we would observe that λ²(α) = O(d^{−1}). Theorem 1 thus shows that, under regularity assumptions on the state-space model, one does not need to increase the number of connections d with the total number of particles N in order to obtain a stable α-sequential Monte Carlo algorithm, as long as d ≥ 3 is large enough. We conjecture that a similar result holds for randomised connectivity matrices as well. Note that if G_N is the undirected graph on {1, . . . , N} where the vertex i is connected to each vertex j ∈ {i ± 1, . . . , i ± ⌊d/2⌋} mod N, the mixing constant λ(α) converges to one as N → ∞, ultimately leading to poor performance. This is a variation of the local exchange mechanism considered in Heine and Whiteley (2017), where the authors indeed show that one cannot expect such an algorithm to converge uniformly at rate N^{−1/2}.
6 Numerical examples

6.1 Spectral gap of random α-matrices
We use the random graph generation algorithm of Steger and Wormald (1999), as implemented in the NetworkX package of Python (Hagberg et al., 2008), to generate random α-matrices. This generates graphs G_N that are samples from the uniform distribution over all d-regular graphs with N nodes. The α-matrix is defined as the Markov transition matrix of the random walk on G_N. We consider different values of (d, N) and simulate 100 random α-matrices for each pair. Figure 2 shows the behaviour of the mixing constant λ(α) as a function of d and N, as well as its limiting value as N → ∞, described in equation (16).
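This experiment can be sketched as follows, assuming the NetworkX generator named in the text (the helper names are ours): generate a random d-regular graph, form the random-walk matrix α = A/d, and compare λ(α) to the Alon-Friedman limit 2√(d − 1)/d of equation (16).

```python
import numpy as np
import networkx as nx  # used for the paper's experiments (Hagberg et al., 2008)

def random_regular_alpha(d, N, seed=None):
    """alpha-matrix: transition matrix of the random walk on a uniformly
    random d-regular graph with N nodes."""
    G = nx.random_regular_graph(d, N, seed=seed)
    A = nx.to_numpy_array(G)       # adjacency matrix, each row sums to d
    return A / d

def mixing_constant(alpha):
    """lambda(alpha) for a symmetric bi-stochastic matrix: largest
    |eigenvalue| after discarding the Perron eigenvalue 1."""
    return np.sort(np.abs(np.linalg.eigvalsh(alpha)))[-2]

d, N = 5, 400
lam = mixing_constant(random_regular_alpha(d, N, seed=0))
limit = 2 * np.sqrt(d - 1) / d     # Alon-Friedman limit of equation (16)
```

For moderate N the observed λ(α) already sits close to the limiting value, which is the behaviour Figure 2 reports.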
6.2 Predictive distribution estimations
Consider the state-space-model with initial distribution π 0 = Normal(0, 1), dynamics X t+1 = βX t + 1 − β 2 ξ t with β = 0.9 and ξ t ∼ Normal(0, 1). We consider the situation where the potential functions are all equal and given by g t (x) = 0.1 + 10 × I(|x − 2| < 0.1) for t ≥ 0. We run several experiments with N = 10 4 particles for different values of the connectivity d. For each experiment, we randomly generate a d-regular graph as described in Section 6.1 and run the α-sequential Monte Carlo algorithm using this. The top panel of Figure 3 shows the performance of the α-sequential Monte Carlo algorithm for the estimation of π T =6 (ϕ) for ϕ(x) = x 2 . For a connectivity d = 50, the estimate from α-sequential Monte Carlo is roughly as accurate as the bootstrap particle filter. The bottom panel of Figure 3 shows the Wasserstein distance between the estimated predictive distributions and the true predictive distribution obtained by running an αsequential Monte Carlo algorithm for several values of the connectivity d ≥ 0; the true predictive distribution is obtained by running the bootstrap particle filter with a large number of particles.
6.3 Comparison with bootstrap particle filter
We numerically investigate the effects of using sparse d-regular networks on the stability of the α-sequential Monte Carlo algorithm. Three settings are considered.
(a) A local exchange scheme (Heine and Whiteley, 2017). (b) Generating an α matrix as described in Section 6.1 at the beginning of the algorithm and fixing it throughout. This is the setting considered in Section 3 and is referred to as 'random d-regular (no permutation)' in this section.
(c) Randomly permuting the matrix generated in (b) at each time step; this is the setting considered in Section 4 and is referred to as 'random d-regular (with permutation)' in this section.
Figure 4: Relative performance with respect to the bootstrap particle filter; the left two plots are for N = 5 × 10³. (Legend: local exchange, random d-regular without permutation, and random d-regular with permutation, each for d = 5 and d = 20; the predictive mean is shown.)

We consider a time-discretized version of the chaotic Lorenz 63 model (Lorenz, 1963). The hidden chain {X_t}_{t≥0} is three-dimensional with X_t = (X_{t,1}, X_{t,2}, X_{t,3}) and evolves as

X_{t+∆t,1} = X_{t,1} + ∆t σ(X_{t,2} − X_{t,1}) + ε_{t,1},
X_{t+∆t,2} = X_{t,2} + ∆t {X_{t,1}(ρ − X_{t,3}) − X_{t,2}} + ε_{t,2},
X_{t+∆t,3} = X_{t,3} + ∆t (X_{t,1} X_{t,2} − β X_{t,3}) + ε_{t,3},
where ∆t = 10 −3 is the time-discretization and ε t = (ε t,1 , ε t,2 , ε t,3 ) are independent and identically distributed as Normal(0, ∆tτ 2 I) for τ = 10 −1 . This model is known to be chaotic when (σ, ρ, β) = (10, 28, 8/3), and this is the setting we choose. We collect observations Y t after every δ = 10∆t units of time and assume that they are distributed as Y t | X t ∼ Normal(X t , η 2 I) for η = 5 × 10 −1 . We generate T = 10 3 observations from this model. The bootstrap particle filter with 10 6 particles is used to calculate the ground truth. We compare the relative mean square errors of the estimate to the log-likelihood and predictive mean E(X T | Y 0:(T −1) ) for the three methods; this is the ratio of the mean square error of the estimate obtained by each method to the mean square error of the estimate obtained by the bootstrap particle filter with the same number of particles. We repeat the experiments 100 times to obtain the mean square error.
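The data-generating process just described can be sketched as follows. The function name, seed, and the initialisation of the hidden chain are our own assumptions; the parameter values (∆t, τ, η, σ, ρ, β) and the observation schedule follow the text.

```python
import numpy as np

def simulate_lorenz63(T, dt=1e-3, obs_every=10, tau=1e-1, eta=5e-1,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0, seed=0):
    """Euler discretisation of the noisy Lorenz 63 hidden chain, observed
    through Normal(X_t, eta^2 I) noise every obs_every steps.
    Returns T hidden states and the T corresponding observations."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(3)   # arbitrary initialisation (our assumption)
    xs, ys = [], []
    for step in range(T * obs_every):
        drift = np.array([
            sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2],
        ])
        # epsilon_t ~ Normal(0, dt * tau^2 I)
        x = x + dt * drift + tau * np.sqrt(dt) * rng.standard_normal(3)
        if (step + 1) % obs_every == 0:
            xs.append(x.copy())
            ys.append(x + eta * rng.standard_normal(3))
    return np.array(xs), np.array(ys)

X_hidden, Y_obs = simulate_lorenz63(T=100)
```

With (σ, ρ, β) = (10, 28, 8/3) the drift is the classical chaotic regime, so nearby trajectories separate quickly and filtering is genuinely challenging.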
The two left plots of Figure 4 display relative mean square errors for N = 5 × 10 4 as the degree d of the graph increases. As expected, the local exchange particle filter has a large error as compared to the bootstrap particle filter, which decreases as the degree increases. More interestingly, choosing a random d-regular graph has much lower error and is virtually indistinguishable from the bootstrap particle filter. This is true irrespective of whether we permute the nodes of the graph at every time, which is unsurprising as the permutation operation leaves the mixing constant unchanged. A random 5-regular graph appears to perform extremely well.
The two right plots of Figure 4 display relative mean square errors as the network size (number of particles) N increases. The performance of the local exchange particle filter deteriorates as N increases, which is unsurprising since its mixing deteriorates. However, as predicted by the theory, the performance of a random d-regular graph remains stable as N increases, whether or not the nodes are permuted at each time.
Finally, we display in Figure 5 the additional variance of the α-sequential Monte Carlo algorithm as compared to the bootstrap particle filter when using a random d-regular graph as the connectivity structure. As predicted by the theory, the additional variance is of order O(d −1 ).
Conclusion
The bottleneck in parallelising particle filters is usually the resampling step since it typically involves interactions between all particles. Reducing these interactions can lead to more efficient algorithms, albeit sometimes at the expense of stability. Future directions can include relaxing the assumptions made in this article, considering adaptive sequential Monte Carlo (Fearnhead and Taylor, 2013), and considering high-dimensional target spaces (Beskos et al., 2014). An interesting future direction would be to consider estimating the variance of the estimates obtained by the α-sequential Monte Carlo algorithm along the lines of Chan and Lai (2013); Lee and Whiteley (2018). From an applied perspective, it would be interesting to compare stable distributed implementations of sequential Monte Carlo with distributed Markov chain Monte Carlo.
Acknowledgment
The author acknowledges support from grant DMS-1638521 from SAMSI. The author would like to thank Alexandre H Thiery and Kari Heine for helpful discussions.
A Proof of time-uniform stability
Proof of Theorem 1. Recall that the sequence of probability measures {π_t}_{t≥0} defined by the state-space model (2) of the main text satisfies π_t = F_{s,t} π_s, where the operator F_t is defined as
(F_t π)(ϕ) = π(g_{t−1} K_t ϕ) / π(g_{t−1}) (t ≥ 1).
The stability properties of the operators {F_t}_{t≥1} are well understood (Del Moral, 2004). Under Assumption 1 of the main text, there exist constants D > 0 and ρ ∈ (0, 1) such that for any two probability measures µ, µ′ ∈ P(X), we have |||F_{s,t} µ − F_{s,t} µ′||| ≤ D ρ^{t−s} |||µ − µ′|||. The decomposition (π^N_T − π_T) = (π^N_T − F_{0,T} π^N_0) + (F_{0,T} π^N_0 − F_{0,T} π_0) and the standard telescoping expansion (π^N_T − F_{0,T} π^N_0) = Σ_{t=1}^T (F_{t,T} π^N_t − F_{t,T} F_t π^N_{t−1}) yield that the discrepancy |||π^N_T − π_T||| can be controlled as
|||π^N_T − π_T||| ≤ Σ_{t=1}^T |||F_{t,T} π^N_t − F_{t,T} F_t π^N_{t−1}||| + |||F_{0,T} π^N_0 − F_{0,T} π_0|||
≤ Σ_{t=1}^T D ρ^{T−t} |||π^N_t − F_t π^N_{t−1}||| + D ρ^T |||π^N_0 − π_0|||.    (17)
Since π N 0 (ϕ) = N −1 N i=1 ϕ(X 0,i ) for independent and identically distributed samples X 0,i ∼ π 0 , it follows that ||| π N 0 − π 0 ||| ≤ N −1/2 . Consequently, since ρ ∈ (0, 1), for proving an upper bound of the type sup t≥0 ||| π N t − π t ||| ≤ Cst × N −1/2 , it only remains to prove that the quantities ||| π N t − F t π N t−1 ||| can be uniformly bounded in time by a constant multiple of N −1/2 . For a test function ϕ ∈ B(X), we have
π^N_t(ϕ) − F_t π^N_{t−1}(ϕ) = (Ã − A)/B for

A = Σ_{i=1}^N W^i_{t−1} g_{t−1}(X^i_{t−1}) K_t ϕ(X^i_{t−1}),  Ã = Σ_{i=1}^N W^i_t ϕ(X^i_t),  B = Σ_{i=1}^N W^i_{t−1} g_{t−1}(X^i_{t−1}).

We have E_{t−1}(Ã) = A, and the quantities A, B and W^i_t are all F_{t−1}-measurable. It follows that

E_{t−1}[{π^N_t(ϕ) − F_t π^N_{t−1}(ϕ)}²] = B^{−2} var_{t−1}(Ã) = B^{−2} Σ_{i=1}^N (W^i_t)² var_{t−1}{ϕ(X^i_t)} ≤ B^{−2} Σ_{i=1}^N (W^i_t)² = ‖W̄_t‖² = E^N_t.    (18)
In the last line of equation (18), we have used the fact that B = Σ_{i=1}^N W^i_t. We have also introduced the quantity E^N_t = ‖W̄_t‖²; this is a measure of the effective sample size (its reciprocal is the usual effective sample size). In summary, we have thus established that
|||π^N_t − F_t π^N_{t−1}|||² ≤ E(E^N_t).    (19)
As recognised in Whiteley et al. (2016), equation (18) shows that controlling the behaviour of E N t is crucial to studying the stability properties of the α-sequential Monte Carlo algorithm. For proving a bound of the type given by sup t≥0 ||| π N t − π t ||| ≤ Cst × N −1/2 , equation (19) reveals that it suffices to have the uniform-in-time bound E(E N t ) ≤ Cst/N . Recalling that κ −1 g ≤ g t−1 (x) ≤ κ g by Assumption 1 of the main text, the bound (8) yields that
E^N_t = [ Σ_{i=1}^N { Σ_{j=1}^N α^{ij} W^j_{t−1} g_{t−1}(X^j_{t−1}) }² ] / [ { Σ_{i=1}^N Σ_{j=1}^N α^{ij} W^j_{t−1} g_{t−1}(X^j_{t−1}) }² ]
≤ κ^4_g Σ_{i=1}^N { Σ_{j=1}^N α^{ij} W̄^j_{t−1} }² = κ^4_g ‖α W̄_{t−1}‖²
≤ κ^4_g [ {1 − λ²(α)}/N + λ²(α) ‖W̄_{t−1}‖² ] = κ^4_g [ {1 − λ²(α)}/N + λ²(α) E^N_{t−1} ].    (20)
If the constant λ(α) defined in equation (6) of the main text satisfies λ(α) < 1/κ²_g, iterating the bound (20) directly yields that

|||π^N_t − F_t π^N_{t−1}|||² ≤ E(E^N_t) ≤ [κ^4_g {1 − λ²(α)}] / [1 − κ^4_g λ²(α)] × (1/N) for all t ≥ 0.    (21)
Combining equation (17) and equation (21), the theorem is proved.
B Proofs for randomized connections

B.1 Setup
The following proposition is useful in studying the asymptotic behaviour of the α-sequential Monte Carlo algorithm.
Proposition 1 (Basic properties). The following are true, where the expectations are with respect to the distribution υ N on M N .
(a) E{(α ii ) 2 } does not depend on i.
(b) For i ≠ j, E(α^{ij}) and E{(α^{ij})²} do not depend on (i, j). Further, there exists 0 ≤ c₁ < ∞ such that E(α^{ij}) ≤ c₁ N^{−1} and E{(α^{ij})²} ≤ c₁ N^{−1} for N large.
(c) For i ≠ j, E(α^{ii} α^{ij}) does not depend on (i, j). Further, there exists 0 ≤ c₂ < ∞ such that E(α^{ii} α^{ij}) ≤ c₂ N^{−1} for N large.
For example, for i ≠ j, we have E(α^{ij}) = N^{−1} and E{(α^{ij})²} = N^{−2} for the bootstrap particle filter, and we have E(α^{ij}) = E{(α^{ij})²} = 0 for sequential importance sampling.
Let σ : {1, . . . , N} → {1, . . . , N} be the permutation corresponding to a permutation matrix P. Then (P α P^{−1})_{ij} = α_{σ(i)σ^{−1}(j)}. By equation (11), this implies α^{ij} =_D α^{σ(i)σ^{−1}(j)} for all 1 ≤ i, j ≤ N and all permutations σ, where we write X =_D Y to denote that X and Y have the same distribution.
Proof of Proposition 1. Part (a) follows since for any i ≠ i′, there exists a permutation σ such that σ(i) = i′ and σ(i′) = i, which implies that α^{ii} =_D α^{σ(i)σ^{−1}(i)} = α^{i′i′}. To see part (b), consider distinct indices i, j, k. There exists a permutation σ such that σ(i) = i and σ(k) = j, which implies that α^{ij} =_D α^{σ(i)σ^{−1}(j)} = α^{ik}. This implies that E(α^{ij}) = E(α^{ik}) for all distinct i, j, k. Similarly, we have E(α^{ij}) = E(α^{i′j}) for all distinct i, i′, j. The previous two statements imply that E(α^{ij}) does not depend on (i, j) for i ≠ j. Since Σ_{j=1}^N α^{ij} = 1 and α^{ij} ≤ 1, the results follow.
To see part (c), consider distinct indices i, j, k. There exists a permutation σ such that σ(i) = i and σ(k) = j, and therefore (α^{ii}, α^{ij}) =_D (α^{σ(i)σ^{−1}(i)}, α^{σ(i)σ^{−1}(j)}) = (α^{ii}, α^{ik}) for j ≠ k. Thus E(α^{ii} α^{ij}) does not depend on j for j ≠ i. Similarly, for distinct i, i′, j, there exists a permutation σ such that σ(i) = i′, σ(i′) = i, and σ(j) = j, which implies (α^{ii}, α^{ij}) =_D (α^{σ(i)σ^{−1}(i)}, α^{σ(i)σ^{−1}(j)}) = (α^{i′i′}, α^{i′j}). Thus E(α^{ii} α^{ij}) does not depend on i for i ≠ j. The inequality follows from part (b) as α^{ii} ≤ 1.
By the assumption that for any time index t ≥ 0 and x ∈ X, we have 0 < g t (x) ≤ κ g , it follows that for any time index t ≥ 0 and particle index 1 ≤ i ≤ N , we have 0 < W i t ≤ κ t g , so that, for a test function ϕ ∈ B(X), the random variables γ N t (ϕ) and µ N t (ϕ) are almost surely bounded:
‖γ^N_t(ϕ)‖_∞ ≤ κ^t_g and ‖µ^N_t(ϕ)‖_∞ ≤ κ^{2t}_g,
where the infinity norm of a random variable X is defined as ‖X‖_∞ = inf{B > 0 : pr(|X| ≤ B) = 1}.
For the bootstrap particle filter, it is standard that as N → ∞, the sequence of particle approximations π N t is consistent (Del Moral, 1996). Since in that case, all the weights W i t are equal and converge in probability to Z t , it follows that
µ^N_t(ϕ) = (1/N) Σ_{i=1}^N (W^i_t)² ϕ(X^i_t) →_pr Z²_t π_t(ϕ),    (22)

where →_pr denotes convergence in probability. Equation (12) of the main text can be seen as a generalisation of equation (22).
For the bootstrap particle filter, equation (12) of the main text implies µ t (ϕ) = Z 2 t π t (ϕ), which in turn implies equation (22). Similarly, for importance sampling, equation (12) of the main text implies µ t (ϕ) = µ t−1 ( Q t ϕ), which is as expected. Equation (12) is therefore a generalization of the bootstrap particle filter and importance sampling, which represent two extreme communication structures (fully connected and not connected at all, respectively).
(II) Convergence of µ^N_t. We proceed in two steps. We first show that E{µ^N_t(ϕ)} → µ_t(ϕ), and then prove that var{µ^N_t(ϕ)} converges to zero. Since ‖µ^N_t(ϕ)‖_∞ ≤ κ^{2t}_g, this is enough to obtain that µ^N_t(ϕ) →_pr µ_t(ϕ). To begin with,
E_{t−1}{µ^N_t(ϕ)} = (1/N) Σ_{i=1}^N E_{t−1}{(W^i_t)² ϕ(X^i_t)}
= (1/N) Σ_{i=1}^N E_{t−1}[ { Σ_{j=1}^N α^{ij}_{t−1} W^j_{t−1} g_{t−1}(X^j_{t−1}) } × { Σ_{k=1}^N α^{ik}_{t−1} W^k_{t−1} g_{t−1}(X^k_{t−1}) K_t ϕ(X^k_{t−1}) } ]
= (1/N) Σ_{i=1}^N E_{t−1}[ Σ_{j=1}^N (α^{ij}_{t−1})² (W^j_{t−1})² g²_{t−1}(X^j_{t−1}) K_t ϕ(X^j_{t−1}) ]    (23)
+ (1/N) Σ_{i=1}^N E_{t−1}[ Σ_{j=1}^N Σ_{k≠j} α^{ij}_{t−1} α^{ik}_{t−1} W^k_{t−1} W^j_{t−1} g_{t−1}(X^k_{t−1}) g_{t−1}(X^j_{t−1}) K_t ϕ(X^k_{t−1}) ].    (24)
Expression (23) is
1 N N i=1 E (α ii ) 2 (W i t−1 ) 2 g 2 t−1 (X i t−1 )K t ϕ(X i t−1 ) + 1 N N i=1 N j=1 j =i E{(α ij ) 2 }(W j t−1 ) 2 g 2 t−1 (X j t−1 )K t ϕ(X j t−1 ) = E{(α 11 ) 2 } N N i=1 (W i t−1 ) 2 g 2 t−1 (X i t−1 )K t ϕ(X i t−1 ) + E{(α 12 ) 2 } N N i=1 N j=1 j =i (W j t−1 ) 2 g 2 t−1 (X j t−1 )K t ϕ(X j t−1 ) = E{(α 11 ) 2 } × µ N t−1 ( Q t ϕ) + E{(α 12 ) 2 } N N i=1 N j=1 (W j t−1 ) 2 g 2 t−1 (X j t−1 )K t ϕ(X j t ) − (W i t−1 ) 2 g 2 t−1 (X i t−1 )K t ϕ(X i t−1 ) = E{(α 11 ) 2 } × µ N t−1 ( Q t ϕ) + N E{(α 12 ) 2 } 1 N N j=1 (W j t−1 ) 2 g 2 t−1 (X j t−1 )K t ϕ(X j t ) + O 1 N = µ N t−1 ( Q t ϕ) × E{(α 11 ) 2 } + N E{(α 12 ) 2 } + O 1 N pr → µ t−1 ( Q t ϕ) × lim N →∞ E{(α 11 ) 2 } + N E{(α 12 ) 2 } ,
where the first equality is by Proposition 1(a) and the first part of Proposition 1(b), the second and fourth equalities are because

μ_{t-1}^N(Q̄_t φ) = (1/N) Σ_{i=1}^N (W_{t-1}^i)^2 g_{t-1}^2(X_{t-1}^i) K_t φ(X_{t-1}^i),

the third equality is by the second part of Proposition 1(b), and the limit is by the consistency of μ_{t-1}^N. For the sake of convenience, define
b_t^{jk} = W_{t-1}^j W_{t-1}^k g_{t-1}(X_{t-1}^j) g_{t-1}(X_{t-1}^k) K_t φ(X_{t-1}^k),
a_t^{ijk} = E(α_{t-1}^{ij} α_{t-1}^{ik}) W_{t-1}^j W_{t-1}^k g_{t-1}(X_{t-1}^j) g_{t-1}(X_{t-1}^k) K_t φ(X_{t-1}^k) = E(α_{t-1}^{ij} α_{t-1}^{ik}) b_t^{jk}.
Expression (24) can be written as

(1/N) Σ_{i=1}^N Σ_{j=1}^N Σ_{k≠j} a_t^{ijk}
= (1/N) Σ_{i=1}^N [ Σ_{k≠i} a_t^{iik} + Σ_{j≠i} ( Σ_{k≠j,i} a_t^{ijk} + a_t^{iji} ) ]
= (1/N) Σ_{i=1}^N Σ_{k≠i} E(α_{t-1}^{ii} α_{t-1}^{ik}) b_t^{ik} + (1/N) Σ_{i=1}^N Σ_{j≠i} Σ_{k≠j,i} E(α_{t-1}^{ij} α_{t-1}^{ik}) b_t^{jk} + (1/N) Σ_{i=1}^N Σ_{j≠i} E(α_{t-1}^{ij} α_{t-1}^{ii}) b_t^{ji}
= (E(α^{11} α^{12})/N) Σ_{i=1}^N Σ_{k≠i} ( b_t^{ik} + b_t^{ki} ) + (E(α^{12} α^{13})/N) Σ_{i=1}^N Σ_{j≠i} Σ_{k≠j,i} b_t^{jk}
= 2N E(α^{11} α^{12}) γ_{t-1}^N(g_{t-1}) γ_{t-1}^N(g_{t-1} K_t φ) + N^2 E(α^{12} α^{13}) γ_{t-1}^N(g_{t-1}) γ_{t-1}^N(g_{t-1} K_t φ) + O(1/N)
→_pr γ_{t-1}(g_{t-1} K_t φ) γ_{t-1}(g_{t-1}) × lim_{N→∞} [ N^2 E(α_{t-1}^{12} α_{t-1}^{13}) + 2N E(α_{t-1}^{11} α_{t-1}^{12}) ]
= Z_t^2 π_t(φ) × lim_{N→∞} [ N^2 E(α_{t-1}^{12} α_{t-1}^{13}) + 2N E(α_{t-1}^{11} α_{t-1}^{12}) ],

where the expansion uses Proposition 1(c) and Assumption 2, the penultimate step uses the fact that all the relevant quantities are bounded, and the limit is by the consistency of γ_{t-1}^N.
Putting together what we have obtained so far, we get

E_{t-1}{μ_t^N(φ)} →_pr μ_{t-1}(Q̄_t φ) × lim_{N→∞} [ E{(α^{11})^2} + N E{(α^{12})^2} ] + Z_t^2 π_t(φ) × lim_{N→∞} [ N^2 E(α^{12} α^{13}) + 2N E(α^{11} α^{12}) ] = μ_t(φ).
It remains to prove that var{μ_t^N(φ)} = var[E_{t-1}{μ_t^N(φ)}] + E[var_{t-1}{μ_t^N(φ)}] → 0. We now prove that each term converges to zero. From the proof of E_{t-1}{μ_t^N(φ)} → μ_t(φ), we have

E_{t-1}{μ_t^N(φ)} = μ_{t-1}^N(Q̄_t φ) [ E{(α_{t-1}^{11})^2} + N E{(α^{12})^2} ] + N^2 E(α_{t-1}^{12} α_{t-1}^{13}) [ γ_{t-1}^N(g_{t-1}) γ_{t-1}^N(g_{t-1} K_t φ) + O(1/N) ] + 2N E(α_{t-1}^{11} α_{t-1}^{12}) [ γ_{t-1}^N(g_{t-1}) γ_{t-1}^N(g_{t-1} K_t φ) + O(1/N) ].   (25)
• By Proposition 1 and Assumption 2, all terms on the right-hand side of equation (25) are upper bounded by a universal constant and, by induction, converge in probability to a constant. Therefore the variance of E_{t-1}{μ_t^N(φ)} converges to zero.
• The expectation of var_{t-1}{μ_t^N(φ)} also converges to zero. Indeed, since the random variables {(W_t^i)^2 φ(X_t^i)}_{i=1}^N are independent conditionally upon F_{t-1}^N and upper bounded in absolute value by κ_g^{2t}, we have that, almost surely,

var_{t-1}{ (1/N) Σ_{i=1}^N (W_t^i)^2 φ(X_t^i) } ≤ κ_g^{4t} / N → 0.
This concludes the proof.
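The O(1/N) decay of variances used throughout these proofs can also be observed empirically. The sketch below (not from the paper; the Gaussian model, the test function φ(x) = x², and all parameter values are illustrative) runs a one-step self-normalized weighted estimator, the fully disconnected single-step special case, over many replications and checks that N × var stays roughly constant as N grows:

```python
import numpy as np

def estimate(N, rng):
    # One-step self-normalized weighted estimator of pi(phi), phi(x) = x**2.
    # Proposal N(0, 1), potential g(x) = exp(-x**2 / 2); the reweighted
    # target is then N(0, 1/2), so the true value is pi(phi) = 0.5.
    x = rng.normal(size=N)
    w = np.exp(-0.5 * x ** 2)
    return np.sum(w * x ** 2) / np.sum(w)

rng = np.random.default_rng(1)
reps = 2000
results = {N: np.array([estimate(N, rng) for _ in range(reps)])
           for N in (100, 400)}
for N, est in results.items():
    print(N, est.mean(), N * est.var())   # N * var is roughly constant
```

Quadrupling N should divide the empirical variance of the estimator by roughly four, consistent with the root-N rate established above.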
B.3 Central limit theorem
Proof of Theorem 3. Notice that π_t^N(φ) − π_t(φ) = {γ_t^N(1)}^{-1} γ_t^N{φ − π_t(φ)} and γ_t^N(1) →_pr Z_t. Consequently, the recursive formula for the asymptotic variance V_t^π(φ) readily follows from the one describing V_t^γ(φ); we thus concentrate on proving the recursive formula for V_t^γ(φ). We proceed by induction and use a standard Fourier-theoretic approach. The initial case t = 0 follows directly from the standard central limit theorem for independent and identically distributed random variables. For the induction step, we need to prove that for any ξ ∈ ℝ the conditional characteristic function of the increment converges (here i denotes the imaginary unit), since it then follows, by Slutsky's theorem, that the stated limit holds; we thus concentrate on establishing equation (26). To this end, note that we can write A_t^N(φ) = N^{-1/2} Σ_{i=1}^N U_t^i, where the random variables

U_t^i = W_t^i φ(X_t^i) − E_{t-1}{W_t^i φ(X_t^i)} = W_t^i φ(X_t^i) − γ_{t-1}^N(Q_t φ)

are independent and identically distributed conditionally upon F_{t-1}. Theorem A.3 of Douc and Moulines (2007) shows that in order to prove equation (26), it is enough to prove that for any ε > 0 and as N → ∞,

(1/N) Σ_{i=1}^N var_{t-1}(U_t^i) →_pr μ_t(φ^2) − Z_t^2 π_t(φ)^2,   (27)

(1/N) Σ_{i=1}^N E_{t-1}{ (U_t^i)^2 I(|U_t^i| > ε N^{1/2}) } →_pr 0,   (28)

where I(·) denotes the indicator function. The tail condition (28) directly follows from the fact that we consider bounded test functions φ ∈ B(X) and that 0 < W_t^i ≤ κ_g^t almost surely. We thus focus on proving equation (27). We have var_{t-1}(U_t^i) = E_{t-1}{(W_t^i)^2 φ^2(X_t^i)} − E_{t-1}{W_t^i φ(X_t^i)}^2, so equation (5) of the main text, Theorem 2, and the boundedness of μ_t^N(φ^2) together yield that

(1/N) Σ_{i=1}^N var_{t-1}(U_t^i) = E_{t-1}{ (1/N) Σ_{i=1}^N (W_t^i)^2 φ^2(X_t^i) } − (1/N) Σ_{i=1}^N E_{t-1}{W_t^i φ(X_t^i)}^2
= E_{t-1}{μ_t^N(φ^2)} − γ_{t-1}^N(Q_t φ)^2
→_pr μ_t(φ^2) − γ_{t-1}(Q_t φ)^2 = μ_t(φ^2) − γ_t(φ)^2 = μ_t(φ^2) − Z_t^2 π_t(φ)^2,

as desired. This concludes the proof.

Figure 2: Mixing constants λ(α) for several values of d and N.

Figure 3: The top plot displays the accuracy of the estimation of π_{T=6}(φ) with φ(x) = x^2 and N = 10^4 particles. The bottom plot displays the Wasserstein distance between the estimated predictive distributions and the truth, which measures the accuracy of the estimated predictive distribution at time index T = 6.

Figure 5: Additional variance of the α-sequential Monte Carlo algorithm as compared to the bootstrap particle filter for N = 5 × 10^3. The orange line is proportional to d^{-1}.

The following result states that the particle approximations π_t^N, γ_t^N, and μ_t^N are consistent.

Theorem 2 (Consistency). Assume that the potential functions satisfy 0 < g_t(x) ≤ κ_g, and suppose also that Assumption 2 holds. For any test function φ ∈ B(X), as N → ∞, the particle approximations π_t^N(φ), γ_t^N(φ) and μ_t^N(φ) converge in probability to π_t(φ), γ_t(φ), and μ_t(φ), respectively.

Theorem 2 is proved in Appendix B.2. Consistency of the particle approximations π_t^N(φ) and γ_t^N(φ) was established in Whiteley et al. (2016) under an asymptotic negligibility condition, which is automatically satisfied when the α matrices are bi-stochastic; we nonetheless include a straightforward proof for the sake of being a self-contained article. The consistency of μ_t^N(φ) is a more involved proof and is novel in our work.

B.2 Consistency

Proof of Theorem 2. Since π_t^N(φ) = γ_t^N(φ)/γ_t^N(1), it suffices to prove that γ_t^N(φ) →_pr γ_t(φ) and μ_t^N(φ) →_pr μ_t(φ). We work by induction, the initial case t = 0 following directly from the weak law of large numbers.

(I) Convergence of γ_t^N. This follows from Theorem 1 of Whiteley et al. (2016), but we include a proof for the sake of completeness. Since ‖γ_t^N(φ)‖_∞ ≤ κ_g^t and E{γ_t^N(φ)} = γ_t(φ), for proving γ_t^N(φ) →_pr γ_t(φ) it suffices to prove that the variance of γ_t^N(φ) converges to zero.

• The variance of E_{t-1}{γ_t^N(φ)} = γ_{t-1}^N(Q_t φ) converges to zero, since the random variables γ_{t-1}^N(Q_t φ) are bounded by κ_g^t and, by induction, converge in probability to γ_{t-1}(Q_t φ) as N → ∞.

• The expectation of var_{t-1}{γ_t^N(φ)} also converges to zero. Indeed, since the random variables {W_t^i φ(X_t^i)}_{i=1}^N are independent conditionally upon F_{t-1} and upper bounded in absolute value by κ_g^t, we have, almost surely, var_{t-1}{γ_t^N(φ)} ≤ κ_g^{2t}/N → 0.

Since var{γ_t^N(φ)} can be decomposed as the sum of the expectation of var_{t-1}{γ_t^N(φ)} and the variance of E_{t-1}{γ_t^N(φ)}, this concludes the proof that γ_t^N(φ) →_pr γ_t(φ).
Ahn, S., Shahbaba, B., and Welling, M. (2014). Distributed stochastic gradient MCMC. In International Conference on Machine Learning, pages 1044-1052.

Alon, N. (1986). Eigenvalues and expanders. Combinatorica, 6(2):83-96.

Andrieu, C., Doucet, A., and Holenstein, R. (2010). Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269-342.

Andrieu, C. and Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, pages 697-725.

Beskos, A., Crisan, D., and Jasra, A. (2014). On the stability of sequential Monte Carlo methods in high dimensions. The Annals of Applied Probability, 24(4):1396-1445.

Bolic, M., Djuric, P. M., and Hong, S. (2005). Resampling algorithms and architectures for distributed particle filters. IEEE Transactions on Signal Processing, 53(7):2442-2450.

Chan, H. P. and Lai, T. L. (2013). A general theory of particle filters in hidden Markov models and some applications. The Annals of Statistics, 41(6):2877-2904.

Chopin, N. (2004). Central limit theorem for sequential Monte Carlo methods and its application to Bayesian inference. The Annals of Statistics, 32(6):2385-2411.

Chopin, N., Jacob, P. E., and Papaspiliopoulos, O. (2013). SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3):397-426.

Del Moral, P. (1996). Non-linear filtering: interacting particle resolution. Markov Processes and Related Fields, 2(4):555-581.

Del Moral, P. (2004). Feynman-Kac Formulae: Genealogical and interacting particle systems with applications. Probability and its Applications. Springer.

Del Moral, P. and Guionnet, A. (2001). On the stability of interacting processes with applications to filtering and genetic algorithms. Annales de l'Institut Henri Poincaré (B) Probability and Statistics, 37(2):155-194.

Del Moral, P., Moulines, E., Olsson, J., and Vergé, C. (2017). Convergence properties of weighted particle islands with application to the double bootstrap algorithm. Stochastic Systems, 6(2):367-419.

Douc, R. and Moulines, E. (2007). Limit theorems for weighted samples with applications to sequential Monte Carlo methods. In ESAIM: Proceedings, volume 19, pages 101-107. EDP Sciences.

Durbin, J. and Koopman, S. J. (2012). Time Series Analysis by State Space Methods. Number 38 in Oxford Statistical Science Series. Oxford University Press.

Fearnhead, P. and Taylor, B. M. (2013). An adaptive sequential Monte Carlo sampler. Bayesian Analysis, 8(2):411-438.

Friedman, J. (2008). A proof of Alon's second eigenvalue conjecture and related problems. American Mathematical Society.

Gordon, N. J., Salmond, D. J., and Smith, A. F. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. Radar and Signal Processing, IEE Proceedings F, 140(2):107-113.

Grecian, W. J., Lane, J. V., Michelot, T., Wade, H. M., and Hamer, K. C. (2018). Understanding the ontogeny of foraging behaviour: insights from combining marine predator bio-logging with satellite-derived oceanography in hidden Markov models. Journal of the Royal Society Interface, 15(143):20180084.

Hagberg, A. A., Schult, D. A., and Swart, P. J. (2008). Exploring network structure, dynamics, and function using NetworkX. In Proceedings of the 7th Python in Science Conference (SciPy2008), pages 11-15, Pasadena, CA USA.

Heine, K. and Whiteley, N. (2017). Fluctuations, stability and instability of a distributed particle filter with local exchange. Stochastic Processes and their Applications, 127(8):2508-2541.

Heine, K., Whiteley, N., and Cemgil, A. T. (2020). Parallelizing particle filters with butterfly interactions. Scandinavian Journal of Statistics, 47(2):361-396.

Heng, J. and Jacob, P. E. (2019). Unbiased Hamiltonian Monte Carlo with couplings. Biometrika, 106(2):287-302.

Ingle, A. N., Ma, C., and Varghese, T. (2015). Ultrasonic tracking of shear waves using a particle filter. Medical Physics, 42(11):6711-6724.

Kang, M., Ahn, J., and Lee, K. (2018). Opinion mining using ensemble text hidden Markov models for text classification. Expert Systems with Applications, 94:218-227.

Lee, A. and Whiteley, N. (2016). Forest resampling for distributed sequential Monte Carlo. Statistical Analysis and Data Mining: The ASA Data Science Journal, 9(4):230-248.

Lee, A. and Whiteley, N. (2018). Variance estimation in the particle filter. Biometrika, 105(3):609-625.

Li, C., Srivastava, S., and Dunson, D. B. (2017). Simple, scalable and accurate posterior interval estimation. Biometrika, 104(3):665-680.

Liu, J. S. and Chen, R. (1995). Blind deconvolution via sequential imputations. Journal of the American Statistical Association, 90(430):567-576.

Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2):130-141.

Miao, L., Zhang, J. J., Chakrabarti, C., and Papandreou-Suppappola, A. (2011). Algorithm and parallel implementation of particle filtering and its use in waveform-agile sensing. Journal of Signal Processing Systems, 65(2):211-227.

Michelot, T., Langrock, R., and Patterson, T. A. (2016). moveHMM: an R package for the statistical modelling of animal movement data using hidden Markov models. Methods in Ecology and Evolution, 7(11):1308-1315.

Míguez, J. (2014). On the uniform asymptotic convergence of a distributed particle filter. In 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM), pages 241-244. IEEE.

Míguez, J. and Vázquez, M. A. (2016). A proof of uniform convergence over time for a distributed particle filter. Signal Processing, 122:152-163.

Murray, L. (2012). GPU acceleration of the particle filter: the Metropolis resampler. arXiv preprint arXiv:1202.6163.

Murray, L. M., Lee, A., and Jacob, P. E. (2016). Parallel resampling in the particle filter. Journal of Computational and Graphical Statistics, 25(3):789-805.

Nystrup, P., Madsen, H., and Lindström, E. (2017). Long memory of financial time series and hidden Markov models with time-varying parameters. Journal of Forecasting, 36(8):989-1002.

Ou, R., Sen, D., and Dunson, D. (2021). Scalable Bayesian inference for time series via divide-and-conquer. arXiv preprint arXiv:2106.11043.

Qiao, F., Li, P., Zhang, X., Ding, Z., Cheng, J., and Wang, H. (2017). Predicting social unrest events with hidden Markov models using GDELT. Discrete Dynamics in Nature and Society, 2017.

Rabiner, L. and Juang, B. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4-16.

Scott, S. L., Blocker, A. W., Bonassi, F. V., Chipman, H. A., George, E. I., and McCulloch, R. E. (2016). Bayes and big data: the consensus Monte Carlo algorithm. International Journal of Management Science and Engineering Management, 11(2):78-88.

Steger, A. and Wormald, N. C. (1999). Generating random regular graphs quickly. Combinatorics, Probability and Computing, 8(4):377-396.

Vergé, C., Dubarry, C., Del Moral, P., and Moulines, E. (2015). On parallel implementation of sequential Monte Carlo methods: the island particle model. Statistics and Computing, 25(2):243-260.

Whiteley, N., Lee, A., and Heine, K. (2016). On the role of interaction in sequential Monte Carlo algorithms. Bernoulli, 22(1):494-529.

Zhang, X., Zhao, L., Zhong, W., and Gu, F. (2020). Performance analysis of resampling algorithms of parallel/distributed particle filters. IEEE Access, 9:4711-4725.
| [
"https://github.com/deborsheesen/"
] |
QUANTUM STATE TRANSFER AND HADAMARD GATE FOR COHERENT STATES

Thiago Prudêncio
Institute of Physics, University of Brasilia-UnB, 04455, 70919-970, Brasilia-DF, Brazil

Abstract. We propose a quantum state transfer from an atomic qubit to a cat-like qubit by means of one degenerate Raman interaction and one Hadamard gate operation for coherent states. We show that the coefficients of the atomic qubit can be mapped onto the coherent state qubit, with an effective qubit state transfer.

DOI: 10.1142/s021974991350024x. arXiv:1307.1350 (https://arxiv.org/pdf/1307.1350v1.pdf).
arXiv:1307.1350v1, 4 Jul 2013 (typeset May 7, 2014). International Journal of Quantum Information, World Scientific Publishing Company.
Keywords: state transfer; Hadamard gate; coherent states.
INTRODUCTION
Three-level atoms can appear in different configurations, such as Λ, Ξ and V, which permits, for instance, the realization of population inversion and many applications in laser physics 1,2,4,5. By interacting with a one- or two-mode optical field 6,7, three-level atoms can in some special regimes be effectively described as two-level systems, with adiabatic elimination of the highest level in the Λ 8 and Ξ 9 configurations, or adiabatic elimination of the lowest level in the V configuration 10. In such situations, the system dynamics can be solved exactly for the Raman coupling with one or two modes of a quantized cavity electromagnetic field 11.
For a Λ configuration, a special case occurs when the two lower levels are degenerate. In the interaction with a single-mode optical field in the far off-resonant and large-detuning regimes, the atomic states reduce to a two-level system of degenerate states, characterizing the degenerate Raman regime. When compared to the full microscopic Hamiltonian of the Raman process 12, this effective interaction provides an adequate description for short evolving times and for physical quantities that involve only the squares of the amplitude probabilities 14. In quantum optics the degenerate Raman regime is well known, and important proposals for the generation of non-classical states and for quantum information protocols have been derived from it: for instance, the generation of Fock states 15,16, superpositions of coherent states 17, superpositions of phase states 18 and entangled coherent states 19. In quantum information, protocols of quantum teleportation were proposed for unknown atomic states 22, unknown entangled coherent states 23,24 and superpositions of coherent states 25,26. However, a quantum information transfer from atom to field state has not yet been proposed in the degenerate Raman regime. In fact, although the physics behind atom-field entanglement is a well-known process in this system 39, the step that can lead to such a quantum transfer, namely a Hadamard gate for coherent state qubits, was only recently proposed 20 and experimentally achieved 21.
In this paper, we propose a quantum information transfer of a one-qubit state from an atom to a single-mode field, with the cavity initially in a coherent state. We first consider the atom-field interaction in the degenerate Raman regime; then a Hadamard gate operation is applied to the resulting coherent state qubit, so that the state is led to a qubit with the explicit form of the atomic qubit. This proposal has the advantage of realizing the quantum state transfer with a small number of operations. On the other hand, the requirement of a coherent state qubit, a cat-like state, makes use of recent developments in coherent state quantum computing 34. Such superpositions of coherent states are the closest to classical superpositions, due to the minimum uncertainty of coherent states, and relate to Schrödinger's famous cat-state example 28. These states are experimentally achievable, for instance, in electrodynamic cavities 30,39 and in systems of trapped ions 31. The main experimental difficulty for the creation and observation of such superpositions is the fast decay of the coherences 29. Advances in continuous-variable quantum computation 33,34 have led to the manipulation of such states as qubit states and to the possibility of realizing gate operations 33, both theoretically and experimentally 21.
The paper is organized as follows: in Section II we review the degenerate Raman interaction between an atom and a coherent state; in Section III we discuss the Hadamard gate operation and the corresponding Hadamard gate operation for coherent states used in our protocol; in Section IV we propose the protocol for quantum state transfer with coherent state qubits, making use of the previous sections; finally, in Section V, we present our conclusions.
DEGENERATE RAMAN INTERACTION
We denote the three levels of the Λ-type three-level Rydberg atom by |g⟩, |e⟩ and |f⟩. Due to the degeneracy, |g⟩ and |e⟩ have the same frequency ω_0; the frequency ω_f is associated to |f⟩. The single-mode field is initially in a coherent state |α⟩ with frequency ω (see figure 1). Under the atom-field degenerate Raman coupling the following relation is satisfied:

ω_f − ω_0 = Δ + ω,   (1)

where Δ is the detuning between the atomic transition (ω_f − ω_0) and the single-mode frequency ω. In the case of large detuning, the upper state |f⟩ can be adiabatically eliminated 11. For large detuning, short evolving times, and physical quantities that involve only the squares of the amplitude probabilities, the following effective Hamiltonian can be considered (ℏ = 1) 12:
Ĥ_ef = n̂ β (|e⟩⟨g| + h.c.),   (2)

where β = −λ²/Δ is the effective atom-field coupling, λ is the coupling of the transitions from the lower states (|e⟩ and |g⟩) to the upper state (|f⟩), n̂ = â†â is the number operator, and â and â† are the annihilation and creation operators acting on the single-mode field.
Retaining the Stark shifts in the adiabatic elimination 13, the Hamiltonian term

Ĥ_S = n̂ β (|g⟩⟨g| + |e⟩⟨e|)   (3)

is included in the effective Hamiltonian (2), where the Stark parameters are equal to the effective atom-field coupling β. We can then write the effective degenerate Raman Hamiltonian as 14

Ĥ = Ĥ_S + Ĥ_ef.   (4)
For the single-mode field initially in the coherent state |α⟩, the Hamiltonian (4) is valid when the following inequalities 14 are satisfied:

Δ² ≫ 2|2λα|²,   (5)

and

t ≪ 3Δ³ / (4|λα|⁴).   (6)
The time evolution during a time t of an initial state with the atom in the superposition c_g|g⟩ + c_e|e⟩, |c_g|² + |c_e|² = 1, and the field in the coherent state |α⟩, under the interaction (4), is given by

|ψ(t)⟩ = (c_+ e^{-2i n̂ β t} − c_−)|g, α⟩ + (c_+ e^{-2i n̂ β t} + c_−)|e, α⟩,   (7)

where

c_± = (c_e ± c_g)/2.   (8)
We can also write the state (7) as

|ψ(t)⟩ = (c_+ |e^{-2iβt} α⟩ − c_− |α⟩)|g⟩ + (c_+ |e^{-2iβt} α⟩ + c_− |α⟩)|e⟩.   (9)
In the case c_g = 1, c_e = 0, corresponding to the evolution of the state |g⟩, we have

|ψ(t)⟩ = (1/2)(|e^{-2iβt} α⟩ + |α⟩)|g⟩ + (1/2)(|e^{-2iβt} α⟩ − |α⟩)|e⟩,   (10)

and in the case c_g = 0, c_e = 1, corresponding to the evolution of the state |e⟩, we have

|ψ(t)⟩ = (1/2)(|e^{-2iβt} α⟩ − |α⟩)|g⟩ + (1/2)(|e^{-2iβt} α⟩ + |α⟩)|e⟩.   (11)
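The closed-form evolution in equations (10) and (11) can be verified numerically. The sketch below (not from the paper; the Fock truncation M and the values of α, β and t are illustrative) diagonalizes the effective Hamiltonian (4) on a truncated Fock space and checks that e^{-iĤt}|g, α⟩ reproduces equation (10); the |e⟩ case of equation (11) follows by the same symmetry.

```python
import numpy as np
from math import factorial

M = 60                       # Fock truncation (illustrative; ample for alpha = 2)
alpha, beta, t = 2.0, 0.3, 0.7
n = np.arange(M)
fact = np.array([factorial(k) for k in n], dtype=float)

def coherent(z):
    # Fock-basis amplitudes of |z> (normalized up to truncation error)
    return np.exp(-abs(z) ** 2 / 2) * z ** n / np.sqrt(fact)

# Effective Hamiltonian (4) on field (x) atom, atom basis {|g>, |e>}:
# H = n_hat * beta * (|g><g| + |e><e| + |e><g| + |g><e|)
H = beta * np.kron(np.diag(n.astype(float)), np.array([[1.0, 1.0], [1.0, 1.0]]))
vals, vecs = np.linalg.eigh(H)

psi0 = np.kron(coherent(alpha), np.array([1.0, 0.0]))      # |alpha>|g>
psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.T @ psi0))  # e^{-iHt}|psi0>

# Prediction from eq. (10): coherent amplitude rotated to e^{-2i beta t} alpha
rot = coherent(np.exp(-2j * beta * t) * alpha)
pred = 0.5 * np.kron(rot + coherent(alpha), [1.0, 0.0]) \
     + 0.5 * np.kron(rot - coherent(alpha), [0.0, 1.0])
assert np.allclose(psi_t, pred, atol=1e-8)
```

The agreement reflects the diagonalization used in the text: the symmetric atomic state picks up the phase e^{-2i n̂ β t} while the antisymmetric one is left invariant, which rotates the coherent amplitude to e^{-2iβt} α.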
Notice that this scheme can also deal with approximately degenerate states, where the two transitions driven by the same optical field have similar frequencies. Consequently, an exact degeneracy is not necessary: we can have ω_0^e ≈ ω_0^g ≈ ω_0, and small deviations can be easily incorporated. Since a small energy difference between the two lower atomic states is likely to be typical in experiments, the approximately degenerate situation is of practical relevance.
HADAMARD GATE FOR COHERENT STATE QUBITS
A Hadamard gate operation is a particular one-qubit gate operation and can be represented in matrix form by

ĥ = (1/√2) ( 1   1
             1  −1 ).   (12)

Its action on the states |0⟩ and |1⟩ gives, respectively,

ĥ|0⟩ = (|0⟩ + |1⟩)/√2,   (13)
ĥ|1⟩ = (|0⟩ − |1⟩)/√2,   (14)

where |0⟩ and |1⟩ are orthonormal states.
In the case of coherent states, a quantum gate operation can be realized only approximately, due to the non-orthonormality of these states, whose overlap is asymptotically negligible.
We consider a Hadamard gate operation for coherent states, involving coherent state qubits of the type a|α⟩ + b|−α⟩. This is a particular one-qubit gate operation on coherent states, describing one case of the following general process:

a|α⟩ + b|−α⟩ → c|α⟩ + d|−α⟩,   (15)
where a, b, c, d are complex numbers. Note that, taking into account the projection relations for coherent states 1, in the case of the states |α⟩ and |−α⟩,

⟨α|α⟩ = 1,   (16)
⟨α|−α⟩ = e^{-2|α|²},   (17)

we can consider |α| sufficiently large, while still satisfying the degenerate Raman regime inequalities (5) and (6), such that the states |α⟩ and |−α⟩ are approximately orthonormal, ⟨α|−α⟩ ≈ 0, a condition of negligible overlap. Consequently, the system of coherent states can be described in a Hilbert space spanned by |α⟩ and |−α⟩. A Hadamard gate operation can then be represented by an operator ĥ_c in the Hilbert space H_c spanned by |α⟩ and |−α⟩. Its action on the qubit a|α⟩ + b|−α⟩ results in a change of the coefficients of the superposition:
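The negligible-overlap condition is easy to quantify: ⟨α|−α⟩ = e^{-2|α|²} is already of order 10⁻⁴ for |α| = 2. A small check (illustrative, not from the paper) evaluates the closed form and reproduces it from the truncated Fock expansion of the coherent states:

```python
import numpy as np
from math import factorial

for a in (1.0, 2.0, 3.0):
    print(a, np.exp(-2 * a ** 2))   # overlap <alpha|-alpha> for alpha = a

# Cross-check against the Fock-basis expansions of |alpha> and |-alpha>
n = np.arange(40)
fact = np.array([factorial(k) for k in n], dtype=float)
coh = lambda z: np.exp(-z * z / 2) * z ** n / np.sqrt(fact)
overlap = np.dot(coh(2.0), coh(-2.0))
assert np.isclose(overlap, np.exp(-8.0), atol=1e-10)
```

For |α| = 3 the overlap is already below 10⁻⁷, so treating |α⟩ and |−α⟩ as an orthonormal qubit basis is an excellent approximation at moderate amplitudes.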
ĥ_c (a|α⟩ + b|−α⟩) ≡ c|α⟩ + d|−α⟩.   (18)
Different from qubits formed from atomic states, a|e + b|g , or Fock states, a|1 + b|0 , for example, the qubit a|α + b| − α is not straightforward to operate on physically. Indeed, these states are cat-like states, and gate operations on them are a case of cat-like gate operations. For these cat state qubits of coherent states, the uncertainties between amplitude and phase are the minimum allowed by Heisenberg's uncertainty principle 1 . Schemes for the generation of superpositions of coherent states were realized in both cavity QED 30 and trapped ions 31 . The difficulty in creating and observing such superpositions is mainly due to the fast decay of the coherences 29 . With the advances of the techniques for continuous variable quantum computation, the need to handle such states as qubit states leads to the problem of one- and multi-qubit gate operations 33 . The realization of gate operations on superpositions of coherent states was only recently achieved 21 , which raises the possibility of performing quantum gate operations on qubits of cat-like states and using them in quantum computation.
In our case, the quantum gate operation on the cat-like qubit will be of the type
(c + | − α + c − |α ) → (c g | − α + c e |α ) ,(19)
As we will show in the next section, this is a Hadamard-type operationĥ c that can be written in the basis of the coherent states |α and | − α in the form
h c = | − α −α| − |α α| + |α −α| + | − α α| √ 2 .(20)
Due to the negligible overlap, we can map the coherent state qubit onto a two-level qubit space, in which the Hadamard gate can be represented in matrix form

h c ↔ 1 √ 2 1 1 1 −1 , (21)

with the basis identification

| − α ↔ 1 0 , |α ↔ 0 1 . (22)

Now we propose a quantum state transfer protocol consisting of two steps (figure 2): (1) In the first step, the degenerate Raman interaction and an atomic detection are realized; during this step, the Hadamard gate for coherent states can be considered as switched off. (2) After the first step, we switch on the gate operation for coherent states, so that the manipulation of the coherent state qubit is achieved.
QUANTUM STATE TRANSFER WITH COHERENT STATE QUBITS
Let us start with the single-mode field initially in the coherent state |α . In the degenerate Raman interaction regime, the upper level |f of the three-level Rydberg atom of Λ-type can be neglected, reducing the atomic system to a two-level system. In this situation, the atomic qubit state is described by
|φ = c g |g + c e |e ,(23)
where |c g | 2 + |c e | 2 = 1.
The interaction between the single-mode field in the coherent state |α and the atomic state |φ , eq. (23), is given by the effective Raman interaction described previously, eq. (4). We want to transfer the unknown coefficients c g and c e from the atom to the single-mode field in such a way that, in the final step, the atomic qubit is mapped onto a coherent state qubit of the same form as the atomic one, i.e., carrying the coefficients c g and c e .
If the atom-field interaction occurs during a time t, the system composed of atom-field evolves to the state
|ψ αα ′ = (c + |α ′ − c − |α ) |g + (c + |α ′ + c − |α ) |e ,(25)
where c ± are given by (8) and α ′ is related to α by means of
α ′ = e −2iβt α.(26)
An atomic detection will then project the field onto a superposition of coherent states, leaving it in a cat-like state. If the measurement of the atom is realized in the state |e , the field is projected into the cat state
|φ + = c + |α ′ + c − |α .(27)
On the other hand, a measurement of the atom in the state |g will project the field into the cat state
|φ − = c + |α ′ − c − |α .(28)
We can choose the time of interaction t = π/2β such that from (26) we have
|φ ± = c + | − α ± c − |α .(29)
Now, taking into account the projection relations for coherent states, equations (16) and (17), and the condition of negligible overlap, α| − α ≈ 0, the states |α and | − α are approximately orthonormal and span a Hilbert space for a two-level system, generated by the coherent states |α and |−α . In this situation, the field acts as a coherent state qubit, capable of storing the qubit state expressed by the atomic state (23). We note that the coefficients in (29) do not correspond to c g and c e , as in the initial atomic qubit, but are related to these by means of the relations in (8), or equivalently
c + + c − = c e ,(30)c + − c − = c g .(31)
For this reason, the realization of a qubit gate operation for coherent states is necessary. Indeed, a Hadamard gate operation can be written in the basis of coherent states by means of
h c = | − α −α| − |α α| + |α −α| + | − α α|,(32)
where we drop the previous √ 2 factor and consider just the linear action on the states. As a 2 × 2-matrix, this operation can be written in terms of Pauli matrices as a linear combination of σ z and σ x , by means ofĥ c = σ z + σ x 27 .
The quantum gate operationĥ c acts on |α and | − α in the following way (see figure 3):

ĥ c |α = −|α + | − α , (33)
ĥ c | − α = |α + | − α . (34)

The action of the operatorĥ c on the field |φ + leads the single-mode field to the state

ĥ c |φ + = c e | − α + c g |α , (35)
corresponding to a quantum information transfer when the atom is detected in the state |e . On the other hand, the action ofĥ c on the field |φ − leads to the statê
h c |φ − = c g | − α + c e |α ,(36)
corresponding to a quantum information transfer when the atom is detected in the state |g (see figure 4). In order to fix the coefficients, we can choose the previous selective atomic state detection to measure the atomic state in the ground state |g , such that the coherent state qubit has the form (36). Consequently, this protocol is equivalent to a quantum state transfer of the state c g |0 + c e |1 , which in our case goes from an atomic qubit to a coherent state qubit, a cat-like qubit state.
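The bookkeeping of eqs. (29)-(36) can be verified with a short numpy sketch in the orthonormalized basis {| − α , |α } of eq. (22), valid in the negligible-overlap limit; the unnormalized gate matrix σ z + σ x follows (32)-(34), and the values of c ± are obtained from (30)-(31):

```python
import numpy as np

rng = np.random.default_rng(0)
cg, ce = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(cg) ** 2 + abs(ce) ** 2)
cg, ce = cg / norm, ce / norm          # atomic qubit c_g|g> + c_e|e>, eq. (23)

cp, cm = (ce + cg) / 2, (ce - cg) / 2  # from (30)-(31): c+ + c- = c_e, c+ - c- = c_g

# basis encoding of eq. (22):  |-a> -> (1,0),  |a> -> (0,1)
phi_plus = np.array([cp, cm])          # field after detecting |e>, eq. (29)
phi_minus = np.array([cp, -cm])        # field after detecting |g>, eq. (29)

h_c = np.array([[1, 1], [1, -1]])      # sigma_z + sigma_x, eq. (32), sqrt(2) dropped

out_e = h_c @ phi_plus                 # expect c_e|-a> + c_g|a>, eq. (35)
out_g = h_c @ phi_minus                # expect c_g|-a> + c_e|a>, eq. (36)
print(np.allclose(out_e, [ce, cg]), np.allclose(out_g, [cg, ce]))
```

Both checks return True for any normalized (c g , c e ), confirming that the gate completes the transfer regardless of which degenerate state is detected.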
CONCLUSION
In conclusion, a quantum information transfer from an atomic to a field state had not yet been proposed in the degenerate Raman regime. In fact, although the physics behind the atom-field entanglement is a well known process 3 , the step forward of a Hadamard gate for coherent state qubits was only recently proposed 20 and experimentally achieved 21 , leading to the possibility of realizing quantum computation with cat-like states.
Here we have proposed a quantum transfer protocol where the atom and field interact in the degenerate Raman regime, a selective atomic detection of one of the degenerate states is realized and, finally, a Hadamard gate operation for coherent states is applied to the coherent state qubit. This protocol is equivalent to a quantum state transfer of the state c g |0 + c e |1 , which in our case goes from an atomic qubit to a coherent state qubit, a cat-like qubit state. As such, it can make use of the recent advances in coherent state quantum computation.
This scheme can also take into account approximately degenerate states, such that the two transition frequencies associated to each state by the same optical field are similar. Consequently, an exact degeneracy is not necessary. Since a small energy difference between the two lower atomic states is likely to be typical in experiments, this is of practical interest.
Independently of the final state in which the atom is detected, |g or |e , the action of the Hadamard gate for coherent statesĥ c on the resulting coherent state qubit leads to a complete quantum transfer of the atomic qubit coefficients c g and c e from the atom to the single-mode field. However, in order to avoid the statistical average that traces over the atomic state, which would result in a statistical mixture of the detection results for the states |g and |e , we have to choose one state on which to realize the detection. By choosing the atomic selective detection on the state |g , we can complete the quantum state transfer of the state c g |0 + c e |1 from the atomic qubit to the coherent state qubit.
The experimental implementation of the atom-field entanglement is generally considered with the use of Rydberg atoms of long lifetime and high-Q superconducting microwave cavities, where the dissipative processes are not relevant compared to the atom-field interaction 19 . The appropriate choices of λ and ∆, of the order ≈ 10 kHz and ≈ 10 3 kHz, respectively, are sufficient for the condition of large detuning. A quality factor Q of order ≈ 10 11 corresponds to a cavity lifetime T c of the order ≈ 10 −1 s, associated to atomic velocities of order ≈ 10 3 m/s 32 . We can choose α equal to 10, 5, 3 or 2, for instance, which lead to negligible overlap conditions e −2|α| 2 of orders ≈ 10 −87 , ≈ 10 −22 , ≈ 10 −8 and ≈ 10 −4 , respectively. The Hadamard gate operation for coherent states can be executed in ≈ 10 −2 s without surpassing the cavity lifetime 21 .
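The quoted overlap magnitudes follow directly from e −2|α|² ; a quick check:

```python
from math import exp, log10, floor

orders = []
for alpha in (10, 5, 3, 2):
    ov = exp(-2 * alpha ** 2)          # overlap <a|-a> = exp(-2|a|^2)
    orders.append(floor(log10(ov)))    # order of magnitude
    print(alpha, f"{ov:.2e}")
print(orders)  # [-87, -22, -8, -4]
```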
The quantum transfer from atom qubit to cat-like state qubit is an important quantum operation for coherent state quantum computing 33,34,35,36,37,38 . When the coefficients of the atomic qubit are mapped onto coherent state qubit, the cat-like state, a quantum state transfer between systems of different nature is realized with the effective transfer of the qubit c g |0 + c e |1 . This is an interesting instance of matter-light quantum state transfer that has recently been achieved with different protocols in quantum information and quantum computation 39,40,41,42,43,44,45,46,47,48,49 .
Fig. 1 .
1(Color online) Scheme of degenerate Raman interaction.
Fig. 2 .
2(Color online) Protocol scheme divided in two steps.
Fig. 3 .
3(Color online) Scheme of the states |α and | − α states (blue arrows) and under action of the Hadamard gate for coherent states (red arrows).
Fig. 4 .
4(Color online) Scheme of |φ + and |φ − states (blue arrows) and under action of the Hadamard gate for coherent states (red arrows).
ACKNOWLEDGEMENTS

The author thanks CAPES (Brazil) for partial financial support and the reviewers for suggestions of references and comments on the manuscript.
D. F. Walls, G. J. Milburn, Quantum Optics (Springer-Verlag, Berlin, 2008).
V. Vedral, Modern Foundations of Quantum Optics (Imperial College Press, London, 2005).
D. Suter, The Physics of laser-atom interactions (Cambridge University Press, Cambridge, 1997).
C. Wei, D. Suter, A. S. M. Windsor, N. B. Manson, Phys. Rev. A 58 (1998) 2310.
P. M. Sheldon, A. B. Matsko, Laser Physics 9 (1999) 894.
G. R. Agarwal, Phys. Rev. A 1 (1970) 1445.
N. Nayak, V. Bartzis, Phys. Rev. A 42 (1990) 2953.
Y. Wu, Phys. Rev. A 54 (1996) 1586.
Y. Wu, X. Yang, Phys. Rev. A 56 (1997) 2443.
Z. XinHua, Y. ZhiYong, X. PeiPei, Sci. China Ser. G Phys Mec. Atron. 52 (2009) 1034.
C. C. Gerry, J. H. Eberly, Phys. Rev. A 42 (1990) 6805.
L. Xu, Z-F. Luo, Z-M. Zhang, J. Phys. B 27 (1994) 1649.
A. Sinatra et al., Quantum Semiclass. Opt. 7 (1995) 405.
L. Xu, Z-M. Zhang, Z. Phys. B 95 (1994) 507.
A. T. Avelar, T. M. Rocha Filho, L. Losano, B. Baseia, Phys. Lett. A 340 (2005) 74.
L. P. A. Maia, B. Baseia, A. T. Avelar, J. M. C. Malbouisson, J. Opt. B: Quantum Semiclass. Opt. 6 (2004) 351.
S-B. Zheng, G-C. Guo, Quantum Semiclass. Opt. 9 (1997) L45.
A. T. Avelar, L. A. de Souza, T. M. da Rocha Filho, B. Baseia, J. Opt. B: Quantum Semiclass. Opt. 6 (2004) 383.
K.-H. Song, W.-J. Zhang, G.-C. Guo, Eur. Phys. J. D 19 (2002) 267.
P. Marek, J. Fiurasek, Phys. Rev. A 82 (2010) 014304.
A. Tipsmark et al., Phys. Rev. A 84 (2011) 050301.
S-B. Zheng, G-C. Guo, Phys. Lett. A 232 (1997) 171.
M. Feng, M. A. S. She, Comm. Theor. Phys. 47 (2007) 330.
K.-H. Song, W.-J. Zhang, Phys. Lett. A 290 (2001) 214.
S-B. Zheng, Chin. Phys. Journal 42 (2004) 35.
N. Nayak, S. S. Hassan, T. Biswas, arXiv:atom-ph/9605007v1 (1996).
M. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
E. Schrodinger, Naturwissenschaften 23 (1935) 807.
G. Huyet, S. Franke-Arnold, S. M. Barnett, Phys. Rev. A 63 (2001) 043812.
M. Brune, E. Hagley, J. Dreyer, X. Matre, A. Maali, C. Wunderlich, J. M. Raimond, S. Haroche, Phys. Rev. Lett. 77 (1996) 4887.
C. Monroe, D. M. Meekhof, B. E. King, D. J. Wineland, Science 272 (1996) 1131.
J. I. Cirac, P. Zoller, Phys. Rev. A 50 (1994) 2799.
J. S. Neegard-Nilsen et al., Phys. Rev. Lett. 105 (2010) 053602.
N. Cerf, G. Leuchs, E. Polzik, Quantum Information with Continuous Variables of Atoms and Light (Imperial College Press, London, 2007).
P. T. Cochrane, G. J. Milburn, W. J. Munro, Phys. Rev. A 59 (1999) 2631.
H. Jeong, M. S. Kim, Phys. Rev. A 65 (2002) 042305.
T. C. Ralph, A. Gilchrist, G. J. Milburn, W. J. Munro, S. Glancy, Phys. Rev. A 68 (2003) 042319.
A. P. Lund, T. C. Ralph, H. L. Haselgrove, Phys. Rev. Lett. 100 (2008) 030503.
J. M. Raimond, M. Brune, S. Haroche, Rev. Mod. Phys. 73 (2001) 565.
S-B. Zheng, G-C. Guo, Phys. Rev. Lett. 85 (2000) 2392.
X-W. Wang, G-J. Yang, Optics Communications 281 (2008) 5282.
E. Hagley et al., Phys. Rev. Lett. 79 (1997) 1.
S. Osnaghi, P. Bertet, A. Auffeves, P. Maioli, M. Brune, J. M. Raimond, S. Haroche, Phys. Rev. Lett. 87 (2001) 037902.
L. Davidovich, A. Maali, M. Brune, J. M. Raimond, S. Haroche, Phys. Rev. Lett. 71 (1993) 2360.
D. N. Matsukevich, A. Kuzmich, Science 85 (2004) 306.
J. F. Sherson, H. Krauter, R. K. Olsson, B. Jusgaard, K. Hammerer, I. Cirac, E. S. Polzik, Nature 444 (2006) 557.
K. Koshino, S. Ishizaka, Y. Nakamura, Phys. Rev. A 82 (2010) 010301.
X. Maitre, E. Hagley, G. Nogues, C. Wunderlich, P. Goy, M. Brune, J. M. Raimond, S. Haroche, Phys. Rev. Lett. 79 (1997) 769.
S. Barz, E. Kashefi, A. Broadbent, J. F. Fitzsimons, A. Zeilinger, P. Walther, Science 335 (2012) 303.
Characterizations of Super-regularity and its Variants

Aris Daniilidis, D. Russell Luke, Matthew K. Tam

August 16, 2018 (arXiv:1808.04978, submitted 15 Aug 2018; DOI: 10.1007/978-3-030-25939-6_6)

Keywords: super-regularity, subsmoothness, approximately convex. MSC2010: 49J53, 26B25, 49J52, 65K10.
Convergence of projection-based methods for nonconvex set feasibility problems has been established for sets with ever weaker regularity assumptions. What has not kept pace with these developments is analogous results for convergence of optimization problems with correspondingly weak assumptions on the value functions. Indeed, one of the earliest classes of nonconvex sets for which convergence results were obtainable, the class of so-called super-regular sets [10], has no functional counterpart. In this work, we amend this gap in the theory by establishing the equivalence between a property slightly stronger than super-regularity, which we call Clarke super-regularity, and subsmoothness of sets as introduced by Aussel, Daniilidis and Thibault [1]. The bridge to functions shows that approximately convex functions studied by Ngai, Luc and Thera [12] are those which have Clarke super-regular epigraphs. Further classes of regularity of functions based on the corresponding regularity of their epigraph are also discussed.
Introduction
The notion of a super-regular set was introduced by Lewis, Luke and Malick [10], who recognized the property as an important ingredient for proving convergence of the method of alternating projections without convexity. This was generalized in subsequent publications [3,6,7,11], with the weakest known assumptions guaranteeing local linear convergence of the alternating projections algorithm for two-set, consistent feasibility problems to date found in [15, Theorem 3.3.5]. The regularity assumptions on the individual sets in these subsequent works are vastly weaker than super-regularity, but what has not kept pace with these generalizations is their functional analogs. Indeed, it appears that the notion of a super-regular function has not yet been articulated. In this note, we bridge this gap between super-regularity of sets and functions as well as establish connections to other known function-regularities in the literature. A missing link is yet another type of set regularity, what we call Clarke super-regularity, a slightly stronger version of super-regularity which, as we show, is equivalent to other existing notions of regularity. For a general set that is not necessarily the epigraph of a function, we establish an equivalence between subsmoothness as introduced by Aussel, Daniilidis and Thibault [1] and Clarke super-regularity.
To begin, in Section 2 we recall different concepts of the normal cones to a set as well as notions of set regularity, including Clarke regularity (Definition 2.3) and (limiting) super-regularity (Definition 2.4). Next, in Section 3 we introduce the notion of Clarke super-regularity (Definition 3.1) and relate it to the notion of subsmoothness (Theorem 3.4). We also provide an example illustrating that Clarke super-regularity at a point is a strictly weaker condition than Clarke regularity around the point (Example 3.2). Finally, in Section 4, we provide analogous statements for Lipschitz continuous functions, relating the class of approximately convex functions to super-regularity of the epigraph. After completing this work we received a preprint [16] which contains results of this flavor, including a characterization of (limiting) super-regularity in terms of (metric) subsmoothness.
Normal cones and Clarke regularity
The notation used throughout this work is standard for the field of variational analysis, as can be found in [14]. The closed ball of radius r > 0 centered at x ∈ R n is denoted B r (x) and the closed unit ball is denoted B := B 1 (0). The (metric) projector onto a set Ω ⊂ R n , denoted by P Ω : R n ⇒ Ω, is the multi-valued mapping defined by
P Ω (x) := {ω ∈ Ω : x − ω = d(x, Ω)},
where d(x, Ω) denotes the distance of the point x ∈ R n to the set Ω. When Ω is nonempty and closed, its projector P Ω is everywhere nonempty. A selection from the projector is called a projection.
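For intuition, a minimal numpy sketch of the projector onto a finite (hence closed, but nonconvex) set, illustrating that P Ω may be multivalued; the two-point set Ω = {−1, 1} ⊂ R is an illustrative choice, not taken from the text:

```python
import numpy as np

def proj(x, omega, tol=1e-12):
    """All nearest points of the finite set omega (given as rows) to x."""
    omega = np.atleast_2d(np.asarray(omega, dtype=float))
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = np.linalg.norm(omega - x, axis=1)
    return omega[np.abs(d - d.min()) <= tol]

omega = [[-1.0], [1.0]]       # Omega = {-1, 1} on the real line
print(proj([0.3], omega))     # unique projection: the point 1
print(proj([0.0], omega))     # multivalued: both -1 and 1 are nearest
print(np.linalg.norm(np.array([0.3]) - proj([0.3], omega)[0]))  # d(x, Omega) ~ 0.7
```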
Given a set Ω, we denote its closure by cl Ω, its convex hull by conv Ω, and its conic hull by cone Ω. In this work we shall deal with two fundamental tools in nonsmooth analysis; normal cones to sets and subdifferentials of functions (Section 4).
Definition 2.1 (normal cones).
Let Ω ⊆ R n and letω ∈ Ω.
(i) The proximal normal cone of Ω atω ∈ Ω is defined by
N P Ω (ω) = cone (P Ω −1 (ω) −ω).
Equivalently,ω * ∈ N P Ω (ω) whenever there exists σ ≥ 0 such that
ω * , ω −ω ≤ σ ω −ω 2 , ∀ω ∈ Ω.
(ii) The Fréchet normal cone of Ω atω is defined bŷ
N Ω (ω) = {ω * ∈ R n : ω * , ω −ω ≤ o( ω −ω ), ∀ω ∈ Ω} ,
Equivalently,ω * ∈N Ω (ω), if for every ε > 0 there exists δ > 0 such that
ω * , ω −ω ≤ ε ω −ω , for all ω ∈ Ω ∩ B δ (ω).(1)
(iii) The limiting normal cone of Ω atω is defined by
N Ω (ω) = Lim sup ω→ωN Ω (ω),
where the limit superior denotes the Painlevé-Kuratowski outer limit.
(iv) The Clarke normal cone of Ω atω is defined by
N C Ω (ω) = cl conv N Ω (ω).
Whenω ∈ Ω, all of the aforementioned normal cones atω are defined to be empty.
Central to our subsequent analysis is the notion of a truncation of a normal cone. Given r > 0, one defines the r-truncated version of each of the above cones to be its intersection with a ball centered at the origin of radius r. For instance, the r-truncated proximal normal cone of Ω atω ∈ Ω is defined by
N rP Ω (ω) = cone (P Ω −1 (ω) −ω) ∩ B r ,

that is,ω * ∈ N rP Ω (ω) whenever ||ω * || ≤ r and for some σ ≥ 0 we have ω * , ω −ω ≤ σ ||ω −ω|| 2 , ∀ω ∈ Ω.
In general, the following inclusions between the normal cones can deduce straightforwardly from their respective definitions:
N P Ω (ω) ⊆N Ω (ω) ⊆ N Ω (ω) ⊆ N C Ω (ω).(2)
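As a concrete illustration of the proximal normal inequality of Definition 2.1(i), consider Ω the unit circle in R² andω = (1, 0) (an illustrative choice, not taken from the text): the outward vector (1, 0) satisfies the inequality with σ = 0, while the inward vector (−1, 0) satisfies it with equality for σ = 1/2, since ⟨(−1, 0), ω −ω⟩ = 1 − cos θ = ‖ω −ω‖²/2. A numeric check:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 2001)
omega = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # points of the circle
wbar = np.array([1.0, 0.0])
diff = omega - wbar
sq = np.sum(diff ** 2, axis=1)                            # ||w - wbar||^2

out_lhs = diff @ np.array([1.0, 0.0])   # <(1,0), w - wbar> = cos(theta) - 1 <= 0
in_lhs = diff @ np.array([-1.0, 0.0])   # <(-1,0), w - wbar> = 1 - cos(theta)
print(np.all(out_lhs <= 1e-12))         # sigma = 0 suffices for the outward normal
print(np.allclose(in_lhs, sq / 2))      # equality with sigma = 1/2 for the inward normal
```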
The regularity of sets is characterized by the relation between elements in the graph of the normal cones to the sets and directions constructable from points in the sets. The weakest kind of regularity of sets that has been shown to guarantee convergence of the alternating projections algorithm is elemental subregularity (see [7, Cor.3.13(a)] and [15,Theorem 3.3.5]). It was called elemental (sub)regularity in [8,Definition 5] and [11,Definition 3.1] to distinguish regularity of sets from regularity of collections of sets. Since we are only considering the regularity of sets, and later functions, we can drop the "elemental" qualifier in the present setting. We also streamline the terminology and variations on elemental subregularity used in [8,11], replacing uniform elemental subregularity with a more versatile and easily distinguishable variant.
Definition 2.2 (subregularity [8, Definition 5]).
Let Ω ⊆ R n andω ∈ Ω. The set Ω is said to be ε-subregular relative to Λ atω for (ω,ω * ) ∈ gph N Ω if it is locally closed atω and there exists an ε > 0 together with a neighborhood U ofω such that
ω * − (ω ′ − ω), ω −ω ≤ ε||ω * − (ω ′ − ω)|| ω −ω , ∀ω ′ ∈ Λ ∩ U, ∀ω ∈ P Ω (ω ′ ). (3)
If for every ε > 0 there is a neighborhood (depending on ε) such that (3) holds, then Ω is said to be subregular relative to Λ atω for (ω,ω * ) ∈ gph N Ω .
The property that distinguishes the degree of regularity of sets is the diversity of vectors (ω,ω * ) ∈ gph N Ω for which (3) holds, as well as the choice of the set Λ. Of particular interest to us are Clarke regular sets, which satisfy (3) for all ε > 0 and for all Clarke normal vectors atω. Definition 2.3 (Clarke regularity). The set Ω is said to be Clarke regular atω ∈ Ω if it is locally closed atω and for every ε > 0 there exists δ > 0 such that for all
(ω,ω * ) ∈ gph N C Ω ω * , ω −ω ≤ ε ||ω * || ω −ω , ∀ω ∈ Ω ∩ B δ (ω).(4)
Note that (4) is (3) with Λ = Ω and U = B δ (ω), which in the case of Clarke regularity holds for all (ω,ω * ) ∈ gph N C Ω . A short argument shows that, for Ω Clarke regular at ω, the Clarke and Fréchet normal cones coincide atω. Indeed, this property is used to define Clarke regularity in [14,Definition 6.4]. It is also immediately clear from the definitions that if Ω is Clarke regular atω, then it is subregular relative to Λ = Ω atω for allω * ∈ N Ω (ω).
By setting Λ = R n , lettingω ∈ Ω be in a neighborhood ofω and fixingω * = 0 in the context of Definition 2.2, we arrive at super-regularity which, when stated explicitly, takes the following form.
Definition 2.4 (super-regularity [10, Definition 4.3]).
Let Ω ⊆ R n andω ∈ Ω. The set Ω is said to be super-regular atω if it is locally closed atω and for every ε > 0 there is
a δ > 0 such that for all (ω̃, 0) ∈ gph N Ω ∩ (B δ (ω) × {0})

ω ′ − ω, ω̃ − ω ≤ ε ||ω ′ − ω|| ||ω̃ − ω|| , ∀ω ′ ∈ B δ (ω), ∀ω ∈ P Ω (ω ′ ). (5)
Rewriting the above leads to the following equivalent characterization of super-regularity, which is more useful for our purposes: Ω is super-regular atω if and only if it is locally closed atω and for every ε > 0 there is a δ > 0 such that

ω * 1 , ω 2 − ω 1 ≤ ε ||ω * 1 || ω 2 − ω 1 , ∀(ω 1 , ω * 1 ) ∈ gph N Ω ∩ (B δ (ω) × R n ) , ∀ω 2 ∈ Ω ∩ B δ (ω). (6)
It is immediately clear from this characterization that super-regularity implies Clarke regularity. By continuing our development of increasingly nicer regularity properties to convexity, we have the following relationships involving stronger notions of regularity.
Proposition 2.6. Let Ω ⊆ R n be locally closed atω ∈ Ω.
(i) If Ω is prox-regular atω ( i.e., there exists a neighborhood of x on which the projector is single-valued), then there is a constant γ > 0 such that for all ε > 0
ω * 1 , ω 2 − ω 1 ≤ ε||ω * 1 || ω 2 − ω 1 , ∀(ω 1 , ω * 1 ) ∈ gph N Ω ∩ (B γε (ω) × R n ) , ∀ω 2 ∈ Ω ∩ B γε (ω). (7) (ii) If Ω is convex, then ω * 1 , ω 2 − ω 1 ≤ 0 , ∀(ω 1 , ω * 1 ) ∈ gph N Ω , ∀ω 2 ∈ Ω.(8)
Proof. The proof of (i) can be found in [8,Proposition 4(vi)]. Part (ii) is classical.
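Inequality (8) is easy to confirm numerically for a convex set; a sketch with Ω the closed unit disk, ω 1 = (1, 0) and ω * 1 = (1, 0) ∈ N Ω (ω 1 ) (an illustrative choice, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(5000, 2))
omega2 = pts[np.linalg.norm(pts, axis=1) <= 1]  # samples of the unit disk
w1 = np.array([1.0, 0.0])
w1star = np.array([1.0, 0.0])                   # outward normal at w1
lhs = (omega2 - w1) @ w1star                    # <w1*, w2 - w1> = (x-coordinate) - 1
print(np.all(lhs <= 1e-12))                     # (8) holds: all inner products <= 0
```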
Example 2.7 (Pac-Man). Let x = 0 ∈ R 2 and consider two subsets of R 2 given by
A = {(x 1 , x 2 ) ∈ R 2 | x 2 1 + x 2 2 ≤ 1, − (1/2)x 1 ≤ x 2 ≤ x 1 , x 1 ≥ 0}, B = {(x 1 , x 2 ) ∈ R 2 | x 2 1 + x 2 2 ≤ 1, x 1 ≤ |x 2 |}.
The set B looks like a "Pac-Man"' with mouth opened to the right and the set A, if you like, a piece of pizza. For an illustration, see Figure 1. The set B is subregular relative
to A at x = 0 for all (b, v) ∈ gph (N B ∩ A) for ε = 0 on all neighborhoods since, for all a ∈ A, a B ∈ P B (a) and v ∈ N B (b) ∩ A. To see this, we simply note that v − (a − a B ), a B − b = v, a B − b − a − a B , a B − b = 0.
In other words, from the perspective of the piece of pizza, Pac-Man looks convex. The set B, however, is only ε-subregular at x = 0 relative to R 2 for any v ∈ N B (0) for ε = 1 since, by choosing
x = tv ∈ B (where 0 ≠ v ∈ B ∩ N B (0), t ↓ 0), we have v, x = ||v|| ||x|| > 0.
Clearly, this also means that Pac-Man is not Clarke regular.
Super-regularity and subsmoothness
In the context of the definitions surveyed in the previous section, we introduce an even stronger type of regularity that we identify, in Theorem 3.4, with subsmoothness as studied in [1]. This will provide a crucial link to the analogous characterizations of regularity for functions considered in Theorem 4.6, in particular, to approximately convex functions studied in [12].
Definition 3.1 (Clarke super-regularity).
Let Ω ⊆ R n andω ∈ Ω. The set Ω is said to be Clarke super-regular atω if it is locally closed atω and for every ε > 0 there exists
δ > 0 such that for every (ω,ω * ) ∈ gph N C Ω ∩ (B δ (ω) × R n ), the following inequality holds ω * , ω −ω ≤ ε ||ω * || ω −ω , ∀ω ∈ Ω ∩ B δ (ω).(9)
The only difference between Clarke super-regularity and super-regularity is that, in the case of Clarke super-regularity, the key inequality above holds for all nonzero Clarke normals in a neighborhood instead of holding only for limiting normals (compare (6) with (9)). It therefore follows that Clarke super-regularity at a point implies Clarke regularity there. Nevertheless, even this stronger notion of regularity does not imply Clarke regularity aroundω, as the following counterexample shows.

Example 3.2 (regularity only at a point). Let f : R → R be the continuous, piecewise linear function (see Figure 2) defined by

f (x) := 0 if x ≤ 0; −(1/2^{k+1})(x − 1/2^k ) − 1/(3 · 4^k ) if 1/2^{k+1} ≤ x ≤ 1/2^k (for k = 1, 2, . . . ); −1/12 if x ≥ 1/2.

Notice that

−(4/3) x 2 ≤ f (x) ≤ −(1/3) x 2 , ∀x ∈ [0, 1/2]. (10)

Let Ω = epi f denote the epigraph of f . Thanks to (10) it is easily seen that Ω is Clarke regular atω = (0, 0) in the sense of Definition 2.3. However, Ω is not Clarke regular at the sequence of kink points ω k = (1/2^{k+1} , f (1/2^{k+1} )) converging toω. Indeed, the Fréchet normal conesN Ω (ω k ) are reduced to {0} for all k ≥ 1, while the corresponding limiting normal cones are given by

N Ω (ω k ) = R + {(−1/2^{k+1} , −1)} ∪ R + {(−1/2^{k+2} , −1)}, ∀k ∈ N.

Definition 3.3 (subsmoothness [1]).

(i) The set Ω is subsmooth atω ∈ Ω if, for every r > 0 and ε > 0, there exists δ > 0 such that for all ω 1 , ω 2 ∈ B δ (ω) ∩ Ω, all ω * 1 ∈ N rC Ω (ω 1 ) and all ω * 2 ∈ N rC Ω (ω 2 ) we have:

ω * 1 − ω * 2 , ω 1 − ω 2 ≥ −ε ω 1 − ω 2 . (11)
(ii) The set Ω is semi-subsmooth atω if, for every r > 0 and ε > 0, there exists δ > 0 such that for all ω ∈ B δ (ω) ∩ Ω, all ω * ∈ N rC Ω (ω) and allω * ∈ N rC Ω (ω)
ω * −ω * , ω −ω ≥ −ε ω −ω .(12)
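Returning to Example 3.2, a short numeric check (a sketch; the sampling grid is arbitrary) that the piecewise linear interpolation of the breakpoint values f (1/2^k ) = −(1/3)·4^{−k}, which has slope −1/2^{k+1} on [1/2^{k+1}, 1/2^k] by continuity, indeed satisfies the squeeze bounds (10):

```python
import numpy as np

def f(x):
    """Piecewise linear model of Example 3.2: f(1/2**k) = -(1/3) * 4**(-k)."""
    if x <= 0:
        return 0.0
    if x >= 0.5:
        return -1.0 / 12
    k = int(np.floor(-np.log2(x)))  # index with 1/2**(k+1) <= x <= 1/2**k
    return -(0.5 ** (k + 1)) * (x - 0.5 ** k) - 1.0 / (3 * 4 ** k)

# continuity at the breakpoints x = 1/2**(k+1): adjacent pieces agree
for k in range(1, 25):
    x = 0.5 ** (k + 1)
    right = -(0.5 ** (k + 1)) * (x - 0.5 ** k) - 1.0 / (3 * 4 ** k)
    left = -1.0 / (3 * 4 ** (k + 1))  # piece k+1 evaluated at its right endpoint
    assert abs(left - right) < 1e-15

# squeeze bounds (10): -(4/3)x^2 <= f(x) <= -(1/3)x^2 on (0, 1/2]
xs = np.linspace(1e-6, 0.5, 10001)
fx = np.array([f(x) for x in xs])
assert np.all(fx <= -xs ** 2 / 3 + 1e-15)
assert np.all(fx >= -4 * xs ** 2 / 3 - 1e-15)
print("continuous; bounds (10) hold on the sample grid")
```

The upper bound is attained exactly at each breakpoint, which is what makes Ω Clarke regular at the origin but not at the kinks.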
It is clear from the definitions that subsmoothness at a point implies semi-subsmoothness at the same point. The next theorem establishes the precise connection between subsmoothness and Clarke super-regularity (Definition 3.1).
Theorem 3.4 (characterization of subsmoothness).
Let Ω ⊆ R n be closed and nonempty.
(i) The set Ω is subsmooth atω ∈ Ω if and only if Ω is Clarke super-regular atω.
(ii) The set Ω is semi-subsmooth atω ∈ Ω if and only if for each constant ε > 0 there is a δ > 0 such that for every (ω,ω * ) ∈ gph N C Ω ω * , ω −ω ≤ ε ||ω * || ω −ω , ∀ω ∈ Ω ∩ B δ (ω) and for all (ω, ω * ) ∈ gph N C Ω ∩ (B δ (ω) × R n ),
ω * ,ω − ω ≤ ε ||ω * || ω − ω .
Proof. (i). Assume Ω is subsmooth atω ∈ Ω and fix an ε > 0. Set r = 1 and let δ > 0 be given by the definition of subsmoothness. Then for every ω 1 , ω 2 ∈ Ω ∩ B δ (ω) and ω * 2 ∈ N C Ω (ω 2 ) \ {0}, applying (11) for ω * 1 = 0 ∈ N (r=1)C Ω (ω 1 ) and ||ω * 2 || −1 ω * 2 ∈ N (r=1)C Ω (ω 2 ) we deduce (9). The same argument applies in the case that ω * 2 = 0 and ω * 1 ≠ 0. If both ω * 1 = ω * 2 = 0, then the required inequality holds trivially.
Let us now assume that Ω is Clarke super-regular atω and fix r > 0 and ε > 0. Let δ > 0 be given by the definition of Clarke super-regularity corresponding to ε ′ = ε/2r and let ω 1 , ω 2 ∈ B δ (ω) ∩ Ω, ω * 1 ∈ N rC Ω (ω 1 ) and ω * 2 ∈ N rC Ω (ω 2 ). It follows from (9) that
\[
\langle \omega_1^*,\ \omega_1 - \omega_2\rangle \;\ge\; -\frac{\varepsilon}{2r}\,\|\omega_1^*\|\,\|\omega_1 - \omega_2\| \;\ge\; -\frac{\varepsilon}{2}\,\|\omega_1 - \omega_2\|
\quad\text{and}\quad
\langle -\omega_2^*,\ \omega_1 - \omega_2\rangle \;\ge\; -\frac{\varepsilon}{2r}\,\|\omega_2^*\|\,\|\omega_1 - \omega_2\| \;\ge\; -\frac{\varepsilon}{2}\,\|\omega_1 - \omega_2\|.
\]
We conclude by adding the above inequalities.
Part (ii) is nearly identical and the proof is omitted.
The following corollary utilizes Theorem 3.4 to summarize the relations between various notions of regularity for sets, the weakest of these being the weakest known regularity assumption under which local convergence of alternating projections has been established [15,Theorem 3.3.5].
Corollary 3.5.
Let Ω ⊆ R n be closed, letω ∈ Ω and consider the following assertions.
(i) Ω is prox-regular atω.
(ii) Ω is subsmooth atω.
(iii) Ω is Clarke super-regular at ω.
(iv) Ω is (limiting) super-regular at ω.
(v) Ω is Clarke regular at ω.
(vi) Ω is subregular at ω relative to some nonempty Λ ⊂ R n for all (ω, ω * ) ∈ V ⊂ gph N P Ω .
Then (i) =⇒ (ii) ⇐⇒ (iii) =⇒ (iv) =⇒ (v) =⇒ (vi).
Proof.
Regularity of functions
The extension of the above notions of set regularity to analogous notions for functions typically passes through the epigraphs. Given a function f : R n → [−∞, +∞], recall that its domain is dom f := {x ∈ R n : f (x) < +∞} and its epigraph is
epi f := {(x, α) ∈ R n × R : f (x) ≤ α}.
The subdifferential of a function at a point x̄ can be defined in terms of the normal cone to its epigraph at that point. Let f : R^n → (−∞, +∞] and let x̄ ∈ dom f. The proximal subdifferential of f at x̄ is defined by
\[
\partial_P f(\bar x) = \bigl\{v \in \mathbb{R}^n : (v, -1) \in N^P_{\operatorname{epi} f}\bigl(\bar x, f(\bar x)\bigr)\bigr\}. \tag{13}
\]
The Fréchet (resp. limiting, Clarke) subdifferential, denoted ∂̂f(x̄) (resp. ∂f(x̄), ∂^C f(x̄)), is defined analogously by replacing the normal cone N^P_{epi f}(ω̄) with N̂_{epi f}(ω̄) (resp. N_{epi f}(ω̄), N^C_{epi f}(ω̄)) in (13), where ω̄ = (x̄, f(x̄)). The horizon and Clarke horizon subdifferentials at x̄ are defined, respectively, by
\[
\partial_\infty f(\bar x) = \bigl\{v \in \mathbb{R}^n : (v, 0) \in N_{\operatorname{epi} f}\bigl(\bar x, f(\bar x)\bigr)\bigr\}, \qquad
\partial^C_\infty f(\bar x) = \bigl\{v \in \mathbb{R}^n : (v, 0) \in N^C_{\operatorname{epi} f}\bigl(\bar x, f(\bar x)\bigr)\bigr\}.
\]
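The geometric content of (13) can be probed numerically: v ∈ ∂_P f(x̄) exactly when a small step from (x̄, f(x̄)) in the direction (v, −1) projects back onto (x̄, f(x̄)). A brute-force sketch for the illustrative choice f(x) = |x| (this example and the grid projection are mine, not from the text; the expected answer is ∂_P|x|(0) = [−1, 1]):

```python
import math

def proj_epi_abs(px, py, n=20001):
    """Brute-force projection of (px, py) onto epi |x|, boundary sampled on a grid."""
    best, bd = None, float("inf")
    for i in range(n):
        x = -1.0 + 2.0 * i / (n - 1)
        d = math.hypot(px - x, py - abs(x))
        if d < bd:
            best, bd = (x, abs(x)), d
    return best

for v in (-1.0, -0.5, 0.0, 0.5, 1.0):      # candidate proximal subgradients at 0
    t = 1e-3
    qx, qy = proj_epi_abs(t * v, -t)
    assert math.hypot(qx, qy) < 1e-3       # the step along (v, -1) projects back to (0, 0)

qx, qy = proj_epi_abs(2e-3, -1e-3)         # v = 2 is not one: the projection moves away
assert math.hypot(qx, qy) > 1e-4
print("numerically: [-1, 1] behaves as ∂_P|x|(0), and 2 does not belong to it")
```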
In what follows, we define regularity of functions in terms of the regularity of their epigraphs. We refer to a regularity defined in such a way as epi-regularity. (i) f is said to be ε-epi-subregular atx ∈ dom f relative to Λ ⊆ dom f for (y, v) whenever epi f is ε-subregular atx ∈ dom f relative to {(x, α) ∈ epi f | x ∈ Λ} for (y, (v, e)) with fixed e ∈ {−1, 0}.
(ii) f is said to be epi-subregular atx relative to Λ ⊆ dom f for (y, v) whenever epi f is subregular at (x, f (x)) relative to {(x, α) ∈ epi f | x ∈ Λ} for (y, (v, e)) with fixed e ∈ {−1, 0}.
(iii) f is said to be epi-Clarke regular atx whenever epi f is Clarke regular at (x, f (x)). Similarly, the function is said to be epi-Clarke super-regular (resp. epi-super-regular, epi-prox-regular) atx whenever its epigraph is Clarke super-regular (resp. superregular, or prox-regular) at (x, f (x)).
Recent work [2,4] makes use of the directional regularity (in particular Lipschitz regularity) of functions or their gradients. The next example illustrates how this fits naturally into our framework.
Example 4.2. The negative absolute value function f (x) = −|x| is the classroom example of a function that is not Clarke regular at x = 0. It is, however, ε-epi-subregular relative to R at x = 0 for all limiting subdifferentials there for the same reason that the Pac-Man of Example 2.7 is ε-subregular relative to R 2 at the origin for ε = 1. Indeed, ∂f (0) = {−1, +1} and at any point (x, y) below epi f the vector (x, y) − P epi f (x, y) ∈ {α(−1, −1), α(1, −1)} with α ≥ 0. So by the Cauchy-Schwarz inequality
\[
\bigl\langle (\pm 1, -1),\ (x, y) - P_{\operatorname{epi} f}(x, y)\bigr\rangle \;\le\; \|(\pm 1, -1)\|\,\bigl\|(x, y) - P_{\operatorname{epi} f}(x, y)\bigr\|, \qquad \forall (x, y) \in \mathbb{R}^2. \tag{14}
\]
In particular, at any point (x, x) ∈ gph f (that is, with x ≤ 0) we have
P_{epi f}(x, x) = (x, x) and (x, x) − P_{epi f}(x, x) = (0, 0),
so the inequality is tight for the subgradient −1 ∈ ∂f (0). Following (3), this shows that epi f is ε-subregular at the origin relative to R 2 for all limiting normals (in fact, for all Clarke normals) at (0, 0) for ε = 1. In contrast, the function f is not epi-subregular at x = 0 relative to R since the inequality above is tight on all balls around the origin, just as with the Pac-Man of Example 2.7. If one employs the restriction Λ = {x | x < 0} then epi-subregularity of f is recovered at the origin relative to the negative orthant for the subgradient v = 1 for ε = 0 on the neighborhood U = R, that is, −|x| looks convex from this direction! ♦
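The tightness claim in Example 4.2 can be verified directly: for the Clarke normal (−1, −1) at the origin (built from the subgradient −1), the subregularity inequality holds with ε = 1 by Cauchy–Schwarz and is an equality along the graph points (x, x) with x < 0, so no smaller ε can work on any ball around the origin. The following is my own numerical restatement of that argument:

```python
import math

n = (-1.0, -1.0)                           # Clarke normal (v, -1) at (0, 0) with v = -1

def lhs_rhs(x, y):
    """Return <n, w> and ||n|| ||w|| for w = (x, y) - (0, 0)."""
    ip = n[0] * x + n[1] * y
    return ip, math.hypot(*n) * math.hypot(x, y)

# equality along w = (x, x) in gph f for x < 0: epsilon = 1 is tight ...
for x in (-1.0, -0.3, -1e-6):
    ip, bound = lhs_rhs(x, x)
    assert math.isclose(ip, bound)

# ... so no epsilon < 1 can work, even arbitrarily close to the origin
for eps in (0.0, 0.5, 0.99):
    ip, bound = lhs_rhs(-1e-9, -1e-9)
    assert ip > eps * bound
print("epsilon = 1 is tight: -|x| is 1-subregular but not subregular at 0")
```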
In a subsequent section, we develop equivalent, though more elementary, characterizations of the regularity notions for functions given in Definition 4.1.
Lipschitz continuous functions
In this section, we consider the class of locally Lipschitz functions, which allows us to avoid the horizon subdifferential (since this is always {0} for Lipschitz functions). Recall that a set Ω is called epi-Lipschitz atω ∈ Ω if it can be represented nearω as the epigraph of a Lipschitz continuous function. Such a function is called a locally Lipschitz representation of Ω atω.
The following notion of approximately convex functions was introduced by Ngai, Luc and Théra [12] and turns out to fit naturally within our framework. Definition 4.3 (approximate convexity). A function f : R^n → (−∞, +∞] is said to be approximately convex at x̄ ∈ R^n if for every ε > 0 there exists δ > 0 such that
\[
(\forall x, y \in B_\delta(\bar x))\,(\forall t \in\, ]0, 1[\,):\quad f(tx + (1 - t)y) \;\le\; t f(x) + (1 - t) f(y) + \varepsilon\, t(1 - t)\,\|x - y\|.
\]
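Smooth functions satisfy this definition; for f(x) = −x² one can even take δ = ε/2 explicitly, since the defect of the inequality works out to t(1 − t)|x − y|(|x − y| − ε) ≤ 0 whenever |x − y| ≤ ε. A randomized check of this choice (the δ = ε/2 formula is my own worked example, not taken from the paper):

```python
import random

random.seed(0)
f = lambda x: -x * x

for _ in range(10000):
    eps = random.uniform(0.01, 1.0)
    delta = eps / 2.0                      # claimed admissible radius
    xbar = random.uniform(-5, 5)
    x = xbar + random.uniform(-delta, delta)
    y = xbar + random.uniform(-delta, delta)
    t = random.random()
    lhs = f(t * x + (1 - t) * y)
    rhs = t * f(x) + (1 - t) * f(y) + eps * t * (1 - t) * abs(x - y)
    assert lhs <= rhs + 1e-12              # approximate-convexity inequality holds
print("f(x) = -x^2 is approximately convex everywhere (delta = eps/2 works)")
```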
Daniilidis and Georgiev [5] and subsequently Daniilidis and Thibault [1,Theorem 4.14] showed the connection between approximately convex functions and subsmooth sets. Using our results in the previous section, we are able to provide the following extension of their characterization. In what follows, set ω = (x, t) ∈ R n × R and denote by π(ω) = x its projection onto R n .
Proposition 4.4 (subsmoothness of Lipschitz epigraphs).
Let Ω be an epi-Lipschitz subset of R n and letω ∈ bdryΩ. Then the following statements are equivalent:
(i) Ω is Clarke super regular atω.
(ii) Ω is subsmooth atω.
(iii) every locally Lipschitz representation f of Ω atω is approximately convex at π(ω).
(iv) some locally Lipschitz representation f of Ω atω is approximately convex at π(ω).
Proof. The equivalence of (i) and (ii) follows from Theorem 3.4(i), and does not require Ω to be epi-Lipschitz. The equivalence of (ii), (iii) and (iv) follows from [1, Theorem 4.14].
Remark 4.5. The equivalences in Theorem 4.4 actually hold in the Hilbert space setting without any changes. In fact, the equivalence of (ii)-(iv) remains true in Banach spaces [1,Theorem 4.14]. ♦
The following characterization extends [5, Theorem 2].
Theorem 4.6 (characterizations of approximate convexity). Let f : R^n → R be locally Lipschitz on R^n and let x̄ ∈ R^n. Then the following are equivalent.
(i) epi f is Clarke super-regular atx.
(ii) f is approximately convex atx.
(iii) For every ε > 0, there exists a δ > 0 such that
\[
(\forall x, y \in B_\delta(\bar x))\,(\forall v \in \partial^C f(x)):\quad f(y) - f(x) \;\ge\; \langle v,\ y - x\rangle - \varepsilon\,\|y - x\|.
\]
(iv) ∂f is submonotone [5, Definition 7] at x̄, that is, for every ε > 0 there is a δ > 0 such that for all x_1, x_2 ∈ B_δ(x̄) ∩ dom ∂f and all x_i^* ∈ ∂f(x_i) (i = 1, 2), one has
\[
\langle x_1^* - x_2^*,\ x_1 - x_2\rangle \;\ge\; -\varepsilon\,\|x_1 - x_2\|. \tag{15}
\]
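For a convex function, inequality (15) even holds with ε = 0 (ordinary monotonicity of the subdifferential); e.g. for f = |x| one has ∂f(x) = {sign(x)} for x ≠ 0 and [−1, 1] at 0. A direct sampling check of this special case (my own illustration, not from the paper):

```python
import random

random.seed(1)

def subdiff_abs(x):
    """One selection from the subdifferential of |x|; at 0, a random element of [-1, 1]."""
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return random.uniform(-1.0, 1.0)

pts = [random.uniform(-1, 1) for _ in range(2000)] + [0.0]
for _ in range(20000):
    x1, x2 = random.choice(pts), random.choice(pts)
    g1, g2 = subdiff_abs(x1), subdiff_abs(x2)
    assert (g1 - g2) * (x1 - x2) >= 0.0    # (15) with eps = 0
print("the subdifferential of |x| is monotone, hence submonotone at every point")
```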
Proof.
Non-Lipschitzian functions
In this section, we collect results which hold true without assuming local Lipschitz continuity.
\[
f(x) = \|x\|_0 := \sum_{j=1}^{n} |\operatorname{sign}(x_j)|, \qquad \text{where } \operatorname{sign}(t) := \begin{cases} -1, & t < 0,\\ 0, & t = 0,\\ +1, & t > 0. \end{cases}
\]
This function is lower semicontinuous and Clarke epi-super-regular almost everywhere, but not locally Lipschitz at x whenever ‖x‖₀ < n; a fortiori, f is actually discontinuous at all such points. Indeed, the epigraph of f is locally convex almost everywhere and, in particular, at any point (x, α) with α > f(x). At the point (x, f(x)), however, the epigraph is not even Clarke regular when ‖x‖₀ < n. Nevertheless, it is ε-subregular for the limiting subgradient 0 with ε = 1. Conversely, if x is any point with ‖x‖₀ = n, then the counting function is locally constant and so in fact locally convex. These observations agree nicely with those in [9], namely, that the rank function (a generalization of the counting function) is subdifferentially regular everywhere (i.e., all the various subdifferentials coincide) with 0 ∈ ∂‖x‖₀ for all x ∈ R^n. ♦
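The counting function is easy to probe numerically. The sketch below (my own, using an exact nonzero test) confirms the two behaviours discussed above: lower semicontinuity at sparse points (nearby values can only jump up) and local constancy at points of full support:

```python
def count(x):
    """The counting function ||x||_0 = number of nonzero entries."""
    return sum(1 for xj in x if xj != 0)

# lower semicontinuity at a sparse point: perturbations only increase the value
xbar = (1.0, 0.0, 0.0)
assert count(xbar) == 1
for eps in (1e-1, 1e-6, 1e-12):
    assert count((1.0, eps, 0.0)) >= count(xbar)       # 2 >= 1: jump up, never down
    assert count((1.0 + eps, 0.0, 0.0)) >= count(xbar)

# local constancy (hence local convexity of the epigraph) at full support
full = (0.3, -0.2, 5.0)
assert count(full) == 3
assert count(tuple(v + 1e-9 for v in full)) == 3
print("||.||_0 is lsc; locally constant where all entries are nonzero")
```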
In order to state the following corollary, recall that an extended real-valued function f is strongly amenable at x̄ if f(x̄) is finite and there exists an open neighborhood U of x̄ on which f has a representation as a composite g ∘ F, with F of class C² and g a proper, lsc, convex function on R^n. (i) f is strongly amenable at x̄.
(ii) f is prox-regular atx.
(iii) epi f is Clarke super-regular at (x, f (x)).
Then: (i) =⇒ (ii) =⇒ (iii).
Proof. The fact that strong amenability implies prox-regularity is discussed in [13,Proposition 2.5]. To see that (ii) implies (iii), suppose f is prox-regular atx. Then epi f is prox-regular at (x, f (x)) by [13,Theorem 3.5] and hence Clarke super-regular at (x, f (x)) by Theorem 3.4.
To conclude, we establish a primal characterization of epi-subregularity analogous to the characterization of Clarke epi-super-regularity in Theorem 4.6. It is worth noting that, unlike the results in Section 4.1, this characterization includes the possibility of horizon normals. In what follows, we denote the epigraph of a function f restricted to a subset Λ ⊂ dom f by epi(f_Λ) := {(x, α) ∈ epi f | x ∈ Λ}.
Proposition 2.5 ([10, Proposition 4.4]). The set Ω ⊆ R^n is super-regular at ω̄ ∈ Ω if and only if it is locally closed at ω̄ and for every ε > 0 there exists δ > 0 such that …

Figure 1: An illustration of the sets in Example 2.7.

Figure 2: A sketch of the function f and the sequence (ω_k) given in Example 3.2.
♦
A missing link in the cascade of set regularity is subsmooth and semi-subsmooth sets, introduced and studied by Aussel, Daniilidis and Thibault in [1, Definitions 3.1 & 3.2].
Definition 3.3 ((Semi-)subsmooth sets). Let Ω ⊂ R^n be closed and let ω̄ ∈ Ω.
(i) =⇒ (ii): This was shown in [1, Proposition 3.4(ii)]. (ii) ⇐⇒ (iii): This is Theorem 3.4(i). (iii) =⇒ (iv) =⇒ (v) =⇒ (vi): These implications follow from the definitions.
Remark 3.6 (amenability). A further regularity between convexity and prox-regularity is amenability [14, Definition 10.23]. This was shown in [13, Corollary 2.12] to imply prox-regularity. Amenability plays a larger role in the analysis of functions and is defined precisely in this context below. ♦
Definition 4.1 (epi-regular functions). Let f : R^n → (−∞, +∞], x̄ ∈ dom f, Λ ⊆ dom f, and (y, v) ∈ gph ∂f ∪ gph ∂_∞ f.
(i) ⇐⇒ (ii): Since f is locally Lipschitz atx, it is trivially a local Lipschitz representation of Ω = epi f atω = (x, f (x)) ∈ Ω. The result thus follows from Proposition 4.4. (ii) ⇐⇒ (iii) ⇐⇒ (iv): This is [5, Theorem 2].
Proposition 4.9. Let f : R^n → (−∞, +∞] and consider the following assertions.
Proposition 4.7. Let f : R n → R be lower semicontinuous (lsc) and approximately convex. Then epi f is Clarke-super regular.Proof. As a proper, lsc, approximately convex function is locally Lipschitz at each point in the interior of its domain [12, Proposition 3.2] and dom f = R n , the result follows from Theorem 4.6.Example 4.8 (Clarke super-regularity does not imply approximate convexity). Consider the counting function f : R n → {0, 1, . . . , n} defined by
Acknowledgments. The research of AD has been supported by the grants AFB170001 (CMM) & FONDECYT 1171854 (Chile) and MTM2014-59179-C2-1-P (MINECO of Spain). The research of DRL was supported in part by DFG Grant SFB755 and DFG Grant GRK2088. The research of MKT was supported in part by a post-doctoral fellowship from the Alexander von Humboldt Foundation.Proposition 4.10. Consider a function f : R n → (−∞, +∞], let x ∈ dom f and let (x, v) ∈ (gph ∂ C f ∪ gph ∂ C ∞ f ). Then the following assertions hold.(i) f has an ε-subregular epigraph atx ∈ dom f relative to Λ ⊆ dom f for (x, v) if and only if for some constant ε > 0 there is a neighborhood U of (x, f (x)) such that, for all (x, α) ∈ epi(f Λ ) ∩ U , one of the following two inequalities holds:any point (x, v) ∈ (gph ∂f ∪ gph ∂ ∞ f ) corresponds to either a normal vector of the form (v, −1) or a horizon normal of the form (v, 0). Suppose first that f is ε-epi-subregular at) with constant ε and neighborhood U of (x, f (x)) in(3). Thus, for all (x, α) ∈ epi(f Λ ) ∩ U , we havewhich from the claim follows. The only other case to consider is that f is ε-epi-subregular at x relative to Λ ⊂ dom f for v ∈ ∂ C ∞ f (x) with constant ε and neighborhood U ′ of x. In this case, epi f is ε-which completes the proof of (i).(ii): Follows immediately from the definition.
Remark 4.11 (indicator functions of subregular sets). When f = ι_Ω for a closed set Ω, the various subdifferentials coincide with the respective normal cones to Ω. In this case, inequality (16b) subsumes (16a) since all subgradients are also horizon subgradients, and (16b) reduces to (3) in agreement with the corresponding notions of regularity of sets. ♦

References
[1] D. Aussel, A. Daniilidis, and L. Thibault. Subsmooth sets: functional characterizations and related concepts. Trans. Amer. Math. Soc., 357:1275-1301, 2004.
[2] H. H. Bauschke, J. Bolte, and M. Teboulle. A descent lemma beyond Lipschitz gradient continuity: first order methods revisited and applications. Math. Oper. Res., 42(2):330-348, 2016.
[3] H. H. Bauschke, D. R. Luke, H. M. Phan, and X. Wang. Restricted normal cones and the method of alternating projections: theory. Set-Valued Var. Anal., 21:431-473, 2013.
[4] J. Bolte, S. Sabach, M. Teboulle, and Y. Vaisbourd. First order methods beyond convexity and Lipschitz gradient continuity with applications to quadratic inverse problems. SIAM J. Optim., 2018. Accepted for publication.
[5] A. Daniilidis and P. Georgiev. Approximate convexity and submonotonicity. J. Math. Anal. Appl., 291:292-301, 2004.
[6] M. N. Dao and H. M. Phan. Linear convergence of projection algorithms. arXiv:1609.00341, 2017.
[7] R. Hesse and D. R. Luke. Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems. SIAM J. Optim., 23(4):2397-2419, 2013.
[8] A. Y. Kruger, D. R. Luke, and Nguyen H. Thao. Set regularities and feasibility problems. Math. Programming B, 2016. arXiv:1602.04935.
[9] H. Y. Le. Generalized subdifferentials of the rank function. Optimization Letters, pages 1-13, 2012.
[10] A. S. Lewis, D. R. Luke, and J. Malick. Local linear convergence of alternating and averaged projections. Found. Comput. Math., 9(4):485-513, 2009.
[11] D. R. Luke, Nguyen H. Thao, and M. K. Tam. Quantitative convergence analysis of iterated expansive, set-valued mappings. Math. Oper. Res., to appear. http://arxiv.org/abs/1605.05725.
[12] H. V. Ngai, D. T. Luc, and M. Théra. Approximate convex functions. J. Nonlinear Convex Anal., 1:155-176, 2000.
[13] R. A. Poliquin and R. T. Rockafellar. Prox-regular functions in variational analysis. Trans. Amer. Math. Soc., 348:1805-1838, 1996.
[14] R. T. Rockafellar and R. J. Wets. Variational Analysis. Grundlehren Math. Wiss. Springer-Verlag, Berlin, 1998.
[15] N. H. Thao. Algorithms for Structured Nonconvex Optimization: Theory and Practice. PhD thesis, Georg-August-Universität Göttingen, Göttingen, 2017.
[16] L. Thibault. Subsmooth functions and sets. Linear and Nonlinear Analysis, to appear.
V -A hadronic tau decays : a QCD laboratory *
3 Dec 2000
Stephan Narison [email protected]
Laboratoire de Physique Mathématique et Théorique
Université de Montpellier II Place Eugène Bataillon
34095 Montpellier Cedex 05, France
Recent ALEPH/OPAL data on the V -A spectral functions from hadronic τ decays are used for fixing the QCD continuum threshold at which the first and second Weinberg sum rules should be satisfied in the chiral limit, and for predicting the values of the low-energy constants fπ, m π + − m π 0 and L10. Some DMO-like sum rules and the τ -total hadronic widths Rτ,V −A are also used for extracting the values of the D = 6, 8 QCD vacuum condensates and the corresponding (in the chiral limit) electroweak kaon penguin matrix elements Q 3/2 8,7 2π , , where a deviation from the vacuum saturation estimate has been obtained. Combining these results with the one of the QCD penguin matrix element Q 1/2 6 2π obtained from a (maximal)qq-gluonium mixing scheme from the scalar meson sum rules, we deduce, in the Electroweak Standard Model (ESM), the conservative upper bound for the CP-violating ratio: ǫ ′ /ǫ ≤ (22 ± 9)10 −4 , in agreement with the present measurements.
1. Introduction
Hadronic tau decays have been demonstrated [1] (hereafter referred as BNP) to be an efficient laboratory for testing perturbative and nonperturbative QCD. That is due both to the exceptional value of the tau mass situated at a frontier regime between perturbative and nonperturbative QCD and to the excellent quality of the ALEPH/OPAL [2,3] data. On the other, it is also known before the advent of QCD, that the Weinberg [4] and DMO [5] sum rules are important tools for controlling the chiral and flavour symmetry realizations of QCD, which are broken by light quark mass terms to higher order [6] and by higher dimensions QCD condensates [7] within the SVZ expansion [8].
In this talk, we shall discuss the impact of the new ALEPH/OPAL data on the V -A spectral functions in the analysis of the previous and some other related sum rules, which will be used for determining the low-energy constants of the effective chiral lagrangian [9][10][11], the SVZ QCD vacuum condensates [8]. In particular, we shall discuss the consequences of these results on the * Talk given at the QCD 00 International Euroconference (Montpellier 6-13th July 2000). This is a summary of the paper "New QCD Estimate of the Kaon Penguin Matrix Elements" hep-ph/0004247 (Nucl. Phys. B in press). estimate of the kaon CP -violation parameter ǫ ′ /ǫ in the Electroweak Standard Model (ESM). These results have been originally obtained in [12] and will be reviewed here.
2. Tests of the "sacrosante" Weinberg and DMO sum rules in the chiral limit
Notations
We shall be concerned here with the two-point correlator:
Π µν LR (q) ≡ i d 4 x e iqx 0|T J µ L (x) (J ν R (0)) † |0 = −(g µν q 2 − q µ q ν )Π LR (q 2 ) ,(1)
built from the left-and right-handed components of the local weak current:
J µ L =ūγ µ (1 − γ 5 )d, J µ R =ūγ µ (1 + γ 5 )d ,(2)
and/or using isospin rotation relating the neutral and charged weak currents:
ρ V − ρ A ≡ 1 2π ImΠ LR ≡ 1 4π 2 (v − a) .(3)
The first term is the notation in [13], while the last one is the notation in [2,3].
The sum rules
The "sacrosante" DMO and Weinberg sum rules read in the chiral limit 2 :
\[
S_0 \equiv \int_0^\infty ds\,\frac{1}{2\pi}\,\mathrm{Im}\,\Pi_{LR} = f_\pi^2\,, \qquad
S_1 \equiv \int_0^\infty ds\, s\,\frac{1}{2\pi}\,\mathrm{Im}\,\Pi_{LR} = 0\,,
\]
\[
S_{-1} \equiv \int_0^\infty \frac{ds}{s}\,\frac{1}{2\pi}\,\mathrm{Im}\,\Pi_{LR} = -4L_{10}\,, \qquad
S_{em} \equiv \int_0^\infty ds\, s\,\log\frac{s}{\mu^2}\,\frac{1}{2\pi}\,\mathrm{Im}\,\Pi_{LR} = -\frac{4\pi}{3\alpha}\,f_\pi^2\,\bigl(m^2_{\pi^\pm} - m^2_{\pi^0}\bigr)\,, \tag{4}
\]
where f π | exp = (92.4 ± 0.26) MeV is the experimental pion decay constant which should be used here as we shall use data from τ -decays involving physical pions; m π ± − m π 0 | exp ≃ 4.5936(5) MeV;
L 10 ≡ f 2 π r 2 π /3 − F A [ r 2 π = (0.439 ± 0
.008)f m 2 is the mean pion radius and F A = 0.0058 ± 0.0008 is the axial-vector pion form factor for π → eνγ] is one the low-energy constants of the effective chiral lagrangian [9][10][11]. In order to exploit these sum rules using the ALEPH/OPAL [2,3] data from the hadronic tau-decays, we shall work with their Finite Energy Sum Rule (FESR) versions (see e.g. [6,1] for such a derivation). In the chiral limit (m q = 0 and ūu = d d = ss ), this is equivalent to truncate the LHS at t c until which the data are available, while the RHS of the integral remains valid to leading order in the 1/t c expansion in the chiral limit, as the breaking of these sum rules by higher dimension D = 6 condensates in the chiral limit which is of the order of 1/t 3 c is numerically negligible [7].
Matching between the low and high-energy regions
In order to fix the t c values which separate the low and high energy parts of the spectral functions, we require that the 2nd Weinberg sum rule (WSR) S 1 should be satisfied by the present data. As shown in Fig. 1 (see [12]), this is obtained for two values of t c 3 :
t c ≃ (1.4 ∼ 1.5) GeV 2 and (2.4 ∼ 2.6) GeV 2 .(5)
2 Systematic analysis of the breaking of these sum rules by light quark masses [6] and condensates [7,8] within the context of QCD have been done earlier. 3 One can compare the two solutions with the tc-stability region around 2 GeV 2 in the QCD spectral sum rules analysis (see e.g. Chapter 6 of [14]). Though the 2nd value is interesting from the point of view of the QCD perturbative calculations (better convergence of the QCD series), its exact value is strongly affected by the inaccuracy of the data near the τ -mass (with the low values of the ALEPH/OPAL data points, the 2nd Weinberg sum rule is only satisfied at the former value of t c ). After having these t c solutions, we can improve the constraints by requiring that the 1st Weinberg sum rule S 0 reproduces the experimental value of f π 4 within an accuracy 2-times the experimental error. This condition allows to fix t c in a very narrow margin due to the sensitivity of the result on the changes of t c values 5
t c = (1.475 ± 0.015) GeV 2 ,(6)
3. Low-energy constants L 10 , m π ± − m π 0 and f π in the chiral limit
Using the previous value of t c into the S −1 sum rule, we deduce:
L 10 ≃ −(6.26 ± 0.04) × 10 −3 ,(7)
which agrees quite well with more involved analysis including chiral symmetry breakings [15,3], and with the one using a lowest meson dominance (LMD) of the spectral integral [16]. Analogously, one obtains from the S em sum rule:
∆m π ≡ m π ± − m π 0 ≃ (4.84 ± 0.21) MeV . (8)
This result is 1σ higher than the data 4.5936 (5) MeV, but agrees within the errors with the more detailed analysis from τ -decays [17,3] and with the LMD result of about 5 MeV [16]. We have checked that moving the subtraction point µ from 2 to 4 GeV slightly decreases the value of ∆m π by 3.7% which is relatively weak, as expected. Indeed, in the chiral limit, the µ dependence does not appear in the RHS of the S em sum rule, and then, it looks natural to choose:
µ 2 = t c ,(9)
because t c is the only external scale in the analysis. At this scale the result increases slightly by 2.5%. One can also notice that the prediction for ∆m is more stable when one changes the value of t c = µ 2 . Therefore, the final predictions from the value of t c in Eq. (6) fixed from the 1st and 2nd Weinberg sum rules are:
∆m ≃ (4.96 ± 0.22) MeV ,
L 10 ≃ −(6.42 ± 0.04) × 10 −3 ,(10)
which we consider as our "best" predictions. For some more conservative results, we also give the predictions obtained from the second t c -value given in Eq. (5). In this way, one obtains:
f π = (87 ± 4) MeV , ∆m ≃ (3.4 ± 0.3) MeV , L 10 ≃ −(5.91 ± 0.08) × 10 −3 ,(11)
where one can notice that the results are systematically lower than the ones obtained in Eq. (10) from the first t c -value given previously, which may disfavour a posteriori the second choice of t cvalues, though we do not have a strong argument favouring one with respect to the other 6 . Therefore, we take as a conservative value the largest range spanned by the two sets of results, namely:
f π = (86.8 ± 7.1) MeV , ∆m ≃ (4.1 ± 0.9) MeV , L 10 ≃ −(5.8 ± 0.2) × 10 −3 ,(12)
which we found to be quite satisfactory in the chiral limit. The previous tests are very useful, as they will allow us to jauge the confidence level of the next predictions.
4. Soft pion and kaon reductions of (ππ) I=2 |Q 3/2 7,8 |K 0 to vacuum condensates We shall consider here the kaon electroweak penguin matrix elements:
Q 3/2 7,8 2π ≡ (ππ) I=2 |Q 3/2 7,8 |K 0 ,(13)
defined as:
Q 7 ≡ 3 2 (sd) V −A u,d,s e ψ ψ ψ V +A , Q 8 ≡ 3 2 (s α d β ) V −A u,d,s e ψ ψ β ψ α V +A ,(14)
where α, β are colour indices; e ψ denotes the electric charges. In the chiral limit m u,d,s ∼ m 2 π ≃ m 2 K = 0, one can use soft pion and kaon techniques in order to relate the previous amplitude 6 Approach based on 1/Nc expansion and a saturation of the spectral function by the lowest state within a narrow width approximation (NWA) favours the former value of tc given in Eq. (6) [16].
to the four-quark vacuum condensates [13] (see also [16]):
Q 3/2 7 2π ≃ − 4 f 3 π O 3/2 7 , Q 3/2 8 2π ≃ − 4 f 3 π 1 3 O 3/2 7 + 1 2 O 3/2 8 ,(15)
where we use the shorthand notations: 0|O
3/2 7,8 |0 ≡ O 3/2 7,8 , and f π = (92.42 ± 0.26) MeV 7 . Here: O 3/2 7 = u,d,sψ γ µ τ 3 2 ψψγ µ τ 3 2 ψ −ψγ µ γ 5 τ 3 2 ψψγ µ γ 5 τ 3 2 ψ , O 3/2 8 = u,d,sψ γ µ λ a τ 3 2 ψψγ µ λ a τ 3 2 ψ −ψγ µ γ 5 λ a τ 3 2 ψψγ µ γ 5 λ a τ 3 2 ψ ,(16)
where τ 3 and λ a are flavour and colour matrices.
Using further pion and kaon reductions in the chiral limit, one can relate this matrix element to the B-parameters:
B 3/2 7 ≃ 3 4 (m u + m d ) m 2 π (m u + m s ) m 2 K 1 f π Q 3/2 7 2π B 3/2 8 ≃ 1 4 (m u + m d ) m 2 π (m u + m s ) m 2 K 1 f π Q 3/2 8 2π(17)
where all QCD quantities will be evaluated in the M S-scheme and at the scale M τ .
5. The O 3/2 7,8
vacuum condensates from DMO-like sum rules in the chiral limit
In previous papers [13,16], the vacuum condensates O
2π α s O 3/2 8 = ∞ 0 ds s 2 µ 2 s + µ 2 (ρ V − ρ A ) , 16π 2 3 O 3/2 7 = ∞ 0 ds s 2 × log s + µ 2 s (ρ V − ρ A ) ,(18)
where µ is the subtraction point. Due to the quadratic divergence of the integrand, the previous sum rules are expected to be sensitive to the high energy tails of the spectral functions where the present ALEPH/OPAL data from τdecay [2,3] are inaccurate. This inaccuracy can a priori affect the estimate of the four-quark vacuum condensates. On the other hand, the explicit µ-dependence of the analysis can also induce another uncertainty. En passant, we check below the effects of these two parameters t c and µ. After evaluating the spectral integrals, we obtain at µ= 2 GeV and for our previous values of t c in Eq. (6), the values (in units of 10 −3 GeV 6 ) using the cut-off momentum scheme (c.o):
α s O 3/2 8 c.o ≃ −(0.69 ± 0.06) , O 3/2 7 c.o ≃ −(0.11 ± 0.01) ,(19)
where the errors come mainly from the small changes of t c -values. If instead, we use the second set of values of t c in Eq. (5), we obtain by setting µ=2 GeV:
α s O 3/2 8 c.o ≃ −(0.6 ± 0.3) , O 3/2 7 c.0 ≃ −(0.10 ± 0.03) ,(20)
which is consistent with the one in Eq. (19), but with larger errors as expected. We have also checked that both O , a feature which has been already noticed in [16]. In order to give a more conservative estimate, we consider as our final value the largest range spanned by our results from the two different sets of t c -values. This corresponds to the one in Eq. (20) which is the less accurate prediction. We shall use the relation between the momentum cut-off (c.o) and M Sschemes given in [13]:
O 3/2 7 MS ≃ O 3/2 7 c.o + 3 8 a s 3 2 + 2d s O 3/2 8 O 3/2 8 MS ≃ 1 − 119 24 a s ± 119 24 a s 2 × O 3/2 8 c.o − a s O 3/2 7 ,(21)
where d s = −5/6 (resp 1/6) in the so-called Naïive Dimensional Regularization NDR (resp. t'Hooft-Veltmann HV) schemes 8 ; a s ≡ α s /π. One can notice that the a s coefficient is large in the 2nd relation (50% correction), and the situation is worse because of the relative minus sign between the two contributions. Therefore, we have added a rough estimate of the a 2 s corrections based on the naïve growth of the PT series, which here gives 50% corrections of the sum of the two first terms. For a consistency of the whole approach, we shall use the value of α s obtained from 8 The two schemes differ by the treatment of the γ 5 matrix.
τ -decay, which is [2,3]:
α s (M τ )| exp = 0.341 ± 0.05 =⇒ α s (2 GeV) ≃ 0.321 ± 0.05 .(22)
Then, we deduce (in units of 10 −4 GeV 6 ) at 2 GeV:
O 3/2 7 MS ≃ −(0.7 ± 0.2) , O 3/2 8 MS ≃ −(9.1 ± 6.4) ,(23)
where the large error in O 3/2 8 comes from the estimate of the a 2 s corrections appearing in Eq. (21). In terms of the B factor and with the previous value of the light quark masses in Eq. (??), this result, at µ = 2 GeV, can be translated into:
B_7^{3/2} ≃ (0.7 ± 0.2) × [m_s(2)[MeV]/119]² k⁴ ,   B_8^{3/2} ≃ (2.5 ± 1.3) × [m_s(2)[MeV]/119]² k⁴ ,   (24)
where:
k ≡ 92.4 / f_π[MeV] .   (25)
• Our results in Eqs. (23) compare quite well with the ones obtained by [13] in the MS-scheme (in units of 10⁻⁴ GeV⁶) at 2 GeV, using the same sum rules but presumably a slightly different method for the use of the data and for the choice of the cut-off in the evaluation of the spectral integral.
• Our errors in the evaluation of the spectral integrals, leading to the values in Eqs. (19) and (20), are mainly due to the slight change of the cut-off value t_c⁹.
• The error due to the passage into the MS-scheme is due mainly to the truncation of the QCD series, and is important (50%) for ⟨O_8^{3/2}⟩ and B_8^{3/2}, which is the main source of errors in our estimate.
• As noticed earlier, in the analysis of the pion mass-difference, it looks more natural to do the subtraction at t_c. We also found that moving the value of µ can affect the values of B_{7,8}^{3/2}. For the above reasons, we expect that the results given in [13] for O_8^{3/2}, though interesting, are quite fragile, while the errors quoted there have presumably been underestimated. Therefore, we think that a reconsideration of these results using alternative methods is mandatory.
6. The O_{7,8}^{3/2} vacuum condensates from the hadronic tau total decay rates

In the following, we shall not introduce any new sum rule; instead, we shall exploit known information from the total τ-decay rate and available results from it, which do not have the previous drawbacks. The V−A total τ-decay rate, for the I = 1 hadronic component, can be deduced from BNP [1], and reads¹⁰:
R_{τ,V−A} = (3/2) |V_ud|² S_EW Σ_{D=2,4,...} δ^{(D)}_{V−A} .   (27)
|V_ud| = 0.9753 ± 0.0006 is the CKM-mixing angle, while S_EW = 1.0194 is the electroweak correction [18]. In the following, we shall use the BNP results for R_{τ,V/A} in order to deduce R_{τ,V−A}:
• The chiral invariant D = 2 term due to a short distance tachyonic gluon mass [19,20] cancels in the V -A combination. Therefore, the D = 2 contributions come only from the quark mass terms:
M_τ² δ^{(2)}_{V−A} ≃ 8 [1 + (25/3) a_s(M_τ)] m_u m_d ,   (28)
as can be obtained from the first calculation [6], where a_s is the running coupling and m_u ≡ m_u(M_τ) ≃ (3.5 ± 0.4) MeV, m_d ≡ m_d(M_τ) ≃ (6.3 ± 0.8) MeV [21] are the running quark masses, all evaluated at the scale M_τ.
• The dimension-four condensate contribution reads:
M_τ⁴ δ^{(4)}_{V−A} ≃ 32π² [1 + (9/2) a_s²] m_π² f_π² + O(m⁴_{u,d}) ,   (29)
where we have used the SU(2) relation ⟨ūu⟩ = ⟨d̄d⟩ and the Gell-Mann-Oakes-Renner PCAC relation:
(m_u + m_d) ⟨ūu + d̄d⟩ = −2 m_π² f_π² .   (30)
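As an illustrative numerical aside (not from the paper), Eq. (30) together with the value (m_u + m_d)(M_τ) ≃ 10 MeV quoted later in the text fixes the size of the light-quark condensate entering Eq. (40):

```python
# GMOR sketch: (m_u + m_d) <uubar + ddbar> = -2 m_pi^2 f_pi^2, with <uubar> = <ddbar>
m_pi = 0.1396          # GeV, charged pion mass
f_pi = 0.0924          # GeV
m_u_plus_m_d = 0.010   # GeV, running sum at M_tau (value used in the text)

qq = -m_pi**2 * f_pi**2 / m_u_plus_m_d  # <uubar> in GeV^3
print(round(qq, 5))                     # ~ -0.01664 GeV^3
print(round(abs(qq) ** (1/3), 3))       # ~ 0.255, i.e. <uubar> ~ -(255 MeV)^3
```

This reproduces the familiar order of magnitude of the condensate, ⟨ūu⟩ ≈ −(0.25 GeV)³.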
• By inspecting the structure of the combination of dimension-six condensates entering in R_{τ,V/A} given by BNP [1], which are renormalization group invariants, and using an SU(2) isospin rotation which relates the charged and neutral (axial-)vector currents, the D = 6 contribution reads:
¹⁰ Hereafter we shall work in the MS-scheme.
M_τ⁶ δ^{(6)}_{V−A} = −2 × 48π⁴ a_s [1 + (235/48) a_s ± ((235/48) a_s)² − λ²/M_τ²] ⟨O_8^{3/2} + a_s O_7^{3/2}⟩ ,   (31)
where the overall factor 2 in front expresses the different normalization between the neutral isovector and charged currents used respectively in [13] and [1], whilst all quantities are evaluated at the scale µ = M_τ. The last two terms in the Wilson coefficient of O_8^{3/2} are new: the first term is an estimate of the NNLO term obtained by assuming a naïve geometric growth of the a_s series; the second one is the effect of a tachyonic gluon mass introduced in [20], which takes into account the resummation of the QCD asymptotic series, with a_s λ² ≃ −0.06 GeV² ¹¹. Using the values of α_s(M_τ) given previously, the corresponding QCD series behaves quite well as:
Coef.[O_8^{3/2}] ≃ 1 + (0.53 ± 0.08) ± 0.28 + 0.18 ,   (32)
where the first error comes from that of α_s, while the second one is due to the unknown a_s²-term, which introduces an uncertainty of 16% for the whole series. The last term is due to the tachyonic gluon mass. This leads to the numerical value:
M_τ⁶ δ^{(6)}_{V−A} ≃ −(1.015 ± 0.149) × 10³ × (1.71 ± 0.29) × ⟨O_8^{3/2} + a_s O_7^{3/2}⟩ ,   (33)
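The numerical coefficients in Eqs. (32) and (33) follow directly from a_s(M_τ) = 0.341/π; the following quick check is illustrative and not part of the paper:

```python
import math

a_s = 0.341 / math.pi                   # alpha_s(M_tau)/pi ~ 0.1086

nlo = (235.0 / 48.0) * a_s              # NLO coefficient of the Wilson-coefficient series
nnlo = nlo**2                           # geometric-growth estimate of the a_s^2 term
prefactor = 2 * 48 * math.pi**4 * a_s   # overall factor of Eq. (31)

print(round(nlo, 2))             # ~0.53, as in Eq. (32)
print(round(nnlo, 2))            # ~0.28, as in Eq. (32)
print(round(prefactor))          # ~1015, i.e. the 1.015e3 of Eq. (33)
print(round(1 + nlo + 0.18, 2))  # ~1.71, the central coefficient of Eq. (33)
```

The central value 1.71 is the sum of the first, second and tachyonic-gluon terms, while the ±0.28 enters the quoted ±0.29 error.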
• If one estimates the D = 8 contribution using a vacuum saturation assumption, the relevant V−A combination vanishes to leading order in the chiral symmetry breaking terms. Instead, we shall use the combined ALEPH/OPAL [2,3] fit for δ^{(8)}_{V/A}, and deduce:
δ^{(8)}_{V−A}|_exp = −(1.58 ± 0.12) × 10⁻² .   (34)
We shall also use the combined ALEPH/OPAL data for R_{τ,V/A}, in order to obtain:
R_{τ,V−A}|_exp = (5.0 ± 1.7) × 10⁻² .   (35)
Using the previous information in the expression of the rate given in Eq. (27), one can deduce:
δ^{(6)}_{V−A} ≃ (4.49 ± 1.18) × 10⁻² .   (36)
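The number in Eq. (36) can be reproduced from Eqs. (27)-(29), (34) and (35); the following sketch (illustrative only, using the central values quoted in the text) inverts Eq. (27) and subtracts the D = 2, 4, 8 terms:

```python
import math

# Central values quoted in the text
M_tau = 1.777               # GeV
a_s   = 0.341 / math.pi     # alpha_s(M_tau)/pi
m_u, m_d = 3.5e-3, 6.3e-3   # GeV, running masses at M_tau
m_pi, f_pi = 0.1396, 0.0924 # GeV
Vud, S_EW = 0.9753, 1.0194
R_VmA  = 5.0e-2             # Eq. (35)
delta8 = -1.58e-2           # Eq. (34)

delta2 = 8 * (1 + 25.0/3.0 * a_s) * m_u * m_d / M_tau**2                      # Eq. (28)
delta4 = 32 * math.pi**2 * (1 + 4.5 * a_s**2) * m_pi**2 * f_pi**2 / M_tau**4  # Eq. (29)

# Invert Eq. (27) for the sum of the delta^(D), then isolate delta^(6)
delta_sum = R_VmA / (1.5 * Vud**2 * S_EW)
delta6 = delta_sum - delta2 - delta4 - delta8
print(round(delta6 * 1e2, 2))  # ~4.45e-2, consistent with Eq. (36)
```

The small residual difference with respect to 4.49 × 10⁻² comes from rounding of the quoted inputs.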
This result is in good agreement with the result obtained by using the ALEPH/OPAL fitted mean value for δ^{(6)}_{V/A}:
δ^{(6)}_{V−A}|_fit ≃ (4.80 ± 0.29) × 10⁻² .   (37)
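An inverse-variance weighted combination of Eqs. (36) and (37) (an illustrative sketch, not from the paper) shows that the average is completely dominated by the more precise determination:

```python
# Weighted average of the two delta^(6) determinations, in units of 1e-2:
# Eq. (36): 4.49 +- 1.18, Eq. (37): 4.80 +- 0.29
vals = [(4.49, 1.18), (4.80, 0.29)]
weights = [1.0 / s**2 for _, s in vals]
avg = sum(w * v for (v, _), w in zip(vals, weights)) / sum(weights)
err = sum(weights) ** -0.5
print(round(avg, 2), round(err, 2))  # ~4.78 0.28, i.e. essentially Eq. (37)
```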
We shall use as a final result the average of these two determinations, which coincides with the most precise one in Eq. (37). We shall also use the result:
⟨O_7^{3/2}⟩ / ⟨O_8^{3/2}⟩ ≃ 1/8.3   (resp. 3/16) ,   (38)
where, for the first number, we use the value of the ratio B_7^{3/2}/B_8^{3/2}, which is about 0.7 ∼ 0.8 from e.g. the lattice calculations quoted in Table 1, and the formulae in Eqs. (15) to (17); for the second number we use the vacuum saturation for the four-quark vacuum condensates [8]. The result in Eq. (38) is also comparable with the estimate of [13] from the sum rules given in Eq. (18). Therefore, at the scale µ = M_τ, Eqs. (31), (37) and (38) lead, in the MS-scheme, to the result quoted in Eq. (39), where the main errors come from the estimate of the unknown higher-order radiative corrections. It is instructive to compare this result with the one obtained using the vacuum saturation assumption for the four-quark condensate (see e.g. BNP):
⟨O_8^{3/2}⟩|_v.s ≃ −(32/18) ⟨ūu⟩²(M_τ) ≃ −0.65 × 10⁻³ GeV⁶ ,   (40)
which shows a 1σ violation of this assumption. We have used, for the estimate of ⟨ψ̄ψ⟩, the value (m_u + m_d)(M_τ) ≃ 10 MeV [21] and the GMOR pion PCAC relation. This violation of the vacuum saturation is not quite surprising, as analogous deviations have already been observed in other channels [14,2,3], though it also appears that the vacuum saturation gives a quite good approximate value of the ratio of the condensates [14,2,3]. The result in Eq. (39) is comparable with the value −(0.98 ± 0.26) × 10⁻³ GeV⁶ at µ = 2 GeV ≈ M_τ obtained by [13] using a DMO-like sum rule; but, as discussed previously, the DMO-like sum rule result is very sensitive to the value of µ if one fixes t_c as in Eq. (6) according to the criterion discussed above. Here, the choice µ = M_τ is well-defined, and the result then becomes more accurate (as mentioned previously, our errors come mainly from the estimated unknown α_s³ term of the QCD series). Using Eqs. (15) and (38), our previous result in Eq. (39) can be translated into the prediction for the weak matrix element in the chiral limit and at the scale M_τ (k is defined in Eq. (25)):
⟨(ππ)_{I=2}|Q_8^{3/2}|K⁰⟩ ≃ (2.58 ± 0.58) k³ GeV³ ,   (41)
normalized to f_π, which avoids the ambiguity on the real value of f_π to be used in such an expression. Our result is higher by about a factor 2 than the quenched lattice result [23]. A resolution of this discrepancy can only be achieved after the inclusion of chiral corrections in Eqs. (15) to (17), and after the use of dynamical fermions on the lattice. However, some parts of the chiral corrections in the estimate of the vacuum condensates are already included in the QCD expression of the τ-decay rate, and these corrections are negligibly small. We might expect that chiral corrections, which are smooth functions of m_π², will not affect strongly the relations in Eqs. (15) to (17), though an evaluation of their exact size is mandatory. Using the previous mean values of the light quark running masses [21], we deduce in the chiral limit and at the scale M_τ:
B_8^{3/2} ≃ (1.70 ± 0.39) × [m_s(M_τ)[MeV]/119]² k⁴ ,   (42)
where k is defined in Eq. (25). One should notice that, contrary to the B-factor, the result in Eq. (41) is independent, to leading order, of the value of the light quark masses.
7. Impact of our results on the CP-violation parameter ε′/ε
One can combine the previous result for B_8^{3/2} with the value of the B_6^{1/2} parameter of the QCD penguin diagram [24]:
⟨Q_6^{1/2}⟩_{2π} ≡ ⟨(π⁺π⁻)_{I=0}|Q_6^{1/2}|K⁰⟩
 ≃ −2 [⟨π⁺|ūγ_5 d|0⟩⟨π⁻|s̄u|K⁰⟩ + ⟨π⁺π⁻|d̄d + ūu|0⟩⟨0|s̄γ_5 d|K⁰⟩]
 ≃ −4 √(3/2) [m_K²/(m_s + m_d)]² × √2 (f_K − f_π) B_6^{1/2}(m_c) .   (43)
We have estimated the ⟨Q_6^{1/2}⟩_{2π} matrix element by relating its first term to the K → π l ν_l semileptonic form factors, as usually done (see e.g. [25]), while the second term has been obtained from the contribution of the S_2 ≡ (ūu + d̄d) scalar meson, having its mass and coupling fixed by QCD spectral sum rules [14,26], in the scheme where the observed low-mass σ meson results from a maximal mixing between the S_2 and the σ_B associated with the gluon component of the trace of the anomaly [27,28,26]¹²:
θ^µ_µ = (1/4) β(α_s) G² + (1 + γ_m(α_s)) Σ_{i=u,d,s} m_i ψ̄_i ψ_i ,   (44)
where β and γ_m are the β-function and the mass anomalous dimension. In this way, one obtains at the scale m_c:
B_6^{1/2}(m_c) ≃ 3.7 [(m_s + m_d)/(m_s − m_u)]² × [(0.65 ± 0.09) − (0.53 ± 0.13) × (m_s − m_u)[MeV]/142.6] ,   (45)
which satisfies the double chiral constraint [30].
We have used the running charm quark mass m c (m c ) = 1.2 ± 0.05 GeV [31,32]. Evaluating the running quark masses at 2 GeV, with the values given in [21], one deduces:
The errors, added quadratically, have been relatively enhanced by the partial cancellations of the two contributions. Therefore, we deduce the combination B_68 quoted in Eq. (47), where we have added the errors quadratically. Using the approximate simplified expression [24]:
ε′/ε ≈ 14.5 × 10⁻⁴ × [110 / m_s(2)[MeV]]² B_68 ,   (48)
one can deduce the result in units of 10⁻⁴:
ε′/ε ≃ (4 ± 5) for m_s(2) = 119 MeV ,   ≤ (22 ± 9) for m_s(2) ≥ 90 MeV ,   (49)
where the errors come mainly from B_68 (40%). The upper bound agrees quite well with the world average data [33]:
ε′/ε ≃ (19.3 ± 2.4) × 10⁻⁴ .   (50)
12 Present data appear to favour this scheme [?].
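As a numerical illustration (not from the paper), Eq. (48) together with the B_68 values of Eq. (47) reproduces the numbers quoted in Eq. (49); the function name below is an assumption for the sketch:

```python
def eps_prime_over_eps(B68, ms_2gev_mev):
    """Approximate expression of Eq. (48); result in units of 1e-4."""
    return 14.5 * (110.0 / ms_2gev_mev) ** 2 * B68

print(round(eps_prime_over_eps(0.3, 119.0)))  # ~4, as in Eq. (49)
print(round(eps_prime_over_eps(1.0, 90.0)))   # ~22, the upper bound of Eq. (49)
```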
We expect that the failure of this inaccurate estimate to reproduce the data is not naïvely due to the value of the quark mass, but may indicate the need for other important contributions beyond the q̄q scalar meson S_2 alone (not the observed σ), which have not been considered so far in the analysis. Among others, a much better understanding of the effects of the gluonium (expected to be a large component of the σ-meson [27,26,28]) in the amplitude, presumably through a new operator, needs to be studied.
Summary and conclusions
We have explored the V−A component of the hadronic tau decays for predicting non-perturbative QCD parameters. Our main results are summarized as:
• QCD continuum threshold t_c (transition between the low- and high-energy regimes): Eq. (6).
• Low-energy constants L_10, m_{π±} − m_{π⁰} and f_π in the chiral limit:
- Eq. (10) (best);
- Eq. (12) (conservative).
• Electroweak penguins: Our results are compared with some other predictions in Table 1. However, as mentioned in the table caption, a direct comparison of these results is not straightforward due to the different schemes and values of the scale where the results have been obtained. In most of the approaches, the values of the B-factors range from 0.6 ∼ 3.0. We are still far from a good control of these non-perturbative parameters, which does not permit us to give a reliable prediction of the CP-violation parameter ε′/ε. Therefore, no definite bound for new-physics effects can be derived at present, before improvements of these ESM predictions.

Table 1. Penguin B-parameters for the ∆S = 1 process from different approaches at µ = 2 GeV. We use the value m_s(2) = (119 ± 12) MeV from [21], and predictions based on dispersion relations [13,16] have been rescaled according to it. We also use for our results f_π = 92.4 MeV, but we give in the text their m_s and f_π dependences. Results without any comment on the scheme have been obtained in the MS-NDR scheme. However, at the present accuracy, one cannot differentiate these results from the ones of the MS-HV scheme.
Figure 1. FESR version of the 2nd Weinberg sum rule versus t_c in GeV², using the ALEPH/OPAL data for the spectral functions. Only the central values are shown.
extracted using Das-Mathur-Okubo (DMO)- and Weinberg-like sum rules based on the difference of the vector and axial-vector spectral functions ρ_{V,A} of the I = 1 component of the neutral current.
⁹ A slight deviation from such a value affects notably the previous predictions, as the t_c-stability of the results (t_c ≈ 2 GeV²) does not coincide with the one required by the 2nd Weinberg sum rule. At the stability point the predictions are about a factor 3 higher than the ones obtained previously.
⟨O_8^{3/2}⟩(M_τ) ≃ −(0.94 ± 0.21) × 10⁻³ GeV⁶ ,   (39)
B_6^{1/2}(2) ≃ (1.0 ± 0.4) for m_s(2) = 119 MeV ,   ≤ (1.5 ± 0.4) for m_s(2) ≥ 90 MeV .   (46)
B_68 ≃ (0.3 ± 0.4) for m_s(2) = 119 MeV ,   ≤ (1.0 ± 0.4) for m_s(2) ≥ 90 MeV ,   (47)
- B_8^{3/2} and ⟨(ππ)_{I=2}|Q_8^{3/2}|K⁰⟩: Eqs. (39), (41) and (42). • ε′/ε: Eq. (49).
Methods | B_6^{1/2} | B_8^{3/2} | B_7^{3/2} | Comments
Lattice [23] | 0.6 ∼ 0.8 | 0.7 ∼ 1.1 | 0.5 ∼ 0.8 | Huge NLO, unreliable at matching
[34] | 1.6 ∼ 3.0 (large) | 0.7 ∼ 0.9 | − | M_σ: free; SU(3)_F trunc.; µ ≈ 1 GeV; scheme?
Dispersive, Large N_c + LMD [16] | − | − | 0.9 | NLO in 1/N_c, + LSD-matching; strong µ-dep.

Though we are working here in the chiral limit, the data are obtained for physical pions, such that the corresponding value of f_π should also correspond to the experimental one. ⁵ For the second set of t_c-values in Eq. (5), one obtains a slightly lower value: f_π = (84.1 ± 4.4) MeV.
In the chiral limit f_π would be about 84 MeV. However, it is not clear to us what value of f_π should be used here, so we shall leave it as a free parameter which the reader can fix at his convenience.
This contribution may compete with the dimension-8 operators discussed in [22].
E. Braaten, S. Narison and A. Pich, Nucl. Phys. B373 (1992) 581.
The ALEPH collaboration, R. Barate et al., Eur. Phys. J. C 4 (1998) 409.
The OPAL collaboration, K. Ackerstaff et al., Eur. Phys. J. C 7 (1999) 571.
S. Weinberg, Phys. Rev. Lett. 18 (1967) 507.
T. Das, V.S. Mathur and S. Okubo, Phys. Rev. Lett. 19 (1967) 859;
T. Das et al., Phys. Rev. Lett. 18 (1967) 759.
E.G. Floratos, S. Narison and E. de Rafael, Nucl. Phys. B155 (1979) 115;
S. Narison and E. de Rafael, Nucl. Phys. B169 (1979) 253.
S. Narison, Z. Phys. C14 (1982) 263.
M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. B147 (1979) 385, 448.
S. Weinberg, Physica A96 (1979) 327;
J. Gasser and H. Leutwyler, Ann. Phys. 158 (1984) 142;
Nucl. Phys. B250 (1985) 465.
E. de Rafael, Nucl. Phys. (Proc. Suppl.) B, A7 (1989) 1 and hep-ph/9502254, Lectures given at the Tasi-school on CP-violation and the limits of the Standard Model, Boulder-Colorado (1994).
A. Pich, hep-ph/9502366; A. Manohar, hep-ph/9607484.
S. Narison, hep-ph/0004247 (Nucl. Phys. B, in press).
J.F. Donoghue and E. Golowich, hep-ph/9911309.
For a review and references to original works, see e.g., S. Narison, QCD spectral sum rules, Lecture Notes in Physics, Vol. 26 (1989), ed. World Scientific.
M. Davier et al., hep-ph/9802447.
M. Knecht, S. Peris and E. de Rafael, QCD 99 Euroconference (Montpellier), Nucl. Phys. (Proc. Suppl.) B86 (2000) 1;
E. de Rafael, these proceedings and references therein.
R.D. Peccei and J. Solà, Nucl. Phys. B281 (1987) 1.
W.J. Marciano and A. Sirlin, Phys. Rev. Lett. 56 (1986) 22.
For a review, see e.g. V.A. Zakharov, Nucl. Phys. (Proc. Suppl.) B74 (1999) 392;
F.V. Gubarev, M.I. Polikarpov and V.I. Zakharov, hep-th/9812030.
K. Chetyrkin, S. Narison and V.I. Zakharov, Nucl. Phys. B550 (1999) 353.
For reviews, see e.g., S. Narison, QCD 99 Euroconference (Montpellier), Nucl. Phys. (Proc. Suppl.) B64 (1998) 210 and original references therein;
H. Leutwyler, hep-ph/0011049.
J.F. Donoghue, these proceedings;
V. Cirigliano, J.F. Donoghue and E. Golowich, hep-ph/0007196.
For a review, see e.g., G. Martinelli, hep-ph/9910237; R. Gupta, Nucl. Phys. (Proc. Suppl.) B63 (1998) 278.
For a review, see e.g., S. Bosch et al., Nucl. Phys. B565 (2000) 3;
G. Buchalla, A.J. Buras and M. Lautenbacher, Rev. Mod. Phys. 68 (1996) 1125.
A.I. Vainshtein, V.I. Zakharov and M.A. Shifman, Sov. Phys. JETP 45(4) (1977) 670;
V.I. Zakharov (private communication).
S. Narison, plenary talk given at Hadron 99, Nucl. Phys. A675 (2000) 54c; ibid, Nucl. Phys. B509 (1998) 312; ibid, Nucl. Phys. (Proc. Suppl.) B64 (1998) 210.
S. Narison and G. Veneziano, Int. J. Mod. Phys. A4 (1989) 2751.
A. Bramon and S. Narison, Mod. Phys. Lett. A4 (1989) 1113.
L. Montanet, QCD 99 Euroconference (Montpellier);
U. Gastaldi, these proceedings.
We thank John Donoghue for mentioning this important point.
S. Narison, Nucl. Phys. (Proc. Suppl.) B74 (1999) 304.
M. Eidemuller and M. Jamin, these proceedings.
The kTeV collaboration, B. Hsiung, QCD 99 Euroconference (Montpellier) and these proceedings; The NA48 collaboration, B. Peyaud, QCD 99 Euroconference (Montpellier);
G. Tatishvili, these proceedings.
D. Pekurovsky and G. Kilcup, hep-lat/9812019.
T. Hambye and P. Soldan, these proceedings.
S. Bertolini, J.O. Eeg and M. Fabbrichesi, hep-ph/0002234; Rev. Mod. Phys. 72 (2000) 65.
J. Prades, these proceedings and private communication.
T. Morozumi, C.S. Lim and A.I. Sanda, Phys. Rev. Lett. 65 (1990) 404;
Y.Y. Keum, U. Nierste and A.I. Sanda, Phys. Lett. B457 (1999) 157.
M. Harada et al., hep-ph/9910201.
E. Pallante and A. Pich, hep-ph/9911233.
A.J. Buras et al., hep-ph/0002116; see however: A. Pich (private communication);
G. Mennessier (private communication).
An Improved Dipole Extraction Method From Magnitude-Only Electromagnetic-Field Data

Chunyu Wu, Ze Sun, Xu Wang, Yansheng Wang, Ben Kim, and Jun Fan, Fellow, IEEE

Abstract-Infinitesimal electric and magnetic dipoles are widely used as an equivalent radiating source model. In this paper, an improved method for dipole extraction from magnitude-only electromagnetic-field data, based on a genetic algorithm and the back-and-forth iteration algorithm [1], is proposed. Compared with the conventional back-and-forth iteration algorithm, this method offers an automatic flow to extract the equivalent dipoles without prior decision of the type, position, orientation and number of dipoles. It can be easily applied to electromagnetic-field data on arbitrarily shaped surfaces and minimizes the number of required dipoles. The extracted dipoles can be close to the original radiating structure, thus being physical. Compared with the conventional genetic algorithm based method, this method reduces the optimization time and will not easily get trapped into local minima during optimization, thus being more robust. The method is validated by both simulation data and measurement data and its advantages are demonstrated. The potential application of this method in phase retrieval is also discussed.

Index Terms-Source reconstruction, numerical modeling, equivalent radiating source model, phase retrieval
enclosure [4] or near-field coupling from noise source to a victim antenna [5]. In [6], a physics-based dipole is extracted and used to debug the radiation mechanism of a flexible printed circuit board (PCB). It's found that physical dipoles which are close to original radiation source are useful in terms of finding the noise current path. Magnetic dipoles are formed by current loops which may incorporate displacement current. However, the physical dipoles are extracted by recognizing the pattern generated by a single horizontal magnetic dipole. In other cases, the pattern may be complicated and cannot be recognized easily. Later, a dipole based reciprocity method is proposed to calculate the coupled noise from a physical dipole to victim antenna [7]. The direction and location of the physical dipole can be optimized to reduce the coupled noise. This can provide guideline to optimize the placement of noise source and layout in actual DUT. In conclusion, physical dipoles turn out to be useful in debugging of radiation mechanism (current flow), near-field coupling estimation and its reduction.
There are several methods to extract equivalent dipoles from the scanned radiated electromagnetic field. In [3], a genetic algorithm is used to optimize the type (electric or magnetic), position, orientation, magnitude and phase of all dipoles. This method works well, but it is very time consuming because of the large number of optimization variables (8 variables for each dipole), and the optimization can fall into local minima easily because of the large variation range of the dipole magnitude, which is from zero to positive infinity. As a result, the number of dipoles is usually strictly limited. In [8], a set of vertical magnetic dipoles or horizontal electric dipoles is placed on the discretized horizontal surface of a radiating integrated circuit (IC) and the linear least-squares method is used to solve for the current intensities of these dipoles. The method requires only simple matrix operations to obtain the results and is very fast, but the dipoles obtained do not correspond to the real source, and the position and number of dipoles need to be determined in advance by meshing the surface of the IC. In [9], an array of uniformly placed dipole sets is used, and each dipole set includes one vertical electric dipole P_z and two horizontal magnetic dipoles M_x, M_y at the same position. The regularization technique and the truncated singular-value decomposition method are investigated together with the conventional least-squares method to calculate the dipole magnitudes and phases from the near-field data. The regularization technique can achieve a better source model that reflects the actual physics, but the physics is shown by the 2-D pattern of dipole magnitude and phase, so a large number of dipole sets is needed. Also, how to select the regularization coefficient brings another problem.
In [10], by combining the genetic algorithm and the linear least-squares method, the equivalent dipoles are obtained with a reduced number of dipoles and a reduced computing time, without prior decision of the location and number of dipoles; however, the algorithm does not optimize the dipole type, as it is assumed that only magnetic dipoles exist.
Since the methods involving matrix inversion require both the magnitude and the phase of the electric or magnetic field, while phase-resolved field measurements are more difficult than magnitude-only measurements, Ji proposed a back-and-forth iteration algorithm which requires only the field magnitudes on two near-field scanning planes to extract the dipoles [1]. This method is effective and fast for extracting dipoles from magnitude-only near-field data, but it has two disadvantages. The first is that the method requires propagating the field from the lower plane to the higher plane. This brings difficulties when a cylindrical scan is used: the transformation between fields on two different cylindrical surfaces is much more difficult than the transformation between fields on two different planar surfaces, and it requires that the sampling be done uniformly on the surface. The second disadvantage is that the dipole sets, each of which includes one vertical electric dipole P_z and two horizontal magnetic dipoles M_x, M_y at the same position, are placed uniformly. This leads to a large number of dipoles and requires more scanning points in order to obtain a well-conditioned transformation matrix in the transformation from field to dipole moments. Also, the position and number of dipole sets need to be determined in advance, which is usually done by trial and error.
In this paper, an improved method for dipole extraction from magnitude-only electromagnetic-field data, based on a genetic algorithm and the back-and-forth iteration algorithm, is proposed, which aims at solving the two disadvantages mentioned above. It offers an automatic flow to extract the equivalent dipoles without prior decision of the type, position, orientation and number of dipoles. Compared with the conventional back-and-forth iteration algorithm, this method can be easily applied to electromagnetic-field data on arbitrarily shaped surfaces and minimizes the number of dipoles. It can also generate physical dipoles which are close to the original radiating source. Compared with the conventional genetic algorithm based method, this method reduces the optimization time and will not easily get trapped into local minima during optimization, because it does not optimize the magnitude and phase of the dipoles.
This paper is organized as follows. Section II introduces the method in detail. Section III validates the method using simulated magnitude-only electromagnetic-field data on cylindrical surfaces with infinitesimal dipoles or a wire antenna as the source, and measured magnitude-only electromagnetic-field data on cylindrical surfaces with a radiating server as the source. Section IV extends the method to single-surface scanning and discusses its potential application in phase retrieval. Section V concludes the paper.
II. DESCRIPTION OF THE METHOD
The principle of this method is shown in Fig. 1. Electric or magnetic field data with magnitude only are first obtained on two cylindrical surfaces around the DUT; then infinitesimal dipoles are extracted to generate the same electric or magnetic field. These infinitesimal dipoles can be considered as an equivalent radiating source model. Although the method is discussed and validated in terms of cylindrical scanning in this paper, it can be applied easily to planar scanning and spherical scanning as well. The electric field is taken as the input data to describe and validate the method in this paper, but the input data can also be the magnetic field.
The general flow of this method is shown in Fig. 2. The method starts with one dipole. Then genetic algorithm is used to optimize the dipole type and dipole position to minimize the relative error between the measured electric field and calculated electric field from equivalent dipole source. After optimization, the minimized relative error will be compared with previous relative error with one less dipole. If the decrease of relative error is smaller or equal than µ, it can be concluded that the relative error has converged and it will output the optimized location and type of dipoles. Otherwise, the number of dipoles will be increased by one and this optimization will be executed again. µ means the minimum decrease of relative error which can be tolerated with increment of dipole number. The definition of relative error is shown in equation (1)~(3), where RE1 is the relative error between the measured electric field and calculated electric field from equivalent dipole source in surface #1, RE2 is the relative error in surface #2 and RE is the overall relative error which is the average of RE1 and RE2. 2 2
RE1 = sqrt( Σ_{Surface #1} [ (E_z^scan − E_z^fit)² + (E_φ^scan − E_φ^fit)² ] / Σ_{Surface #1} [ (E_z^scan)² + (E_φ^scan)² ] )    (1)

RE2 = sqrt( Σ_{Surface #2} [ (E_z^scan − E_z^fit)² + (E_φ^scan − E_φ^fit)² ] / Σ_{Surface #2} [ (E_z^scan)² + (E_φ^scan)² ] )    (2)

RE = (RE1 + RE2) / 2    (3)
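As an illustration, the relative errors of equations (1)-(3) and the outer dipole-count loop of Fig. 2 can be sketched as below. The function names, the representation of the scan data as per-component magnitude arrays, and the generic `optimize` callable (standing in for the genetic-algorithm stage) are our own assumptions, not part of the paper.

```python
import numpy as np

def surface_re(mags_scan, mags_fit):
    """Relative error on one surface, eqs. (1)-(2): both field components
    (Ez and Ephi magnitudes) enter the sums."""
    num = sum(np.sum((s - f) ** 2) for s, f in zip(mags_scan, mags_fit))
    den = sum(np.sum(s ** 2) for s in mags_scan)
    return float(np.sqrt(num / den))

def overall_re(scan1, fit1, scan2, fit2):
    """Overall relative error RE, eq. (3): average of RE1 and RE2."""
    return 0.5 * (surface_re(scan1, fit1) + surface_re(scan2, fit2))

def extract_dipoles(optimize, mu=0.01, max_dipoles=20):
    """Outer loop of Fig. 2: grow the dipole count until the optimized RE
    improves by no more than mu.  `optimize(n)` stands for the GA plus
    back-and-forth stage for n dipoles and returns (model, RE)."""
    prev_re, best = float("inf"), None
    for n in range(1, max_dipoles + 1):
        model, re = optimize(n)
        if prev_re - re <= mu:      # converged: one more dipole no longer helps
            return best
        best, prev_re = (model, re), re
    return best
```

Returning the previous (smaller) model on convergence reflects the goal of keeping the number of required dipoles minimal.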
A. Improved Back-and-forth Iteration Algorithm
The improved back-and-forth iteration algorithm is shown in Fig. 3. The algorithm starts from initial values of the dipole magnitudes and phases, with the dipole positions and types given. The initial values can be decided by assuming that the field at every scanning point on surface #2 has the same phase; they are then obtained with the linear least-squares method.

Step 1: calculate the field on surface #1 using the transfer function relating the field on surface #1 to the dipole source, as shown in equation (4), where F is the field value vector, D is the dipole value vector, and T is the transfer matrix, determined by the locations and types of the dipoles and the locations of the scanning points:

[F]_{M×1} = [T]_{M×N} [D]_{N×1}    (4)

Step 2: calculate the relative error on surface #1, namely RE1.
Step 3: enforce the magnitude of the field on surface #1 to be the measured magnitude but keep the phase unchanged.
Step 4: use the updated field on surface #1 to inversely calculate the dipole magnitudes and phases by the linear least-squares method, as shown in equation (5):

D = (T^T T)^(-1) T^T F    (5)

Step 5: calculate the field on surface #2 using the transfer function relating the field on surface #2 to the dipole source, similar to equation (4).
Step 6: calculate the relative error on surface #2, namely RE2.
Step 7: enforce the magnitude of the field on surface #2 to be the measured magnitude but keep the phase unchanged.
Step 8: use the updated field on surface #2 to inversely calculate the dipole magnitudes and phases by the linear least-squares method, similar to equation (5).
The overall relative error RE is then compared with the RE of the previous iteration. If the decrease of RE is smaller than or equal to ε, RE is considered to have converged and the obtained magnitudes and phases of the dipoles are output. Otherwise, the iteration is executed again. Here ε denotes the minimum decrease of RE that is tolerated as the iteration number increases.
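The eight steps above, together with the ε-convergence check, can be sketched as follows. This assumes the transfer matrices T1 and T2 (scan points × dipoles) have already been built from the candidate dipole positions and types (not shown); the norm-ratio form of RE and all names are our own simplifications, not the paper's code.

```python
import numpy as np

def back_and_forth(T1, T2, mag1, mag2, eps=1e-6, max_iter=500):
    """Recover complex dipole weights D from magnitude-only data on two
    surfaces.  mag1, mag2 are the measured field magnitudes."""
    # Initial guess: assume equal phase at every point on surface #2,
    # then solve by linear least squares (eq. (5)).
    D = np.linalg.lstsq(T2, mag2.astype(complex), rcond=None)[0]
    prev_re = np.inf
    re = np.inf
    for _ in range(max_iter):
        F1 = T1 @ D                                    # step 1: field on surface #1
        re1 = np.linalg.norm(np.abs(F1) - mag1) / np.linalg.norm(mag1)  # step 2
        F1 = mag1 * np.exp(1j * np.angle(F1))          # step 3: enforce magnitude
        D = np.linalg.lstsq(T1, F1, rcond=None)[0]     # step 4: eq. (5)
        F2 = T2 @ D                                    # step 5
        re2 = np.linalg.norm(np.abs(F2) - mag2) / np.linalg.norm(mag2)  # step 6
        F2 = mag2 * np.exp(1j * np.angle(F2))          # step 7
        D = np.linalg.lstsq(T2, F2, rcond=None)[0]     # step 8
        re = 0.5 * (re1 + re2)
        if prev_re - re <= eps:                        # epsilon convergence check
            break
        prev_re = re
    return D, re
```

Note that a global phase ambiguity is inherent to magnitude-only data: the recovered D is determined only up to a common phase factor.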
The main difference between the improved back-and-forth iteration algorithm introduced in this paper and the conventional algorithm is that transformation between the fields on two different surfaces is not needed. This brings two advantages. The first is that the method can be applied to electromagnetic-field scanning on arbitrarily shaped surfaces. In [1], the field transformation from the lower plane to the higher plane is achieved using the plane-wave expansion method; for cylindrical scanning, the more complicated cylindrical wave expansion would be needed. The improved back-and-forth iteration algorithm does not require a complicated wave-expansion method and can easily be applied in planar, cylindrical, or spherical scanning. It is even possible for one scan to be done on a planar surface and the other on a cylindrical surface. The second advantage is that the improved algorithm imposes no strict sampling requirements on each surface. Since the wave-expansion method requires the FFT or IFFT to efficiently evaluate the integral equations converting the EM field between the spectral and spatial domains, the conventional algorithm enforces some sampling restrictions.
B. Genetic Algorithm
The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At each step, it selects individuals from the current population to be parents and uses them to produce the children for the next generation. Over successive generations, the population "evolves" toward an optimal solution [11]. It is useful for optimizing highly nonlinear problems. The genetic algorithm is used here to optimize the dipole types and locations.
The objective function to minimize is the relative error defined in equation (3). The optimization range for the dipole locations can be set as the space occupied by the original DUT or radiating structure. The optimization range for the dipole type can be set as [P x , P y , P z , M x , M y , M z ], meaning that each dipole can be any one of the six kinds. A meaningful optimization range helps extract dipoles close to the original radiating source and reduces the optimization time. For example, the radiation from a horn antenna is equivalent to the radiation from J and M on its aperture, so the optimization range of the dipole locations can be set as the aperture surface of the horn antenna.
The general flow of the optimization using the genetic algorithm is shown in Fig. 4. The algorithm starts with a population of randomly chosen individuals for the dipole types and locations. The improved back-and-forth iteration algorithm is then used to obtain the magnitudes and phases of the dipoles for each individual. The objective function of each individual is evaluated next for parent selection. Subsequent generations evolve from the current generation through selection, crossover, and mutation to search for new dipole types and locations. This procedure repeats until the maximum number of generations is reached or the minimized value of the objective function no longer changes. The algorithm then stops and returns the optimized dipole types, dipole locations, and minimized relative error.
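A minimal real-coded GA in the spirit of Fig. 4 (tournament selection, blend crossover, Gaussian mutation) might look as follows. In the actual flow, the objective would run the back-and-forth iteration and return the relative error; here it is left as a generic callable. This is an illustrative sketch under our own design choices, not the paper's implementation, which may rely on an off-the-shelf GA such as MATLAB's.

```python
import numpy as np

def genetic_minimize(objective, bounds, pop=30, gens=40, seed=0):
    """Tiny real-coded GA over genes bounded by (lo, hi) pairs."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))     # random initial population
    fit = np.array([objective(x) for x in P])
    best_x, best_f = P[np.argmin(fit)].copy(), fit.min()
    for _ in range(gens):
        children = []
        for _ in range(pop):
            i, j = rng.integers(pop, size=2)             # tournament selection
            a = P[i] if fit[i] < fit[j] else P[j]
            i, j = rng.integers(pop, size=2)
            b = P[i] if fit[i] < fit[j] else P[j]
            w = rng.uniform(size=len(bounds))            # blend crossover
            child = w * a + (1 - w) * b
            child += rng.normal(0.0, 0.05, len(bounds)) * (hi - lo)  # mutation
            children.append(np.clip(child, lo, hi))
        P = np.array(children)
        fit = np.array([objective(x) for x in P])
        if fit.min() < best_f:                           # elitist bookkeeping
            best_x, best_f = P[np.argmin(fit)].copy(), fit.min()
    return best_x, best_f
```

For instance, with one location gene per coordinate, the bounds from the dipole example of Section III.A would be [(-0.5, 0.5), (-0.5, 0.5), (1.0, 2.0)].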
III. VALIDATION OF THE METHOD
A. Simulation Data with Infinitesimal Dipoles as Source
The method is first validated using simulation data with infinitesimal dipoles as the source. In this case, two infinitesimal dipoles are used as the original radiating source, and it is expected that the proposed algorithm extracts the same infinitesimal dipoles. One Px and one My dipole are placed above a PEC ground. The simulated E-field (Ez and Ephi) magnitude on two cylindrical surfaces with radii of 0.5 m and 1 m at 781.25 MHz is used as the input data to the algorithm. The scanning height is from 1 m to 4 m with a step of 0.25 m, and the scanning points along the circumference are shown in Fig. 5. The optimization ranges for the x, y, and z coordinates of the dipole locations are [-0.5, 0.5], [-0.5, 0.5], and [1, 2], respectively. The extraction results are shown in Table I.
The relative error between the fields calculated from the extracted dipoles and the scanned fields is 0.00035. The extracted dipoles are almost the same as the original radiating source in terms of dipole number, type, location, and amplitude. The phase difference between the two extracted dipoles is 90 degrees, the same as for the original dipole source. The change of the relative error during the back-and-forth iteration used to calculate the dipole magnitudes and phases is shown in Fig. 6; the iteration converges very fast.
B. Simulation Data with Wire Antenna as Source
To further validate that this algorithm can extract physical dipoles, a half-wavelength wire antenna is used as the original radiating source. The center of the wire antenna is located at (0, 0, 1.5 m) and the antenna is placed along the z axis. The excitation voltage is 5 V at 0 degrees. The simulated E-field (Ez and Ephi) magnitude on two cylindrical surfaces with radii of 0.5 m and 1 m at 781.25 MHz is used as the input to the algorithm. The distribution of scanning points and the optimization range of the dipole locations are the same as in Section III.A. The extracted dipole is a single Pz dipole located at (-3.629e-05, 2.226e-06, 1.4977) with magnitude 0.0066 and phase -10.17 degrees. The relative error between the fields calculated from the dipole and the scanned fields is 0.0339. The extracted dipole is close to the original radiating structure, since the current in the wire antenna flows in the z direction and the radiating antenna is composed of a series of Pz dipoles along the wire. Because the observation surfaces are far from the wire antenna, this series of Pz dipoles can be approximated by a single Pz dipole at the center.
The current along the half-wavelength wire antenna can be expressed as equation (4), as shown in Fig. 7, where k = 2π/λ:

I(z) = (V / Z_in) cos(kz),  -λ/4 ≤ z ≤ λ/4    (4)

The value of the Pz dipole can be calculated using equation (5); the calculated value is very close to the value of the extracted single Pz dipole:

P_z = ∫_{-λ/4}^{λ/4} I(z) dz = (λ/π) (V / Z_in) ≈ 0.0072    (5)
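A quick numerical check of equation (5) can be done with the V = 5 V, 781.25 MHz excitation given in the text, assuming the textbook half-wave input impedance Z_in ≈ 73 + j42.5 Ω (this impedance value is our assumption, not stated in the excerpt):

```python
import numpy as np

c = 3e8                  # speed of light, m/s
f = 781.25e6             # frequency from the text, Hz
lam = c / f              # wavelength, ~0.384 m
V = 5.0                  # excitation voltage from the text
Z_in = 73 + 42.5j        # assumed textbook half-wave input impedance, ohms
I0 = V / abs(Z_in)       # feed-point current magnitude, ~0.059 A
# Sinusoidal current I(z) = I0*cos(2*pi*z/lam) integrated over -lam/4..lam/4
Pz = I0 * lam / np.pi    # equivalent dipole moment, ~0.0072 A*m
print(round(Pz, 4))
```

Under these assumptions the result reproduces the ≈ 0.0072 of equation (5) and lands within roughly 10% of the extracted dipole magnitude 0.0066.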
The phase pattern of the calculated Ez field on the cylindrical surface with radius 0.5 m is compared with that of the original wire-antenna simulation in Fig. 8. The phase pattern is retrieved correctly.
C. Measurement Data with Radiating Rack as Source
To further validate the algorithm, a data center rack is used as DUT. The horizontal (Ephi) and vertical (Ez) E field generated by the rack on two cylindrical surfaces with radius of 2m and 5m is measured in a semi-anechoic chamber. The measurement setup is shown in Fig. 9.
The measured E field pattern at 781.25MHz is shown in Fig. 10. The unit is dBuV/m. The x axis is the azimuth in degree and the y axis is the height.
The measured E field magnitude is used as input to the algorithm. The algorithm extracted 9 dipoles with a relative error of 0.1885. The calculated E field pattern on the same cylindrical surfaces from extracted dipoles is shown in Fig. 11.
To compare the measured E field with the E field calculated from the extracted dipoles, the first columns of Fig. 10 and Fig. 11 (Ez and Ephi, r = 2 m, phi = 0 degrees) are plotted together in Fig. 12.
From the comparison, we can see that the E field calculated from the extracted dipoles matches the measurement well. The algorithm can tolerate a certain amount of noise in the original measurement and does not overfit the noise: the predicted E-field pattern is smoother, with less noise.
IV. EXTENSIONS
A. Single Surface Scanning
Although the back-and-forth iteration algorithm uses the field magnitude on two surfaces to obtain the dipole values, it can still work if the field on only one surface is available; it can then be adapted as shown in Fig. 13. The reason why the field magnitude on two surfaces is normally used is to provide more information about the radiating source, so that the extracted dipoles stay close to the original source and the influence of possible measurement errors is reduced. But if the measurement data on the second surface do not provide more information, or if scanning the second surface takes too much time, scanning data on only one surface can also be used with the algorithm.
The example in Section III.B is used again to validate the algorithm with a single scanning surface. Ez and Ephi on the cylindrical surface with radius 1 m are used as input to the algorithm. The extracted dipole is still a Pz dipole, at (5.4561e-8, 1.9085e-7, 1.494) with magnitude 0.0066 and phase -10.3007 degrees. Single-surface scanning works well in this case because the source is very simple, and the electric field data on one surface already provide enough information about it.
As shown in Fig. 14, the uniqueness theorem [12] states that if the tangential components of the E or H field over the boundary S and the sources J and M are specified, the field in the region is unique when the frequency does not equal a characteristic frequency. If the frequency equals a characteristic frequency, the derivative of the tangential components of the field over the boundary with respect to frequency should also be given in order to obtain the unique field.
In the source-reconstruction case with magnitude-only electromagnetic-field data, the uniqueness theorem obviously no longer holds. This means that there may exist several equivalent-dipole solutions inside the scanning surface that generate the same scanned field magnitude. In order to obtain equivalent dipoles that are close to the original radiating structure, more information about the source needs to be provided. This is why using two or three scanning surfaces and specifying the range of dipole locations or types helps extract dipoles close to the original radiating structure.
B. Potential Application in Phase Retrieval
There are many papers studying phase retrieval from magnitude-only electromagnetic-field data [13]-[16]. These methods work well, but they are all limited to planar near-field measurements and require two planes or two probes during measurement; they also impose restrictions on the sampling points on the plane. The method proposed in this paper provides another way to retrieve the phase from magnitude-only electromagnetic-field measurements. Once the equivalent dipoles are extracted, the phase on the measurement surfaces can easily be calculated from them. Since the method can be applied to electromagnetic-field data on arbitrarily shaped surfaces, it can retrieve the phase for phaseless measurements not only on planar surfaces but also on cylindrical or spherical surfaces. The sampling points can be sparse and need not be uniform or follow a certain pattern.
V. CONCLUSION
This paper proposes an improved method to extract dipoles from magnitude-only electromagnetic-field data. It offers an automatic flow to extract the equivalent dipoles without prior decisions about the type, position, orientation, and number of dipoles. It can easily be applied to electromagnetic-field data on arbitrarily shaped surfaces and minimizes the number of required dipoles. The extracted dipoles can be close to the original radiating structure, and thus physical, if adequate field-magnitude information is acquired and a meaningful optimization range of dipole locations and types is given. Compared with the conventional genetic-algorithm-based method, which optimizes every parameter of the dipoles, this method reduces the optimization time, does not easily get trapped in local minima during optimization, and is therefore more robust. It also shows good potential in phase-retrieval applications.

2012 and Ph.D. degree in electrical engineering at EMC Laboratory, Missouri University of Science and Technology, Rolla, MO, USA in 2018. His research interests include radiation characterization for cable harness inside vehicle body, method of moments, multiple-input-multiple-output over-the-air testing, desense measurement and simulation, radio-frequency interference, broadband H-field probe design, conducted emission modeling for switched-mode power supply, stochastic modeling for high-speed channels, and generic model development for PCIe3/4.
Fig. 1. Principle of the method.
Fig. 2. General flow of the method.
Fig. 3. General flow of the improved back-and-forth iteration algorithm.
Fig. 4. General flow of the genetic algorithm.
Fig. 7. Current along half-wavelength wire antenna.
Fig. 8. Comparison of retrieved phase pattern and original simulated phase pattern on cylindrical surface.
Fig. 9. Electric field scan of a rack.
Fig. 10. Scanned electric field pattern of a rack.
Fig. 11. Predicted electric field pattern from extracted dipoles.
Fig. 12. Comparison of measured E field and predicted E field.
Fig. 13. General flow of the back-and-forth iteration algorithm for single-surface scanning.
Fig. 14. Arbitrarily shaped surface S encloses linear lossless matter and sources.
Jun Fan (S'97-M'00-SM'06-F'16) received the B.S. and M.S. degrees in electrical engineering from Tsinghua University, Beijing, China, in 1994 and 1997, respectively, and the Ph.D. degree in electrical engineering from the University of Missouri-Rolla, Rolla, MO, USA, in 2000.
TABLE I
COMPARISON BETWEEN ORIGINAL SOURCE AND EXTRACTION RESULTS

                          Original source          Extracted dipoles
                          Dipole 1    Dipole 2     Dipole 1     Dipole 2
Dipole location    x      0.25        -0.25        0.25         -0.25
                   y      0           0            -1.246e-5    -1.449e-5
                   z      1.5         1.5          1.4999       1.4998
Dipole type               Px          My           Px           My
Dipole amplitude          1 Am        100 Vm       0.9999 Am    99.9968 Vm
Dipole phase (degree)     90          0            147.0478     57.0494

Fig. 5. Scanning points along circumference.
Fig. 6. Relative error change with iteration number.
From 2000 to 2007, he was a Consultant Engineer with NCR Corporation, San Diego, CA, USA. In July 2007, he joined Missouri University of Science and Technology (formerly University of Missouri-Rolla), where he is currently a Professor and Director of Missouri Science and Technology Electromagnetic Compatibility (S&T EMC) Laboratory. He is also the Director of the National Science Foundation Industry/ University Cooperative Research Center for EMC and Senior Investigator of Missouri S&T Material Research Center. His research interests include signal integrity and EMI designs in high-speed digital systems, dc power-bus modeling, intrasystem EMI and RF interference, printed circuit board noise reduction, differential signaling, and cable/connector designs. Dr. Fan was the Chair of the IEEE EMC Society TC-9 Computational Electromagnetics Committee from 2006 to 2008 and a Distinguished Lecturer of the IEEE EMC Society in 2007 and 2008. He is currently the Chair of the Technical Advisory Committee of the IEEE EMC Society and is an Associate Editor of the IEEE TRANSACTIONS ON ELECTROMAGNETIC COMPATIBILITY and EMC Magazine. He received an IEEE EMC Society Technical Achievement Award in August 2009.
REFERENCES

[1] J. Zhang and J. Fan, "Source reconstruction for IC radiated emissions based on magnitude-only near-field scanning," IEEE Trans. Electromagn. Compat., vol. 59, no. 2, pp. 557-566, Apr. 2017.
[2] T. S. Sijher and A. A. Kishk, "Antenna modeling by infinitesimal dipoles using genetic algorithms," Progress Electromagn. Res., vol. PIER 52, pp. 225-254, 2005.
[3] J. R. Regue, M. Ribó, J. M. Garrell, and A. Martín, "A genetic algorithm based method for source identification and far-field radiated emissions prediction from near-field measurements for PCB characterization," IEEE Trans. Electromagn. Compat., vol. 43, no. 4, pp. 520-530, Nov. 2001.
[4] X. Tong, D. Thomas, A. Nothofer, P. Sewell, and C. Christopoulos, "Modeling electromagnetic emissions from printed circuit boards in closed environments using equivalent dipoles," IEEE Trans. Electromagn. Compat., vol. 52, no. 2, pp. 462-470, May 2010.
[5] C. Wu et al., "Estimating the near field coupling from SMPS circuits to a nearby antenna using dipole moments," in Proc. 2016 IEEE Int. Symp. Electromagn. Compat., 2016, pp. 353-357.
[6] Q. Huang, F. Zhang, T. Enomoto, J. Maeshima, K. Araki, and C. Hwang, "Physics-based dipole moment source reconstruction for RFI on a practical cellphone," IEEE Trans. Electromagn. Compat., vol. 59, no. 6, pp. 1693-1700, Dec. 2017.
[7] S. Lee et al., "Analytical intra-system EMI model using dipole moments and reciprocity," in Proc. 2018 IEEE Asia-Pac. Electromagn. Compat. Symp., May 2018.
[8] Y. Vives, C. Arcambal, A. Louis, F. de Daran, P. Eudeline, and B. Mazari, "Modeling magnetic radiations of electronic circuits using near-field scanning method," IEEE Trans. Electromagn. Compat., vol. 49, no. 2, pp. 391-400, May 2007.
[9] Z. Yu, J. A. Mix, S. Sajuyigbe, K. P. Slattery, and J. Fan, "An improved dipole-moment model based on near-field scanning for characterizing near-field coupling and far-field radiation from an IC," IEEE Trans. Electromagn. Compat., vol. 55, no. 1, pp. 97-108, 2013.
[10] F. Benyoubi, L. Pichon, M. Bensetti, Y. Le Bihan, and M. Feliachi, "An efficient method for modeling the magnetic field emissions of power electronic equipment from magnetic near field measurements," IEEE Trans. Electromagn. Compat., vol. 59, no. 2, pp. 609-617, Apr. 2017.
[11] "Find global minima for highly nonlinear problems." [Online]. Available: https://www.mathworks.com/discovery/genetic-algorithm.html
[12] Q. Chu and C. Liang, "The uniqueness theorem of electromagnetic fields in lossless regions," IEEE Trans. Antennas Propag., vol. 41, no. 2, pp. 245-246, 1993.
[13] R. G. Yaccarino and Y. Rahmat-Samii, "Phaseless bi-polar planar near-field measurements and diagnostics of array antennas," IEEE Trans. Antennas Propag., vol. 47, no. 3, pp. 574-583, Mar. 1999.
[14] R. Pierri, G. D'Elia, and F. Soldovieri, "A two probes scanning phaseless near-field far-field transformation technique," IEEE Trans. Antennas Propag., vol. 47, pp. 792-802, May 1999.
[15] S.-F. Razavi and Y. Rahmat-Samii, "Phaseless measurements over nonrectangular planar near-field systems without probe corotation," IEEE Trans. Antennas Propag., vol. 61, no. 1, pp. 143-152, Jan. 2013.
[16] H. P. Zhao, Y. Zhang, J. Hu, and E. P. Li, "Iteration-free phase retrieval for directive radiators using field amplitudes on two closely separated observation planes," IEEE Trans. Electromagn. Compat., vol. 58, no. 2, pp. 607-610, Apr. 2016.
Eugenio Orlandelli, Logic and Logical Philosophy, vol. 30, 2021. DOI: 10.12775/LLP.2020.018. arXiv: 1903.11342.
Available: https://apcz.umk.pl/czasopisma/index.php/LLP/article/download/LLP.2020.018/26766
Keywords: non-normal logics, deontic logics, sequent calculi, structural proof theory, interpolation, decidability
G3-style sequent calculi for the logics in the cube of non-normal modal logics and for their deontic extensions are studied. For each calculus we prove that weakening and contraction are height-preserving admissible, and we give a syntactic proof of the admissibility of cut. This implies that the subformula property holds and that derivability can be decided by a terminating proof search whose complexity is in Pspace. These calculi are shown to be equivalent to the axiomatic ones and, therefore, they are sound and complete with respect to neighbourhood semantics. Finally, a Maehara-style proof of Craig's interpolation theorem for most of the logics considered is given.
Introduction
For many interpretations of the modal operators e.g., for deontic, epistemic, game-theoretic, and high-probability interpretations it is necessary to adopt logics that are weaker than the normal ones. For example, deontic paradoxes are one of the main motivations for adopting a non-normal deontic logic [see 13,16]. Non-normal logics (see [4] for naming conventions) are quite well understood from a semantic point of view by means of neighbourhood semantics [15,33]. Nevertheless, until recent years their proof theory has been rather limited since it was mostly confined to Hilbert-style axiomatic systems. This situation seems to be rather unsatisfactory since it is difficult to find derivations in axiomatic systems. When the aim is to find derivations and to analyse their structural properties, sequent calculi are to be preferred to axiomatic systems. Recently different kinds of sequent calculi for non-normal logics have been proposed: Gentzen-style calculi [19,20,21,31]; labelled [11,32] and display [5] calculi based on translations into normal modal logics; labelled calculi based on the internalisation of neighbourhood [26,28] and bi-neighbourhood [6] semantics; and, finally, linear nested sequents [22]. This paper, which extends the results presented in [31], concentrates on Gentzen-style calculi since they are better suited than labelled calculi, display calculi, and nested sequents to give decision procedures (computationally well-behaved) and constructive proofs of interpolation theorems. We consider cut- and contraction-free G3-style sequent calculi for all the logics in the cube of non-normal modalities and for their extensions with the deontic axioms D♦ := □A ⊃ ♦A and D⊥ := ¬□⊥. The calculi we present have the subformula property and allow for a straightforward decision procedure by a terminating loop-free proof search.
Moreover, with the exception of the calculi for EC(N) and its deontic extensions, they are standard [12], i.e., each operator is handled by a finite number of rules with a finite number of premisses, and they admit of a Maehara-style constructive proof of Craig's interpolation theorem.
This work improves on previous ones on Gentzen-style calculi for non-normal logics in that we prove cut admissibility for non-normal modal and deontic logics, and not only for the modal ones [19,20,21]. Moreover, we prove height-preserving admissibility of weakening and contraction, whereas neither weakening nor contraction is admissible in [19,21] and weakening but not contraction is admissible in [20]. The admissibility of contraction is a major improvement since, as is well known, contraction can be as bad as cut for proof search: we may continue to duplicate some formula forever and, therefore, we need a (computationally expensive) loop-checker to ensure termination. Proof search procedures based on contraction-free calculi terminate because the height of derivations is bounded by a number depending on the complexity of the end-sequent and, therefore, we avoid the need for loop-checkers. To illustrate, the introduction of contraction-free calculi has allowed us to give computationally optimal decision procedures for propositional intuitionistic logic (IL_p) [17] and for the normal modal logics K and T [1,18]. The existence of a loop-free terminating decision procedure has also allowed us to give a constructive proof of uniform interpolation for IL_p [36] as well as for K and T [2]. The cut- and contraction-free calculi for non-normal logics considered here are such that the height of each derivation is bounded by the weight of its end-sequent and, therefore, we easily obtain a polynomial space upper complexity bound for proof search. This upper bound is optimal for the logics having C as a theorem, if the conjecture of Pspace-hardness made in [41] is correct. The satisfiability problem for logics without C, instead, is in NP [41], and a coNP decision procedure based on hypersequents is presented in [7].
Moreover, the introduction of well-behaved calculi for non-normal deontic logics is interesting since proof analysis can be applied to the deontic paradoxes [16], which are one of the central topics of deontic reasoning. We illustrate this in Section 4.3 by considering Forrester's paradox [9] and by showing that proof analysis casts doubt on the widespread opinion [16,33,38] that Forrester's argument provides evidence against rule RM (see Figure 1). If Forrester's argument is formalized as in [16], then it does not compel us to adopt a deontic logic weaker than KD. If, instead, it is formalized as in [38], then it forces the adoption of a logic where RM fails, but the formal derivation differs substantially from Forrester's informal argument.
We give here a constructive proof of interpolation for all logics having a standard calculus. To our knowledge there is no other constructive study of interpolation in non-normal logics in the literature. In [8, Chap(s). 3.8 and 6.6] a constructive proof of Craig's (and Lyndon's) interpolation theorem is given for the modal logics K and R, and for some of their extensions, including the deontic ones, but the proof makes use of model-theoretic notions. A proof of interpolation by the Maehara technique for KD is given in [40]. For a thorough study of interpolation in modal logics we refer the reader to [10]. A model-theoretic proof of interpolation for E is given in [15], and a coalgebraic proof of (uniform) interpolation for all the logics considered here, as well as all other rank-1 modal logics (see below), is given in [34]. As explained in Example 5.5, we have not been able to prove interpolation for calculi containing the non-standard rule LR-C (see Figure 6) and, as far as we know, it is still an open problem whether it is possible to give a constructive proof of interpolation for these logics.
Related Work. The modal rules of inference presented in Figure 6 are obtained from the rules presented in [21] by adding weakening contexts to the conclusion of the rules. This minor modification, used also in [20,34,35] for several modal rules, allows us to shift from set-based sequents to multiset-based ones and to prove not only that cut is admissible, as is done in [19,20,21], but also that weakening and contraction are height-preserving admissible. Given that implicit contraction is not eliminable from set-based sequents, the decision procedure for non-normal logics given in [21] is based on a model-theoretic inversion technique so that it is possible to define a procedure that outputs a derivation for all valid sequents and a finite countermodel for all invalid ones. One weakness of this decision procedure is that it does not respect the subformula property for logics without rule RM (the procedure adds instances of the excluded middle).
The paper [19] considers multiset-based calculi for the non-normal logic E(N) and for its extensions with axioms D ♦ , T , 4, 5, and B. Nevertheless, neither weakening nor contraction is eliminable because there are no weakening contexts in the conclusion of the modal rules. In [20] multiset-based sequent calculi for the non-normal logic E(N) and for its extensions with axioms D ♦ , T , 4, 5, and B are given. The rules LR-E and R-N are as in Figure 6, but the deontic axiom D ♦ is expressed by the following rule:
A, B =⇒        (=⇒ A, B)
───────────────────────── D-2
□A, □B, Γ =⇒ ∆
where the right premiss is present when we are working over LR-E but has to be omitted when we work over LR-M. In the calculi in [19,20] weakening and contraction are taken as primitive rules and not as admissible ones as in the present approach. Although it is easy to show that weakening is eliminable from the calculi in [20], contraction cannot be eliminated because rule D-2 has exactly two principal formulas and, therefore, it is not possible to permute contraction up with respect to instances of rule D-2 (see Theorem 3.5). The presence of a non-eliminable rule of contraction makes the elimination of cut more problematic: in most cases we cannot eliminate the cut directly, but we have to consider the rule known as multicut [29, p. 88]. Moreover, cut is not eliminable from the calculus given in [20] for the deontic logic END. The formula D⊥ := ¬□⊥ is a theorem of this logic, but it can be derived only with a non-eliminable instance of cut as in:
=⇒ ⊤                 ⊥, ⊤ =⇒    =⇒ ⊥, ⊤
──────── R-N         ───────────────────── D-2
=⇒ □⊤                □⊤, □⊥ =⇒
──────────────────────────────────────────── Cut
□⊥ =⇒
───────── R¬
=⇒ ¬□⊥
Finally, it is worth noticing that all the non-normal logics we consider here are rank-1 logics in the sense of [34,35,37], i.e., logics whose modal axioms are propositional combinations of formulas of the form □φ, where φ is purely propositional; the calculi we give for the modal logics E, M, K and KD are explicitly considered in [34,37]. Thus, they are part of the family of modal coalgebraic logics [34,35,37] and most of the results in this paper can be seen as instances of general results that hold for rank-1 (coalgebraic) logics. If, in particular, we consider cut-elimination for coalgebraic logics [35], then all our calculi absorb congruence, and Theorem 3.5 and case 3 of Theorem 3.6 show that they absorb contraction and cut. Hence, [35, Thm. 5.7] entails that cut and contraction are admissible in these calculi; moreover, [35, Props. 5.8 and 5.11] entail that they are one-step cut-free complete w.r.t. coalgebraic semantics. This latter result gives a semantic proof of cut admissibility in the calculi considered here. Analogously, if we consider decidability, the polynomial space upper bound we find in Section 4.1 coincides with that found in [37] for rank-1 modal logics.
Synopsis. Section 2 summarizes the basic notions of axiomatic systems and of neighbourhood semantics for non-normal logics. Section 3 presents G3-style sequent calculi for these logics and then shows that weakening and contraction are height-preserving admissible and that cut is (syntactically) admissible. Section 4 describes a terminating proof-search decision procedure for all calculi, shows that each calculus is equivalent to the corresponding axiomatic system, and applies proof search to Forrester's paradox. Finally, Section 5 gives a Maehara-style constructive proof of Craig's interpolation theorem for the logics having a standard calculus.

We introduce, following [4], the basic notions of non-normal logics. Given a countable set of propositional variables {pₙ | n ∈ N}, the formulas of the modal language L are generated by:
A ::= pₙ | ⊥ | A ∧ A | A ∨ A | A ⊃ A | □A

  A ↔ B                A ⊃ B
─────────── RE       ─────────── RM
 □A ↔ □B              □A ⊃ □B

 (A₁ ∧ · · · ∧ Aₙ) ⊃ B                    (A₁ ∧ · · · ∧ Aₙ) ⊃ B
───────────────────────── RR, n ≥ 1     ───────────────────────── RK, n ≥ 0
(□A₁ ∧ · · · ∧ □Aₙ) ⊃ □B                 (□A₁ ∧ · · · ∧ □Aₙ) ⊃ □B

Figure 1. Rules of inference

M)  □(A ∧ B) ⊃ (□A ∧ □B)
C)  (□A ∧ □B) ⊃ □(A ∧ B)
N)  □⊤
D⊥) ¬□⊥
D♦) □A ⊃ ♦A

Figure 2. Modal and deontic axioms
We remark that ⊥ is a 0-ary logical symbol. This will be extremely important in the proof of Craig's interpolation theorem. As usual, ¬A is a shorthand for A ⊃ ⊥, ⊤ for ⊥ ⊃ ⊥, A ↔ B for (A ⊃ B) ∧ (B ⊃ A), and ♦A for ¬□¬A. We follow the usual conventions for parentheses. Let L be the logic containing all L-instances of propositional tautologies as axioms, and modus ponens (MP) as inference rule. The minimal non-normal modal logic E is the logic L plus the rule RE of Figure 1. We will consider all the logics that are obtained by extending E with some set of axioms from Figure 2. We will denote the logics according to the axioms that define them, e.g., EC is the logic E ⊕ C, and EMD⊥ is E ⊕ M ⊕ D⊥. By X we denote any of these logics and we write X ⊢ A whenever A is a theorem of X. We will call modal the logics containing neither D⊥ nor D♦, and deontic those containing at least one of them. We have followed the usual naming conventions for the modal axioms, but we have introduced new conventions for the deontic ones: D⊥ is usually called either CON or P, and D♦ is usually called D [cf. 3,13,16].
It is also possible to give an equivalent rule-based axiomatization of some of these logics. In particular, the logic EM, also called M, can be axiomatized as L plus the rule RM of Figure 1. The logic EMC, also called R, can be axiomatized as L plus the rule RR of Figure 1. Finally, the logic EMCN, i.e., the smallest normal modal logic K, can be axiomatized as L plus the rule RK of Figure 1. These rule-based axiomatizations will be useful later on since they simplify the proof of the equivalence between axiomatic systems and sequent calculi (Theorem 4.5).
The following proposition states the well-known relations between the theorems of non-normal modal logics. For a proof the reader is referred to [4].
Proposition 2.1. For any formula A ∈ L we have that E ⊢ A implies M ⊢ A; M ⊢ A implies R ⊢ A; R ⊢ A implies K ⊢ A.
Analogously for the logics containing axiom N and/or axiom C.
Axiom D⊥ is K-equivalent to D♦, but the correctness of D♦ has been much debated in the literature on deontic logic. This fact motivates the study of logics weaker than KD, where D⊥ and D♦ are no longer equivalent [4]. The deontic formulas D⊥ and D♦ have the following relations in the logics we are considering.
Proposition 2.2. D ⊥ and D ♦ are independent in E; D ⊥ is derivable from D ♦ in non-normal logics containing at least one of the axioms M and N ; D ♦ is derivable from D ⊥ in non-normal logics containing axiom C.
In Figure 3 the reader will find the lattice of non-normal modal logics [see 4, p. 237] and in Figure 4 the lattice of non-normal deontic logics.
Semantics
The most widely known semantics for non-normal logics is neighbourhood semantics. We sketch its main tenets following [4], where neighbourhood models are called minimal models.
[Figures 3 and 4: the lattices of the non-normal modal and deontic logics, with the identifications ED⊥D♦ = ED, ECD⊥ = ECD, END♦ = END, MD♦ = MD, MND♦ = MND, RD⊥ = RD♦ = RD, KD⊥ = KD♦ = KD, ECND⊥ = ECND♦ = ECND.]

The truth clause for □ at a world w of a neighbourhood model M is:

⊨^M_w □A   iff   ||A||^M ∈ N(w)
where ||A||^M is the truth set of A, i.e., ||A||^M = {w | ⊨^M_w A}. We say that a formula A is valid in a class C of neighbourhood models iff it is true in every world of every M ∈ C.
In order to give soundness and completeness results for non-normal modal and deontic logics with respect to (classes of) neighbourhood models, we introduce the following definition.
Definition 2.4. Let M = ⟨W, N, P⟩ be a neighbourhood model, X, Y ∈ 2^W, and w ∈ W; we say that:
• M is supplemented if X ∩ Y ∈ N(w) implies X ∈ N(w) and Y ∈ N(w);
• M is closed under finite intersection if X ∈ N(w) and Y ∈ N(w) imply X ∩ Y ∈ N(w);
• M contains the unit if W ∈ N(w);
• M is non-blind if X ∈ N(w) implies X ≠ ∅;
• M is complement-free if X ∈ N(w) implies W − X ∉ N(w).
Proposition 2.5. We have the following correspondence results between L-formulas and the properties of the neighbourhood function defined above:
• Axiom M corresponds to supplementation;
• Axiom C corresponds to closure under finite intersection;
• Axiom N corresponds to containment of the unit;
• Axiom D ⊥ corresponds to non-blindness;
• Axiom D ♦ corresponds to complement-freeness.
Theorem 2.6. E is sound and complete with respect to the class of all neighbourhood models. Any logic X which is obtained by extending E with some axioms from Figure 2 is sound and complete with respect to the class of all neighbourhood models which satisfy all the properties corresponding to the axioms of X.
See [4] for the proof of Proposition 2.5 and of Theorem 2.6.
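Since Proposition 2.5 ties each axiom to a simple closure condition on the neighbourhood function, both the truth clause for □ and the frame conditions of Definition 2.4 can be checked mechanically on finite models. The following sketch is our own illustration (the tuple encoding of formulas and every function name are ours, not the paper's):

```python
from itertools import chain, combinations

# A finite neighbourhood model: worlds W (a frozenset), a neighbourhood
# function N mapping each world to a set of frozensets of worlds, and a
# valuation V mapping each propositional-variable index to the frozenset
# of worlds where it is true.  Formulas are nested tuples:
# ('p', n), ('bot',), ('and', A, B), ('or', A, B), ('imp', A, B), ('box', A).

def truth_set(A, W, N, V):
    """Compute ||A|| = {w in W | w satisfies A}."""
    tag = A[0]
    if tag == 'p':
        return frozenset(V.get(A[1], frozenset()))
    if tag == 'bot':
        return frozenset()
    if tag == 'and':
        return truth_set(A[1], W, N, V) & truth_set(A[2], W, N, V)
    if tag == 'or':
        return truth_set(A[1], W, N, V) | truth_set(A[2], W, N, V)
    if tag == 'imp':
        return (W - truth_set(A[1], W, N, V)) | truth_set(A[2], W, N, V)
    if tag == 'box':
        # w satisfies box A  iff  ||A|| is a neighbourhood of w
        ta = truth_set(A[1], W, N, V)
        return frozenset(w for w in W if ta in N[w])
    raise ValueError(tag)

def is_supplemented(W, N):
    """Definition 2.4: X ∩ Y ∈ N(w) implies X ∈ N(w) and Y ∈ N(w)."""
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(W), r) for r in range(len(W) + 1))]
    return all(X & Y not in N[w] or (X in N[w] and Y in N[w])
               for w in W for X in subsets for Y in subsets)
```

On a supplemented model, by Proposition 2.5, every instance of axiom M has truth set W; analogous brute-force checks can be written for the other four conditions.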
Sequent Calculi
We introduce sequent calculi for non-normal logics that extend the multiset-based sequent calculus G3cp [29,30,39] for classical propositional logic (see Figure 5) by adding some modal and deontic rules from Figure 6. In particular, we consider the modal sequent calculi given in Table 1, which will be shown to capture the modal logics of Figure 3, and their deontic extensions given in Table 2, which will be shown to capture all deontic logics of Figure 4. We adopt the following notational conventions: we use G3X to denote a generic calculus from either Table 1 or Table 2, and we use G3Y(Z) to denote both G3Y and G3YZ. All the rules in Figures 5 and 6 but LR-C and L-D♦C are standard rules in the sense of [12]: each of them is a single rule with a fixed number of premisses; LR-C and L-D♦C, instead, stand for a recursively enumerable set of rules with a variable number of premisses.
For an introduction to G3cp and the relevant notions, the reader is referred to [29, Chapter 3]. We sketch here the main notions that will be used in this paper. A sequent is an expression Γ =⇒ ∆, where Γ and ∆ are finite, possibly empty, multisets of formulas. If Π is the (possibly empty) multiset A₁, . . . , Aₘ then □Π is the (possibly empty) multiset □A₁, . . . , □Aₘ. A derivation of a sequent Γ =⇒ ∆ in G3X is an upward-growing tree of sequents having Γ =⇒ ∆ as root, initial sequents or instances of rule L⊥ as leaves, and such that each non-initial node is the conclusion of an instance of one rule of G3X whose premisses are its children. In the rules in Figures 5 and 6, the multisets Γ and ∆ are called contexts; the other formulas occurring in the conclusion (premiss(es), resp.) are called principal (active, resp.). In a sequent, the antecedent (succedent) is the multiset occurring to the left (right) of the sequent arrow =⇒. As for G3cp, a sequent Γ =⇒ ∆ has the following denotational interpretation: the conjunction of the formulas in Γ implies the disjunction of the formulas in ∆.
As measures for inductive proofs we use the weight of a formula and the height of a derivation. The weight of a formula A, w(A), is defined inductively as follows:
w(⊥) = w(pᵢ) = 0;  w(□A) = w(A) + 1;  w(A • B) = w(A) + w(B) + 1 (where • is one of the binary connectives ∧, ∨, ⊃).
The weight of a sequent is the sum of the weights of the formulas occurring in that sequent. The height of a derivation is the length of its longest branch minus one. A rule of inference is said to be (height-preserving) admissible in G3X if, whenever its premisses are derivable in G3X, then also its conclusion is derivable (with at most the same derivation height) in G3X. The modal depth of a formula (sequent) is the maximal number of nested modal operators occurring in it (in its members).
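As a small illustration of these measures (the tuple encoding of formulas and the function names are ours, not the paper's), weight and modal depth are computed by straightforward structural recursion:

```python
# Formulas as nested tuples: ('p', n), ('bot',), ('box', A),
# ('and', A, B), ('or', A, B), ('imp', A, B).

def weight(A):
    """w(⊥) = w(p_i) = 0; w(□A) = w(A) + 1; w(A • B) = w(A) + w(B) + 1."""
    tag = A[0]
    if tag in ('p', 'bot'):
        return 0
    if tag == 'box':
        return 1 + weight(A[1])
    # binary connectives 'and', 'or', 'imp'
    return 1 + weight(A[1]) + weight(A[2])

def sequent_weight(gamma, delta):
    """Weight of a sequent: sum of the weights of its members."""
    return sum(map(weight, gamma)) + sum(map(weight, delta))

def modal_depth(A):
    """Maximal number of nested box operators in A."""
    tag = A[0]
    if tag in ('p', 'bot'):
        return 0
    if tag == 'box':
        return 1 + modal_depth(A[1])
    return max(modal_depth(A[1]), modal_depth(A[2]))
```

Both measures strictly decrease from conclusion to premisses in the rules below, which is what drives the inductive proofs and the termination argument of Section 4.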
Initial sequents:

pₙ, Γ =⇒ ∆, pₙ     (pₙ a propositional variable)

Propositional rules:

A, B, Γ =⇒ ∆                     Γ =⇒ ∆, A    Γ =⇒ ∆, B
────────────── L∧                ─────────────────────── R∧            ⊥, Γ =⇒ ∆  L⊥
A ∧ B, Γ =⇒ ∆                    Γ =⇒ ∆, A ∧ B

A, Γ =⇒ ∆    B, Γ =⇒ ∆           Γ =⇒ ∆, A, B
────────────────────── L∨        ────────────── R∨
A ∨ B, Γ =⇒ ∆                    Γ =⇒ ∆, A ∨ B

Γ =⇒ ∆, A    B, Γ =⇒ ∆           A, Γ =⇒ ∆, B
────────────────────── L⊃        ────────────── R⊃
A ⊃ B, Γ =⇒ ∆                    Γ =⇒ ∆, A ⊃ B

Figure 5. The sequent calculus G3cp

Modal and deontic rules:

A =⇒ B    B =⇒ A                 A =⇒ B
──────────────── LR-E            ──────────────── LR-M
□A, Γ =⇒ ∆, □B                   □A, Γ =⇒ ∆, □B

A₁, . . . , Aₙ =⇒ B    B =⇒ A₁  . . .  B =⇒ Aₙ
─────────────────────────────────────────────── LR-C
□A₁, . . . , □Aₙ, Γ =⇒ ∆, □B

=⇒ B                  A, Π =⇒ B                       Π =⇒ B
──────────── R-N      ──────────────────── LR-R       ──────────────── LR-K
Γ =⇒ ∆, □B            □A, □Π, Γ =⇒ ∆, □B              □Π, Γ =⇒ ∆, □B

A =⇒                  Π =⇒    =⇒ Π                        Π =⇒
──────────── L-D⊥     ────────────── L-D♦E, |Π| ≤ 2       ──────────── L-D♦M, |Π| ≤ 2
□A, Γ =⇒ ∆            □Π, Γ =⇒ ∆                          □Π, Γ =⇒ ∆

Π, Σ =⇒    {=⇒ A, B | A ∈ Π, B ∈ Σ}                   Π =⇒
──────────────────────────────────── L-D♦C            ──────────── L-D*
□Π, □Σ, Γ =⇒ ∆                                        □Π, Γ =⇒ ∆

Figure 6. Modal and deontic rules
Structural rules of inference
We are now going to prove that the calculi G3X have the same good structural properties as G3cp: weakening and contraction are height-preserving admissible and cut is admissible. All proofs are extensions of those for G3cp, see [29, Chapter 3]; in most cases, the modal rules have to be treated differently from the propositional ones because of the presence of empty contexts in the premiss(es) of the modal ones. We adopt the following notational convention: given a derivation tree D_k, the derivation tree of the n-th leftmost premiss of its last step is denoted by D_kn. We begin by showing that the restriction to atomic initial sequents, which is needed to make the propositional rules invertible, is not a limitation, in that initial sequents with an arbitrary principal formula are derivable in G3X.
Proposition 3.1. Every instance of A, Γ =⇒ ∆, A is derivable in G3X.
Proof. By induction on the weight of A. If w(A) = 0, i.e., A is atomic or ⊥, then we have an instance of an initial sequent or of a conclusion of L⊥ and there is nothing to prove. If w(A) ≥ 1, we argue by cases according to the construction of A. In each case we apply, root-first, the appropriate rule(s) in order to obtain sequents where some proper subformula of A occurs both in the antecedent and in the succedent. The claim then holds by the inductive hypothesis (IH). To illustrate, if A ≡ □B and we are in G3M(ND), we have:
. . . IH
B =⇒ B
──────────────── LR-M
□B, Γ =⇒ ∆, □B          ⊣

Theorem 3.2. The rules of left and right weakening are height-preserving admissible (hp-admissible, for short) in G3X:

Γ =⇒ ∆                  Γ =⇒ ∆
──────────── LW         ──────────── RW
A, Γ =⇒ ∆               Γ =⇒ ∆, A

Proof.
The proof is a straightforward induction on the height of the derivation D of Γ =⇒ ∆. If the last step of D is by a propositional rule, we have to apply the same rule to the weakened premiss(es), which are derivable by IH, see [29,Thm. 2.3.4]. If it is by a modal or deontic rule, we proceed by adding A to the appropriate weakening context of the conclusion of that rule instance. To illustrate, if the last rule is LR-E,
we transform

. . . D₁          . . . D₂
B =⇒ C            C =⇒ B
─────────────────────────── LR-E
□B, Γ =⇒ ∆, □C

into

. . . D₁          . . . D₂
B =⇒ C            C =⇒ B
─────────────────────────── LR-E
□B, A, Γ =⇒ ∆, □C          ⊣
Before considering contraction, we recall some facts that will be useful later on.
Lemma 3.3. In G3X the rules

Γ =⇒ ∆, A                  A, Γ =⇒ ∆
──────────── L¬            ──────────── R¬
¬A, Γ =⇒ ∆                 Γ =⇒ ∆, ¬A

are admissible.
Proof. We have the following derivations (the step by RW is admissible thanks to Theorem 3.2):
For L¬:

Γ =⇒ ∆, A        ⊥, Γ =⇒ ∆ (by L⊥)
──────────────────────────────────── L⊃
A ⊃ ⊥, Γ =⇒ ∆

For R¬:

A, Γ =⇒ ∆
─────────────── RW
A, Γ =⇒ ∆, ⊥
─────────────── R⊃
Γ =⇒ ∆, A ⊃ ⊥          ⊣

Lemma 3.4.
All propositional rules are height-preserving invertible in G3X, that is, the derivability of (a possible instance of) a conclusion of a propositional rule entails the derivability, with at most the same derivation height, of its premiss(es).
Proof. We have only to extend the proof for G3cp [see 29, Thm. 3.1.1] with new cases for the modal and deontic rules. If A • B occurs in the antecedent (succedent) of the conclusion of an instance of a modal or deontic rule, then it must be a member of the weakening context Γ (∆) of this rule instance, and we have only to change the weakening context according to the rule we are inverting. ⊣

Theorem 3.5. The left and right rules of contraction are hp-admissible in G3X:

A, A, Γ =⇒ ∆                Γ =⇒ ∆, A, A
───────────── LC            ───────────── RC
A, Γ =⇒ ∆                   Γ =⇒ ∆, A

Proof.
The proof is by simultaneous induction on the height of the derivation D of the premiss for left and right contraction. The base case is straightforward. For the inductive steps, we have different strategies according to whether the last step in D is by a propositional rule or not. If the last step in D is by a propositional rule, we have two subcases: if the contraction formula is not principal in that step, we apply the inductive hypothesis and then the rule. Otherwise, we first use the height-preserving invertibility of that rule (Lemma 3.4), and then we apply the inductive hypothesis and the rule [see 29, Thm. 3.2.2 for details].
If the last step in D is by a modal or deontic rule, we have two subcases: either the last step is by one of LR-C, LR-R, LR-K, L-D♦E, L-D♦M, L-D♦C, and L-D* and both occurrences of the contraction formula □A of LC are principal in the last step, or some instance of the contraction formula is introduced in the appropriate weakening context of the conclusion. In the first subcase, we apply the inductive hypothesis to the premiss and then the rule. An interesting example is when the
last step in D is by L-D♦E. We transform

. . . D₁          . . . D₂
B, B =⇒           =⇒ B, B
─────────────────────────── L-D♦E
□B, □B, Γ =⇒ ∆
─────────────────────────── LC
□B, Γ =⇒ ∆

into

. . . IH(D₁)      . . . IH(D₂)
B =⇒              =⇒ B
─────────────────────────── L-D♦E
□B, Γ =⇒ ∆
where IH(D 1 ) is obtained by applying the inductive hypothesis for the left rule of contraction to D 1 and IH(D 2 ) is obtained by applying the inductive hypothesis for the right rule of contraction to D 2 .
In the second subcase, we apply an instance of the same modal or deontic rule which introduces one less occurrence of A in the appropriate context of the conclusion. Let us consider RC. If the last step is by LR-M and no instance of A is principal in the last rule, we transform
. . . D₁
B =⇒ C
───────────────────────── LR-M
□B, Γ′ =⇒ ∆′, A, A, □C
───────────────────────── RC
□B, Γ′ =⇒ ∆′, A, □C

into

. . . D₁
B =⇒ C
───────────────────────── LR-M
□B, Γ′ =⇒ ∆′, A, □C          ⊣

Theorem 3.6. The rule of cut is admissible in G3X:

. . . D₁               . . . D₂
Γ =⇒ ∆, D              D, Π =⇒ Σ
───────────────────────────────── Cut
Γ, Π =⇒ ∆, Σ
Proof. We consider an uppermost application of Cut and we show that either it is eliminable, or it can be permuted upward in the derivation until we reach sequents where it is eliminable. The proofs, one for each calculus, are by induction on the weight of the cut formula D with a sub-induction on the sum of the heights of the derivations of the two premisses (cut-height, for short). The proof can be organized into three exhaustive cases:
1. At least one of the premisses of cut is an initial sequent or a conclusion of L⊥;
2. The cut formula is not principal in the last step of at least one of the two premisses;
3. The cut formula is principal in both premisses.

• Case (2). We have many subcases according to the last rule applied in the derivation (D⋆) of the premiss where the cut formula is not principal. For the propositional rules, we refer the reader to [29, Thm. 3.2.3], which gives a procedure that reduces the cut-height. If the last rule applied in D⋆ is a modal or deontic one, we can transform the derivation into a cut-free one, because the conclusion of Cut is derivable by replacing the last step of D⋆ with the appropriate instance of the same modal or deontic rule. We present explicitly only the cases where the last step of the left premiss is by LR-E and L-D⊥ and the cut formula is not principal in it, all other transformations being similar.
LR-E: If the left premiss is by rule LR-E (and Γ ≡ □A, Γ′ and ∆ ≡ ∆′, □B), we transform

. . . D₁₁     . . . D₁₂
A =⇒ B        B =⇒ A
──────────────────────── LR-E
□A, Γ′ =⇒ ∆′, □B, D             . . . D₂: D, Π =⇒ Σ
──────────────────────────────────────────────────── Cut
□A, Γ′, Π =⇒ ∆′, □B, Σ

into

. . . D₁₁     . . . D₁₂
A =⇒ B        B =⇒ A
──────────────────────── LR-E
□A, Γ′, Π =⇒ ∆′, □B, Σ

L-D⊥: If the left premiss is by rule L-D⊥, we transform

. . . D₁₁
A =⇒
────────────────── L-D⊥
□A, Γ′ =⇒ ∆, D          . . . D₂: D, Π =⇒ Σ
──────────────────────────────────────────── Cut
□A, Γ′, Π =⇒ ∆, Σ

into

. . . D₁₁
A =⇒
────────────────── L-D⊥
□A, Γ′, Π =⇒ ∆, Σ

• Case (3). If the cut formula D is principal in both premisses, we have cases according to the principal operator of D. In each case we have a procedure that reduces the weight of the cut formula, possibly increasing the cut-height. The propositional cases are the same for all the logics considered here [see 29, Thm. 3.2.3].
If D ≡ □C, we consider the different logics one by one, without repeating the common cases.
• G3E(ND). Both premisses are by rule LR-E; we have

. . . D₁₁     . . . D₁₂                 . . . D₂₁     . . . D₂₂
A =⇒ C        C =⇒ A                    C =⇒ B        B =⇒ C
──────────────────────── LR-E           ──────────────────────── LR-E
□A, Γ′ =⇒ ∆, □C                         □C, Π =⇒ Σ′, □B
──────────────────────────────────────────────────────────────── Cut
□A, Γ′, Π =⇒ ∆, Σ′, □B

and we transform it into the following derivation, which has two cuts with cut formulas of lesser weight; these cuts are admissible by IH.

. . . D₁₁     . . . D₂₁                 . . . D₂₂     . . . D₁₂
A =⇒ C        C =⇒ B                    B =⇒ C        C =⇒ A
──────────────────────── Cut            ──────────────────────── Cut
A =⇒ B                                  B =⇒ A
──────────────────────────────────────────────────────────────── LR-E
□A, Γ′, Π =⇒ ∆, Σ′, □B

• G3EN(D). Left premiss by R-N and right one by LR-E. We transform

. . . D₁₁                  . . . D₂₁     . . . D₂₂
=⇒ C                       C =⇒ A        A =⇒ C
──────────── R-N           ──────────────────────── LR-E
Γ =⇒ ∆, □C                 □C, Π =⇒ Σ′, □A
──────────────────────────────────────────────────── Cut
Γ, Π =⇒ ∆, Σ′, □A

into

. . . D₁₁     . . . D₂₁
=⇒ C          C =⇒ A
──────────────────────── Cut
=⇒ A
──────────────────────── R-N
Γ, Π =⇒ ∆, Σ′, □A

• G3E(N)D⊥. Left premiss is by LR-E, and right one by L-D⊥. We transform

. . . D₁₁     . . . D₁₂                 . . . D₂₁
A =⇒ C        C =⇒ A                    C =⇒
──────────────────────── LR-E           ──────────── L-D⊥
□A, Γ′ =⇒ ∆, □C                         □C, Π =⇒ Σ
──────────────────────────────────────────────────── Cut
□A, Γ′, Π =⇒ ∆, Σ

into

. . . D₁₁     . . . D₂₁
A =⇒ C        C =⇒
──────────────────────── Cut
A =⇒
──────────────────────── L-D⊥
□A, Γ′, Π =⇒ ∆, Σ

• G3E(N)D♦.
Left premiss is by LR-E, and right one by L-D♦E. We transform (|Ξ| ≤ 1)

. . . D₁₁     . . . D₁₂                . . . D₂₁       . . . D₂₂
A =⇒ C        C =⇒ A                   C, Ξ =⇒         =⇒ C, Ξ
──────────────────────── LR-E          ──────────────────────────── L-D♦E
□A, Γ′ =⇒ ∆, □C                        □C, □Ξ, Π′ =⇒ Σ
──────────────────────────────────────────────────────────────────── Cut
□A, Γ′, □Ξ, Π′ =⇒ ∆, Σ

into

. . . D₂₂     . . . D₁₂                . . . D₁₁     . . . D₂₁
=⇒ Ξ, C       C =⇒ A                   A =⇒ C        C, Ξ =⇒
──────────────────────── Cut           ──────────────────────── Cut
=⇒ Ξ, A                                A, Ξ =⇒
──────────────────────────────────────────────────────────────── L-D♦E
□A, Γ′, □Ξ, Π′ =⇒ ∆, Σ

• G3E(N)D. Left premiss by LR-E and right one by L-D⊥ or L-D♦E. Same as above.
• G3END⊥. Left premiss by R-N and right one by L-D⊥. We transform

. . . D₁₁                . . . D₂₁
=⇒ C                     C =⇒
──────────── R-N         ──────────── L-D⊥
Γ =⇒ ∆, □C               □C, Π =⇒ Σ
────────────────────────────────────── Cut
Γ, Π =⇒ ∆, Σ

into

. . . D₁₁     . . . D₂₁
=⇒ C          C =⇒
──────────────────────── Cut
=⇒
──────────────────────── LWs and RWs
Γ, Π =⇒ ∆, Σ

• G3END.
Left premiss by R-N and right one by L-D♦E. We transform (|Ξ| ≤ 1)

. . . D₁₁                . . . D₂₁      . . . D₂₂
=⇒ C                     C, Ξ =⇒        =⇒ C, Ξ
──────────── R-N         ─────────────────────────── L-D♦E
Γ =⇒ ∆, □C               □C, □Ξ, Π′ =⇒ Σ
──────────────────────────────────────────────────── Cut
□Ξ, Γ, Π′ =⇒ ∆, Σ

into

. . . D₁₁     . . . D₂₁
=⇒ C          C, Ξ =⇒
──────────────────────── Cut
Ξ =⇒
──────────────────────── (⋆)
□Ξ, Γ, Π′ =⇒ ∆, Σ

where (⋆) is an instance of L-D⊥ if |Ξ| = 1.

• G3M(ND). Both premisses are by rule LR-M. We transform

. . . D₁₁                          . . . D₂₁
A =⇒ C                             C =⇒ B
──────────────────── LR-M          ──────────────────── LR-M
□A, Γ′ =⇒ ∆, □C                    □C, Π =⇒ Σ′, □B
──────────────────────────────────────────────────────── Cut
□A, Γ′, Π =⇒ ∆, Σ′, □B

into

. . . D₁₁     . . . D₂₁
A =⇒ C        C =⇒ B
──────────────────────── Cut
A =⇒ B
──────────────────────── LR-M
□A, Γ′, Π =⇒ ∆, Σ′, □B

• G3C(ND). Both premisses are by rule LR-C (with Λ = A₁, . . . , Aₙ and Ξ = B₁, . . . , Bₘ). The derivation ending with

. . . D₁₁    . . . D_{A₁}  . . .  . . . D_{Aₙ}
Λ =⇒ C       C =⇒ A₁       . . .  C =⇒ Aₙ
──────────────────────────────────────────── LR-C
□Λ, Γ′ =⇒ ∆, □C

. . . D₂₁      . . . D_C    . . . D_{B₁}  . . .  . . . D_{Bₘ}
C, Ξ =⇒ E      E =⇒ C       E =⇒ B₁       . . .  E =⇒ Bₘ
────────────────────────────────────────────────────────────── LR-C
□C, □Ξ, Π′ =⇒ Σ′, □E

and, by Cut on □C, with □Λ, Γ′, □Ξ, Π′ =⇒ ∆, Σ′, □E,

is transformed into the following derivation having n + 1 cuts on formulas of lesser weight:

. . . D₁₁     . . . D₂₁
Λ =⇒ C        C, Ξ =⇒ E
──────────────────────── Cut
Λ, Ξ =⇒ E

. . . D_C     . . . D_{A₁}                       . . . D_C     . . . D_{Aₙ}
E =⇒ C        C =⇒ A₁                            E =⇒ C        C =⇒ Aₙ
──────────────────────── Cut      . . .          ──────────────────────── Cut
E =⇒ A₁                                          E =⇒ Aₙ

which, together with D_{B₁}: E =⇒ B₁, . . . , D_{Bₘ}: E =⇒ Bₘ, yields by LR-C

□Λ, Γ′, □Ξ, Π′ =⇒ ∆, Σ′, □E

• G3CN(D). Left premiss by R-N and right premiss by LR-C. We have

. . . D₁₁                . . . D₂₁                     . . . D_C    . . . D_{Aₙ}
=⇒ C                     C, A₁, . . . , Aₙ =⇒ B        B =⇒ C  . . .  B =⇒ Aₙ
──────────── R-N         ───────────────────────────────────────────────────── LR-C
Γ =⇒ ∆, □C               □C, □A₁, . . . , □Aₙ, Π′ =⇒ Σ′, □B
──────────────────────────────────────────────────────────── Cut
Γ, □A₁, . . . , □Aₙ, Π′ =⇒ ∆, Σ′, □B

where A₁, . . . , Aₙ (and thus also □A₁, . . . , □Aₙ) may or may not be the empty multiset. If A₁, . . . , Aₙ is not empty, we transform it into the following derivation having one cut with a cut formula of lesser weight:

. . . D₁₁     . . . D₂₁
=⇒ C          C, A₁, . . . , Aₙ =⇒ B
───────────────────────────────────── Cut
A₁, . . . , Aₙ =⇒ B                        B =⇒ A₁  . . .  B =⇒ Aₙ
─────────────────────────────────────────────────────────────────── LR-C
Γ, □A₁, . . . , □Aₙ, Π′ =⇒ ∆, Σ′, □B

If, instead, A₁, . . . , Aₙ is empty, we transform it into

. . . D₁₁     . . . D₂₁
=⇒ C          C =⇒ B
──────────────────────── Cut
=⇒ B
──────────────────────── R-N
Γ, Π′ =⇒ ∆, Σ′, □B

• G3CD♦. Left premiss by LR-C and right premiss by L-D♦C. We transform (we assume Ξ = A₁, . . . , A_k, Θ = C, B₂, . . . , Bₘ, and Λ = D₁, . . . , Dₙ)

. . . D₁₁    . . . D₁Aᵢ                          Θ, Λ =⇒    {=⇒ A, B | A ∈ Θ, B ∈ Λ}
Ξ =⇒ C       {C =⇒ Aᵢ | Aᵢ ∈ Ξ}                  ───────────────────────────────────── L-D♦C
──────────────────────────────── LR-C            □Θ, □Λ, Π′ =⇒ Σ
□Ξ, Γ′ =⇒ ∆, □C
──────────────────────────────────────────────────────────────────────────── Cut
□Ξ, □B₂, . . . , □Bₘ, □Λ, Γ′, Π′ =⇒ ∆, Σ

into a derivation ending with an instance of L-D♦C whose premisses are obtained by cuts on formulas of lesser weight:

. . . D₁₁     C, B₂, . . . , Bₘ, Λ =⇒
─────────────────────────────────────── Cut
Ξ, B₂, . . . , Bₘ, Λ =⇒

=⇒ Dⱼ, C      . . . D₁Aᵢ: C =⇒ Aᵢ
──────────────────────────────────── Cut        {=⇒ Aᵢ, Dⱼ | Aᵢ ∈ Ξ, Dⱼ ∈ Λ}
=⇒ Aᵢ, Dⱼ

{=⇒ Bᵢ, Dⱼ | Bᵢ ∈ Θ − C, Dⱼ ∈ Λ}
──────────────────────────────────────────────── L-D♦C
□Ξ, □B₂, . . . , □Bₘ, □Λ, Γ′, Π′ =⇒ ∆, Σ

• G3C(N)D.
Left premiss by LR-C and right one by L-D * . It is straightforward to transform the derivation into another one having one cut with a cut formula of lesser weight.
• G3R(D). Both premisses are by rule LR-R; we transform

. . . D₁₁                                . . . D₂₁
A, Ξ =⇒ C                                C, Ψ =⇒ B
───────────────────────── LR-R           ───────────────────────── LR-R
□A, □Ξ, Γ′ =⇒ ∆, □C                      □C, □Ψ, Π′ =⇒ Σ′, □B
──────────────────────────────────────────────────────────────── Cut
□A, □Ξ, Γ′, □Ψ, Π′ =⇒ ∆, Σ′, □B

into

. . . D₁₁     . . . D₂₁
A, Ξ =⇒ C     C, Ψ =⇒ B
──────────────────────── Cut
A, Ξ, Ψ =⇒ B
──────────────────────── LR-R
□A, □Ξ, Γ′, □Ψ, Π′ =⇒ ∆, Σ′, □B

• G3RD⋆. Left premiss is by LR-R, and right one by L-D⋆; we transform

. . . D₁₁                                . . . D₂₁
A, Ξ =⇒ C                                C, Ψ =⇒
───────────────────────── LR-R           ───────────────────── L-D*
□A, □Ξ, Γ′ =⇒ ∆, □C                      □C, □Ψ, Π′ =⇒ Σ
──────────────────────────────────────────────────────────── Cut
□A, □Ξ, Γ′, □Ψ, Π′ =⇒ ∆, Σ

into

. . . D₁₁     . . . D₂₁
A, Ξ =⇒ C     C, Ψ =⇒
──────────────────────── Cut
A, Ξ, Ψ =⇒
──────────────────────── L-D*
□A, □Ξ, Γ′, □Ψ, Π′ =⇒ ∆, Σ

• G3K(D).
The new cases with respect to G3R(D) are those with left premiss by an instance of LR-K that has no principal formula in the antecedent. These cases can be treated like cases with the left premiss by R-N. ⊣

Each calculus G3X has the strong subformula property, since all active formulas of each rule in Figures 5 and 6 are proper subformulas of the principal formulas and no formula disappears in moving from premiss(es) to conclusion. As usual, this gives us a syntactic proof of consistency.
Proposition 4.1.
1. Each premiss of each rule of G3X has lesser weight than its conclusion.
2. Each premiss of each modal or deontic rule of G3X has lesser modal depth than its conclusion.
3. The calculus G3X has the subformula property: a G3X-derivation of a sequent S contains only sequents composed of subformulas of S.
4. The empty sequent is not G3X-derivable.
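The subformula property sharply bounds the proof-search space: every sequent in a derivation of S is built from subformulas of S. A minimal sketch of the subformula closure, under the same illustrative tuple encoding used above (all names are ours, not the paper's):

```python
# Formulas as nested tuples: ('p', n), ('bot',), ('box', A),
# ('and', A, B), ('or', A, B), ('imp', A, B).

def subformulas(A):
    """The set of all subformulas of A, including A itself."""
    out = {A}
    if A[0] == 'box':
        out |= subformulas(A[1])
    elif A[0] not in ('p', 'bot'):        # binary connectives
        out |= subformulas(A[1]) | subformulas(A[2])
    return out

def sequent_closure(gamma, delta):
    """Subformula closure of a sequent Γ =⇒ ∆: every sequent occurring
    in a derivation of it is composed of formulas from this set."""
    out = set()
    for A in tuple(gamma) + tuple(delta):
        out |= subformulas(A)
    return out
```

The closure has size linear in the weight of the end-sequent, which is one ingredient of the space bound discussed below.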
We also have an effective method to decide the derivability of a sequent in G3X: we start from the desired sequent Γ =⇒ ∆ and we construct all possible G3X-derivation trees until either we find a tree where each leaf is an initial sequent or a conclusion of L⊥ (meaning that we have found a G3X-derivation of Γ =⇒ ∆), or we have checked all possible G3X-derivations and found none (meaning that Γ =⇒ ∆ is not G3X-derivable).
In more detail, we present a depth-first procedure that tests G3X-derivability in polynomial space. As is usual in decision procedures involving non-invertible rules, we have trees involving two kinds of branching. Application of a rule with more than one premiss produces an AND-branching point, where all branches have to be derivable to obtain a derivation. Application of a non-invertible rule to a sequent that can be the conclusion of different instances of non-invertible rules produces an OR-branching point, where only one branch need be derivable to obtain a derivation. In the procedure below we will assume that, given a calculus G3X and a sequent □Π, Γₚ =⇒ ∆ₚ, □Σ (where Γₚ and ∆ₚ are multisets of propositional variables), there is some fixed way of ordering the finite (see below) set of instances of modal and deontic rules of G3X (X-instances, for short) having that sequent as conclusion. Moreover, we will represent the roots of branches above an OR-branching point by nodes labelled i, where i is the name of the i-th X-instance applied (in the order of all X-instances having that conclusion). To illustrate, if we are in G3EN and we have to consider □A, □B, Γₚ =⇒ ∆ₚ, □C then we obtain (fixing one way of ordering the three X-instances having that sequent as conclusion):
A =⇒ C    C =⇒ A              B =⇒ C    C =⇒ B              =⇒ C
───────────────── LR-E₁       ───────────────── LR-E₂       ────── R-N
                    □A, □B, Γₚ =⇒ ∆ₚ, □C
where the lowermost sequent is an OR-branching point and the two nodes LR-E₁ and LR-E₂ are AND-branching points. Finally, given an AND(OR)-branching point
S₁  . . .  Sₙ
─────────────
S
we say that the branch above Sᵢ is an unexplored AND(OR)-branch if none of its nodes has already been active.
Definition 4.2 (G3X-decision procedure).
Stage 1. We write the one-node tree Γ =⇒ ∆ and we label Γ =⇒ ∆ as active.
Stage n + 1. Let Tₙ be the tree constructed at stage n, let S ≡ Π =⇒ Σ be its active sequent, and let B be the branch going from the root of Tₙ to S.
Closed. If S is such that p ∈ Π ∩ Σ (for some propositional variable p) or ⊥ ∈ Π, then we label S as closed and:
  Derivable. If B contains no unexplored AND-branch, the procedure ends and Γ =⇒ ∆ is G3X-derivable;
  AND-backtrack. If, instead, B contains unexplored AND-branches, then we choose the topmost one and we label as active its leftmost unexplored leaf.
Else Propositional. If S can be the conclusion of some instances •₁, . . . , •ₘ of the invertible propositional rules, we extend B by applying one of such instances:

S₁ (S₂)
──────── •ᵢ     1 ≤ i ≤ m
S

where, if S₂ is present, S is an AND-branching point.
Else Modal. If S can be the conclusion of the following canonically ordered list of X-instances:
  Underivable. If B contains no unexplored OR-branch, the procedure ends and Γ =⇒ ∆ is not G3X-derivable;
  OR-backtrack. If, instead, B contains unexplored OR-branches, we choose the topmost one and we label as active its leftmost unexplored leaf.
Termination can be shown as follows. Proposition 4.1.1 entails that the height of each branch of the tree T constructed in a G3X-decision procedure for a sequent Γ =⇒ ∆ is bounded by the weight of Γ =⇒ ∆ (in particular, given Proposition 4.1.2, the number of OR-branching points occurring in a branch is bounded by the modal depth of Γ =⇒ ∆). Moreover, T is finitary branching since all rules of G3X are finitary branching rules, and since each sequent can be the conclusion of a finite number k of X-instances (for each G3X k is bounded by a function of |Γ | and |∆|). Hence, after a finite number of stages we are either in case Derivable or in case Underivable and, in both cases, the procedure ends. In the first case we can easily extract a G3X-derivation of Γ =⇒ ∆ from T (we just have to delete all unexplored branches as well as all underivable sub-trees above an OR-branching point). In the latter case, thanks to Proposition 4.1.3, we know that (modulo the order of the invertible propositional rules) we have explored the whole search space for a G3X-derivation of Γ =⇒ ∆ and we have found none.
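To make root-first search concrete, the following sketch decides derivability for the purely propositional fragment (G3cp), where every rule is invertible and no OR-branching arises; the tuple encoding and all names are ours, and the full procedure of Definition 4.2 would add the canonically ordered X-instances with backtracking:

```python
# Formulas as nested tuples: ('p', n), ('bot',), ('box', A),
# ('and', A, B), ('or', A, B), ('imp', A, B).

def provable(gamma, delta):
    """Naive root-first proof search for G3cp (propositional rules only).

    gamma, delta are tuples of formulas.  Termination: every premiss has
    smaller weight than its conclusion (cf. Proposition 4.1.1).  Since all
    propositional rules are invertible, eagerly applying the first
    applicable rule is complete for this fragment.
    """
    atoms_l = {a for a in gamma if a[0] == 'p'}
    atoms_r = {a for a in delta if a[0] == 'p'}
    if atoms_l & atoms_r or any(a[0] == 'bot' for a in gamma):
        return True  # initial sequent or conclusion of L-bot
    for i, A in enumerate(gamma):
        rest = gamma[:i] + gamma[i + 1:]
        if A[0] == 'and':    # L-and
            return provable(rest + (A[1], A[2]), delta)
        if A[0] == 'or':     # L-or: AND-branching, both premisses needed
            return (provable(rest + (A[1],), delta) and
                    provable(rest + (A[2],), delta))
        if A[0] == 'imp':    # L-imp
            return (provable(rest, delta + (A[1],)) and
                    provable(rest + (A[2],), delta))
    for i, A in enumerate(delta):
        rest = delta[:i] + delta[i + 1:]
        if A[0] == 'and':    # R-and
            return (provable(gamma, rest + (A[1],)) and
                    provable(gamma, rest + (A[2],)))
        if A[0] == 'or':     # R-or
            return provable(gamma, rest + (A[1], A[2]))
        if A[0] == 'imp':    # R-imp
            return provable(gamma + (A[1],), rest + (A[2],))
    return False  # no rule applies and the sequent is not initial
```

For example, the sequent =⇒ p ∨ ¬p (with ¬p encoded as p ⊃ ⊥) is found derivable, while =⇒ p is not.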
We prove that it is possible to test G3X-derivability in polynomial space by showing how to store only the active node together with a stack containing information sufficient to reconstruct the unexplored branches. For the propositional part of the calculi, we proceed as in [1,17,18]: each entry of the stack is a triple containing the name of the rule applied, an index recording which of its premisses is active, and its principal formula. For the X-instances two complications arise: we need to record which OR-branches are still unexplored, and we have to keep track of the weakening contexts of the conclusion in the premisses of X-instances. The first problem has already been solved by having assumed that the X-instances applicable to a given sequent have a fixed canonical order. The second problem is solved by adding a numerical superscript to the formulas occurring in a sequent and by imposing that:
- All formulas in the end-sequent have 1 as superscript;
- The superscript k of the principal formulas of rules and of initial sequents is maximal in that sequent;
- Active formulas of X-instances (propositional rules) have k + 2 (k, respectively) as superscript;
- Contexts are copied in the premisses of each rule.
By doing so, the contexts of the conclusion are copied in the premisses in each rule of G3X, but they cannot be principal in the trees above the premisses of the X-instances because their superscript is never maximal therein. It is immediately evident that the superscripts occurring in a derivation are bounded by (twice) the modal depth of the end-sequent.
Instances of all modal and deontic rules in Figure 6 but LR-C and L-D♦C are such that there is no need to record their principal formulas in the stack entry: they are the boxed versions of the formulas having maximal superscript in the active premiss; moreover, the name of the rule and the number of the premiss allow us to reconstruct the position of the principal formulas (for the right premiss of LR-E and L-D♦E, we have to switch the two formulas). In instances of rules LR-C and L-D♦C, instead, this does not hold, since in all premisses but the leftmost one there is no subformula of some of the principal formulas. We can overcome this problem by copying in each premiss all principal formulas having no active subformula in that premiss and by adding one to their superscript. We also keep fixed the position of all formulas (modulo the swapping of the two active formulas). To illustrate, one such instance is:
A₁^{k+2}, A₂^{k+2}, Γ =⇒ ∆, B^{k+2}      B^{k+2}, A₂^{k+1}, Γ =⇒ ∆, A₁^{k+2}      B^{k+2}, A₁^{k+1}, Γ =⇒ ∆, A₂^{k+2}
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── LR-C
□A₁^k, □A₂^k, Γ =⇒ ∆, □B^k
In this way, given the name of the modal or deontic rule applied, any premiss of this rule instance, and its position among the premisses of this rule, we can reconstruct both the conclusion of this rule instance and its position in the fixed order of X-instances concluding that sequent (thus we know which OR-branches are still unexplored). In doing so, we use the hp-admissibility of contraction to ensure that no formula has more than one occurrence in the antecedent or in the succedent of the conclusion of X-instances (otherwise we might be unable to reconstruct which of two identical X-instances we are considering). Hence, for X-instances each stack entry records the name of the rule applied and an index recording which premiss we are considering. The decision procedure is like in Definition 4.2. The only novelty is that at each stage, instead of storing the full tree constructed so far, we store only the active node and the stack; we push an entry onto the stack and, if we are in a backtracking case, we pop stack entries (and we use them to reconstruct the corresponding active sequent) until we reach an entry recording unexplored branches of the appropriate kind, if any occurs.

Theorem 4.3. G3X-derivability is decidable in O(n log n)-space, where n is the weight of the end-sequent.

Proof. We have already argued that proof search terminates. As in [1,17,18], Proposition 4.1.1 entails that the stack depth is bounded by O(n) and, by storing the principal formulas of propositional rules as indexes into the end-sequent, each entry requires O(log n) space. Hence we have an O(n log n) space bound for the stack. Moreover, the active sequent contains at most O(n) subformulas of the end-sequent and their numerical superscripts. Each such subformula requires O(log n) space since it can be recorded as an index into the end-sequent; its numerical superscript requires O(log n) too, since there are at most O(n) superscripts. Hence the active sequent also requires O(n log n) space. ⊣
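The space bound rests on running the search depth-first with an explicit stack instead of materialising the proof tree. The following sketch is our own drastically simplified encoding, not the calculi G3X: it covers only the invertible propositional rules for ∧ and ∨, so there is no OR-branching and no superscript bookkeeping, but it shows the idea of keeping only the active sequent plus a stack of pending premisses.

```python
# Minimal G3-style prover for the {∧, ∨} fragment, run depth-first with
# an explicit stack of pending premisses instead of a full proof tree.
# Formulas: ('atom', 'p'), ('bot',), ('and', A, B), ('or', A, B);
# a sequent is a pair (antecedent, succedent) of frozensets.

def premisses(seq):
    """None if `seq` is initial (the branch closes); otherwise the
    premisses of the first applicable rule ([] if the leaf is stuck)."""
    ante, succ = seq
    if ante & succ or ('bot',) in ante:      # initial sequent / L⊥
        return None
    for f in ante:
        if f[0] == 'and':                    # L∧: one premiss
            return [((ante - {f}) | {f[1], f[2]}, succ)]
        if f[0] == 'or':                     # L∨: two premisses
            return [((ante - {f}) | {f[1]}, succ),
                    ((ante - {f}) | {f[2]}, succ)]
    for f in succ:
        if f[0] == 'or':                     # R∨: one premiss
            return [(ante, (succ - {f}) | {f[1], f[2]})]
        if f[0] == 'and':                    # R∧: two premisses
            return [(ante, (succ - {f}) | {f[1]}),
                    (ante, (succ - {f}) | {f[2]})]
    return []                                # atomic, not initial: stuck

def provable(seq):
    """Depth-first search; each stack entry holds the premisses of one
    rule instance whose subtrees have not been closed yet."""
    stack = [[seq]]
    while stack:
        if not stack[-1]:          # all siblings at this level closed
            stack.pop()            # this rule instance is done,
            if stack:
                stack[-1].pop()    # so its conclusion is closed too
            continue
        active = stack[-1][-1]     # the single active sequent
        prem = premisses(active)
        if prem is None:           # initial sequent: branch closes
            stack[-1].pop()
        elif not prem:             # stuck leaf: refutes the end-sequent
            return False
        else:
            stack.append(prem)     # descend into the premisses
    return True

p, q = ('atom', 'p'), ('atom', 'q')
print(provable((frozenset([('and', p, q)]), frozenset([('and', q, p)]))))  # True
```

Since every rule here is invertible, a single failed leaf refutes the end-sequent; the OR-branching and backtracking machinery of Definition 4.2 becomes necessary only once the non-invertible X-instances are added.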
Equivalence with the axiomatic systems
It is now time to show that the sequent calculi introduced are equivalent to the non-normal logics of Section 2. We write G3X ⊢ Γ =⇒ ∆ if the sequent Γ =⇒ ∆ is derivable in G3X, and we say that A is derivable in G3X whenever G3X ⊢ =⇒ A. We begin by proving the following Lemma 4.4. All the axioms of the axiomatic system X are derivable in G3X.
Proof.
A straightforward application of the rules of the appropriate sequent calculus, possibly using Proposition 3.1. As an example, we show that the deontic axiom D ⊥ is derivable by means of rule L-D ⊥ and that axiom C is derivable by means of LR-C.
⊥ =⇒            L⊥
□⊥ =⇒           L-D⊥
=⇒ ¬□⊥          R¬

A, B =⇒ A  (3.1)    A, B =⇒ B  (3.1)
A, B =⇒ A ∧ B       R∧

A, B =⇒ A  (3.1)
A ∧ B =⇒ A          L∧

A, B =⇒ B  (3.1)
A ∧ B =⇒ B          L∧

□A, □B =⇒ □(A ∧ B)            LR-C
□A ∧ □B =⇒ □(A ∧ B)           L∧
=⇒ □A ∧ □B ⊃ □(A ∧ B)         R⊃    ⊣
Next we prove the equivalence of the sequent calculi for non-normal logics with the corresponding axiomatic systems, in the sense that a sequent Γ =⇒ ∆ is derivable in G3X if and only if its characteristic formula ⋀Γ ⊃ ⋁∆ is derivable in X (where the empty antecedent stands for ⊤ and the empty succedent for ⊥). As a consequence each calculus is sound and complete with respect to the appropriate class of neighbourhood models (see Section 2.2).
Theorem 4.5. Derivability in the sequent system G3X and in the axiomatic system X are equivalent, i.e.,
G3X ⊢ Γ =⇒ ∆  iff  X ⊢ ⋀Γ ⊃ ⋁∆.
Proof. To prove the right-to-left implication, we argue by induction on the height of the axiomatic derivation in X. The base case is covered by Lemma 4.4. For the inductive steps, the case of MP follows by the admissibility of Cut and the invertibility of rule R⊃. If the last step is by RE, then Γ = ∅ and ∆ is □C ↔ □D. We know that (in X) we have derived □C ↔ □D from C ↔ D. Remember that C ↔ D is defined as (C ⊃ D) ∧ (D ⊃ C). Thus we assume, by inductive hypothesis (IH), that G3ED ⊢ =⇒ (C ⊃ D) ∧ (D ⊃ C). From this, by invertibility of R∧ and R⊃ (Lemma 3.4), we obtain that G3ED ⊢ C =⇒ D and G3ED ⊢ D =⇒ C. We can thus proceed as follows
C =⇒ D  (IH + 3.4)    D =⇒ C  (IH + 3.4)
□C =⇒ □D              LR-E
=⇒ □C ⊃ □D            R⊃

D =⇒ C  (IH + 3.4)    C =⇒ D  (IH + 3.4)
□D =⇒ □C              LR-E
=⇒ □D ⊃ □C            R⊃

=⇒ (□C ⊃ □D) ∧ (□D ⊃ □C)      R∧
For the converse implication, we assume G3X ⊢ Γ =⇒ ∆, and show, by induction on the height of the derivation in the sequent calculus, that X ⊢ ⋀Γ ⊃ ⋁∆. If the derivation has height 0, we have an initial sequent, so Γ ∩ ∆ ≠ ∅, or an instance of L⊥, thus ⊥ ∈ Γ. In both cases the claim holds. If the height is n + 1, we consider the last rule applied in the derivation. If it is a propositional one, the proof is straightforward. If it is a modal rule, we argue by cases.
If the last step of a derivation in G3E(ND) is by LR-E, we have derived □C, Γ′ =⇒ ∆′, □D from C =⇒ D and D =⇒ C. By IH and propositional reasoning, ED ⊢ C ↔ D, thus ED ⊢ □C ⊃ □D. By some propositional steps we conclude ED ⊢ (□C ∧ ⋀Γ′) ⊃ (⋁∆′ ∨ □D). The cases of LR-M, LR-R, and LR-K can be treated in a similar manner (thanks, respectively, to the rules RM, RR, and RK from Figure 1).
If we are in G3C(ND), suppose the last step is the following instance of LR-C:
C₁, . . . , Cₖ =⇒ D    D =⇒ C₁    . . .    D =⇒ Cₖ
□C₁, . . . , □Cₖ, Γ′ =⇒ ∆′, □D        LR-C
By IH, we have that C(ND) ⊢ D ⊃ Cᵢ for all i ≤ k, and, by propositional reasoning, we have that C(ND) ⊢ D ⊃ C₁ ∧ · · · ∧ Cₖ. We also know, by IH, that C(ND) ⊢ C₁ ∧ · · · ∧ Cₖ ⊃ D. By applying RE to these two theorems we get that
C(ND) ⊢ □(C₁ ∧ · · · ∧ Cₖ) ⊃ □D    (1)
By using axiom C and propositional reasoning, we know that
C(ND) ⊢ □C₁ ∧ · · · ∧ □Cₖ ⊃ □(C₁ ∧ · · · ∧ Cₖ)    (2)
By applying transitivity to (2) and (1) and some propositional steps, we conclude that
C(ND) ⊢ (□C₁ ∧ · · · ∧ □Cₖ ∧ ⋀Γ′) ⊃ (⋁∆′ ∨ □D)
Let us now consider rule L-D⊥. Suppose we are in G3XD⊥ and we have derived □C, Γ′ =⇒ ∆ from C =⇒. By IH, XD⊥ ⊢ C ⊃ ⊥, and we know that XD⊥ ⊢ ⊥ ⊃ C. Thus by RE (or RM), we get XD⊥ ⊢ □C ⊃ □⊥. By contraposing it and then applying MP with the axiom D⊥, we get that XD⊥ ⊢ ¬□C. By some easy propositional steps we conclude XD⊥ ⊢ (□C ∧ ⋀Γ′) ⊃ ⋁∆. The case of R-N is similar.
Let us consider rule L-D♦E. Suppose we are in G3ED♦ and we have derived □A, □B, Γ′ =⇒ ∆ from the premisses A, B =⇒ and =⇒ A, B. By induction we get that ED♦ ⊢ A ∧ B ⊃ ⊥ and ED♦ ⊢ A ∨ B. Hence, ED♦ ⊢ B ⊃ ¬A and ED♦ ⊢ ¬A ⊃ B. By applying RE we get that
ED♦ ⊢ □B ⊃ □¬A
which, thanks to axiom D ♦ , entails that
ED♦ ⊢ □B ⊃ ¬□A
By some propositional steps we conclude
ED♦ ⊢ (□A ∧ □B ∧ ⋀Γ′) ⊃ ⋁∆
Notice that, thanks to Proposition 4.1.4 and Theorem 3.6, we can assume that instances of rule L-D ♦ always have two principal formulas. Otherwise the calculus would prove the empty sequent (we will also assume that neither Π nor Σ is empty in instances of rule L-D ♦ C ).
The case of L-D ♦ M is analogous to that of L-D ⊥ for instances with one principal formula and to that of L-D ♦ E for instances with two principal formulas.
Let us consider rule L-D ♦ C . Suppose we have a G3CD ♦ -derivation whose last step is:
Π, Σ =⇒    {=⇒ A, B | A ∈ Π and B ∈ Σ}
□Π, □Σ, Γ′ =⇒ ∆′        L-D♦C
By induction and by some easy propositional steps we know that ECD♦ ⊢ ⋀Π ↔ ¬⋀Σ. By rule RE we derive ECD♦ ⊢ □⋀Π ⊃ □¬⋀Σ, which, thanks to axiom D♦, entails that ECD♦ ⊢ □⋀Π ⊃ ¬□⋀Σ. By transitivity with two (generalized) instances of axiom C we obtain ECD♦ ⊢ ⋀□Π ⊃ ¬⋀□Σ. By some easy propositional steps we conclude that
ECD♦ ⊢ (⋀□Π ∧ ⋀□Σ ∧ ⋀Γ′) ⊃ ⋁∆′.
The admissibility of L-D⋆ in EC(N)D, RD, and KD is similar to that of LR-C: in (1) we replace D with ⊥, and then we use theorem D⊥ to transform □⊥ into ⊥. ⊣

By combining this and Theorem 2.6 we have the following result.
Corollary 4.6. The calculus G3X is sound and complete with respect to the class of all neighbourhood models for X.
Forrester's Paradox
As an application of our decision procedure, we use it to analyse two formal reconstructions of Forrester's paradox [9], which is one of the many paradoxes that endanger the normal deontic logic KD [16]. Forrester's informal argument goes as follows:
Consider the following three statements:
1. Jones murders Smith.
2. Jones ought not murder Smith.
3. If Jones murders Smith, then Jones ought to murder Smith gently.
Intuitively, these sentences appear to be consistent. However 1 and 3 together imply that
4. Jones ought to murder Smith gently.
Also we accept the following conditional:
5. If Jones murders Smith gently, then Jones murders Smith.
Of course, this is not a logical validity but, rather, a fact about the world we live in. Now, if we assume that the monotonicity rule is valid, then statement 5 entails
6. If Jones ought to murder Smith gently, then Jones ought to murder Smith.
And so, statements 4 and 6 together imply
7. Jones ought to murder Smith.
But [given the validity of D♦] this contradicts statement 2. The above argument suggests that classical deontic logic should not validate the monotonicity rule [RM]. [33, p. 16]

We show that Forrester's paradox is not a valid argument in deontic logics by presenting, in Figure 7, a failed G3KD-proof search of the sequent that expresses it:
g ⊃ m, m ⊃ □g, □¬m, m =⇒    (3)
where m stands for 'Jones murders Smith' and g for 'Jones murders Smith gently' [16, pp. 87–91]. Note that, by Theorem 4.5, if Forrester's paradox is not G3KD-derivable, then it is not valid in any of the weaker deontic logics we have considered.
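The failed proof search corresponds semantically to a countermodel. As an independent sanity check (the encoding below is our own: a two-world serial Kripke model, a special case of the neighbourhood models of Section 2, with the deontic 'ought' read as a box), one can verify that the premisses of (3) are jointly satisfiable, so no sound calculus for KD can derive the sequent:

```python
# A two-world serial Kripke model witnessing that the premisses of
# sequent (3) are jointly satisfiable at world 0.
W = {0, 1}
R = {0: {1}, 1: {1}}            # serial accessibility (validates D)
V = {'m': {0}, 'g': {1}}        # m: Jones murders Smith; g: ...gently

def holds(w, f):
    """Truth at world w; formulas are nested tuples."""
    op = f[0]
    if op == 'var':  return w in V[f[1]]
    if op == 'not':  return not holds(w, f[1])
    if op == 'imp':  return (not holds(w, f[1])) or holds(w, f[2])
    if op == 'box':  return all(holds(v, f[1]) for v in R[w])
    raise ValueError(op)

m, g = ('var', 'm'), ('var', 'g')
premisses_3 = [
    ('imp', g, m),               # g ⊃ m  (local assumption, as in (3))
    ('imp', m, ('box', g)),      # m ⊃ □g
    ('box', ('not', m)),         # □¬m
    m,                           # m
]
print(all(holds(0, f) for f in premisses_3))  # True
```

All four premisses hold at world 0, while the model is serial, matching the open branches of the failed proof search in Figure 7.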
To make our failed proof search into a derivation of Forrester's paradox, we would have to add (to G3MD♦ or stronger calculi) a non-logical axiom =⇒ g ⊃ m, and to have cut as a primitive and ineliminable rule of inference. A Hilbert-style axiomatization of Forrester's argument (e.g., [16, p. 88]) hides this cut with a non-logical axiom in the step where □g ⊃ □m is derived from g ⊃ m by one of RM, RR or RK. This step (i.e., the step from 5 to 6 in the informal argument above) is not acceptable because none of these rules allows one to infer its conclusion when the premiss is an assumption and not a theorem.¹ We have here an instance of the same problem that has led many authors to conclude that the deduction theorem fails in modal logics, a conclusion that has been shown to be wrong in [27].
An alternative formulation of Forrester's argument is given in [38], where the sentence 'Jones murders Smith gently' is expressed by the complex formula g ∧ m instead of by the atomic g. In this case Forrester's argument becomes valid whenever the monotonicity rule is valid, as is shown in Figure 8. Nevertheless, whereas it was an essential ingredient of the informal version, under this formalization premiss 5 becomes dispensable. Hence it is disputable that this is an acceptable way of formalising Forrester's argument. This is not the place to discuss at length the correctness of the formal representations of Forrester's argument and their implications for deontic logics. We just wanted to illustrate how the calculi G3XD can be used to analyse formal representations of the deontic paradoxes. If Forrester's argument is formalised as in [16] then it does not force us to adopt a deontic logic weaker than KD. If, instead, it is formalised as in [38] then it forces the adoption of a logic where RM fails, but the formal derivation differs substantially from Forrester's informal argument [9].

¹ A reviewer pointed out that Forrester's (5) should be taken as a global assumption instead of as a local one, as we did in (3). Semantically this corresponds to taking g ⊃ m as being true in all worlds of the model instead of in the actual world only. Proof-theoretically we don't know how to handle global assumptions: see [23] for a calculus with global assumptions for normal modalities. We only observe that adding A as a global assumption is weaker than having =⇒ A as a non-logical axiom. Nonetheless, in logics where axiom N holds, the global assumption A might be approximated by having A, □A in the antecedent of the end-sequent. If in (3) we replace g ⊃ m with g ⊃ m, □(g ⊃ m) then Forrester's paradox becomes derivable.

Figure 7. Failed G3KD-proof search of Forrester's paradox [16]

Figure 8. Successful G3MD-proof search for the alternative version of Forrester's paradox [38]
Craig's Interpolation Theorem
In this section we use Maehara's [24,25] technique to prove Craig's interpolation theorem for each modal or deontic logic X that has C as a theorem only if it also has M (Example 5.5 illustrates the problem with the non-standard rule LR-C).
Theorem 5.1 (Craig's interpolation theorem). Let A ⊃ B be a theorem of a logic X that differs from EC(N) and its deontic extensions EC(N)D and ECD ♦ , then there is a formula I, which contains propositional variables common to A and B only, such that both A ⊃ I and I ⊃ B are theorems of X.
In order to prove this theorem, we use the following notions.
Definition 5.2. A partition of a sequent Γ =⇒ ∆ is any pair of sequents
Γ₁ =⇒ ∆₁ || Γ₂ =⇒ ∆₂ such that Γ₁, Γ₂ = Γ and ∆₁, ∆₂ = ∆. A G3X-interpolant of a partition Γ₁ =⇒ ∆₁ || Γ₂ =⇒ ∆₂ is any formula I such that:
1. all propositional variables in I are in (Γ₁ ∪ ∆₁) ∩ (Γ₂ ∪ ∆₂);
2. G3X ⊢ Γ₁ =⇒ ∆₁, I and G3X ⊢ I, Γ₂ =⇒ ∆₂.
If I is a G3X-interpolant of the partition Γ₁ =⇒ ∆₁ || Γ₂ =⇒ ∆₂, we write

(G3X ⊢) Γ₁ =⇒ ∆₁  I || Γ₂ =⇒ ∆₂
where one or more of the multisets Γ₁, Γ₂, ∆₁, ∆₂ may be empty. When the set of propositional variables in (Γ₁ ∪ ∆₁) ∩ (Γ₂ ∪ ∆₂) is empty, the X-interpolant has to be constructed from ⊥ (and ⊤). The proof of Theorem 5.1 is by the following lemma, originally due to Maehara [24,25] for (an extension of) classical logic.

Lemma 5.3 (Maehara's lemma). If G3X ⊢ Γ =⇒ ∆ and LR-C (and L-D♦C) is not a rule of G3X (see Tables 1 and 2), every partition of Γ =⇒ ∆ has a G3X-interpolant.
Proof. The proof is by induction on the height of the derivation D of Γ =⇒ ∆. We have to show that each partition of an initial sequent (or of a conclusion of a 0-premiss rule) has a G3X-interpolant and that for each rule of G3X (but LR-C and L-D ♦ C ) we have an effective procedure that outputs a G3X-interpolant for any partition of its conclusion from the interpolant(s) of suitable partition(s) of its premiss(es). The proof is modular and hence we can consider the modal rules without having to reconsider them in the different calculi.
For the base case of initial sequents with p principal formula, we have four possible partitions, whose interpolants are:
(1) p, Γ′₁ =⇒ ∆′₁, p  ⊥ || Γ₂ =⇒ ∆₂
(2) p, Γ′₁ =⇒ ∆₁  p || Γ₂ =⇒ ∆′₂, p
(3) Γ₁ =⇒ ∆′₁, p  ¬p || p, Γ′₂ =⇒ ∆₂
(4) Γ₁ =⇒ ∆₁  ⊤ || p, Γ′₂ =⇒ ∆′₂, p
and for the base case of rule L⊥, we have:
(1) ⊥, Γ′₁ =⇒ ∆₁  ⊥ || Γ₂ =⇒ ∆₂
(2) Γ₁ =⇒ ∆₁  ⊤ || ⊥, Γ′₂ =⇒ ∆₂.
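Read as a decision table, the base cases return ⊥, p, ¬p, or ⊤ depending on which component of the partition contains each occurrence of the principal atom p. A sketch (our own tuple encoding of formulas, not part of the calculi) of cases (1)–(4) for initial sequents:

```python
# Interpolants for the four partitions of an initial sequent with
# principal atom p, following cases (1)-(4) above.  The two flags say
# whether the antecedent / succedent occurrence of p falls into the
# first component of the partition.
def initial_interpolant(p, p_in_ante1, p_in_succ1):
    if p_in_ante1 and p_in_succ1:        # case (1): both on the left
        return ('bot',)
    if p_in_ante1 and not p_in_succ1:    # case (2): p crosses the bar
        return p
    if not p_in_ante1 and p_in_succ1:    # case (3): p crosses back
        return ('not', p)
    return ('top',)                      # case (4): both on the right

p = ('var', 'p')
print(initial_interpolant(p, True, False))  # ('var', 'p')
```

The language condition is immediate: in cases (1) and (4) no variable is shared, and the interpolant is built from ⊥ or ⊤; in cases (2) and (3) the shared variable is p itself.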
For the proof of (some of) the propositional cases the reader is referred to [39, pp. 117-118]. Thus, we have only to prove that all the modal and deontic rules of Figure 6 (modulo LR-C and L-D ♦ C ) behave as desired.
• LR-E) If the last rule applied in D is
A =⇒ B    B =⇒ A
□A, Γ =⇒ ∆, □B        LR-E
we have four kinds of partitions of the conclusion:
(1) □A, Γ′₁ =⇒ ∆′₁, □B || Γ₂ =⇒ ∆₂
(2) □A, Γ′₁ =⇒ ∆₁ || Γ₂ =⇒ ∆′₂, □B
(3) Γ₁ =⇒ ∆′₁, □B || □A, Γ′₂ =⇒ ∆₂
(4) Γ₁ =⇒ ∆₁ || □A, Γ′₂ =⇒ ∆′₂, □B
In each case we have to choose partitions of the premisses that permit us to construct a G3E(ND)-interpolant for the partition under consideration.
In case (1) we have
A =⇒ B  C || =⇒        B =⇒ A  D || =⇒
□A, Γ′₁ =⇒ ∆′₁, □B  C || Γ₂ =⇒ ∆₂        LR-E
This can be shown as follows. By IH there is some C (D) that is a G3E(ND)-interpolant of the given partition of the left (right) premiss. Thus both C and D contain only propositional variables common to A and B; and (i) ⊢ A =⇒ B, C; (ii) ⊢ C =⇒; (iii) ⊢ B =⇒ A, D; and (iv) ⊢ D =⇒. Since the common language of the partitions of the premisses is empty, no propositional variable can occur in C nor in D. Here is a proof that C is a G3E(ND)-interpolant of the partition under consideration (the sequents A =⇒ B and B =⇒ A are derivable since they are the premisses of the instance of LR-E we are considering):
A =⇒ B    B =⇒ A
□A, Γ′₁ =⇒ ∆′₁, □B, C        LR-E

C =⇒  (ii)
C, Γ₂ =⇒ ∆₂        LWs + RWs
In case (2) we have
A =⇒  C || =⇒ B        B =⇒  D || =⇒ A
□A, Γ′₁ =⇒ ∆₁  □C || Γ₂ =⇒ ∆′₂, □B        LR-E
By IH it holds that some C and D are G3E(ND)-interpolants of the given partitions of the premisses. Thus,
(i) ⊢ A =⇒ C (ii) ⊢ C =⇒ B (iii) ⊢ B =⇒ D (iv) ⊢ D =⇒ A and (v) all propositional variables in C ∪ D are in A ∩ B.
Here is a proof that □C is a G3E(ND)-interpolant of the given partition (the language condition is satisfied thanks to (v)):
1. A =⇒ C                        (i)
2. C =⇒ B                        (ii)
3. B =⇒ D                        (iii)
4. C =⇒ D                        Cut, 2, 3
5. D =⇒ A                        (iv)
6. C =⇒ A                        Cut, 4, 5
7. □A, Γ′₁ =⇒ ∆₁, □C             LR-E, 1, 6

1. C =⇒ B                        (ii)
2. B =⇒ D                        (iii)
3. D =⇒ A                        (iv)
4. A =⇒ C                        (i)
5. D =⇒ C                        Cut, 3, 4
6. B =⇒ C                        Cut, 2, 5
7. □C, Γ₂ =⇒ ∆′₂, □B             LR-E, 1, 6

In case (3) we have

=⇒ B  C || A =⇒        =⇒ A  D || B =⇒
Γ₁ =⇒ ∆′₁, □B  ♦C || □A, Γ′₂ =⇒ ∆₂        LR-E
By IH, there are C and D that are G3E(ND)-interpolants of the partitions of the premisses. Thus (i) ⊢ =⇒ B, C; (ii) ⊢ C, A =⇒; (iii) ⊢ =⇒ A, D; and (iv) ⊢ D, B =⇒. We prove that ♦C is a G3E(ND)-interpolant of the (given partition of the) conclusion as follows:
1. =⇒ B, C                       (i)
2. ¬C =⇒ B                       L¬, 1
3. =⇒ D, A                       (iii)
4. A, C =⇒                       (ii)
5. C =⇒ D                        Cut, 3, 4
6. D, B =⇒                       (iv)
7. B, C =⇒                       Cut, 5, 6
8. B =⇒ ¬C                       R¬, 7
9. □¬C, Γ₁ =⇒ ∆′₁, □B            LR-E, 2, 8
10. Γ₁ =⇒ ∆′₁, □B, ¬□¬C          R¬, 9

1. C, A =⇒                       (ii)
2. A =⇒ ¬C                       R¬, 1
3. =⇒ A, D                       (iii)
4. =⇒ C, B                       (i)
5. B, D =⇒                       (iv)
6. D =⇒ C                        Cut, 4, 5
7. =⇒ A, C                       Cut, 3, 6
8. ¬C =⇒ A                       L¬, 7
9. □A, Γ′₂ =⇒ ∆₂, □¬C            LR-E, 2, 8
10. ¬□¬C, □A, Γ′₂ =⇒ ∆₂          L¬, 9

In case (4) we have

=⇒  C || A =⇒ B        =⇒  D || B =⇒ A
Γ₁ =⇒ ∆₁  C || □A, Γ′₂ =⇒ ∆′₂, □B        LR-E

By IH, there are G3E(ND)-interpolants C and D of the partitions of the premisses. Thus (i) ⊢ =⇒ C; (ii) ⊢ C, A =⇒ B; (iii) ⊢ =⇒ D; and (iv) ⊢ D, B =⇒ A. Since the common language of the partitions of the premisses is empty, no propositional variable occurs in C nor in D. We show that C is a G3E(ND)-interpolant of the partition under consideration as follows (as in case (1), A =⇒ B and B =⇒ A, being the premisses of the instance of LR-E under consideration, are derivable):

=⇒ C  (i)
Γ₁ =⇒ ∆₁, C        LWs + RWs

A =⇒ B    B =⇒ A
C, □A, Γ′₂ =⇒ ∆′₂, □B        LR-E

• LR-M) If the last rule applied in D is

A =⇒ B
□A, Γ =⇒ ∆, □B        LR-M
we present directly the G3M(ND)-interpolants of the possible partitions of the conclusion (and of the appropriate partition of the premiss). The proofs are parallel to those for LR-E.
A =⇒ B  C || =⇒
□A, Γ′₁ =⇒ ∆′₁, □B  C || Γ₂ =⇒ ∆₂        LR-M

A =⇒  C || =⇒ B
□A, Γ′₁ =⇒ ∆₁  □C || Γ₂ =⇒ ∆′₂, □B        LR-M

=⇒ B  C || A =⇒
Γ₁ =⇒ ∆′₁, □B  ♦C || □A, Γ′₂ =⇒ ∆₂        LR-M

=⇒  C || A =⇒ B
Γ₁ =⇒ ∆₁  C || □A, Γ′₂ =⇒ ∆′₂, □B        LR-M

• LR-R) If the last rule applied in D is

A, Π =⇒ B
□A, □Π, Γ =⇒ ∆, □B        LR-R
we have four kinds of partitions of the conclusion:
(1) □A, □Π₁, Γ′₁ =⇒ ∆′₁, □B || □Π₂, Γ′₂ =⇒ ∆₂
(2) □A, □Π₁, Γ′₁ =⇒ ∆₁ || □Π₂, Γ′₂ =⇒ ∆′₂, □B
(3) □Π₁, Γ′₁ =⇒ ∆′₁, □B || □A, □Π₂, Γ′₂ =⇒ ∆₂
(4) □Π₁, Γ′₁ =⇒ ∆₁ || □A, □Π₂, Γ′₂ =⇒ ∆′₂, □B
In case (1) we have two subcases according to whether Π 2 is empty or not. If it is not empty we have
A, Π₁ =⇒ B  C || Π₂ =⇒
□A, □Π₁, Γ′₁ =⇒ ∆′₁, □B  ♦C || □Π₂, Γ′₂ =⇒ ∆₂        LR-R
By IH, there is a G3R(D ⋆ )-interpolant C of the chosen partition of the premiss. Thus (i) ⊢ A, Π 1 =⇒ B, C and (ii) ⊢ C, Π 2 =⇒, and we have the following derivations
1. A, Π₁ =⇒ B, C                   (i)
2. ¬C, A, Π₁ =⇒ B                  L¬, 1
3. □¬C, □A, □Π₁, Γ′₁ =⇒ ∆′₁, □B    LR-R, 2
4. □A, □Π₁, Γ′₁ =⇒ ∆′₁, □B, ¬□¬C   R¬, 3

1. C, Π₂ =⇒                        (ii)
2. Π₂ =⇒ ¬C                        R¬, 1
3. □Π₂, Γ′₂ =⇒ ∆₂, □¬C             LR-R, 2
4. ¬□¬C, □Π₂, Γ′₂ =⇒ ∆₂            L¬, 3
When Π₂ (and hence □Π₂) is empty we cannot proceed as above, since we cannot apply LR-R in the right derivation. But in this case, reasoning as in case (1) for rule LR-E, we can show that
A, Π₁ =⇒ B  C || =⇒
□A, □Π₁, Γ′₁ =⇒ ∆′₁, □B  C || Γ′₂ =⇒ ∆₂        LR-R
Cases (2) and (3) are similar to the corresponding cases for rule LR-E:
A, Π₁ =⇒  C || Π₂ =⇒ B
□A, □Π₁, Γ′₁ =⇒ ∆₁  □C || □Π₂, Γ′₂ =⇒ ∆′₂, □B        LR-R

Π₁ =⇒ B  C || A, Π₂ =⇒
□Π₁, Γ′₁ =⇒ ∆′₁, □B  ♦C || □A, □Π₂, Γ′₂ =⇒ ∆₂        LR-R
In case (4) we have two subcases according to whether Π 1 is empty or not:
=⇒  C || A, Π₂ =⇒ B
Γ′₁ =⇒ ∆₁  C || □A, □Π₂, Γ′₂ =⇒ ∆′₂, □B        LR-R

Π₁ =⇒  C || A, Π₂ =⇒ B
□Π₁, Γ′₁ =⇒ ∆₁  □C || □A, □Π₂, Γ′₂ =⇒ ∆′₂, □B        LR-R
The proofs are similar to those for case (1).
• LR-K) If the last rule applied in D is
Π =⇒ B
□Π, Γ =⇒ ∆, □B        LR-K
we present directly the G3K(D)-interpolants of the two possible partitions of the conclusion:
Π₁ =⇒  C || Π₂ =⇒ B
□Π₁, Γ′₁ =⇒ ∆₁  □C || □Π₂, Γ′₂ =⇒ ∆′₂, □B        LR-K

Π₁ =⇒ B  C || Π₂ =⇒
□Π₁, Γ′₁ =⇒ ∆′₁, □B  ♦C || □Π₂, Γ′₂ =⇒ ∆₂        LR-K
The proofs are, respectively, parallel to those for cases (2) and (3) of LR-E (when Π = ∅, we can proceed as for rule R-N and use C instead of □C and of ♦C, respectively).
• L-D⊥) If the last rule applied in D is

A =⇒
□A, Γ =⇒ ∆        L-D⊥
we have two kinds of partitions of the conclusion, whose G3XD ⊥ -interpolants are, respectively:
A =⇒  C || =⇒
□A, Γ′₁ =⇒ ∆₁  C || Γ₂ =⇒ ∆₂        L-D⊥

=⇒  C || A =⇒
Γ₁ =⇒ ∆₁  C || □A, Γ′₂ =⇒ ∆₂        L-D⊥

• L-D♦) If the last rule applied in D is

A, B =⇒    =⇒ A, B
□A, □B, Γ =⇒ ∆        L-D♦E

or

A, B =⇒
□A, □B, Γ =⇒ ∆        L-D♦M
we have three kinds of partitions of the conclusion:
(1) □A, □B, Γ′₁ =⇒ ∆₁ || Γ₂ =⇒ ∆₂
(2) Γ₁ =⇒ ∆₁ || □A, □B, Γ′₂ =⇒ ∆₂
(3) □A, Γ′₁ =⇒ ∆₁ || □B, Γ′₂ =⇒ ∆₂
In cases (1) and (2) we have, respectively (omitting the right premiss for L-D ♦ M ):
A, B =⇒  C || =⇒        =⇒ A, B  D || =⇒
□A, □B, Γ′₁ =⇒ ∆₁  C || Γ₂ =⇒ ∆₂        L-D♦

=⇒  C || A, B =⇒        =⇒  D || =⇒ A, B
Γ₁ =⇒ ∆₁  C || □A, □B, Γ′₂ =⇒ ∆₂        L-D♦
Finally, in case (3) we have:
A =⇒  C || B =⇒        =⇒ A  D || =⇒ B
□A, Γ′₁ =⇒ ∆₁  □C || □B, Γ′₂ =⇒ ∆₂        L-D♦
By IH, we can assume that C is an interpolant of the partition of the left premiss and D of the right one. We have the following G3YD ♦derivations (Y ∈ { E,M}):
1. A =⇒ C                        IH
2. =⇒ A, D                       IH
3. D =⇒ B                        IH
4. B, C =⇒                       IH
5. D, C =⇒                       Cut, 3, 4
6. C =⇒ A                        Cut, 2, 5
7. □A, Γ′₁ =⇒ ∆₁, □C             LR-E, 1, 6

1. C, B =⇒                       IH
2. =⇒ D, A                       IH
3. A =⇒ C                        IH
4. =⇒ C, D                       Cut, 2, 3
5. D =⇒ B                        IH
6. =⇒ C, B                       Cut, 4, 5
7. □C, □B, Γ′₂ =⇒ ∆₂             L-D♦E, 1, 6
It is also immediately evident that □C satisfies the language condition for being a G3YD♦-interpolant of the conclusion since, by IH, we know that each propositional variable occurring in C occurs in A ∩ B.
• L-D ⋆ ) If the last rule applied in D is
Π =⇒
□Π, Γ =⇒ ∆        L-D⋆
we have the following kind of partition:
□Π₁, Γ′₁ =⇒ ∆₁ || □Π₂, Γ′₂ =⇒ ∆₂.
If Π 1 is not empty we have:
Π₁ =⇒  C || Π₂ =⇒
□Π₁, Γ′₁ =⇒ ∆₁  □C || □Π₂, Γ′₂ =⇒ ∆₂        L-D⋆
By IH, there is some C that is an interpolant of the premiss. It holds that ⊢ Π₁ =⇒ C and ⊢ C, Π₂ =⇒. We show that □C is a G3YD-interpolant (Y ∈ {R, K}) of the partition of the conclusion as follows:
Π₁ =⇒ C  (IH)
□Π₁, Γ′₁ =⇒ ∆₁, □C        LR-Y

C, Π₂ =⇒  (IH)
□C, □Π₂, Γ′₂ =⇒ ∆₂        L-D⋆
If, instead, Π 1 is empty then Π 2 cannot be empty and we have:
=⇒  C || Π₂ =⇒
Γ₁ =⇒ ∆₁  ♦C || □Π₂, Γ′₂ =⇒ ∆₂        L-D⋆
By IH there is a formula C, containing no propositional variable, such that ⊢ =⇒ C and ⊢ C, Π₂ =⇒. Thus, G3YD ⊢ Γ₁ =⇒ ∆₁, ♦C (L-D⋆ makes =⇒ ♦C derivable from =⇒ C) and G3YD ⊢ ♦C, □Π₂, Γ′₂ =⇒ ∆₂ (LR-Y makes ♦C, □Π₂ =⇒ derivable from C, Π₂ =⇒ when Π₂ ≠ ∅).

• R-N) If the last rule applied in D is
=⇒ A
Γ =⇒ ∆, □A        R-N
The interpolants for the two possible partitions are:

=⇒ A  ⊥ || =⇒
Γ₁ =⇒ ∆′₁, □A  ⊥ || Γ₂ =⇒ ∆₂        R-N

=⇒  ⊤ || =⇒ A
Γ₁ =⇒ ∆₁  ⊤ || Γ₂ =⇒ ∆′₂, □A        R-N    ⊣

Observe that the proof is constructive in that Lemma 5.3 gives a procedure to extract an interpolant for A ⊃ B from a given derivation of A =⇒ B. Furthermore, the proof is purely proof-theoretic in that it makes no use of model-theoretic notions.
Craig's theorem is often (e.g., in [25] for an extension of classical logic) stated in the following stronger version:
If A ⊃ B is a theorem of logic X, then:
1. if A and B share some propositional variable, there is a formula I, which contains propositional variables common to A and B only, such that both A ⊃ I and I ⊃ B are theorems of X;
2. else, either ¬A or B is a theorem of X.
But the second condition doesn't hold for modal and deontic logics where at least one of N := □⊤ and D⊥ := ♦⊤ is not a theorem. To illustrate, it holds that □⊤ ⊃ □⊤ is a theorem of E and its interpolant is □⊤ (see Figure 9), but neither ¬□⊤ nor □⊤ is a theorem of E. Analogously, we have that □⊥ ⊃ □⊥ is a theorem of E and its interpolant is □⊥ (see Figure 9), but neither ¬□⊥ nor □⊥ is a theorem of E. These counterexamples work in all extensions of E that don't have both N and D⊥ as theorems: to prove the stronger version of Craig's theorem we need N and D⊥, respectively.
Among the deontic logics considered here, the stronger version of Craig's theorem holds only for END⊥(♦), MND⊥(♦), and KD, as shown by the following

Corollary 5.4. Let XD be one of END⊥(♦), MND⊥(♦), and KD. If A ⊃ B is a theorem of XD and A and B share no propositional variable, then either ¬A or B is a theorem of XD.
Proof. Suppose that XD ⊢ A ⊃ B and that A and B share no propositional variable; then the interpolant I is constructed from ⊥ and ⊤ by means of classical and deontic operators. Whenever D⊥ and N are theorems of XD, we have that ♦⊤ ↔ ⊤, □⊤ ↔ ⊤, ♦⊥ ↔ ⊥, and □⊥ ↔ ⊥ are theorems of XD. Hence, the interpolant I is (equivalent to) either ⊥ or ⊤. In the first case XD ⊢ ¬A and in the second one XD ⊢ B. ⊣
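The collapse used in this proof can be made mechanical: under N and D⊥ every modality applied to ⊤ or ⊥ is eliminable, so any variable-free interpolant reduces to ⊤ or ⊥. A sketch (our own tuple encoding, with ♦ treated as a primitive dual of □; other connectives would be handled analogously):

```python
# Normalise a variable-free modal formula to ('top',) or ('bot',),
# using the equivalences □⊤↔⊤, □⊥↔⊥, ♦⊤↔⊤, ♦⊥↔⊥ that hold once
# both N and D⊥ are theorems.
def collapse(f):
    op = f[0]
    if op in ('top', 'bot'):
        return f
    if op in ('box', 'dia'):           # modalities vanish on {⊤, ⊥}
        return collapse(f[1])
    if op == 'not':
        return ('bot',) if collapse(f[1]) == ('top',) else ('top',)
    if op == 'and':
        a, b = collapse(f[1]), collapse(f[2])
        return ('top',) if a == b == ('top',) else ('bot',)
    raise ValueError(op)

print(collapse(('box', ('dia', ('top',)))))  # ('top',)
```

In logics lacking N or D⊥ this collapse is unsound, which is exactly why the counterexamples □⊤ ⊃ □⊤ and □⊥ ⊃ □⊥ block the stronger version of Craig's theorem there.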
As noted in [8, p. 298], Corollary 5.4 is a Halldén-completeness result. A logic X is Halldén-complete if, for every formulas A and B that share no propositional variable, X ⊢ A ∨ B if and only if X ⊢ A or X ⊢ B. All the modal and deontic logics considered here, being based on classical logic, are such that A ⊃ B is equivalent to ¬A ∨ B. Thus the deontic logics considered in Corollary 5.4 are Halldén-complete, whereas all other non-normal logics for which we have proved interpolation are Halldénincomplete since they don't satisfy Corollary 5.4.
Example 5.5 (Maehara's lemma and rule LR-C). We have not been able to prove Maehara's Lemma 5.3 for rule LR-C because of the cases where the principal formulas of the antecedent are split in the two elements of the partition. In particular, if we have two principal formulas in the antecedent, the problematic partitions are (omitting the weakening contexts):
(1) □A₁ =⇒ || □A₂ =⇒ □B        (2) □A₁ =⇒ □B || □A₂ =⇒
To illustrate, an interpolant of the first partition would be a formula I such that:
(i) ⊢ □A₁ =⇒ I; (ii) ⊢ I, □A₂ =⇒ □B; (iii) p ∈ I only if p ∈ (A₁) ∩ (A₂, B)
But we have not been able to find partitions of the premisses allowing us to find such an I. In more detail, for the first premiss it is natural to consider the partition A₁ =⇒  C || A₂ =⇒ B in order to find an I that satisfies (iii). But, for any combination of the partitions of the other two premisses that is compatible with (iii), we can prove that (ii) is satisfied (by C) but we have not been able to prove that (i) is also satisfied.
Conclusion
We presented cut- and contraction-free sequent calculi for non-normal modal and deontic logics. We have proved that these calculi have good structural properties in that the rules of weakening and contraction are height-preserving admissible and cut is (syntactically) admissible. Moreover, we have shown that these calculi allow for a terminating decision procedure whose complexity is in Pspace. Finally, we have given a constructive proof of Craig's interpolation property for all the logics that do not contain rule LR-C. As far as we know, it is still an open problem whether it is possible to give a constructive proof of interpolation for these logics. Another open question is whether the calculi given here can be used to give a constructive proof of the uniform interpolation property for non-normal logics, as is done in [36] for IL p and in [2] for K and T.
Figure 2. Axioms
Definition 2.3. A neighbourhood model is a triple M := ⟨W, N, P⟩, where W is a non-empty set of possible worlds; N : W −→ 2^(2^W) is a neighbourhood function that associates to each possible world w a set N(w) of subsets of W; and P gives a truth value to each propositional variable at each world.

Figure 3. Lattice of non-normal modal logics
Figure 4. Lattice of non-normal deontic logics

The definition of truth of a formula A at a world w of a neighbourhood model M, M ⊨w A, is the standard one for the classical connectives with the addition of
Figure 6. Modal and deontic rules
else (|Ξ| = 0 and) it stands for some instances of LW and RW . • G3M(ND). Both premisses are by rule LR-M , we transform . . . D 11
• G3MN(D). Left premiss by R-N and right one by LR-M. Similar to the case with left premiss by R-N and right one by LR-E.
• G3M(N)D⊥ and G3M(N)D♦. Left premiss is by LR-M, and right one by L-D⊥ or L-D♦M. Similar to the case with left premiss by LR-E and right by L-D⊥ or L-D♦E, respectively.
• G3MND⊥ and G3MND♦. The cases with left premiss by R-N and right one by a deontic rule are like the analogous ones we have already considered.
• G3C(ND). Both premisses are by rule LR-C. Let us agree to use Λ to denote the non-empty multiset A₁, . . . , Aₙ, and Ξ for the (possibly empty) multiset B₂, . . . , Bₘ. The derivation . . . D₁₁
D 21 Θ
21, Λ =⇒ . . . D ΘiΛj {=⇒ E, D j | E ∈ Θ and D j ∈ Λ} C, B 2 , . . . , B m , Λ, Π ′ =⇒ Σ L-D ♦ C Ξ, B 1 , . . . , B m , Λ, Γ ′ , Π ′ =⇒ ∆, Σ Cut into the following derivation having 1 + (k × n) cuts on formulas of lesser weight . . . D11 Ξ =⇒ C . . . D21 C, B2, . . . , Bm, Λ =⇒ Ξ, B2, . . . , Bm, Λ =⇒ Cut . . . DΘ 1,Λj
if m ≥ 2, S is OR-branching and, if i is a rule with more than one premiss, i is AND-branching. Moreover, we label S 1 1 as active. Else Open. No rule of G3x can be applied to S, then we label S as open and
Figure 9. Construction of an ED-interpolant for □⊤ ⊃ □⊤ and for □⊥ ⊃ □⊥

Proof of Theorem 5.1. Assume that A ⊃ B is a theorem of X. By Theorem 4.5 and Lemma 3.4 we have that G3X ⊢ A =⇒ B. By Lemma 5.3 (taking A as Γ₁ and B as ∆₂, with Γ₂ and ∆₁ empty) and Theorem 4.5, there exists a formula I that is an interpolant of A ⊃ B, i.e., I is such that all propositional variables occurring in I, if any, occur in both A and B and such that A ⊃ I and I ⊃ B are theorems of X. ⊣
Table 2. Deontic calculi ( = rule of the calculus, ⋆ = admissible rule, − = neither)
• Case (1). Same as for G3cp [see 29, Thm. 3.2.3 for the details].
© 2020 by Nicolaus Copernicus University in Toruń
Acknowledgments. Partially supported by the Academy of Finland, research project no. 1308664. Thanks are due to Tiziano Dalmonte, Simone Martini, and two anonymous referees for many helpful suggestions.
Basin, D., S. Matthews and L. Viganò, "A new method for bounding the complexity of modal logics", pages 89-102 in G. Gottlob et al. (eds.), Computational Logic and Proof Theory (KGC 1997), Berlin: Springer, 1997. DOI: 10.1007/3-540-63385-5_35

Bilková, M., "Uniform interpolation and propositional quantifiers in modal logics", Studia Logica 85 (2007): 1-31. DOI: 10.1007/s11225-007-9021-5

Calardo, E., and A. Rotolo, "Variants of multi-relational semantics for propositional non-normal modal logics", Journal of Applied Non-Classical Logics 24 (2014): 293-320. DOI: 10.1080/11663081.2014.966625

Chellas, B. F., Modal Logic: An Introduction, Cambridge: CUP, 1980. DOI: 10.1017/CBO9780511621192

Chen, J., G. Greco, A. Palmigiano and A. Tzimoulis, "Non normal logics: semantic analysis and proof theory", pages 99-118 in R. Iemhoff et al. (eds.), Logic, Language, Information, and Computation (WOLLIC 2019), Berlin: Springer, 2019. DOI: 10.1007/978-3-662-59533-6_7

Dalmonte, T., N. Olivetti and S. Negri, "Non-normal modal logics: bi-neighbourhood semantics and its labelled calculi", pages 159-178 in G. Bezanishvili et al. (eds.), Advances in Modal Logic 12, London: College Publications, 2018. http://www.aiml.net/volumes/volume12/Dalmonte-Olivetti-Negri.pdf

Dalmonte, T., B. Lellmann, N. Olivetti and E. Pimentel, "Countermodel construction via optimal hypersequent calculi for non-normal modal logics", pages 27-46 in S. Artemov and A. Nerode (eds.), Logical Foundations of Computer Science 2020, Berlin: Springer, 2020. DOI: 10.1007/978-3-030-36755-8_3

Fitting, M., Proof Methods for Modal and Intuitionistic Logics, Dordrecht: Reidel, 1983. DOI: 10.1007/978-94-017-2794-5

Forrester, J. W., "Gentle murder, or the adverbial samaritan", Journal of Philosophy 81 (1984): 193-197. DOI: 10.2307/2026120

Gabbay, D. M., and L. Maksimova, Interpolation and Definability: Modal and Intuitionistic Logics, Oxford: Clarendon, 2005. DOI: 10.1093/acprof:oso/9780198511748.001.0001

Gilbert, D., and P. Maffezioli, "Modular sequent calculi for classical modal logics", Studia Logica 103 (2015): 175-217. DOI: 10.1007/s11225-014-9556-1

Girlando, M., B. Lellmann, N. Olivetti and G. L. Pozzato, "Standard sequent calculi for Lewis' logics of counterfactuals", pages 272-287 in L. Michael and A. Kakas (eds.), JELIA 2016, Cham: Springer, 2016. DOI: 10.1007/978-3-319-48758-8_18

Goble, L., "Prima facie norms, normative conflicts, and dilemmas", pages 241-351 in D. Gabbay et al. (eds.), Handbook of Deontic Logic and Normative Systems, London: College Publications, 2013. http://www.collegepublications.co.uk/downloads/handbooks00001.pdf

Halldén, S., "On the semantic non-completeness of certain Lewis calculi", Journal of Symbolic Logic 16 (1951): 127-129. DOI: 10.2307/2266686

Hansen, H. H., C. Kupke and E. Pacuit, "Neighborhood structures: bisimilarity and basic model theory", Logical Methods in Computer Science 5 (2009): 1-38. DOI: 10.2168/LMCS-5(2:2)2009

Hilpinen, R., and P. McNamara, "Deontic logic: a historical survey and introduction", pages 3-136 in D. M. Gabbay et al. (eds.), Handbook of Deontic Logic and Normative Systems, London: College Publications, 2013. http://www.collegepublications.co.uk/downloads/handbooks00001.pdf

Hudelmaier, J., "An O(n log n)-space decision procedure for intuitionistic propositional logic", Journal of Logic and Computation 3 (1993): 63-76. DOI: 10.1093/logcom/3.1.63

Hudelmaier, J., "Improved decision procedures for the modal logics K, T and S4", pages 320-334 in H. Kleine Büning (ed.), Computer Science Logic (CSL 1995), Berlin: Springer, 1995. DOI: 10.1007/3-540-61377-3_46

Indrzejczak, A., "Sequent calculi for monotonic modal logics", Bulletin of the Section of Logic 34 (2005): 151-164. http://www.filozof.uni.lodz.pl/bulletin/pdf/34_3_4.pdf

Indrzejczak, A., "Admissibility of cut in congruent modal logics", Logic and Logical Philosophy 21 (2011): 189-203. DOI: 10.12775/LLP.2011.010

Lavendhomme, R., and L. Lucas, "Sequent calculi and decision procedures for weak modal systems", Studia Logica 65 (2000): 121-145. DOI: 10.1023/A:1026753129680

Lellmann, B., and E. Pimentel, "Modularization of sequent calculi for normal and non-normal modalities", ACM Transactions on Computational Logic 20 (2019): 1-46. DOI: 10.1145/3288757

Ma, M., and J. Chen, "Sequent calculi for global modal consequence relations", Studia Logica 107 (2019): 613-637. DOI: 10.1007/s11225-018-9806-8

Maehara, S., "On the interpolation theorem of Craig" (Japanese), Sugaku 12 (1960): 235-237.

Maehara, S., and G. Takeuti, "A formal system of first-order predicate calculus with infinitely long expressions", Journal of the Mathematical Society of Japan 13 (1961): 357-370. DOI: 10.2969/jmsj/01340357

Negri, S., "Proof theory for non-normal modal logics: the neighbourhood formalism and basic results", IfCoLog Journal of Logics and their Applications 4 (2017): 1241-1286. http://www.collegepublications.co.uk/downloads/ifcolog00013.pdf

Negri, S., and R. Hakli, "Does the deduction theorem fail for modal logic?", Synthese 184 (2012): 849-867. DOI: 10.1007/s11229-011-9905-9

Negri, S., and E. Orlandelli, "Proof theory for quantified monotone modal logics", Logic Journal of the IGPL 27 (2019): 478-506. DOI: 10.1093/jigpal/jzz015

Negri, S., and J. von Plato, Structural Proof Theory, Cambridge: CUP, 2001. DOI: 10.1017/CBO9780511527340

Negri, S., and J. von Plato, Proof Analysis, Cambridge: CUP, 2011. DOI: 10.1017/CBO9781139003513

Orlandelli, E., "Proof analysis in deontic logics", pages 139-148 in F. Cariani et al. (eds.), Deontic Logic and Normative Systems (DEON 2014), Berlin: Springer, 2015. DOI: 10.1007/978-3-319-08615-6_11

Orlandelli, E., and G. Corsi, "Decidable term-modal logics", pages 147-162 in F. Belardinelli and E. Argente (eds.), EUMAS 2017/AT 2017, Berlin: Springer, 2018. DOI: 10.1007/978-3-030-01713-2_11

Pacuit, E., Neighbourhood Semantics for Modal Logic, Berlin: Springer, 2017. DOI: 10.1007/978-3-319-67149-9

Pattinson, D., "The logic of exact covers: completeness and uniform interpolation", pages 418-427 in LICS 2013, IEEE Computer Society, 2013. DOI: 10.1109/LICS.2013.48

Pattinson, D., and L. Schröder, "Cut elimination in coalgebraic logics", Information and Computation 208 (2010): 1447-1468. DOI: 10.1016/j.ic.2009.11.008

Pitts, A. J., "On an interpretation of second order quantification in first order intuitionistic propositional logic", Journal of Symbolic Logic 57 (1992): 33-52. DOI: 10.2307/2275175

Schröder, L., and D. Pattinson, "PSPACE bounds for rank-1 modal logics", ACM Transactions in Computational Logics 10 (2009): 1-33. DOI: 10.1109/LICS.2006.44

van der Torre, L., Reasoning about Obligations, Amsterdam: Tinbergen Institute Research Series, 1997. https://icr.uni.lu/leonvandertorre/papers/thesis.pdf

Troelstra, A. S., and H. Schwichtenberg, Basic Proof Theory, Cambridge: CUP, 2000 (2nd edition). DOI: 10.1017/CBO9781139168717

Valentini, S., "The sequent calculus for the modal logic D", Bollettino dell'Unione Matematica Italiana 7 (1993): 455-460. https://www.math.unipd.it/~silvio/papers/ModalLogic/DModalLogic.pdf

Vardi, M. Y., "On the complexity of epistemic reasoning", pages 243-252 in Proceedings of the Fourth Annual Symposium on Logic in Computer Science, IEEE Press, 1989. DOI: 10.1109/LICS.1989.39179
Resurrecting the One-sided P-value as a Likelihood Ratio

Nicholas Adams
Alfred Health

Keywords: likelihood paradigm, ROC curve, maximum likelihood, Bayes' factor

The one-sided P-value has a long history stretching at least as far back as Laplace (1812) but has in recent times been mostly supplanted by the two-sided P-value. We present justification for a bijective relationship between the one-sided P-value and a likelihood ratio based on maximum likelihood, a relationship that cannot be demonstrated for the two-sided P-value. A number of criticisms of P-values are discussed and it is shown that many of these criticisms are not justified when a likelihood ratio interpretation of a one-sided P-value is employed. Converting a one-sided P-value to a likelihood ratio provides the advantages of the likelihood evidential paradigm.

arXiv: 2108.03544 (https://arxiv.org/pdf/2108.03544v3.pdf)
INTRODUCTION
It is perhaps paradoxical that the most widely used statistic, the P-value, is also the most widely criticised. We briefly consider some of these criticisms below, but it is worth noting here that most are directed at the two-sided P-value rather than the one-sided. For an isomorphic and symmetric distribution like the normal, the one-sided P-value is exactly one half of the two-sided, and this leads to the misapprehension that the one-sided form is merely a weaker, or less conservative, form of the two-sided. However, we argue that they are fundamentally different, the two-sided P-value being a test of existence and the one-sided being, when properly employed, a test of direction and magnitude. In the dominant null hypothesis significance testing framework, the point null is assumed to be true, and any discordance between the data and this hypothesis is implied as being due to the play of chance, and hence its direction is of no interest. Of course, the direction of the effect is obvious to us, but the point is that the strength of evidence supporting the effect being in that direction is not fully captured by the two-sided P-value. By contrast, in the framework of a one-sided P-value against a dividing hypothesis the assumption is of a non-zero effect in one direction or the other, and hence both the direction and magnitude of an observed effect are of interest. In this paper we derive a bijective relationship between the one-sided P-value and a likelihood ratio, a relationship that cannot be demonstrated for the two-sided P-value given that it ignores the information provided by the direction of the effect.
METHODS
The Receiver Operating Characteristic (ROC) curve of a test with a continuous outcome provides a link between the true positive rate (TPR), the false positive rate (FPR), and the likelihood ratio (LR) of a particular test result (Choi 1998). If the ROC curve is convex (monotonically increasing) and its slope is monotonically decreasing, then the following relationship exists:
LR = (TPR - TPR²)/(FPR - FPR²)    (1)
where LR is the ratio of the likelihood that the condition being tested for is present versus the likelihood that it is absent. (See Appendix A for the justification of equation 1.)

Congruently, a trial can be approached in a similar manner, with the data representing the result of a test and the null hypothesis representing the condition being tested for (Cali and Longobardi 2015).
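Equation (1) can be sketched numerically (Python is used purely for illustration; the TPR/FPR values below are hypothetical):

```python
def roc_lr(tpr: float, fpr: float) -> float:
    """Likelihood ratio from equation (1): (TPR - TPR^2) / (FPR - FPR^2)."""
    return (tpr - tpr**2) / (fpr - fpr**2)

# Equation (1) is algebraically the product of the positive and negative
# likelihood ratios, LR+ = TPR/FPR and LR- = (1 - TPR)/(1 - FPR):
tpr, fpr = 0.8, 0.3
assert abs(roc_lr(tpr, fpr) - (tpr / fpr) * ((1 - tpr) / (1 - fpr))) < 1e-12
```

The identity checked by the final assertion is the same chain of equalities derived in Appendix A.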
Likelihood Ratio of a One-sided P-value
We consider a trial measuring a continuous variable (θ) where the sampling distribution is symmetric (e.g. normal). Pre-data, an effect size of practical interest (δ) is identified and used to define the dividing hypotheses H1: θ < δ and H2: θ > δ. In medical science, for instance, δ is usually defined as the minimum clinically important difference and is used in sample size calculations. It is not strictly necessary to specify whether H1 or H2 is the favoured hypothesis pre-data (i.e. a directionality of effect), although doing so makes the bijection clearer: an effect in the favoured direction produces an LR > 1 and an effect in the opposite direction generates an LR < 1. The trial is performed and an observed effect size (θobs) is obtained. Using a ROC curve, any test value can be defined as a cut-off point between what is regarded as a positive or negative result, each cut-off having its own TPR and FPR values. Accordingly, post-data we declare a test threshold equal to θobs, such that any effect size greater than or equal to θobs will be regarded as a "positive" test, that is, positive for the true effect size being greater than δ. Note that this test is framed around the maximum likelihood estimate (MLE) of the true effect size, that is, the observed effect size. It is the optimal test threshold in that, a) any higher threshold would lead us to regard the test as 'negative', which is clearly wrong given that the direction of the effect is obvious, and b) any lower threshold would fail to use the full magnitude of the observed effect size. A one-sided P-value is calculated against the dividing hypothesis (θ = δ), the direction of which determines whether the data argue for the competing hypotheses H1 or H2. Under these circumstances, the one-sided P-value equals the FPR, being the probability of obtaining such a result if the true effect is in the opposite direction.
To estimate the TPR: given that the observed effect size is always the maximum likelihood estimate, and that from equation (1) the likelihood ratio is maximised when TPR = 0.5, it follows that the likelihood ratio associated with a one-sided P-value is LR = 0.25/(P - P²), a likelihood ratio which we shall refer to as the maximum likelihood estimate likelihood ratio (MLE-LR), given that it is calculated from the TPR and FPR of the MLE-framed test. This is the ratio of the likelihood that the true effect size is greater than δ against the likelihood that it is less than δ; equivalently, the ratio of the likelihood that the true effect size equals θobs against the likelihood that it equals δ. The calculation maximises the LR and has parallels with the maximisation of the Bayes factor as advanced by I.J. Good (1985, chapter 3). Generalising the likelihood ratio to the composite case, if the parameter space is divided at δ then the supremum on one side of the partition is the likelihood of δ, and hence the MLE-LR is the likelihood ratio that the true effect size lies on one side of the partition versus the other (Zhang 2009; Bickel 2012). That is, the LR against the simple hypothesis θ = δ is the same as that against the composite (dividing) hypothesis θ < δ.
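A minimal sketch of the MLE-LR computation (equation (1) with TPR fixed at 0.5; Python for illustration only):

```python
def mle_lr(p: float) -> float:
    """MLE-LR for a one-sided P-value p: 0.25 / (p - p^2)."""
    return 0.25 / (p - p**2)

# A one-sided P-value of 0.16 (roughly Z = 1) reproduces the Table 1 entry:
assert round(mle_lr(0.16), 2) == 1.86
```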
It could be argued that while the LR, and hence the TPR, is maximal at the observed effect size, that maximum value might be less than 0.5. To address this, once again consider that any point (q) on the ROC curve can be used as a cut-off point for a test to determine whether we decide that the true effect size is in the same direction as the observed effect size; that is, we declare the test "positive" if qobs ≥ q. As above, if post-data we set q equal to the observed effect size then the test is always "positive" and the FPR of the test is the one-sided P-value. Assuming the true effect size equals the observed effect size (the maximum likelihood estimate) and using this one-sided P-value as the value for α in a power calculation, then power = 0.5 = TPR. Here we are calculating the power (equivalent to the TPR) using the observed effect size as the true effect size and the observed P-value as α, a procedure which always results in power = TPR = 0.5. This phenomenon has previously been noted in discussions of observed (or post-hoc) power (Hoenig and Heisey 2001). Setting the alpha level equal to the P-value at first glance seems counter-intuitive, but consider that LR > 1 must always be true given that it is calculated in the direction of the observed effect size. A power calculation always returns a value for power greater than or equal to the nominated alpha level, and therefore setting the alpha level equal to the P-value ensures that TPR ≥ FPR and therefore LR ≥ 1.
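The observed-power argument can be checked numerically for the normal case (a sketch; z_obs is an arbitrary illustrative value and the inverse CDF is a simple bisection rather than a library routine):

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(p: float) -> float:
    """Inverse normal CDF by bisection (adequate for illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

z_obs = 1.7                                 # observed effect in standard errors
p_one_sided = 1.0 - norm_cdf(z_obs)         # FPR of the MLE-framed test
z_crit = norm_ppf(1.0 - p_one_sided)        # critical value at alpha = P-value
power = 1.0 - norm_cdf(z_crit - z_obs)      # power if true effect = observed
assert abs(power - 0.5) < 1e-6              # observed power is always one half
```

Because the critical value recovered from alpha = P is exactly the observed effect, the power integral is evaluated at its own centre, which is why the result is one half for any z_obs.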
Alternative Methods to Calculate the Likelihood Ratio (or Bayes Factor)
Using the normal probability distribution function, the likelihood ratio against a point null has been calculated as described by Goodman (1999), after A.W.F. Edwards: LR = exp(Z²/2), where Z is the observed effect size divided by its standard error. This is the likelihood ratio of H0: θ = δ versus HA: θ = θobs, which, once again using the generalised ratio, is equivalent to the likelihood ratio that the true effect size is greater than δ versus less than δ. From Figure 1 it can be seen that this is less than the MLE-LR for all P-values. This is because the Goodman-Edwards likelihood ratio is not a bijection of Z, given that Z² = (-Z)², and hence it ignores the information contained in the knowledge of the direction of the effect, leading to an under-estimation of the likelihood ratio (Held and Ott 2018). In our simple normal-mean model, a trial of size N produces three pieces of information: the absolute effect size, the uncertainty of its estimation (proportional to 1/√N), and the direction of the effect. The Goodman-Edwards LR uses only the first two of these and thus is the likelihood equivalent of the two-sided P-value.
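A quick numerical check (an illustrative sketch) of the ordering shown in Figure 1, namely that the direction-blind Goodman-Edwards LR is smaller than the MLE-LR:

```python
import math

def one_sided_p(z: float) -> float:
    """One-sided P-value P(Z > z) under the standard normal."""
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def mle_lr(p: float) -> float:
    return 0.25 / (p - p**2)

def goodman_lr(z: float) -> float:
    return math.exp(z**2 / 2.0)

# The Goodman-Edwards LR ignores the direction of the effect and is
# smaller than the MLE-LR at every Z > 0:
for z in (0.5, 1.0, 2.0, 3.0):
    assert mle_lr(one_sided_p(z)) > goodman_lr(z)
```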
We can view a two-sided P-value as being composed of two one-sided tests, one against H0: θ < δ and the other against H0: θ > δ, the resultant two-sided P-value being twice the value of the lesser of these two one-sided P-values. This makes it clear that a two-sided P-value is direction-agnostic. This formulation has obvious parallels with the TOST (two one-sided tests) procedure for testing equivalence; in a sense, the two-sided P-value is itself a two one-sided tests procedure.

Finally, one may ask why it is not possible to calculate an MLE likelihood ratio using a two-sided P-value. The problem is that such a P-value is composed of two one-sided tests, and it is unclear how the FPR and TPR from these two tests might be combined. Hence a one-sided P-value is compatible with likelihood-ratio based inference, but a two-sided P-value is not, as has been noted before (DeGroot 1973; Autzen 2016).
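The two-one-sided-tests composition can be written out directly (a sketch for the normal case; the observed z is arbitrary):

```python
import math

def one_sided_p_upper(z: float) -> float:
    """P-value of the one-sided test in the direction of positive effects."""
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

z = 1.7                                   # observed effect in standard errors
p_upper = one_sided_p_upper(z)            # test in the observed direction
p_lower = 1.0 - p_upper                   # test in the opposite direction
p_two_sided = 2.0 * min(p_upper, p_lower)

# The two-sided P-value discards the direction: z and -z give the same value.
flipped = 2.0 * min(one_sided_p_upper(-z), 1.0 - one_sided_p_upper(-z))
assert abs(p_two_sided - flipped) < 1e-12
```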
From the Bayesian perspective, going back historically as far as Gosset, the one-sided P-value has itself sometimes been regarded by Bayesians as approximately the posterior probability that the true effect is in the opposite direction to that observed, assuming an improper uniform prior (Greenland and Poole 2013); if this is the case, the likelihood ratio in favour of the observed direction is LR = (1 - P)/P (Marsman and Wagenmakers 2017). Alternatively, a second Bayesian method is the "-e p log(p)" calibration of Selke, Bayarri and Berger (2001), which uses an arbitrary beta distribution as the prior. A comparison between these Bayesian approaches and the MLE-LR is shown in Figure 1.

Figure 1. The likelihood ratio/minimum Bayes factor associated with a one-sided P-value (or its associated Z-statistic) from a normal distribution is shown for four different methods of calculation.
Cognitive Inference from a Likelihood Ratio
Defining a one-sided P-value as the (conditional) probability of obtaining the observed effect size if the true effect was in the opposite direction is, like the corresponding definition of a two-sided P-value, unhelpful and confusing to many (Goodman 2008; Chun Wah et al. 2018; Reaburn 2017). A likelihood ratio, on the other hand, seems like a naturally comprehensible measure of the weight of evidence for or against a hypothesis (Good 1992). In addition, we can apply a form of Bayes' theorem, posterior odds = LR × prior odds, and estimate the posterior probability that the result represents a sign error (that is, an estimated effect in the opposite direction to the true effect). Of course, each individual end-user is free to assign their own prior odds, which might be problematic, but it is somewhat simpler than assigning a full Bayesian prior probability distribution. Furthermore, calculating a one-sided P-value against an effect size of practical interest (δ) rather than δ = 0 gives an MLE-LR that represents a clear summary of the strength of evidence for or against the true value of θ being of practical significance. In addition, it is worth considering that if we assign prior odds of 1:1 (known as equipoise) then the posterior odds of the true effect being in the opposite direction to that observed are equal to the LR, a useful correspondence given that many non-statisticians have a good informal understanding of betting odds. This observation is not novel, having been noted as far back as 1878 by C.S. Peirce, but has perhaps been forgotten (Banks 1996).
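The Bayes-theorem step can be sketched as follows (the function name is ours; the LR value is the MLE-LR entry from Table 1):

```python
def sign_error_probability(lr: float, prior_odds: float = 1.0) -> float:
    """Posterior probability of a sign error via posterior odds = LR x prior odds.

    `lr` is the likelihood ratio favouring the observed direction and
    `prior_odds` the prior odds for that direction (1.0 = equipoise).
    """
    posterior_odds = lr * prior_odds   # odds for the observed direction
    return 1.0 / (1.0 + posterior_odds)

# With equipoise and MLE-LR = 1.86 (one-sided P of about 0.16), the
# posterior probability of a sign error is about 35%:
assert round(sign_error_probability(1.86), 2) == 0.35
```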
As an illustrative example of the various methods of calculating an LR or minimum Bayes factor, we consider a poll that samples the voting intentions for an electoral contest between two candidates. The poll results favour one of the candidates with a proportion of 50% plus one standard error, and we ask what is the probability that this candidate will subsequently receive more than 50% of the vote, assuming that these voting intentions are followed (Table 1). The probability distribution is binomial, but as the proportion is close to 50% a normal approximation is appropriate. Intuitively, the posterior probability of about 2/3 seems appropriate, whereas many would argue that 84% is too high (Gelman 2013) and 55% too low. As stated above, the Goodman (1999) likelihood ratio does not use all the information available and hence we can assume it is too conservative.
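The four Table 1 entries can be recomputed from the one-sided P-value at Z = 1 (a sketch; the small differences from Table 1 arise from rounding of P):

```python
import math

def one_sided_p(z: float) -> float:
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def posterior(lr: float) -> float:
    """Posterior probability from a likelihood ratio with 1:1 prior odds."""
    return lr / (1.0 + lr)

z = 1.0
p = one_sided_p(z)                       # about 0.1587
lrs = {
    "Marsman et al (2017)": (1.0 - p) / p,
    "MLE-LR":               0.25 / (p - p**2),
    "Goodman (1999)":       math.exp(z**2 / 2.0),
    "Selke et al (2001)":   1.0 / (-math.e * p * math.log(p)),
}
for name, lr in lrs.items():
    print(f"{name:22s} LR = {lr:5.2f}  posterior = {posterior(lr):.0%}")
```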
DISCUSSION
A number of criticisms have been directed at P-values, particularly with the recent campaign to "abandon statistical significance" (Benjamin 2018). In this section we briefly argue that the majority of these criticisms do not apply to one-sided P-values, particularly when considered as bijections of a likelihood ratio. A detailed analysis and rebuttal, however, are beyond the scope of this article. Critical discussion of P-values can be easily found, for instance, in McShane et al (2019), Krueger and Heck (2017), and Szucs and Ioannidis (2017).
• The point null hypothesis is almost always implausible: this does not apply to a one-sided P-value against a dividing null hypothesis, which instead implicitly assumes that the true effect size is non-zero.

• The incidence of sign errors is large (and hidden): the probability of a sign error (the FPR) is integrated into the calculation of the MLE-LR and is explicit.

• Multiple comparisons invalidate the P-value: if the likelihood principle holds then the one-sided P-value expressed as a likelihood ratio should not require adjustment for multiple comparisons nor for multiple looks at the data.

• The P-value is a poor measure of the strength of evidence for or against a hypothesis: a likelihood ratio is often regarded as the ideal measure of the strength of evidence (Edwards 2001; Royall 1997).

• The P-value invites confusion between statistical significance and practical importance: selecting an appropriate value of δ for the dividing null hypothesis H0: θ < δ or H0: θ > δ will lead to the one-sided P-value and its associated MLE-LR directly addressing the likelihood of an effect size of practical importance.

• The P-value is difficult to understand: an LR can be combined with a prior probability to produce an explicit and clear posterior probability that its associated hypothesis is true.

• Point and dividing hypotheses are forms of interval hypotheses, and P-values are incoherent when calculated for an interval hypothesis nested within a second interval hypothesis (Schervish 1996): dividing hypotheses cannot be nested inside each other given that they extend to +∞ or -∞. Furthermore, Lavine and Schervish (1999) demonstrate that likelihood ratios are coherent whereas Bayes factors are not.

• One-sided P-values as a test of direction are invalid if the parameter value is exactly zero, on the grounds that in this case the test is between two directional models that are both wrong (Marsman and Wagenmakers 2017): there is no substantial difference between calculating the one-sided P-value against H0: q > 0 versus H0: q ≥ 0.
Goodman (2008) advocates the use of Bayes factors instead of two-sided P-values and includes a table (page 139) summarizing desirable properties of an evidential measure. He claims all of these properties are possessed by the Bayes factor, and none by the two-sided P-value. The one-sided P-value, however, possesses all of these desirable evidential qualities when considered as a likelihood ratio, a fact that is hardly surprising given the close relationship between Bayes factors and likelihood ratios. These qualities include providing information about the effect size, using only observed data, including an explicit alternative hypothesis, insensitivity to stopping rules, and being a measure that increases (rather than decreases) as the strength of evidence increases.
In summary, a common and simple type of research is the comparison of two different interventions where the aims are to determine a) which intervention is better,
and b) what is the strength of evidence supporting this determination. Null hypothesis significance testing using a two-sided P-value seems ill-suited to this task whereas, viewed as a bijection of a likelihood ratio, the one-sided P-value can be incorporated into the likelihood evidential paradigm (see Royall 1997). We would argue that this paradigm is simpler to comprehend and more coherent than that of null-hypothesis significance testing using a two-sided P-value.
If we convert the cut-off point A to a very small interval (A ) bridged by a third secant, the slope of the two original secants will remain approximately the same, and, as the average slope of the overall ROC curve is unchanged, the slope of the third secant must be the product (by the chain rule) of the slopes of these two original secants. Taking the limit as approaches zero, the approximation disappears, and the slope of the now infinitesimal secant equals the tangent to the curve at point A:
lim →0 ( ) = + × − = 1 − × 1 − = − 2 − 2
Figure 2. A proper ROC curve. The slope of secant 0-A equals the positive likelihood ratio. The slope of secant A-1 equals the negative likelihood ratio. The slope of the tangent to the curve at point A equals the likelihood ratio at that point.
Table 1. Comparison of four different likelihood ratios (LR) or Bayes factors (BF) for a dichotomous outcome calculated from a standardised effect size of Z = 1, and their corresponding posterior probabilities in the absence of any prior knowledge.

Method                 | Formula        | Prior      | LR/BF | Post. probability
Marsman et al (2017)   | (1 - P)/P      | Uniform    | 5.25  | 84%
MLE-LR                 | 0.25/(P - P²)  | 1:1 odds   | 1.86  | 65%
Goodman (1999)         | e^(Z²/2)       | 1:1 odds   | 1.65  | 62%
Selke et al (2001)     | -e p log(p)    | Beta prior | 1.25  | 55%
Resurrection of the one-sided P-value mitigates many of the problems that have led to the call to abandon the use of P-values.

References:
Autzen, B. (2016) "Significance Testing, P-values and the Principle of Total Evidence," European Journal of Philosophical Science, 6, 281-295.

Banks, D.L. (1996) "A Conversation with I.J. Good," Statistical Science, 11, 1-19.

Benjamin, D.J., Berger, J.O., Johannesson, M., et al. (2018) "Redefine Statistical Significance," Nature Human Behaviour, 2, 6-10.

Bickel, D. (2012) "The Strength of Evidence for Composite Hypotheses: Inference to the Best Explanation," Statistica Sinica, 22, 1147-1198.

Cali, C. and Longobardi, M. "Some Mathematical Properties of the ROC Curve and Their Applications," Ricerche di Matematica, 64, 391-402.

Choi, B.C.K. (1998) "Slopes of a Receiver Operating Curve and Likelihood Ratios for a Diagnostic Test," American Journal of Epidemiology, 148, 1127-1132.

Colquhoun, D. (2017) "The Reproducibility of Research and the Misinterpretation of P-values," Royal Society Open Science, 4, 171085.

DeGroot, M.H. (1973) "Doing What Comes Naturally: Interpreting a Tail Area as a Posterior Probability or as a Likelihood Ratio," Journal of the American Statistical Association, 68, 966-969.

Edwards, W., Lindman, H., and Savage, L.J. (1963) "Bayesian Statistical Inference for Psychological Research," Psychological Review, 70, 193-242.

Edwards, A.W.F. (2001) "Likelihood in Statistics," in International Encyclopedia of the Social and Behavioural Sciences, ed. N.J. Smelser and P.B. Baltes, Oxford: Pergamon, pp. 8854-8858.

Gelman, A. (2013) "P-values and Statistical Practice," Epidemiology, 24, 69-72.

Good, I.J. (1985) Good Thinking: The Foundations of Probability and Its Applications, Minneapolis: University of Minnesota Press.

Good, I.J. (1992) "The Bayes/non-Bayes Compromise: A Brief Review," Journal of the American Statistical Association, 87, 597-606.

Goodman, S. (1999) "Toward Evidence-Based Medical Statistics 2: The Bayes Factor," Annals of Internal Medicine, 130, 1005-1013.

Goodman, S. (2008) "A Dirty Dozen: Twelve P-value Misconceptions," Seminars in Hematology, 45, 135-140.

Greenland, S. and Poole, C. (2013) "Living with P-values: Resurrecting a Bayesian Perspective on Frequentist Statistics," Epidemiology, 24, 62-68.

Held, L. and Ott, M. (2018) "On P-Values and Bayes Factors," Annual Review of Statistics and Its Application, 5, 393-419.

Johnson, V.E. (2005) "Bayes Factors Based on Test Statistics," Journal of the Royal Statistical Society B, 67, 689-701.

Krueger, J. and Heck, P. (2017) "The Heuristic Value of P in Inductive Statistical Inference," Frontiers in Psychology, 8, Article 908.

LaPlace, P.S. (1812) Théorie Analytique des Probabilités, 1st edition.

Lavine, M. and Schervish, M.J. (1999) "Bayes Factors: What They Are and What They Are Not," The American Statistician, 53, 119-122.

Marsman, M. and Wagenmakers, E.-J. (2017) "Three Insights from a Bayesian Interpretation of the One-sided P Value," Educational and Psychological Measurement, 77, 529-539.

McShane, B.B., Gal, D., Gelman, A., Robert, C., and Tackett, J.L. (2019) "Abandon Statistical Significance," The American Statistician, 73(sup1), 235-245.

Reaburn, R. (2017) "Statistics Instructors' Beliefs and Misconceptions About P-values," in Proceedings of the 40th Mathematics Education Research Group of Australasia, pp. 428-433.

Royall, R. (1997) Statistical Evidence: A Likelihood Paradigm, London: Chapman Hall.

Schervish, M.J. (1996) "P Values: What They Are and What They Are Not," The American Statistician, 50, 203-206.

Sellke, T., Bayarri, M.J., and Berger, J.O. (2001) "Calibration of P Values for Testing Precise Null Hypotheses," The American Statistician, 55, 62-71.

Szucs, D. and Ioannidis, J. (2017) "When Null Hypothesis Significance Testing is Unsuitable for Research: A Reassessment," Frontiers in Human Neuroscience, 11, Article 390.

Tam, C.W.M., Khan, A.H., Knight, A., Rhee, J., Price, K., and McLean, K. (2018) "How Doctors Conceptualise P-values: A Mixed Methods Study," Australian Journal of General Practice, 47, 705-710.

Zhang, Z. (2009) "A Law of Likelihood for Composite Hypotheses," arXiv:0901.0463.
Consider a trial where a sample is taken, the mean of which is normally distributed around the true population mean, and a dividing hypothesis such as H0: μ < 0 is specified. It is possible to specify a cut-off value for the sample mean that determines whether to reject or accept H0. Each cut-off value has an associated TPR and FPR, and from these a ROC curve can be constructed. Under these conditions, the ROC curve will be convex from above (or proper), monotonically increasing, and with a monotonically decreasing slope. Two secants to the curve can be drawn from the specified cut-off value (A) to the origin (0,0, bottom left) and to the terminus (1,1, top right) (Figure 2).
| [] |
[
"PSEUDO-DIFFERENTIAL INTEGRAL OPERATOR FOR LEARNING SOLUTION OPERATORS OF PARTIAL DIFFERENTIAL EQUATIONS",
"PSEUDO-DIFFERENTIAL INTEGRAL OPERATOR FOR LEARNING SOLUTION OPERATORS OF PARTIAL DIFFERENTIAL EQUATIONS"
] | [
"Jin Young Shin \nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\n37673, 37673, 37673Republic of Korea, Republic of Korea, Republic of Korea\n",
"Jae Yong Lee \nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\n37673, 37673, 37673Republic of Korea, Republic of Korea, Republic of Korea\n",
"Hyung Ju Hwang [email protected] \nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\n37673, 37673, 37673Republic of Korea, Republic of Korea, Republic of Korea\n"
] | [
"Department of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\n37673, 37673, 37673Republic of Korea, Republic of Korea, Republic of Korea",
"Department of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\n37673, 37673, 37673Republic of Korea, Republic of Korea, Republic of Korea",
"Department of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\nDepartment of Mathematics POSTECH Pohang\n37673, 37673, 37673Republic of Korea, Republic of Korea, Republic of Korea"
] | [] | Learning mapping between two function spaces has attracted considerable research attention. However, learning the solution operator of partial differential equations (PDEs) remains a challenge in scientific computing. Therefore, in this study, we propose a novel pseudo-differential integral operator (PDIO) inspired by a pseudo-differential operator, which is a generalization of a differential operator and characterized by a certain symbol. We parameterize the symbol by using a neural network and show that the neural-network-based symbol is contained in a smooth symbol class. Subsequently, we prove that the PDIO is a bounded linear operator, and thus is continuous in the Sobolev space. We combine the PDIO with the neural operator to develop a pseudo-differential neural operator (PDNO) to learn the nonlinear solution operator of PDEs. We experimentally validate the effectiveness of the proposed model by using Burgers' equation, Darcy flow, and the Navier-Stokes equation. The results reveal that the proposed PDNO outperforms the existing neural operator approaches in most experiments. | null | [
"https://arxiv.org/pdf/2201.11967v2.pdf"
] | 246,411,148 | 2201.11967 | 1c178b8653e5fc737be8e68e66d2d7a4a278d91e |
PSEUDO-DIFFERENTIAL INTEGRAL OPERATOR FOR LEARNING SOLUTION OPERATORS OF PARTIAL DIFFERENTIAL EQUATIONS
February 17, 2022
Jin Young Shin
Department of Mathematics, POSTECH, Pohang 37673, Republic of Korea

Jae Yong Lee
Department of Mathematics, POSTECH, Pohang 37673, Republic of Korea

Hyung Ju Hwang [email protected]
Department of Mathematics, POSTECH, Pohang 37673, Republic of Korea
Learning mapping between two function spaces has attracted considerable research attention. However, learning the solution operator of partial differential equations (PDEs) remains a challenge in scientific computing. Therefore, in this study, we propose a novel pseudo-differential integral operator (PDIO) inspired by a pseudo-differential operator, which is a generalization of a differential operator and characterized by a certain symbol. We parameterize the symbol by using a neural network and show that the neural-network-based symbol is contained in a smooth symbol class. Subsequently, we prove that the PDIO is a bounded linear operator, and thus is continuous in the Sobolev space. We combine the PDIO with the neural operator to develop a pseudo-differential neural operator (PDNO) to learn the nonlinear solution operator of PDEs. We experimentally validate the effectiveness of the proposed model by using Burgers' equation, Darcy flow, and the Navier-Stokes equation. The results reveal that the proposed PDNO outperforms the existing neural operator approaches in most experiments.
Introduction
In science and engineering, many physical systems and natural phenomena are described by partial differential equations (PDEs) [4]. Approximating the solution of the underlying PDEs is critical to understand and predict a system. Conventional numerical methods, such as finite difference methods (FDMs) and finite element methods, involve a trade-off between accuracy and the time required. In many complex systems, it may be highly time-consuming to use numerical methods to obtain accurate solutions. Furthermore, in some cases, the underlying PDE may be unknown.
With remarkable advancements in deep learning, studies have focused on using deep learning to solve PDEs [25,7,30,26,16,20]. An example is operator learning [9,1,18], which utilizes neural networks to parameterize the mapping from the parameters (external force, initial and boundary conditions) of a given PDE to its solutions. Many studies have employed convolutional neural networks as surrogate models for problems such as uncertainty quantification for PDEs [33,34] and PDE-constrained control [12,17]. Based on the universal approximation theorem for operators [3], DeepONet was introduced [24]. In follow-up works, extensions of DeepONet were proposed [32,19].
Another approach to operator learning is a neural operator, proposed in [23,22,21]. [23] proposed an iterative architecture inspired by Green's function of elliptic PDEs. The iterative architecture consists of a linear transformation, an integral operator, and a nonlinear activation function, allowing the architecture to approximate complex nonlinear mapping. An extension of this work, [22] used a multi-pole method to develop a multi-scale graph structure. [21] proposed a Fourier integral operator using fast Fourier transform (FFT) to reduce the cost of approximating the integral operator. Recently, [10] approximated the kernel of the integral operator using the multiwavelet transform.
In this study, a pseudo-differential integral operator (PDIO) is proposed to learn the solution operator of PDEs using neural networks based on pseudo-differential operators. Pseudo-differential operators (PDOs) are a generalization of linear partial differential operators and have been extensively studied mathematically [2,15,29,31]. A neural network called a symbol network is used to approximate the PDO symbols. The proposed symbol network is contained in a toroidal class of symbols; thus, a PDIO is a continuous operator in the Sobolev space. Furthermore, the PDIO can be applied to the solution operator of time-dependent PDEs using a time-dependent symbol network. In addition, the proposed integral operator is a generalized model, including the Fourier integral operator proposed in [21].
The main contributions of the study are as follows.
• A novel PDIO is proposed based on the PDO to learn the PDE solution operator. PDIO approximates the PDO using symbol networks. The proposed symbol network is contained in a toroidal symbol class of PDOs, implying that the PDIO with the symbol network is a continuous operator in the Sobolev space.
• Time-dependent PDIO, a PDIO with time-dependent symbol networks, can be used to approximate the solution operator of time-dependent PDEs. It approximates the solution operator, including the solution for time t, which is not in the training data. Furthermore, it is a continuous-in-time operator.
• The Fourier integral operator proposed in [21] can also be interpreted from a PDO perspective. However, its symbol may not be contained in a toroidal symbol class. By contrast, the PDIO has smooth symbols and may represent a general form of PDOs.
• A pseudo-differential neural operator (PDNO), which consists of a linear combination of our PDIOs combined with the neural operator proposed in [23], is developed. PDNO outperforms the existing operator learning models, such as the Fourier neural operator in [21] and the multiwavelet-based operator in [10] in most experiments (Burgers', Darcy flow, Navier-Stokes equation).
2 Pseudo-differential operator
2.1 Motivation and definition
In this study, we aim to approximate an operator T : A → U between two function spaces A and U. First, we consider a PDE L_a[u(x)] = f(x) with a linear differential operator L_a = Σ_α c_α D^α. To find a map T from f to u, we apply the Fourier transform to obtain the following:
$$a(\xi)\,\hat{u} \overset{\mathrm{def}}{=} \sum_{\alpha} c_\alpha (i\xi)^\alpha \hat{u} = \hat{f}, \qquad (1)$$
where ξ ∈ R^n represents the variables in the Fourier space and f̂(ξ) is the Fourier transform of the function f(x). If a(ξ) never attains zero, we obtain the solution operator of the PDE as follows:
$$u(x) = T(f)(x) \overset{\mathrm{def}}{=} \int_{\mathbb{R}^n} \frac{1}{a(\xi)}\, \hat{f}(\xi)\, e^{2\pi i \xi \cdot x}\, d\xi. \qquad (2)$$
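As a concrete instance of Eq. (2), the sketch below (illustrative; not code from the paper) inverts the constant-coefficient operator L = I − d²/dx² on the periodic interval [0, 1), whose symbol a(ξ) = 1 + 4π²ξ² never vanishes, using the FFT.

```python
import numpy as np

n = 256
x = np.arange(n) / n                  # uniform grid on [0, 1)
xi = np.fft.fftfreq(n, d=1.0 / n)     # integer frequencies

# Manufactured solution u and right-hand side f = u - u''.
u_true = np.sin(2 * np.pi * x)
f = (1 + 4 * np.pi**2) * np.sin(2 * np.pi * x)

a = 1 + 4 * np.pi**2 * xi**2              # symbol of I - d^2/dx^2
u = np.fft.ifft(np.fft.fft(f) / a).real   # u = F^{-1}[ f_hat / a ]
```

The recovered u agrees with u_true to machine precision because dividing by the symbol inverts the operator mode by mode.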
From this motivation, a PDO can be defined as a generalization of differential operators by replacing 1/a(ξ) with a general function a(x, ξ), called a symbol [14,15]. First, we define a symbol a(x, ξ) and a class of symbols.
Definition 2.1. Let 0 < ρ ≤ 1 and 0 ≤ δ < 1. A function a(x, ξ) is called a Euclidean symbol on T^n × R^n in the class S^m_{ρ,δ}(T^n × R^n) if a(x, ξ) is smooth on T^n × R^n and satisfies the following inequality:

$$|\partial_x^\beta \partial_\xi^\alpha a(x,\xi)| \le c_{\alpha\beta}\, \langle\xi\rangle^{m - \rho|\alpha| + \delta|\beta|}, \qquad (3)$$

for all α, β ∈ N_0^n and for all x ∈ T^n and ξ ∈ R^n, where the constant c_{αβ} may depend on α and β but not on x and ξ. Here, ⟨ξ⟩ ≝ (1 + ‖ξ‖²)^{1/2} with the Euclidean norm ‖ξ‖.
The PDO corresponding to the symbol class S^m_{ρ,δ}(T^n × R^n) can be defined as follows:

Definition 2.2. The Euclidean PDO T_a : A → U with the Euclidean symbol a(x, ξ) ∈ S^m_{ρ,δ}(T^n × R^n) is defined as

$$T_a(f)(x) = \int_{\mathbb{R}^n} a(x,\xi)\, \hat{f}(\xi)\, e^{2\pi i \xi \cdot x}\, d\xi, \qquad (4)$$

where f̂(ξ) is the Fourier transform of the function f(x).

Figure 1: An architecture of a PDIO with symbol networks a^{nn}_{θ1}(x) and a^{nn}_{θ2}(ξ). Because the FFT and inverse FFT are used, both the input and output are in the form of a uniform mesh. The values a^{nn}_{θ1}(x) and a^{nn}_{θ2}(ξ) are obtained from separate neural networks.
2.2 Symbol network and PDIO
The primary idea in our study is to parameterize the Euclidean symbol a(x, ξ) using a neural network a^{nn}_θ(x, ξ) so that the symbol is smooth. This network is called a symbol network. The symbol network a^{nn}_θ(x, ξ) is assumed to factorize as a^{nn}_θ(x, ξ) = a^{nn}_{θ1}(x) a^{nn}_{θ2}(ξ). Both smooth factors a^{nn}_{θ1}(x) and a^{nn}_{θ2}(ξ) are parameterized by fully connected neural networks. We propose a PDIO to approximate the Euclidean PDO using the symbol network and the Fourier transform as follows:

$$T_a(f)(x) \approx T^{nn}_{a_\theta}(f)(x) = a^{nn}_{\theta_1}(x)\, \mathcal{F}^{-1}\big[\, a^{nn}_{\theta_2}(\xi)\, \mathcal{F}[f](\xi) \,\big](x), \qquad (5)$$
where F is the Fourier transform and F −1 is its inverse. The diagram of the PDIO is explained in Figure 1.
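A minimal forward pass of the PDIO in Eq. (5) can be sketched as follows. This is an illustrative numpy version: the paper parameterizes a_{θ1} and a_{θ2} with fully connected networks, whereas here they are placeholder callables. With both symbols identically one, the operator reduces to the identity.

```python
import numpy as np

def pdio(f, a_x, a_xi):
    """PDIO of Eq. (5): T(f)(x) = a_x(x) * IFFT[ a_xi(xi) * FFT[f] ](x)."""
    n = f.shape[-1]
    x = np.arange(n) / n
    xi = np.fft.fftfreq(n, d=1.0 / n)
    return a_x(x) * np.fft.ifft(a_xi(xi) * np.fft.fft(f)).real

n = 128
f = np.random.default_rng(0).standard_normal(n)

# Placeholder "symbol networks": the identity symbol...
out_id = pdio(f, lambda x: np.ones_like(x), lambda xi: np.ones_like(xi))

# ...and a smoothing symbol <xi>^{-2} = (1 + |xi|^2)^{-1}.
out_smooth = pdio(f, lambda x: np.ones_like(x), lambda xi: 1.0 / (1.0 + xi**2))
```

The smoothing symbol attenuates high frequencies, so the output has smaller variance than the white-noise input.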
Practically, F and F^{-1} in Eq. (5) are approximated by the FFT, an efficient algorithm for computing the discrete Fourier transform (DFT). Although the symbol network a^{nn}_{θ2}(ξ) is defined on R^n, the inverse DFT is evaluated only on the discrete space Z^n. Therefore, the symbol network a^{nn}_θ(x, ξ) should be considered on the restricted domain T^n × Z^n (see Appendix C for details). The following section details the definitions and properties of symbols and PDOs on T^n × Z^n. Moreover, we introduce a theorem that connects the Euclidean symbol and its restriction.
2.3 PDOs on T^n × Z^n

Definition 2.3. A toroidal symbol class is a set S^m_{ρ,δ}(T^n × Z^n) consisting of the toroidal symbols a(x, ξ), which are smooth in x for all ξ ∈ Z^n and satisfy the following inequality:

$$|\triangle_\xi^\alpha \partial_x^\beta a(x,\xi)| \le c_{\alpha\beta}\, \langle\xi\rangle^{m - \rho|\alpha| + \delta|\beta|}, \qquad (6)$$

for all α, β ∈ N_0^n and for all (x, ξ) ∈ T^n × Z^n. Here, △^α_ξ denotes the difference operators.
The PDO corresponding to the symbol class S^m_{ρ,δ}(T^n × Z^n) can be defined as follows:

Definition 2.4. The toroidal PDO T_a : A → U with the toroidal symbol a(x, ξ) ∈ S^m_{ρ,δ}(T^n × Z^n) is defined by

$$T_a(f)(x) = \sum_{\xi \in \mathbb{Z}^n} a(x,\xi)\, \hat{f}(\xi)\, e^{2\pi i \xi \cdot x}. \qquad (7)$$
It is well known that the toroidal PDO T_a(f) with f ∈ C^∞(T^n) is well defined and that T_a(f) ∈ C^∞(T^n) [29].
Here, it is necessary to prove that the restricted symbol network a^{nn}_θ|_{T^n×Z^n} belongs to a certain toroidal symbol class. To connect the symbol network a^{nn}_θ and its restriction a^{nn}_θ|_{T^n×Z^n}, we introduce a useful theorem relating Euclidean and toroidal symbols. Theorem 2.5. [29] (Connection between two symbols) Let 0 < ρ ≤ 1 and 0 ≤ δ ≤ 1. A symbol ã ∈ S^m_{ρ,δ}(T^n × Z^n) is a toroidal symbol if and only if there exists a Euclidean symbol a ∈ S^m_{ρ,δ}(T^n × R^n) such that ã = a|_{T^n×Z^n}.
Therefore, it is sufficient to consider whether the symbol network a^{nn}_θ(x, ξ) belongs to a certain Euclidean symbol class.

2.4 Propositions on the symbol network and PDIO

We show that the symbol network a^{nn}_θ(x, ξ) with the Gaussian error linear unit (GELU) activation function is contained in a certain Euclidean symbol class using the following proposition.

Proposition 2.6. Suppose the symbol networks a^{nn}_{θ1}(x) and a^{nn}_{θ2}(ξ) are fully connected neural networks with the nonlinear activation GELU. Then the symbol network a^{nn}_θ(x, ξ) = a^{nn}_{θ1}(x) a^{nn}_{θ2}(ξ) is in S^1_{1,0}(T^n × R^n). Therefore, the restricted symbol network ã^{nn}_θ ≝ a^{nn}_θ|_{T^n×Z^n} is in the toroidal symbol class S^1_{1,0}(T^n × Z^n).

Remark 2.7. Here we focus on the most important case ρ = 1 and δ = 0, since S^m_{ρ,δ} ⊃ S^m_{1,0} for 0 < ρ ≤ 1 and 0 ≤ δ < 1 [15]. Although the proposition only considers the symbol network with GELU, it can be proved for various activation functions (see Appendix D).
Proof. The fully connected neural network for the symbol network a^{nn}_{θ1}(x) is denoted as follows:

$$Z_1^{[l]} = W_1^{[l]} A_1^{[l-1]} + b_1^{[l]} \quad (l = 1, 2, \ldots, L_1), \qquad A_1^{[l]} = \sigma(Z_1^{[l]}) \quad (l = 1, 2, \ldots, L_1 - 1),$$

where W_1^{[l]} is a weight matrix and b_1^{[l]} a bias vector in the l-th layer of the network, σ is an element-wise activation function, A_1^{[0]} = x is the input feature vector, and Z_1^{[L_1]} = a^{nn}_{θ1}(x) is the output of the network with θ_1 = {W_1^{[l]}, b_1^{[l]}}_{l=1}^{L_1}. Similarly, we define W_2^{[l]}, b_2^{[l]}, Z_2^{[l]}, and A_2^{[l]} (l = 1, 2, ..., L_2) for the neural network a^{nn}_{θ2}(ξ). The neural network a^{nn}_{θ1}(x) is continuous on the compact set T^n. Therefore,

$$|\partial_x^\beta a^{nn}_{\theta_1}(x)| \le c_\beta, \qquad (8)$$

for some constant c_β > 0 and for all β ∈ N_0^n. For the case |α| = 0,

$$|\partial_\xi^\alpha a^{nn}_{\theta_2}(\xi)| = |a^{nn}_{\theta_2}(\xi)| = \big| W_2^{[L_2]} \sigma(Z_2^{[L_2-1]}) + b_2^{[L_2]} \big| \le c_\alpha \langle\xi\rangle, \qquad (9)$$

for some constant c_α > 0, because the absolute value of GELU σ(z) is bounded by the linear function |z|. Notably,
$$\partial_\xi^{e_i} a^{nn}_{\theta_2}(\xi) = W_2^{[L_2]} \,\mathrm{diag}\!\big(\sigma'(Z_2^{[L_2-1]})\big)\, W_2^{[L_2-1]} \,\mathrm{diag}\!\big(\sigma'(Z_2^{[L_2-2]})\big) \cdots W_2^{[2]} \,\mathrm{diag}\!\big(\sigma'(Z_2^{[1]})\big)\, W_2^{[1]} e_i. \qquad (10)$$

This result implies that the multi-derivatives of the symbol, ∂^α_ξ a^{nn}_{θ2}(ξ) with |α| ≥ 1, consist of products of the weight matrices and the first or higher derivatives of the activation function. Furthermore, the derivative of GELU is bounded, and the second or higher derivatives decay to zero rapidly, that is, σ^{(k)} ∈ S(R) for k ≥ 2 (see Definition D.1). Thus, we have the following inequality:

$$|\partial_\xi^\alpha a^{nn}_{\theta_2}(\xi)| \le c_\alpha \langle\xi\rangle^{1-|\alpha|}, \qquad (11)$$

for all α ∈ N_0^n with |α| ≥ 1 and some positive constants c_α. Therefore, we bound the derivatives of the symbol network a^{nn}_θ(x, ξ) as follows:

$$|\partial_x^\beta \partial_\xi^\alpha a^{nn}_\theta(x, \xi)| = |\partial_x^\beta a^{nn}_{\theta_1}(x)|\,|\partial_\xi^\alpha a^{nn}_{\theta_2}(\xi)| \le c_\beta c_\alpha \langle\xi\rangle^{1-|\alpha|} = c_{\alpha\beta}\, \langle\xi\rangle^{1-|\alpha|}. \qquad (12)$$

Therefore, the symbol network a^{nn}_θ(x, ξ) is in S^1_{1,0}(T^n × R^n) as defined in Definition 2.1. Finally, using Theorem 2.5, we deduce that ã^{nn}_θ = a^{nn}_θ|_{T^n×Z^n} is in S^1_{1,0}(T^n × Z^n).
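The two facts about GELU used in this proof, |σ(z)| ≤ |z| and boundedness of σ′, can be checked numerically (illustrative sketch; GELU(z) = z·Φ(z) with Φ the standard normal CDF).

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def gelu(z):
    return z * Phi(z)

zs = [i / 100.0 for i in range(-1000, 1001)]  # grid on [-10, 10]

# |GELU(z)| <= |z| since 0 <= Phi(z) <= 1.
bound_ok = all(abs(gelu(z)) <= abs(z) + 1e-12 for z in zs)

# First derivative via central differences: bounded on the grid
# (analytically GELU'(z) = Phi(z) + z * phi(z), with a maximum near z ~ 1.5).
h = 1e-5
max_deriv = max(abs((gelu(z + h) - gelu(z - h)) / (2 * h)) for z in zs)
```

The derivative stays below about 1.13 everywhere, consistent with the bound used in Eq. (11).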
We introduce the theorem on the boundedness of a toroidal PDO as follows. Theorem 2.8. [29] (Boundedness of a toroidal PDO in the Sobolev space) Let m ∈ R, let k ∈ N be the smallest integer greater than n/2, and let a : T^n × Z^n → C be such that

$$|\partial_x^\beta \triangle_\xi^\alpha a(x,\xi)| \le C \langle\xi\rangle^{m-|\alpha|} \quad \text{for all } (x,\xi) \in T^n \times Z^n, \qquad (13)$$

and all multi-indices α with |α| ≤ k and all multi-indices β. Then the corresponding toroidal PDO T_a defined in Definition 2.4 extends to a bounded linear operator from the Sobolev space W^{p,s}(T^n) to the Sobolev space W^{p,s-m}(T^n) for all 1 < p < ∞ and any s ∈ R.
The restricted symbol network ã^{nn}_θ is in the toroidal symbol class S^1_{1,0}(T^n × Z^n) by Proposition 2.6 and thus satisfies the condition in Theorem 2.8 with m = 1. Hence the PDIO T^{nn}_{a_θ} with the restricted symbol network is a bounded linear operator from W^{p,s}(T^n) to W^{p,s-1}(T^n) for all 1 < p < ∞ and s ∈ R; that is, the PDIO is a continuous operator between Sobolev spaces. Therefore, we expect that the PDIO can be combined with a neural operator (Eq. (17)) to obtain a smooth and general solution operator. Its application to the neural operator is explained in Section 3.
The PDIO can also be extended to time-dependent problems. Consider a time-dependent PDE of the form

$$\frac{\partial u}{\partial t} = Lu, \qquad u(x,0) = u_0(x), \qquad (x,t) \in T^n \times [0,\infty). \qquad (14)$$
This is well posed and has a unique solution provided that the operator L is semi-bounded [11]. The solution is given by u(x, t) = e^{tL} u_0(x). In the case of the 1D heat equation, L is c∂_{xx} with diffusivity constant c. Then, the solution of the heat equation is given by

$$u(x,t) = \sum_{\xi \in \mathbb{Z}} e^{-4\pi^2 \xi^2 c t}\, \hat{u}_0(\xi)\, e^{2\pi i x \xi}. \qquad (15)$$
This shows that the mapping from u_0(x) to u(x, t) is a PDO with the symbol e^{-4π²ξ²ct}. Consequently, we propose time-dependent PDIOs given as follows:

$$T_a(f)(x,t) \approx T^{nn}_{a_\theta}(f)(x,t) = a^{nn}_{\theta_1}(x,t)\, \mathcal{F}^{-1}\big[\, a^{nn}_{\theta_2}(\xi,t)\, \mathcal{F}[f](\xi) \,\big](x), \qquad (16)$$

where a^{nn}_{θ1}(x, t) and a^{nn}_{θ2}(ξ, t) are time-dependent symbol networks. In the experiments, we verify that the time-dependent PDIO approximates the time-dependent symbol accurately even on a finer time grid than the one used for training. Therefore, we can obtain a continuous-in-time solution operator of the PDEs.
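The exact heat propagator of Eq. (15), which is the target a time-dependent symbol network must learn, can be applied directly with the FFT. This is an illustrative sketch with an assumed diffusivity c = 0.05; a single Fourier mode decays by exactly exp(−4π²ξ²ct).

```python
import numpy as np

c, t, n = 0.05, 0.5, 256
x = np.arange(n) / n
xi = np.fft.fftfreq(n, d=1.0 / n)

u0 = np.sin(2 * np.pi * x)                      # initial state: mode xi = 1
symbol = np.exp(-4 * np.pi**2 * xi**2 * c * t)  # analytic symbol of Eq. (15)
u = np.fft.ifft(symbol * np.fft.fft(u0)).real   # exact solution u(x, t)

expected = np.exp(-4 * np.pi**2 * c * t) * np.sin(2 * np.pi * x)
```

Because the symbol depends smoothly on t, evaluating it at times between the training grid points is what gives a continuous-in-time solution operator.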
3 Application to a nonlinear operator combined with a neural operator
In many problems, the solution operator of the PDE is nonlinear. Although the PDIO by itself is a linear operator, it can be combined with a nonlinear architecture to learn highly nonlinear operators. We introduce a neural operator architecture to learn infinite-dimensional operators effectively.
3.1 Neural operator [23]
Inspired by Green's functions of elliptic PDEs, [23] proposed an iterative neural operator to approximate the solution operators of parametric PDEs. To find a map from the function f(x) to the solution u(x), the input f(x) is lifted to a higher-dimensional representation f_0(x) = P(f(x)). Next, the iterations f_0 → f_1 → ... → f_T are applied, with the update f_t → f_{t+1} formulated as

$$f_{t+1}(x) = \sigma\big( W f_t(x) + \mathcal{K}_\phi(f_t)(x) \big), \qquad (17)$$

for t = 0, ..., T − 1, where W is a local linear transformation, K_φ is an integral operator parameterized by φ, and σ is a nonlinear activation function. The output u(x) = Q(f_T(x)) is the projection of f_T(x) by the local transformation Q.

The integral operator K_φ is parameterized using message passing on graph networks. Furthermore, [21] proposed a Fourier integral operator in which the integral operator K_φ is a convolution operator.
3.2 Proposed PDNO
A structure consisting of the PDIO is introduced for K_φ in Eq. (17). The proposed structure is similar to that of a fully connected neural network. Precisely, let f_t(x) ∈ R^{c_in}, where f_t(x) = [f_{t,i}(x) : R^n → R]_{i=1}^{c_in} with the number of input channels c_in and x ∈ R^n. Then, the integral operator K_a is expressed as follows:

$$\mathcal{K}_a(f_t)(x) \overset{\mathrm{def}}{=} \Big[ \sum_{i=1}^{c_{in}} T^{nn}_{a_\theta}(f_{t,i})(x) \Big]_{j=1}^{c_{out}} = \Big[ \sum_{i=1}^{c_{in}} a^{nn}_{\theta_1,i,j}(x)\, \mathcal{F}^{-1}\big[\, a^{nn}_{\theta_2,i,j}(\xi)\, \mathcal{F}[f_{t,i}](\xi) \,\big](x) \Big]_{j=1}^{c_{out}}, \qquad (18)$$

where θ_1, θ_2 are the parameters of each symbol network and c_out is the number of output channels with f_{t+1}(x) ∈ R^{c_out}. Indeed, the proposed integral operator (Eq. (18)) is a linear combination of PDIOs (Eq. (5)), as displayed in Figure 2. Therefore, the solution operator can be approximated by a model combining the proposed integral operator with the neural operator (Eq. (17)). The combined model is called a pseudo-differential neural operator (PDNO). In the experiments, we use three separate symbol networks a^{nn}_{θ1}(x), Re[a^{nn}_{θ2}(ξ)], and Im[a^{nn}_{θ2}(ξ)]. Each symbol network takes x or ξ ∈ R^n as input and has output dimension c_in × c_out.
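The multi-channel operator of Eq. (18) amounts to a per-frequency multiplication followed by a sum over input channels. The sketch below is an illustrative numpy version in which random real arrays stand in for the symbol-network outputs a_{θ1,i,j}(x) and a_{θ2,i,j}(ξ).

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, n = 3, 2, 64

f = rng.standard_normal((c_in, n))            # input channels f_{t,i}(x)
a_x = rng.standard_normal((c_in, c_out, n))   # stand-in for a_{theta1,i,j}(x)
a_xi = rng.standard_normal((c_in, c_out, n))  # stand-in for a_{theta2,i,j}(xi)

def K(f, a_x, a_xi):
    """Eq. (18): [ sum_i a_x[i,j] * IFFT[ a_xi[i,j] * FFT[f_i] ] ]_j."""
    f_hat = np.fft.fft(f, axis=-1)                           # (c_in, n)
    conv = np.fft.ifft(a_xi * f_hat[:, None, :], axis=-1).real
    return np.einsum("ijx,ijx->jx", a_x, conv)               # (c_out, n)

out = K(f, a_x, a_xi)
```

Since the operator is linear in f, doubling the input doubles the output; the nonlinearity of the PDNO comes only from the activation σ in Eq. (17).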
3.3 Comparing with the Fourier integral operator
In the Fourier neural operator proposed in [21], a neural operator structure with the integral operator K_R, called the Fourier integral operator, was introduced; it can be expressed as follows:

$$\mathcal{K}_R[f_t](x) = \Big[ \mathcal{F}^{-1}\Big[ \sum_{i=1}^{c_{in}} R_{ij}(\xi) \cdot \mathcal{F}[f_{t,i}](\xi) \Big](x) \Big]_{j=1}^{c_{out}}. \qquad (19)$$

As F^{-1} is a linear transformation, the summation and F^{-1} commute in Eq. (19). Thus, we can rewrite

$$\mathcal{K}_R[f_t](x) = \Big[ \sum_{i=1}^{c_{in}} \mathcal{F}^{-1}\big[ R_{ij}(\xi) \cdot \mathcal{F}[f_{t,i}](\xi) \big](x) \Big]_{j=1}^{c_{out}}. \qquad (20)$$
Therefore, each output channel of K_R is also a linear combination of PDOs with the parametric symbols R_ij(ξ). Note that the parameters R_ij(ξ) are directly parameterized on the discrete space Z^n and may not satisfy the toroidal symbol condition (Eq. (6)) in Definition 2.3. Furthermore, R_ij(ξ) depends only on ξ, whereas the symbol network a^{nn}_θ(x, ξ) also depends on x. Therefore, we expect the proposed model using the symbol network, which belongs to a certain symbol class, to exhibit enhanced performance.
4 Experiments
4.1 Toy example: 1D heat equation
In this experiment, we verify whether the proposed time-dependent PDIO actually learns the symbol of the analytic PDO. We consider the 1D heat equation given in Eq. (14). The solution operator of the 1D heat equation is a PDO, given in Eq. (15). We aim to learn the mapping from the initial state and time (u_0(x), t) to the solution u(x, t). The initial state u_0(x) is generated from the Gaussian random field N(0, 7^4(−Δ + 7²)^{−2.5}) with periodic boundary conditions, where Δ denotes the Laplacian. The diffusivity constant c and the spatial resolution are set to 0.05 and 2^10 = 1024, respectively. We use 1000 pairs of training data comprising 10 time grids t = 0.05 + 0.1n (n = 0, 1, ..., 9) for each of 100 initial states. We test on 20 initial states at the finer time grids t = 0.05 + 0.05n (n = 0, 1, ..., 19). The time-dependent PDIO (Eq. (16)) is used and achieves a relative L² error lower than 0.01 on both the training and test sets. Figure 3 shows the learned symbol network together with the analytic symbol given in Eq. (15) (see also Figure 7). Because the analytic symbol depends only on ξ and t, an x-coordinate is not required to plot the learned symbol.
4.2 Nonlinear solution operators of PDEs
In this section, we verify the PDNO on nonlinear PDE datasets. For all the experiments, we use the PDNO consisting of four iterations of the network described in Figure 2 and Eq. (18) with the nonlinear activation GELU. Fully connected neural networks with up to three layers and hidden dimension 64 are used for the symbol networks. The relative L² error is used as the loss function. Detailed hyperparameters are given in Appendix B.

We do not truncate the Fourier coefficients in any of the layers; that is, we use all of the available frequencies from −[s/2] to s/2 − 1. This is because the PDNO does not require additional learnable parameters even if all frequencies are used. However, because evaluations are required at numerous grid points, considerable memory is required during training. In practical settings, it is recommended to truncate the frequency space to appropriate maximum modes k_max. We observed little degradation in performance when truncation was used (see Appendix E.2). All experiments were conducted using up to five NVIDIA A5000 GPUs with 24 GB memory.

Benchmark models. We compare the proposed model with the multiwavelet-based model (MWT) and the Fourier neural operator (FNO), which are state-of-the-art approaches based on the neural operator architecture. For the difference between PDNO (a(x, ξ)) and PDNO (a(ξ)), see Section 4.3. We conducted experiments on three PDEs: Burgers' equation, Darcy flow, and the Navier-Stokes equation. For Burgers' equation and the Navier-Stokes equation, we use the same data as [21]. For Darcy flow, we regenerate the data according to the same data-generation scheme.
Burgers' equation The 1D Burgers' equation is a nonlinear PDE, which describes the interaction between the effects of nonlinear convection and diffusion as follows:
∂u/∂t = −u · ∂u/∂x + ν ∂²u/∂x²,  (x, t) ∈ (0, 1) × (0, 1],
u(x, 0) = u_0(x),  x ∈ (0, 1),    (21)
where u_0 is the initial state and ν is the viscosity. We aim to learn the nonlinear mapping from the initial state u_0(x) to the solution u(x, 1) at t = 1. We use the same Burgers data as in [21]. The initial state u_0(x) is generated from the Gaussian random field N(0, 5^4(−∆ + 25I)^{−2}) with periodic boundary conditions. The viscosity ν and the finest spatial resolution are set to 0.1 and 2^13 = 8192, respectively. The lower-resolution datasets are obtained by subsampling. We experiment with the same hyperparameters for all resolutions. We use 1000 train pairs and 100 test pairs.
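For intuition, such a Gaussian random field can be drawn spectrally: the eigenvalues of −∆ on the periodic domain are (2πk)², so the k-th Fourier mode is sampled with standard deviation 5²((2πk)² + 25)^{−1}. A rough sampler sketch (our own construction, with the overall scaling convention left loose; not the released data pipeline):

```python
import numpy as np

def sample_grf(s, tau=5.0, alpha=2.0, seed=0):
    """One draw from (a scaling of) N(0, tau^(2*alpha) (-Laplacian + tau^2 I)^(-alpha))
    on a periodic 1D grid with s points."""
    rng = np.random.default_rng(seed)
    k = np.arange(s // 2 + 1)                                 # rfft frequencies
    sqrt_eig = tau**alpha * ((2 * np.pi * k)**2 + tau**2) ** (-alpha / 2)
    coeffs = sqrt_eig * (rng.standard_normal(k.size)
                         + 1j * rng.standard_normal(k.size))
    coeffs[0] = coeffs[0].real                                # keep the field real
    return np.fft.irfft(coeffs, n=s)

u0 = sample_grf(8192)   # one initial state at the finest resolution
```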
The results from the Burgers data are listed in Table 1.
∇ · (a(x)∇u(x)) = f(x),  x ∈ (0, 1)²,
u(x) = 0,  x ∈ ∂(0, 1)²,    (22)
where u is the density of the fluid, a(x) is the diffusion coefficient, and f(x) is the external force. We aim to learn the nonlinear mapping from a(x) to the steady state u(x), fixing the external force f(x) = 1. The diffusion coefficient a(x) is generated from ψ_# N(0, (−∆ + 9I)^{−2}), where ∆ is the Laplacian with zero Neumann boundary conditions and ψ_# is the pointwise push-forward defined by ψ(x) = 12 if x > 0 and 3 elsewhere. This coefficient imposes ellipticity on the differential operator ∇ · (a(x)∇)(·). We generate a(x) and u(x) using a second-order finite difference method on a 512 × 512 grid; lower-resolution datasets are obtained by subsampling. We use 1000 train pairs and 100 test pairs and fix the hyperparameters for all resolutions.
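The push-forward step itself is just a pointwise threshold. A small illustration (the field below is plain white noise standing in for an actual N(0, (−∆ + 9I)^{−2}) draw):

```python
import numpy as np

def push_forward(field):
    """Pointwise push-forward psi: 12 where the field is positive, 3 elsewhere."""
    return np.where(field > 0, 12.0, 3.0)

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))   # placeholder for the smooth Gaussian field
a = push_forward(field)                 # piecewise-constant diffusion coefficient
# a takes only the values 3 and 12, so a >= 3 > 0 and the operator is elliptic.
```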
The results on the Darcy flow are presented in Table 2 for various resolutions s. The proposed model achieves the lowest relative error for all resolutions. In particular, in the case of s = 32, MWT and FNO exhibit their highest errors, whereas the proposed model maintains its performance even at low resolutions.
Navier-Stokes equation The Navier-Stokes equation describes the dynamics of a viscous fluid. In the vorticity formulation, the incompressible Navier-Stokes equation on the unit torus can be expressed as follows:
∂w/∂t + u · ∇w − ν∆w = f,  (x, t) ∈ (0, 1)² × (0, T],
∇ · u = 0,  (x, t) ∈ (0, 1)² × [0, T],
w(x, 0) = w_0(x),  x ∈ (0, 1)²,    (23)
where w is the vorticity, u is the velocity field, ν is the viscosity, and f is the external force. We use the same Navier-Stokes data as in [21] to learn the nonlinear mapping from w(x, 0), ..., w(x, 9) to w(x, 10), ..., w(x, T), fixing the force f(x) = 0.1(sin(2π(x_1 + x_2)) + cos(2π(x_1 + x_2))). The initial condition w_0(x) is sampled from N(0, 7^{1.5}(−∆ + 7²I)^{−2.5}) with periodic boundary conditions. We experiment with four Navier-Stokes datasets (see Table 3), where ν is the viscosity, T is the final time to predict, and N is the number of training samples; notably, the lower the viscosity, the more difficult the prediction. All datasets comprise 64 × 64 resolutions. We employ a recurrent architecture to propagate along the time domain: from w(x, 0), ..., w(x, 9), the model predicts the vorticity at t = 10, w̄(x, 10); then, from w(x, 1), ..., w(x, 9), w̄(x, 10), the model predicts the next vorticity w̄(x, 11). We repeat this process until t = T.
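The recurrent propagation can be written as a sliding-window loop. In the sketch below, `model` is a stand-in callable (the real predictor is the trained PDNO); only the window bookkeeping is the point:

```python
import numpy as np

def rollout(model, w_history, T):
    """Autoregressively predict w(., 10), ..., w(., T) from the 10 known states
    w(., 0), ..., w(., 9). w_history has shape (10, s, s)."""
    window = [w for w in w_history]           # the 10 most recent states
    preds = []
    for _ in range(10, T + 1):
        w_next = model(np.stack(window))      # predict the next vorticity field
        preds.append(w_next)
        window = window[1:] + [w_next]        # slide the window forward
    return np.stack(preds)

dummy_model = lambda hist: hist.mean(axis=0)  # placeholder for the trained PDNO
w_known = np.random.default_rng(0).standard_normal((10, 64, 64))
preds = rollout(dummy_model, w_known, T=19)   # predictions for t = 10, ..., 19
```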
For each experiment, we use 200 test samples. We use a batch size of 10 in the case of (ν, T, N) = (10^−3, 50, 1000) and 20 otherwise. Furthermore, we use fixed hyperparameters for all four experiments.
The results on the Navier-Stokes equation are presented in Table 3. On all four datasets, the proposed model exhibits comparable or superior performance. Notably, the relative error improves considerably for (ν, T, N) = (10^−5, 20, 1000), the dataset with the lowest viscosity. Figure 4 displays a sample prediction at t = 19, which is highly difficult to predict.
Additional experiments
On Burgers' equation and Darcy flow, we perform an additional experiment that does not use the symbol network a^nn_θ1(x) but only a^nn_θ2(ξ). In this case, the PDNO has the same structure as FNO except that the symbol is smooth; see PDNO (a(ξ)) in Table 1 and Table 2. Although slightly worse than the original PDNO, the PDNO without the x-dependent symbol still performs better than the other models, including FNO. This shows that the smoothness of the symbol of PDNO is important.
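The ablation is visible directly in the layer's action. With a separable symbol, the PDIO applies K_a u = a_x(x) · F^{−1}(a_ξ(ξ) · F(u)); dropping a_x leaves a pure Fourier multiplier, structurally like an FNO layer but with the weights produced by a smooth symbol network. A 1D single-channel sketch (the symbol values below are arbitrary stand-ins for a^nn_θ1 and a^nn_θ2):

```python
import numpy as np

def pdio_apply(u, a_x, a_xi):
    """K_a u = a_x(x) * IFFT(a_xi(xi) * FFT(u)) for a separable symbol a(x, xi)."""
    return a_x * np.fft.ifft(a_xi * np.fft.fft(u)).real

N = 256
x = np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)

a_xi = 1.0 / (1.0 + xi**2)                       # smooth frequency symbol
a_x_full = 1.0 + 0.5 * np.sin(2 * np.pi * x)     # x-dependent part: full PDNO
a_x_none = np.ones(N)                            # PDNO (a(xi)): no x-dependence

u = np.cos(2 * np.pi * 3 * x)                    # a single mode at |xi| = 3
out_full = pdio_apply(u, a_x_full, a_xi)
out_freq_only = pdio_apply(u, a_x_none, a_xi)    # = u / (1 + 3^2) = u / 10
```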
On the Navier-Stokes equation with ν = 1e−5, we compare the train and test relative L2 errors along time t in Figure 5. For all models, the test errors grow exponentially with time t. Among them, PDNO consistently demonstrates the smallest test errors for all t. More notable is the difference between the solid and dashed lines, showing that MWT and FNO suffer from overfitting, whereas PDNO does not. This might be related to the smoothness of the models' symbols. Furthermore, the symbols of PDNO and FNO are visualized in Figure 6; the x-axis and y-axis represent frequency domains, and since we use real-valued functions, the second coordinate has half the range of the first.
Conclusion
Based on the theory of PDO, we developed a novel PDIO and PDNO framework that efficiently learns mappings between function spaces. The proposed symbol networks are in a toroidal symbol class that renders the corresponding PDIOs continuous between Sobolev spaces on the torus, which considerably improves the learning of solution operators in most experiments. This study revealed an excellent ability to learn operators based on the theory of PDO. However, there is room for improvement in highly complex PDEs such as the Navier-Stokes equation, and the time-dependent PDIOs are difficult to apply to nonlinear architectures. We expect to address these problems by using advanced operator theories [5, 13, 6], so that operator learning can be applied to a wider range of engineering and physical problems.
A Notations
We list the main notations used throughout this paper in Table 4.

C The reason for considering the symbol network on domain T^n × Z^n

In this section, we detail why the proposed model should be addressed in T^n × Z^n instead of T^n × R^n. For convenience, we assume that n = 1. Let f : T → R, and consider its N-point discretization f(1/N) = y_0, f(2/N) = y_1, ..., f(N/N) = f(1) = y_{N−1}.
Then, the discrete Fourier transform (DFT) of the sequence {y_n}_{0≤n≤N−1} is expressed as follows:
ξ_k = (1/N) Σ_{n=0}^{N−1} y_n e^{−2πikn/N},    (24)
and the inverse discrete Fourier transform (IDFT) of {ξ_k}_{0≤k≤N−1} is expressed as follows:
y_n = Σ_{k=0}^{N−1} ξ_k e^{2πikn/N}.    (25)
As N goes to ∞, we can see that
lim_{N→∞} ξ_k = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} f((n+1)/N) e^{−2πikn/N} → ∫_0^1 f(x) e^{−2πikx} dx = f̂(k)    (26)
and

lim_{N→∞} y_n = lim_{N→∞} f((n+1)/N) = lim_{N→∞} Σ_{k=0}^{N−1} ξ_k e^{2πikn/N} → Σ_{k=0}^{∞} f̂(k) e^{2πikx} = f(x),    (27)
where x = n/N. Thus, the DFT is an approximation of the integral on T, and the IDFT is an approximation of the infinite sum on Z. Therefore, the theory of PDO on T^n × Z^n is better suited to our model.
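This limit is easy to check numerically. With the normalization of Eq. (24) and f(x) = cos(2πx), the Fourier coefficient is f̂(1) = 1/2, and the DFT coefficient ξ_1 approaches it as N grows (a quick sanity check, not from the paper):

```python
import numpy as np

def dft_coeff(f, N, k):
    """xi_k = (1/N) sum_n y_n e^{-2 pi i k n / N} with y_n = f((n+1)/N), as in Eq. (24)."""
    n = np.arange(N)
    return np.sum(f((n + 1) / N) * np.exp(-2j * np.pi * k * n / N)) / N

f = lambda x: np.cos(2 * np.pi * x)
errs = [abs(dft_coeff(f, N, k=1) - 0.5) for N in (8, 64, 512)]
print(errs)  # decreasing toward 0 as N grows
```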
D Activation functions for symbol network
In this section, we discuss the activation function for the symbol network. We proved Proposition 2.6 for the case where the GELU activation function is used for the symbol network, but other activation functions can be used as well. To explain this, we first define the Schwartz space [28] as follows:
Definition D.1. The Schwartz space S(R^n) is the topological vector space of functions f : R^n → C such that f ∈ C^∞(R^n) and

z^α ∂^β f(z) → 0, as |z| → ∞,    (28)
for every pair of multi-indices α, β ∈ N_0^n.
That is, the Schwartz space consists of smooth functions whose derivatives decay at infinity faster than any power. As mentioned in the proof of Proposition 2.6, it can easily be shown that the second and higher derivatives of GELU are in the Schwartz space S(R). Because GELU is defined as σ(z) = zΦ(z) with Φ(z) = (1/√(2π)) ∫_{−∞}^{z} exp(−u²/2) du, the second and higher derivatives of GELU are sums of exponentially decaying terms involving exp(−z²/2). Thus, σ^(k) ∈ S(R) for k ≥ 2.
Next, we prove that another activation function φ(z) yields a symbol network in the symbol class S^1_{1,0}(T^n × R^n) if the difference between φ(z) and GELU σ(z) is in the Schwartz space; we call such a φ(z) a GELU-like activation function. It can easily be shown that φ(z) is bounded by the linear function |z|, because GELU is. Since the Schwartz space is closed under differentiation, φ(z) − σ(z) ∈ S(R) implies φ^(k)(z) − σ^(k)(z) ∈ S(R) for k ∈ N. Because GELU satisfies |σ′(z)| ≤ c_α and σ^(k) ∈ S(R) for k ≥ 2, the activation function φ(z) also satisfies |φ′(z)| ≤ c_α and φ^(k) ∈ S(R) for k ≥ 2. Therefore, the proof of Proposition 2.6 goes through with φ(z) in place of GELU σ(z). GELU-like activation functions, such as Softplus [8] and Swish [27], satisfy the aforementioned assumption and can thus be used for the symbol network in our PDIO.
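The decay used here can be made concrete. Since σ(z) = zΦ(z), differentiating twice gives σ''(z) = (2 − z²)φ(z), where φ is the standard normal density, which vanishes faster than any power of z. A quick numerical check of both the closed form and the decay (our verification sketch, not from the paper):

```python
import math
import numpy as np

phi = lambda z: np.exp(-z**2 / 2) / math.sqrt(2 * math.pi)        # N(0,1) density
Phi = lambda z: 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))
gelu = lambda z: z * Phi(z)
sigma2 = lambda z: (2.0 - z**2) * phi(z)       # closed form of gelu''

z = np.array([-3.0, 0.0, 1.5])
h = 1e-4
fd = (gelu(z + h) - 2 * gelu(z) + gelu(z - h)) / h**2   # finite-difference gelu''
fd_gap = float(np.max(np.abs(fd - sigma2(z))))

# Schwartz-type decay: even z^5 * sigma''(z) is negligible at moderate |z|.
decay = abs(10.0**5 * sigma2(10.0))
```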
We can also easily show that the symbol network a^nn_θ(x, ξ) with tanh(z) = (e^z − e^{−z})/(e^z + e^{−z}) is in S^0_{1,0}(T^n × R^n). In the proof of Proposition 2.6, we used the characteristics of GELU and its higher derivatives. The tanh function is bounded, and its first and higher derivatives are in the Schwartz space. Therefore, the neural network a^nn_θ2(ξ) satisfies the following boundedness:
|∂^α_ξ a^nn_θ2(ξ)| ≤ c_α,  if |α| = 0,    (29)
|∂^α_ξ a^nn_θ2(ξ)| ≤ c_α ⟨ξ⟩^{−|α|},  if |α| ≥ 1.    (30)
Note that the boundedness of the neural network a^nn_θ1(x) is the same as in the case of GELU. Thus, we can bound the derivatives of the symbol network a^nn_θ(x, ξ) as follows:
|∂^β_x ∂^α_ξ a(x, ξ)| ≤ c_{αβ} ⟨ξ⟩^{−|α|}.    (31)
Therefore, the symbol network with the tanh activation function is in S^0_{1,0}(T^n × R^n). Similarly, it is easy to prove that the symbol network with the sigmoid function 1/(1 + e^{−z}) is also in the symbol class S^0_{1,0}(T^n × R^n). Therefore, the PDIOs with these two activation functions are bounded linear operators from the Sobolev space W^{p,s}(T^n) to the Sobolev space W^{p,s}(T^n) for all 1 < p < ∞ and any s ∈ R.

In the 1D heat equation experiments, we assume that the symbol is decomposed as a(x, ξ, t) ≈ a^nn_θ1(x, t) × a^nn_θ2(ξ, t). Figure 7 shows the learned symbol network a^nn_θ1(x, t) on (x, t) ∈ T × [0.05, 1]. We can see that a^nn_θ1(·, t) is almost a constant function for each t ∈ [0.05, 1]. In this respect, a^nn_θ1(x, t) is treated as a function of t by taking the average along the x-dimension when visualizing a^nn_θ1(x, t) a^nn_θ2(ξ, t) in Figure 3. In addition, Figure 8 visualizes a sample prediction on the 1D heat equation.
E.2 Changes in errors according to k_max
As mentioned in Section 4.2, we use all possible modes. Although PDNO does not require additional parameters to use all modes, it demands more memory in the learning process. Therefore, we perform an additional experiment on Burgers' equation and Darcy flow by limiting the number of modes k_max. Figure 9 shows the changes in the test relative L2 error along k_max. Even with small k_max, PDNO still outperforms MWT and FNO (Table 1 and Table 2), and for k_max ≥ 20, PDNO obtains a comparable relative L2 error on both datasets.
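The mild effect of truncation is what one expects for smooth signals, whose Fourier coefficients decay rapidly. A toy illustration of the truncation operation and its error (not the paper's experiment):

```python
import numpy as np

def truncate_modes(u, k_max):
    """Zero out all Fourier modes of u with |xi| > k_max."""
    U = np.fft.fft(u)
    xi = np.fft.fftfreq(len(u), d=1.0 / len(u))
    U[np.abs(xi) > k_max] = 0.0
    return np.fft.ifft(U).real

N = 1024
x = np.arange(N) / N
u = np.exp(np.sin(2 * np.pi * x))       # a smooth periodic signal

errs = {k: float(np.linalg.norm(truncate_modes(u, k) - u) / np.linalg.norm(u))
        for k in (2, 5, 20)}
print(errs)  # the error drops rapidly as k_max grows
```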
E.3 Navier-Stokes equation with ν = 1e − 5
Samples with the lowest and highest error Figure 10 and Figure 11 show the samples with the lowest and the highest error, respectively. PDNO consistently obtains the lowest error at all time steps of both samples.
PDNO and FNO with different numbers of channels We compare the performance of PDNO and FNO as the number of channels varies. For a fair comparison, truncation is not used in the Fourier space for either FNO or PDNO. Furthermore, PDNO utilizes only a single symbol network a^nn_θ2(ξ), not a^nn_θ1(x). In Figure 12, as the number of channels increases, the test error decreases for both models. PDNO achieves a lower test error than FNO and also shows a small gap between the train error and the test error.
Figure 2: Structure of Eq. (17) using the integral operator K_a in Eq. (18) with c_in = 3 and c_out = 2. Each black solid line represents a PDIO with symbol network a^nn_ij.
Figure 3: Visualization of the learned symbol from the time-dependent PDIO, a^nn_θ1(x, t) a^nn_θ2(ξ, t) (top), and the analytic symbol a(x, ξ, t) = e^{−4×0.05π²ξ²t} (bottom) of the solution operator of the 1D heat equation, shown on (ξ, t) ∈ [−12, 12] × [0.05, 1]. Although the PDIO learns from a sparse time grid, it obtains an accurate symbol for all t ∈ [0.05, 1]. Note that the learned a^nn_θ1(x, t) is a constant function of x, i.e., a^nn_θ1(x, t) = c(t) (see Figure 7).
Figure 4: Example of a prediction on the Navier-Stokes data with ν = 1e−5, showing the prediction w̄(x, 19) from inputs [w(x, 0), ..., w(x, 9)]. Each value at the top of the figure is the relative L2 error between the true w(x, 19) and the corresponding prediction.
Table 1 lists the Burgers results for the different resolutions s: the proposed model outperforms both MWT and FNO for all resolutions, with remarkable improvements especially at low resolutions.

Darcy flow The Darcy flow problem is a diffusion equation with an external force, which describes the flow of a fluid through a porous medium. The steady state of the Darcy flow on the unit box is expressed as follows:
Figure 5: Comparison of the train and test relative L2 errors over the time horizon t = 10, ..., 19 on the Navier-Stokes equation with ν = 1e−5.
Table 3: Benchmark (relative L2 error) on the Navier-Stokes equation for various viscosities ν, time horizons T, and numbers of data N. The four settings are (ν, T, N) = (10^−3, 50, 1000), (10^−4, 30, 1000), (10^−4, 30, 10000), and (10^−5, 20, 1000); the lower the viscosity, the more difficult the prediction. All datasets comprise 64 × 64 resolutions.
Figure 6: Examples of the real part of the learned symbols a^nn_ij(ξ) from the Navier-Stokes data with ν = 1e−5.
E Additional figures and experiments

E.1 1D heat equation: Symbol network a^nn_θ1(x, t)
Figure 7: Learned symbol a^nn_θ1(x, t) from the 1D heat equation.
Figure 8: A sample prediction on the 1D heat equation from a PDIO. The model is trained on a 1024 × 10 dataset and evaluated on 1024 × 20. Dashed lines on the surface are contour lines.
Figure 9: Test relative L2 error depending on the maximum modes k_max of PDNO on Burgers' equation (resolution s = 8192) and Darcy flow (resolution s = 256).
Figure 10: Comparison of the predictions on the Navier-Stokes equation with ν = 1e−5. This test sample shows the lowest relative L2 error on average over the three models.
Figure 11: Comparison of the predictions on the Navier-Stokes equation with ν = 1e−5. This test sample shows the greatest relative L2 error on average over the three models.
Figure 12: Training and test errors of the proposed model and FNO on the Navier-Stokes data with ν = 1e−5 according to the number of channels.
Table 1: Benchmark (relative L2 error) on Burgers' equation for different resolutions s.

Networks         s = 256    s = 512    s = 1024   s = 2048   s = 4096   s = 8192
PDNO (a(x, ξ))   0.000685   0.000849   0.00110    0.00118    0.00178    0.00165
PDNO (a(ξ))      0.000903   0.00122    0.00125    0.00129    0.00191    0.00225
MWT LEG          0.00199    0.00185    0.00184    0.00186    0.00185    0.00178
MWT CHB          0.00402    0.00381    0.00336    0.00395    0.00299    0.00289
FNO              0.00332    0.00333    0.00377    0.00346    0.00324    0.00336
Table 2: Benchmark (relative L2 error) on Darcy flow for different resolutions s.

Networks         s = 32   s = 64   s = 128   s = 256   s = 512
PDNO (a(x, ξ))   0.0033   0.0025   0.0016    0.0014    0.0019
PDNO (a(ξ))      0.0038   0.0028   0.0025    0.0025    0.0022
MWT LEG          0.0162   0.0108   0.0093    0.0088    0.0092
FNO              0.0178   0.0112   0.0103    0.0101    0.0102
Table 4: Notations

A                            an input function space
U                            an output function space
T : A → U                    an operator from A to U
x ∈ R^n or T^n               a variable in the spatial domain
ξ ∈ R^n or Z^n               a variable in the Fourier space
f̂(ξ)                         the Fourier transform of the function f(x)
S^m_{ρ,δ}                    a Euclidean (or toroidal) symbol class
a(x, ξ) ∈ S^m_{ρ,δ}          a Euclidean (or toroidal) symbol
a^nn_θ(x, ξ)                 a symbol network parameterized by θ
T_a : A → U                  a PDO with the symbol a(x, ξ)
T^nn_{a_θ} : A → U           a PDIO with a symbol network a^nn_θ(x, ξ)
F : A → U, F^{−1} : U → A    the Fourier transform and its inverse
‖ξ‖                          the Euclidean norm
⟨ξ⟩                          (1 + ‖ξ‖²)^{1/2}
△^α_ξ                        a difference operator of order α on ξ
k_max                        the maximum number of Fourier modes
B Hyperparameters
Table 5: Hyperparameters for learning PDNOs on each dataset. # layers, # hidden, and activation refer to the symbol networks.

Data                     Batch size   Learning rate   Weight decay   Epochs   Step size   # channel   # layers   # hidden   Activation
Heat equation            20           1 × 10^−2       1 × 10^−6      10000    2000        1           2          40         GELU
Burgers' equation        20           5 × 10^−3       0              1000     100         64          2          64         Tanh
Darcy flow               20           1 × 10^−2       1 × 10^−6      1000     200         20          3          32         GELU
Navier-Stokes equation   20           5 × 10^−3       1 × 10^−6      1000     200         20          2          32         GELU
[1] Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. Comput. Mech., 64(2):525-545, 2019.
[2] Louis Boutet de Monvel. Boundary problems for pseudo-differential operators. Acta Math., 126(1-2):11-51, 1971.
[3] Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911-917, 1995.
[4] R. Courant and D. Hilbert. Methods of mathematical physics. Vol. I. Interscience Publishers, Inc., New York, N.Y., 1953.
[5] J. J. Duistermaat. Fourier integral operators, volume 130 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, 1996.
[6] J. J. Duistermaat and L. Hörmander. Fourier integral operators. II. Acta Math., 128(3-4):183-269, 1972.
[7] Weinan E and Bing Yu. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Commun. Math. Stat., 6(1):1-12, 2018.
[8] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 315-323. JMLR Workshop and Conference Proceedings, 2011.
[9] Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 481-490, 2016.
[10] Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. Advances in Neural Information Processing Systems, 34, 2021.
[11] Jan S. Hesthaven, Sigal Gottlieb, and David Gottlieb. Spectral methods for time-dependent problems, volume 21. Cambridge University Press, 2007.
[12] Philipp Holl, Vladlen Koltun, and Nils Thuerey. Learning to control PDEs with differentiable physics. arXiv preprint arXiv:2001.07457, 2020.
[13] Lars Hörmander. Fourier integral operators. I. Acta Math., 127(1-2):79-183, 1971.
[14] Lars Hörmander. The analysis of linear partial differential operators. I. Distribution theory and Fourier analysis. Classics in Mathematics. Springer-Verlag, Berlin, 2003. Reprint of the second (1990) edition.
[15] Lars Hörmander. The analysis of linear partial differential operators. III. Pseudo-differential operators. Classics in Mathematics. Springer, Berlin, 2007. Reprint of the 1994 edition.
[16] Hyung Ju Hwang, Jin Woo Jang, Hyeontae Jo, and Jae Yong Lee. Trend to equilibrium for the kinetic Fokker-Planck equation via the neural network approach. J. Comput. Phys., 419:109665, 2020.
[17] Rakhoon Hwang, Jae Yong Lee, Jin Young Shin, and Hyung Ju Hwang. Solving PDE-constrained control problems using operator learning. arXiv preprint arXiv:2111.04941, 2021.
[18] Yuehaw Khoo, Jianfeng Lu, and Lexing Ying. Solving parametric PDE problems with artificial neural networks. European J. Appl. Math., 32(3):421-435, 2021.
[19] Georgios Kissas, Jacob Seidman, Leonardo Ferreira Guilhoto, Victor M. Preciado, George J. Pappas, and Paris Perdikaris. Learning operators with coupled attention. arXiv preprint arXiv:2201.01032, 2022.
[20] Jae Yong Lee, Jin Woo Jang, and Hyung Ju Hwang. The model reduction of the Vlasov-Poisson-Fokker-Planck system to the Poisson-Nernst-Planck system via the deep neural network approach. ESAIM Math. Model. Numer. Anal., 55(5):1803-1846, 2021.
[21] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
[22] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. arXiv preprint arXiv:2006.09535, 2020.
[23] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020.
[24] Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
[25] Mohammad Amin Nabian and Hadi Meidani. A deep neural network surrogate for high-dimensional random partial differential equations. arXiv preprint arXiv:1806.02957, 2018.
[26] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys., 378:686-707, 2019.
[27] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
[28] Michael Reed and Barry Simon. Methods of modern mathematical physics. I. Functional analysis. Academic Press, New York-London, 1972.
[29] Michael Ruzhansky and Ville Turunen. Pseudo-differential operators and symmetries: background analysis and advanced topics, volume 2. Springer Science & Business Media, 2009.
[30] Justin Sirignano and Konstantinos Spiliopoulos. DGM: a deep learning algorithm for solving partial differential equations. J. Comput. Phys., 375:1339-1364, 2018.
[31] Michael Eugene Taylor. Pseudodifferential Operators (PMS-34). Princeton University Press, 2017.
[32] Sifan Wang, Hanwen Wang, and Paris Perdikaris. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. arXiv preprint arXiv:2103.10974, 2021.
[33] Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. J. Comput. Phys., 366:415-447, 2018.
[34] Yinhao Zhu, Nicholas Zabaras, Phaedon-Stelios Koutsourelakis, and Paris Perdikaris. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J. Comput. Phys., 394:56-81, 2019.
Introduction
The most successful theories we have, Quantum Mechanics (QM) and General Relativity (GR), seem to be mutually inconsistent, to have some internal tensions, and also some tensions with other observed phenomena. Should we, in order to solve these tensions, make radical changes, or should we rather take a more conservative route? If we learned something from the history of physics, it is that even the best theories we have tend to be replaced by better theories, so we should expect that this will likely continue to happen. However, at least some important principles survived when better theories replaced the old ones. To get results quicker, it may be tempting to focus on solving a particular problem, for example the measurement problem in QM, and sacrifice in the process more general well-tested principles, for example unitary evolution or relativistic invariance. But before trying to solve such problems, it may be useful to see how far we can push the validity of each principle, to see where they really break down, rather than declaring them dead the first time we meet an obstacle. In general, theories trying to solve the internal tensions of QM are what we call interpretations of QM. With the exception of the Many Worlds Interpretation and the proposal discussed in this article, the major interpretations propose new physics -either extensions with "hidden variables", or modifications of the unitary dynamics described by the Schrödinger equation.
In this article, I apply this conservative route to unitary evolution, to push it as far as possible and see if it really breaks down during measurements, if it is really necessary to invoke a wavefunction collapse. This is a natural continuation of some results developed in previous works of the author [93,100,94,102,103], and of some earlier ones developed by Schulman [86,87,88,89] (I will explain the difference between these two approaches in section §3). At the same time, I try to be conservative about GR as well. In the case of GR, the most common proposals are to modify the theory, or to obtain it as a limit of a supposedly better theory, most likely a theory of Quantum Gravity. But, again, before doing this, it would be useful to push the limits of the principles of GR. There are too many possibilities to take into account when advancing towards Quantum Gravity -even if most of those that we know are incomplete or have problems -and the only firm ground we have are the principles that we know to be well tested, so it may help to find out where their limits are, and see if they can agree with each other.
There are strong reasons to expect that the wavefunction is real, ontic, rather than a mere collection of probabilities about unseen states or variables or outcomes [90,55,29,30,79,54,81,74], and we will see some arguments supporting this in section §2. In section §3, I will present some problems which would arise if there is a wavefunction collapse taking place like a discontinuous projection. The possibility that what appears to be collapse takes place by unitary evolution alone is discussed in section §4. This turns out to be possible, but it can only happen for a small subset of the Hilbert space. This limitation of the allowed solutions gives the appearance of a conspiracy among the initial conditions of the quantum particles, or of retrocausality, depending on how you describe the order of the events in time. Retrocausal interpretations have been known for a long time to be able to save locality [35,49,80,31,32,67,118,77,106,6,78,107,1,2,4,27,28,119]. Bell's theorem forces us to choose between nonlocality, which is at least ontologically at odds with Special Relativity (SR) even if satisfies the no-signaling principle, and retrocausality or other ways to violate statistical independence, which are or can be consistent with relativistic local causality, do not signal backwards in time, but require the past to depend on the future as well as the other way around. Can there be a principle which, rather than modifying the dynamics or extending the theory with new variables, just keeps the "good" solutions of the Schrödinger equation, and exhibits this apparent retrocausality in a natural way? In section §5 I argue that this can be achieved by a global consistency condition, without requiring modifications of the important principles, and in fact it can save them. Not only is this not in tension with Special or General Relativity, but in fact the latter can provide an explanation, or at least a framework, for the former. 
Section §6 discusses how probabilities can arise, but unfortunately a derivation of the Born rule in this framework is still an open problem. Some apparent tensions between QM and GR are discussed in section §7, and it is shown that in this framework some of them are naturally resolved or avoided.
Along this discussion, a version of the block universe emerges as the natural framework for this interpretation of QM (section §8). The observer experiencing the flow of time will have an alternative to the retrocausal description, in which the state of the universe is initially undetermined, and becomes more and more determined with each new measurement. From the bird's eye view of an observer outside of time, this appears merely as the result of the global consistency condition. I call this framework the post-determined block universe because of the necessity to take into account all of the constraints imposed by quantum measurements, before determining the global solution.
Is there a reality beyond the wavefunction?
Schrödinger's equation was discovered when trying to explain the energy levels and structure of the atoms [83]. It was then extended to the relativistic case, including creation and annihilation of particles. The resulting Quantum Field Theory (QFT) provides an accurate description of the behavior of particles. If this would be all there is to be explained, Quantum Theory would be the perfect theory. A simple story that explains almost everything: many-particle states, which evolve in time by being transformed by a unitary operator. But this is not the full story.
In the following I will refer to the evolution equation by the name of Schrödinger, even though the relativistic versions are due to Klein-Gordon, Dirac, and others, and even if field quantization is in place. I will do this for simplicity, based on the fact that these equations have the general form of a Schrödinger equation or the square of such an equation, and the evolution is unitary, of the form (1).
(1) |ψ(t)⟩ = Û(t, t₀) |ψ(t₀)⟩,
where |ψ(t)⟩ is a state vector in a suitable Hilbert space H, and Û is the unitary evolution operator. In QFT the state vectors |ψ(t)⟩ are replaced by linear operators ψ̂ acting on a special vacuum state |0⟩ to create the states |ψ(t)⟩, and their evolution is still unitary. But there are some important problems with Quantum Theory which we have to resolve. On the one hand, gravity, particularly as it is understood in Einstein's Theory of General Relativity, seems to not fit in this description. On the other hand, the world appears macroscopically classical, and measurements have definite outcomes. A quantum observation always finds the observed system to be in an eigenstate of the Hermitian operator corresponding to the observed property, and the outcome to be an eigenvalue. Since the probability that the state was already an eigenstate by chance is zero, there seems to be an inconsistency between the unitary evolution and the fact that the observed system is always found to be in an eigenstate of the Hermitian operator. To solve this inconsistency, it is proposed that the measurement itself is accompanied by some projection of the state vector which does not seem to follow from the Schrödinger equation itself, and even breaks it. This solution allows us to obtain a prediction of the probabilities associated to each outcome of the measurement, but we still need to understand how or why it happens, so we have what is called "the measurement problem". These problems are indicators that there's more to the story than it seems.
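Equation (1) can be illustrated numerically. The sketch below uses a toy two-level Hamiltonian (an arbitrary illustrative choice, not taken from the text; ħ = 1), builds Û(t, 0) = exp(−iHt) by diagonalization, and checks that it is unitary, so the norm of the state is preserved at all times:

```python
import numpy as np

# Hypothetical 2-level Hamiltonian (e.g. a spin-1/2 in a field); hbar = 1.
H = np.array([[1.0, 0.5], [0.5, -1.0]])

def U(t):
    """Evolution operator U(t, 0) = exp(-i H t), via diagonalization of H."""
    E, V = np.linalg.eigh(H)                       # H = V diag(E) V†
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)         # |psi(0)>
psi_t = U(2.0) @ psi0                              # |psi(t)> = U(t, 0) |psi(0)>

# Unitarity: U† U = I, hence <psi(t)|psi(t)> = <psi(0)|psi(0)> = 1.
assert np.allclose(U(2.0).conj().T @ U(2.0), np.eye(2))
assert np.isclose(np.vdot(psi_t, psi_t).real, 1.0)
```

The same construction works for any Hermitian H, since eigh always yields a unitary eigenvector matrix.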
Bohr's solution was to take as given the classicality of the macroscopic level [20], and he described quantum measurements by assuming the apparatus and the outcomes of the measurements to be classical, and by accepting the wave-particle duality. Heisenberg's views cross-pollinated his ideas [63], leading to the Copenhagen Interpretation, which avoids discussing anything but the outcomes of the measurements. It is often considered that restraining the discussion to the outcomes of the measurements makes the problems vanish, or even that these are not real problems. But not everyone is convinced that this makes the problems go away, as we know from the objections by some, including Einstein and de Broglie. Also Schrödinger explained the problem of classicality in his famous cat thought experiment [84].
Avoiding discussing a problem is useful when solving other, independent problems, but it does not make the problem go away. These foundational problems matter if we really want to understand our world, in particular how General Relativity and Quantum Theory can coexist in a consistent way.
The fact that they are indeed problems becomes apparent when we try to understand what the wavefunction is. When we are talking about atoms or other systems of particles which are stationary or even interact, the wavefunction seems to be like a classical field if the state is separable, or at least like a classical field on the configuration space in general. Something has to be real there, something has to carry the energy and momentum, and curve spacetime according to the Einstein equation. The classical limit of quantum electrodynamics comes with the appropriate stress-energy tensor, which seems to be distributed in space like the wavefunction is. The inertia properties of matter, even when they are not subject to quantum measurement, are consistent again with such a stress-energy tensor.
But when we perform a measurement, the wavefunction seems to turn into a probabilistic device, whose role is to predict the probabilities to jump from one state to another during a measurement. The probability is given by the squared scalar product between the state before and the eigenstate after the measurement (which is in fact the Born rule [21] extended from positions to any observable).
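The extended Born rule described above can be made concrete with a minimal example; the particular state and measurement basis below are illustrative choices, not taken from the text:

```python
import numpy as np

# Spin-1/2 prepared along +x, then measured along z.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # |+x> = (|up> + |down>)/sqrt(2)
up = np.array([1.0, 0.0], dtype=complex)                 # S_z eigenstates
down = np.array([0.0, 1.0], dtype=complex)

# Born rule: probability = squared scalar product with the post-measurement eigenstate.
p_up = abs(np.vdot(up, psi)) ** 2
p_down = abs(np.vdot(down, psi)) ** 2
# p_up = p_down = 0.5, and the probabilities sum to 1.
```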
Is this a simple ambiguity, or a contradiction? How can we interpret the wavefunction as being ontic when we do not look at it, and epistemic during measurements? Obviously the Copenhagen Interpretation chooses a quick way out, by denying the reality of the wavefunction (or at best claiming that it is irrelevant), to avoid this contradiction.
This apparent contradiction is visible in the more rigorous mathematical formulations of Quantum Mechanics by Dirac [39] and von Neumann [113]. Accordingly, the state vector is always well defined and has a unitary time evolution, according to the Schrödinger equation, but when it is measured, it projects to an eigenstate of the operator corresponding to the observable, according to the Born rule. The apparent conflict becomes manifest when we try to formulate the theory in a mathematically rigorous way, and the infamous wavefunction collapse seems to be unavoidable.
As the father of General Relativity, Einstein realized this tension between realism and the purely epistemic view of the Copenhagen Interpretation in its full depth. He never ceased to hope that there is a better explanation, and considered that Quantum Mechanics is incomplete in some sense. He made his most concrete formulation of the problem together with his collaborators Podolsky and Rosen in [41], where they proposed the famous EPR experiment.
There are attempts to regard particles as well-localized to a point, and to interpret the wavefunction as just giving the probability to find the point-particle at given positions. But even the most successful theory that postulates point-particles, de Broglie-Bohm theory [37,36,17,40], attributes an ontological status to the wavefunction. Regardless of the proposed interpretation of particles as points, to avoid inconsistencies with the experiments, we have to assign physical properties like charge and mass densities, and ultimately all physical properties except for definite positions, to the wave itself. Both de Broglie and Bohm [17,18,19], as well as another major supporter of this interpretation, Bell, realized this, and kept both the point particle and the wavefunction as ontic.
There are other reasons to take the wavefunction as ontic, rather than as a purely epistemic or probabilistic device, as implied by some results and no-go theorems [90,55,29,30,79,54,81,74].
For all these reasons, I suggest that if we insist on assigning an ontological interpretation to quantum particles, this should be done by considering the wavefunctions or quantum fields as fields, similar to the classical ones. The wavefunction is normally defined on the configuration space and not simply on the physical space, but maybe this difference is not necessarily a problem. In fact, it is possible to faithfully represent the wavefunction in terms of fields on the physical 3D-space, and they are local under the unitary evolution [104]. Assigning an ontological interpretation to the wavefunction is not straightforward, and comes with some puzzles which I will discuss in section §4.
3. What would happen if the wavefunction collapse were discontinuous?
Regardless of whether we take the wavefunction as ontic or not, it would be strange to have a unitary evolution law valid all the time except during measurements, when it is suddenly replaced, apparently without a cause, by a nonunitary projection or collapse. We expect the physical laws to be universal. So maybe there is an underlying universal law which appears like the Schrödinger equation almost all times, except for some discrete moments, when it appears like a projection.
But there is another problem: as long as we assume that the wavefunction collapses, the conservation laws are violated. We could guess this already, because in Quantum Mechanics the conserved quantities are those whose operators commute with the Hamiltonian, but during a collapse the Hamiltonian evolution is replaced by a projection, which does not commute with the operators corresponding to conserved quantities. Such violations are shown to accompany the collapse explicitly in [103]. Moreover, it is shown that if we impose the spin conservation in the case of the spin measurements of spin 1/2 particles, there should be no collapse of the spin degrees of freedom of the wavefunction.
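The conflict between projection and conservation can be checked in a toy model. In the illustrative sketch below (ħ = 1), the Hamiltonian is taken proportional to S_z, so ⟨S_z⟩ is conserved under unitary evolution; yet a projective "collapse" onto an S_x eigenstate changes ⟨S_z⟩, exactly because the projector does not commute with S_z:

```python
import numpy as np

sz = np.array([[0.5, 0.0], [0.0, -0.5]])        # S_z; with H ∝ S_z, <S_z> is conserved
sx_plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # S_x = +1/2 eigenstate

psi = np.array([1.0, 0.0], dtype=complex)       # |up>, an S_z eigenstate, <S_z> = +1/2
before = np.vdot(psi, sz @ psi).real

# Projective collapse onto the S_x eigenstate, then renormalization:
proj = np.outer(sx_plus, sx_plus.conj())
psi_c = proj @ psi
psi_c /= np.linalg.norm(psi_c)
after = np.vdot(psi_c, sz @ psi_c).real

# before = 0.5, after = 0.0: the projection changed the "conserved" quantity <S_z>.
```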
This violation of conservation laws due to a discontinuous collapse is true for the measurement scheme in the standard interpretation of Quantum Mechanics [103], but also for interpretations which take the wavefunction and its collapse as real, like the GRW interpretation [51,50]. But, recalling that even in the de Broglie-Bohm interpretation, we have to assign physical properties to the pilot wave rather than to the point-particle, they are affected too (unless we reject their reality [34,52], which does not solve the problem of the conservation laws). It is sometimes claimed that in the Bohmian interpretation there is no collapse, the only thing that happens instead being that the measurement makes all of the branches of the wavefunction except one simply become "empty". Note that this emptiness is different from the update of information that the point-particle is in one of the branches and not the other, because if it was only about this information, then the empty branches would have been already empty before the measurement. But since before the measurement they have physical effects and interfere, and after the measurement they cease to have any effect, it is like they were there and then they disappeared. Once the measurement is done, the empty branches are effectively collapsed by plugging into the guiding equations the resulting position of the observed particle. And once a branch of the wave is emptied, the mentioned physical properties are no longer attributed to that branch, but only to the one in which the Bohmian point-particle was detected. So Bohmian mechanics simply cannot avoid the wavefunction collapse and the problems resulting from this, including the violation of conservation laws [103]. This also affects the Many Worlds Interpretation (MWI) [47,48]. While MWI is unitary and without collapse as long as all branches or worlds are taken into account, collapse is still present within each of the branches. 
The fact that there is collapse within each branch implies that the result from [103] applies to each branch, hence conservation laws should be violated. Conservation laws are restored in MWI (and possibly in Bohmian mechanics) when all of the branches are taken into account, because unitary evolution remains valid.
It follows that the only way to avoid the violation of conservation laws would be if the time evolution would really be unitary, and the collapse never be real, but only appear as a collapse. It is indeed possible to have unitary evolution without collapse at the level of a single world, as explained in a series of papers by Stoica [93,100,94,101,102,103]. Note that previously Schulman made a proposal based on special states which, as initial and final conditions, are required to be separable, and evolve unitarily (see [86,87,88,89] and references therein). This is not the position taken here and in Stoica's previous works, and the differences of motivation and implementation of the two approaches are discussed in [103]. In particular, the main difference consists of using global consistency between local solutions allowed by quantum measurements spread in spacetime, while Schulman's approach is based on the initial and final conditions of separability of states.
Another problem appears when we take into account General Relativity. In this case, when the wavefunction collapses discontinuously, the stress-energy tensor operator T̂_ab, and with it the expectation value ⟨T̂_ab⟩ of this operator, collapse as well. This leads, via the semiclassical Einstein equation
(2) R_ab − (1/2) R g_ab = 8πG ⟨T̂_ab⟩,
to a discontinuous change in the Ricci tensor R_ab, hence in the spacetime curvature. This means a discontinuity of the covariant derivative ∇. But the covariant derivative is involved in the time evolution equations of all particles. This means that the wavefunction collapse of a single particle should affect the time evolution of all other particles whose wavefunctions propagate in the region where the collapse happened. If this effect were testable, it could be used to send superluminal signals. So probably it cannot be tested, or no collapse actually happens.
In addition, in [46] it is shown that QFT on curved spacetime where equation (2) holds could be used to send signals faster than c in a different way, and that it leads to violations of the uncertainty principle, assuming the wavefunction collapses discontinuously.
All these arguments lead to the question whether it is really necessary that the wavefunction collapses in a discontinuous way, or whether is possible to avoid this and have unitary evolution even during quantum measurements.
What happens if the wavefunction collapse is unitary?
The reasons mentioned so far justify us to at least consider seriously the possibility that the time evolution is always unitary.
The unitary time evolution of quantum systems is not an additional assumption, it follows from the Schrödinger equation and its relativistic versions. What I do is not to add a new assumption, but to argue that the assumption that unitary evolution is suspended during measurements and replaced by a discontinuous collapse of the wavefunction is not actually proven by experiments, and it was accepted too quickly. The idea that the state vector projection needs to be discontinuous is in fact a new assumption, a radical one, never proved directly, and, I argue, unnecessary. If we can show that the discontinuous collapse is unnecessary, new possibilities open up, including the possibility to combine Quantum Theory with General Relativity without sacrificing any of them.
But unitarity is a strong constraint, and its consequences have to be understood. Consider for example a measurement of the spin of a single particle. If the evolution is always unitary, and no discontinuous collapse happens, it means that the observed particle was already in a state which evolved into the observed eigenstate corresponding to the measured spin. We can account for the interaction between the observed particle and the measurement device, and such an interaction exists indeed, and changes the state of the observed particle. For example, the Stern-Gerlach device measures the spin of neutral atoms having a non-null magnetic moment by using the interaction between the particle and the magnetic field, so that the particle's spin changes. But the change is too small to bring the particle into an eigenstate of the spin along the chosen axis, if the previous spin of the particle was not already in an appropriate state which could evolve into the observed one. If the interaction taking place during the measurement is too large, then the Born rule is violated, in the sense that the spin can even be reversed from |↑⟩ to |↓⟩. Not any interaction counts as a quantum measurement.
The situation gets even trickier if we want to perform multiple non-commuting spin measurements on the same particle. The only way to do this without collapsing the spin in a discontinuous way is that the magnetic interactions with the two Stern-Gerlach devices are fine-tuned so that the particle leaves the first device with deviated spin, and is then deviated more by the interaction with the second device, so that the total deviation changes the spin from an eigenstate of the first spin operator to an eigenstate of the second one. This fine-tuning of course requires that the particles in the two devices have fine-tuned states, as if they would conspire to give us the right outcomes.
If there is only unitary evolution, with no nonunitary collapse, then the total quantum state, containing the observed particle and the measurement device, should be in very special states, to allow for definite outcomes [100]. In fact, the measure of the initial states which can result in definite outcomes of the measurements rather than Schrödinger's cat type of superpositions is zero compared to the measure of the entire Hilbert space, or at least almost zero, considering that many measurements are not exactly sharp due to limitations coming from the Wigner-Araki-Yanase theorem [122,25,5]. In other words, the initial state of the observed particle has to be perfectly synchronized with that of the measurement device, even though they are separated initially by a spacelike interval. There is no way to escape the apparent conspiracy between the observed particle and the measurement device, even if they come from causally separated regions of spacetime.
Such fine-tuning of the initial state is usually interpreted in terms of retrocausality. This may look very strange, but retrocausal models were already suggested by de Beauregard [35], and after that by Rietdijk [80]. Another approach, which provides a mechanism taking place in "pseudotime" is Cramer's Transactional Interpretation [31,32,67]. Different models and approaches are proposed and discussed in [49,118,77,106,6,78,107,1,28,119]. An interesting and important proposal, based on evolution in both directions of time, is the two-state vector formalism [2,4,27].
A way to understand retrocausality is by appealing to Wheeler's delayed choice experiment [120]. In Wheeler's experiment, the setup is such that we can choose between making a which-path measurement or an interference measurement, after the moment when the observed photon either took both ways or randomly only one. So it looks like the photon's "choice" between the both-ways and the which-path possibilities is affected retrocausally by our choice of what experiment to perform. Of course, this experiment can be interpreted in terms of wavefunction collapse, but it is very eloquent in suggesting that a retrocausal effect takes place, which seems in this case a more natural explanation.
Later I will give an account of this apparent retrocausality, based on the block universe, which makes it less weird. But for the moment, let us face some more implications of this strange possibility.
An important feature of retrocausal approaches, including the purely unitary ones, is that they can recover both relativistic invariance and locality (or, we can interpret them as recovering locality in space, at the expense of "locality in time"). They are consistent with relativistic locality, and what they violate is statistical independence. As we remember from Bell's theorem, there are two conditions leading to Bell's inequality [14,15]. The first condition is that of locality, and the second one is that of statistical independence. Statistical independence means that the initial state of the observed pair of particles is independent from the experimental setup. From these conditions, Bell's inequality follows. But since the experimental evidence [8,7] showed repeatedly that Bell's inequality is violated, it means that at least one of the conditions of the theorem is violated, but we cannot say for sure which one. Retrocausal approaches violate statistical independence, and save locality. There is another implicit assumption in Bell's theorem, that the outcomes are determinate. If we assume that all outcomes happen, we obtain the Many Worlds Interpretation. I will come back to this later, because it provides a nice way out without having to choose between locality and statistical independence, and it also brings us a step closer to the unitary approach presented in this article. For the moment, we need to say more about retrocausality and locality.
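The quantum violation of Bell's inequality invoked above can be reproduced numerically in its CHSH form. The sketch below uses the singlet state and the standard CHSH measurement settings (a textbook illustration, not a computation from this paper):

```python
import numpy as np

# Pauli matrices and the two-particle singlet state.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(a_op, b_op):
    """Quantum correlation E(a, b) = <singlet| a ⊗ b |singlet>."""
    return np.vdot(singlet, np.kron(a_op, b_op) @ singlet).real

# Standard CHSH settings: Alice measures along z and x,
# Bob along the diagonals (z ± x)/sqrt(2).
A0, A1 = sz, sx
B0, B1 = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)

S = abs(corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1))
# S = 2*sqrt(2) ≈ 2.83 > 2: the CHSH bound derived from locality plus
# statistical independence is violated, so at least one assumption fails.
```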
Most physicists and philosophers of physics find it more acceptable to give up locality, and to maintain statistical independence. The reasons are obvious, violations of statistical independence seem to violate causality.
But a closer look considering relativistic causality in Minkowski spacetime shows that both options have similar problems. Nonlocality, coined by Einstein as "spooky action at a distance", is at odds with relativistic causality. But this nonlocality does not allow us to send signals or energy outside the light cone by quantum measurements. This is considered to be consistent with relativity, of course, at an operational or epistemic level, not at the level of ontology. At the epistemic or operational level there is a "peaceful coexistence" between QM and SR, but if we want to avoid being instrumentalists about QM, we should not do so by becoming instead instrumentalists about SR, so I would argue that no-signaling is not enough, locality should be obtained at the ontological level.
But the same can be said about retrocausality. It does not violate causality, because it cannot be used to send information or energy back in time. It cannot be used to change the outcomes of already performed measurements. The experiments of quantum time travel [71] can be done, but there is no such observable violation of causality, just like in the quantum teleportation experiment no such observed violation of Einstein causality occurs [16,110,22]. But if we take seriously Einstein's causality on Minkowski spacetime, and especially on curved spacetime, the option of rejecting locality is arguably more problematic than rejecting statistical independence. Both Special and General Relativity have well-defined ontologies, which are local. On a curved background, the fields whose stress-energy tensor corresponds to a spacetime curvature via Einstein's equation should have well-defined local ontologies too. So it is irrelevant from this point of view that we can claim that violations of causality happen at the ontological level, but they are not observable. If we seriously adhere to the goal of providing an ontology for Quantum Mechanics, we have to take the ontology seriously all the time, and not conveniently ignore this ontology on the grounds that no quantum measurement can show that it violates Einstein's causality. But the alternative way, of an ontology based on violations of statistical independence which does not violate locality, is not at all inconsistent at the ontological level with Einstein's causality, especially in the block universe view. Retrocausal models have an advantage -ontologically, they are perfectly consistent with SR and GR, there is no need to invoke instrumentalist arguments to recover this consistency. The only reason to invoke no-signaling backwards in time is to recover causality, but this kind of causality is not necessary for SR or GR at a fundamental level, it is only needed at the coarse grained level. 
Indeed, both SR and GR are perfectly time symmetric theories, and they cannot distinguish past from future lightcones, so the covariant equations can be solved in both directions of time, and they allow influences in both directions of time. Not only are they consistent with bicausal interpretations, but they even endorse them. A direction of time is preferred because of the Second Law of Thermodynamics, but this preference is only statistical, and for the other laws retrocausal influences can always be reinterpreted as causal influences with special initial conditions.
Of course, to see that the present proposal indeed avoids the problems of nonlocality, we have first to make sure that the unitary evolution ontology is local. As I explained, the wavefunction is defined on the configuration space, rather than on the physical space. This means that the wavefunction is holistic. But even so, it admits a representation in terms of fields on spacetime, which propagate and interact locally, as long as the evolution is unitary [104]. Being holistic means that it allows entanglement, but since there is no wavefunction collapse, this entanglement is never projected to separable states. If this would have happened, then of course violations of locality would occur, because such a projection would have to be nonlocal. Even in the case of the EPR-B experiment this can be true, because it is possible to explain them as the particles not being actually in entanglement, but rather as if the singlet state decayed into two separate states, as if it already collapsed before the two particles went in separate places [35,80,78]. This interpretation seems to be confirmed through weak measurements [3] in the exact way that Bohmian trajectories are considered to be confirmed [69]. Hence, we can consider the processes taking place during the EPR-B experiment as being local in the sense that the particles are described by local solutions of the Schrödinger equation. This kind of spacetime locality is not what we usually expect when we speak about locality, because it depends on the final conditions imposed by the experimental setup. The solutions are local in the sense that they obey partial differential equations on spacetime, but they are also subject to boundary conditions which are global and impose the apparent (spatial) nonlocality like that from Bell's theorem.
To understand how unitary evolution is local, let us first consider a wavefunction which is a separable state in a basis of eigenstates of an operator which commutes with the Hamiltonian. Since the relativistic evolution equation of such a state does not contain nonlocal interactions, this means that the state will evolve in a local manner, even though it is a multiparticle state. A general state is a superposition of such states, and each term of the superposition evolves locally. And since we cannot project any of them out, because we assume that all evolution is unitary and no discontinuous collapse occurs, this means that locality is ensured. Regardless of what other meanings we assign to locality, this type of locality is consistent with Einstein's locality and causality. More about how the wavefunction can be represented in a locally separable way on the three-dimensional space, and how its unitary evolution is local, can be found in full mathematical detail in [104].
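The argument above rests on linearity: under a unitary Û each term of a superposition evolves on its own, and no term is ever projected out. This can be sketched numerically (the Hamiltonian and states below are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian Hamiltonian and its unitary evolution operator (hbar = 1, t = 1).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E)) @ V.conj().T

psi1 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi2 = rng.normal(size=4) + 1j * rng.normal(size=4)
a, b = 0.6, 0.8j

# Linearity: U(a psi1 + b psi2) = a U psi1 + b U psi2 —
# each branch of the superposition evolves independently, with no cross-talk.
lhs = U @ (a * psi1 + b * psi2)
rhs = a * (U @ psi1) + b * (U @ psi2)
assert np.allclose(lhs, rhs)
```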
Since I mentioned Einstein's locality and causality, it is interesting to ask which of the conditions from the 1935 EPR paper [41] is violated by this proposal. It is not difficult to see that Einstein's criterion of reality is violated, but not in a "lethal" way. Indeed, our ontology does not satisfy Einstein's realism, because the state before the measurement cannot be just any state: it depends on the experimental setup. This is another way to say that it violates statistical independence. Simply put, one cannot have an arbitrary initial state for the observed particle and, at the same time, an arbitrary initial quantum (microscopic) state of the apparatus. But this is not "lethal", since it is perfectly consistent with relativity and with Einstein's causality and locality. In addition, as argued in [103], the macroscopic state of the apparatus and the quantum state of the observed system can be statistically independent.
Moreover, avoiding a discontinuous collapse also avoids the problems identified in Section §3, while a nonlocal ontology, at least in its currently known forms, cannot do this: there is no way to take the wavefunction as becoming empty of physical properties while at the same time maintaining that this is not the same as collapse, as explained in Section §2.
But even if we still find retrocausality preposterous, as I have already suggested, there is a way to tame it, based on the fact that there is another hidden assumption in Bell's theorem: that the outcomes are determinate. The violation of this assumption makes possible the Many Worlds Interpretation, which allows all outcomes to be obtained. Alternatively, one can interpret MWI at the branch level, where the outcomes are determinate, as violating one of the two main assumptions of Bell's theorem. At first sight, it may seem that branching happens through some nonlocal collapse accompanying the measurement, and that in each branch locality is violated. However, the other option makes more sense. Consider the EPR-B experiment, and start from the outcomes obtained by Alice and Bob. When the two particles are detected, they no longer form an entangled state, and they no longer interact. Therefore, if we evolve the solution unitarily backwards in time, the two particles turn out to be separable and non-interacting for the entire time interval between the decay that produced them and the measurement, so no collapse is required to occur except when the spin 0 particle decayed into two spin 1/2 particles. So it makes sense to consider that the collapse accompanying the branching took place, in a very localized manner, when the decay occurred, and that the particles evolved unitarily until their detection. This leads some proponents to say that MWI is local [9,38,111,112]. A salient feature of this local interpretation of MWI is that, although in each branch retrocausality seems to happen, it does not really happen in the universal wavefunction; overall, there is no violation of either statistical independence or locality, because all possible outcomes occur.
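For concreteness, the correlations any local account must reproduce are the standard singlet correlations; the following is the textbook CHSH computation (added here only as an illustration, not taken from the article):

```python
import numpy as np

# For the spin-singlet state, the quantum correlation between Alice's
# and Bob's spin measurements along directions at angles a and b is
#   E(a, b) = -cos(a - b).
# Local hidden-variable models satisfying statistical independence obey
# the CHSH bound S <= 2; quantum mechanics reaches 2*sqrt(2).

def E(a, b):
    return -np.cos(a - b)

# The standard optimal angle choices for maximal quantum violation:
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```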
We have seen that it makes more sense that, in MWI, the split happens during decays and when systems are detected. While each branch taken independently thereby manifests retrocausal features, overall there is no such thing. In the example of the EPR-B experiment, the branching seems to happen only during the decay into two spin 1/2 particles, and this can be understood by pushing unitary evolution back in time as far as it holds without contradicting the observations. But is it necessary to stop at the decay? As understood from the Weisskopf-Wigner model of decay [117], while an isolated atom or composite particle should be stable, even if it is excited, vacuum fluctuations make it unstable, and this can be described as a superposition of the excited state and the decayed state (including the products of the decay). Then, the decay happens as a result of the amplitude damping of the excited state, so eventually the system decays. Observations of the system may introduce a collapse, which either makes the system decay or makes it remain in the excited state. But in the case of the singlet state used in the EPR-B experiment, there are different ways to decay, and the vacuum is also in an undetermined state. So it is possible that, when evolving the two particles backwards in time, one obtains a history in which the vacuum was in the right state required to provoke the decay of the singlet state exactly as needed for the subsequent measurements made by Alice and Bob to find the two particles in the right states. It is, at least in principle, possible that there is no actual branching at the fundamental level, while at the coarse-grained macroscopic level it would appear as if there is a collapse and randomness.
In this case, MWI would be modified into a theory in which the worlds, rather than splitting, are truly parallel and evolve unitarily, but are not always distinguishable at the coarse-grained level. In addition, it is known that MWI has a problem with probabilities. While ever since Everett's seminal work [47,48] it was proposed to attribute probabilistic meaning to the amplitudes, this does not seem very appropriate, because the wavefunction is ontic. It would perhaps be more convincing to have a derivation of probabilities in terms of ensembles of microstates. But in a theory of truly parallel worlds, which seem identical only up to the point when measurements make them distinguishable, probabilities would occur naturally in terms of ensembles of microstates. So, if we take the idea of parallel worlds which always evolve unitarily, we recover the conservation laws for each of the worlds, and also make room for probabilities. From all these arguments, it seems that if we push MWI to its consequences, each branch should in fact be independent and evolve unitarily for its entire existence. It remains an open problem how to derive the correct probabilities according to the Born rule in such a theory.
Global consistency condition
We have seen that, if the unitary evolution is not broken even during the measurements, then the initial states, including both the observed system and the measurement apparatus (and the environment), have to be severely constrained, otherwise the outcome of the measurement will be undefined [100]. This looks like fine-tuning, superdeterminism, or retrocausality. We have seen in Section §4 that MWI provides a nice way to tame retrocausality. But here I will discuss another possible explanation, which works for a single world, makes sense in the block universe picture, and only looks like fine-tuning or retrocausality to observers experiencing the flow of time.
Such an explanation is provided by sheaf theory [23]. Sheaf theory has many applications, but the one in which we are interested is how local solutions of partial differential equations (PDE) extend globally. When a PDE has well-defined initial conditions which ensure local solutions, this is not necessarily enough to ensure global solutions. Recall from Local or Algebraic Quantum Field Theory [53] that observables are local operators. Then, they can only give local information about the wavefunction. While this information is local, we need to extend the solution globally, and this has to be done consistently, so that we do not run into contradictions. For example, if a particle is found at a certain location, its amplitude at other locations should vanish. Similarly, in the EPR-B experiment, the local solutions found by Alice and Bob have to be mutually consistent, even if they are imposed at different locations. This is where sheaf theory becomes useful. Sheaves are collections of local solutions of PDE. When two local solutions have disjoint domains, or are equal on the common domain on which they are defined, they can be "glued" and extended to the union of their individual domains. But this extension of local solutions does not always work, because global extensions are not always guaranteed to exist. Even if the domains of the local solutions do not overlap, it is not always possible to have a global solution extending them. By this, sheaf theory shows us that, when the information we have about the wavefunction is spread in multiple locations, not all local conditions can be mutually consistent, so there are certain constraints, and correlations are enforced between local solutions. This is relevant for our discussion. For example, in the EPR experiment, we know that not all combinations of outcomes that Alice and Bob can obtain are consistent, and those that are consistent have different probabilities to occur. 
Global extensions of sheaves have similar properties. Sheaf cohomology studies the obstructions which prevent the extension of local solutions to global ones. These obstructions are usually of a topological nature. The space of initial or boundary conditions which admit global extensions is reduced, in the presence of such obstructions, to lower-dimensional spaces. A simple example is that of complex holomorphic functions. They are complex functions satisfying the Cauchy-Riemann condition, which is equivalent to saying that such functions depend on z ∈ C but not on its conjugate. On the complex plane C they are spanned by the powers n ≥ 0 of z ∈ C, but the solutions do not always extend globally through analytic extension; they can run into singularities. Those with global extensions on C without singularities are called entire, and their space is smaller than the space of functions spanned by the powers of z, yet still infinite-dimensional (the polynomials in z form an infinite-dimensional subspace of the space of entire functions on C). But on the Riemann sphere C ∪ {∞}, the only holomorphic functions are the constant ones, so their space is one-dimensional. Changing the topology by adding a single point introduces severe constraints. Such examples suggest an interesting possibility: what if there are constraints, perhaps topological in nature, at the level of the spin and gauge bundles, and especially the interaction bundles used to describe particles and forces, which ensure that only certain solutions are possible? Could the global solutions correspond to the definite outcomes of measurements, and explain the correlations between outcomes obtained when measuring entangled systems? From the mathematical examples we have in sheaf theory, this appears to be a plausible possibility, and it makes sense to explore it as a candidate solution of the measurement problem and of the problem of apparent classicality at the macroscopic level.
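The drastic reduction in the example above follows from a classical argument, sketched here for completeness (standard complex analysis, not specific to this proposal):

```latex
% A function f holomorphic on the whole Riemann sphere \mathbb{C}\cup\{\infty\}
% is continuous on a compact space, hence bounded on \mathbb{C}; by Liouville's
% theorem, a bounded entire function is constant:
f \colon \mathbb{C}\cup\{\infty\} \to \mathbb{C} \ \text{holomorphic}
\;\Longrightarrow\; \sup_{z\in\mathbb{C}} |f(z)| < \infty
\;\Longrightarrow\; f \equiv \mathrm{const}.
```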
But it is difficult to know for sure until we have a better understanding of the fiber bundle structures, and of what kind of constraints they impose on the global existence of solutions.
It is interesting that Schrödinger used a consistency condition on the space to obtain the energy eigenstates of the Hydrogen atom, in the form of a boundary condition of the wavefunction of the electron at infinity [85]. The usual view on this is in terms of eigenstates of the Hamiltonian, but in terms of a wavefunction on space, this turned out to come from boundary conditions at infinity. The requirement of global consistency is similar, but on spacetime rather than space, in fact, on the spinor and gauge bundles.
One may think that the example of complex holomorphic functions is irrelevant to Quantum Mechanics for two reasons. First, what is the relevance of the Cauchy-Riemann condition? Well, the Cauchy-Riemann operator is for two real dimensions what the Dirac operator is for Minkowski spacetime, as it is known from the theory of Clifford algebras [26,33]. Moreover, the same operator can be used to express the Maxwell equations in a more compact way as a single equation [64], and this also works for the Yang-Mills equation.
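The analogy can be made explicit (up to sign and normalization conventions, which are my choice here):

```latex
% The Cauchy--Riemann operator factorizes the 2D Laplacian,
\bar\partial = \tfrac{1}{2}\left(\partial_x + i\,\partial_y\right), \qquad
\partial = \tfrac{1}{2}\left(\partial_x - i\,\partial_y\right), \qquad
4\,\partial\,\bar\partial = \partial_x^2 + \partial_y^2 = \Delta,
% just as the Dirac operator factorizes the d'Alembertian on Minkowski spacetime:
D = i\,\gamma^\mu \partial_\mu, \qquad
D^2 = -\,\eta^{\mu\nu}\partial_\mu\partial_\nu = -\,\Box .
```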
A second objection could be that Minkowski spacetime has a trivial topology, and even curved spacetime probably has, if not a trivial topology, at least a simple one. This is true, but on the other hand, in reality both the existence of spin 1/2 particles and the internal gauge degrees of freedom require the existence of fiber bundles, and these have a complicated topology. Unfortunately, it is not currently understood exactly what happens from a topological point of view, and consequently we do not know the conditions to be satisfied for a solution to be global. But the point is that the Hilbert space is severely reduced, and we need to see exactly how. Future understanding of the geometric and topological properties of particles and their interactions, especially in a quantized theory, may shed light on this issue and explain both why quantum states appear classical at the macroscopic level and how quantum measurements yield definite outcomes without invoking the wavefunction collapse.
But what is relevant to our discussion is that, if this is the case, then we have an explanation for the restrictions on the initial conditions: only those initial conditions leading to globally consistent solutions on the entire spacetime are admitted. And while such severe restrictions appear as a conspiracy to an observer experiencing the flow of time, a bird's eye view of the block universe would see everything just as the natural condition that the solutions are global.
Hence, despite the fact that the block universe is sometimes seen as being at odds with Quantum Mechanics, it complements it and offers a possible solution to its problems.
Quantum probabilities
The solutions of Schrödinger's equation are unitary, but when we speak about the "wavefunction", we mean two different things. On the one hand, as long as no measurement is made on a quantum system, we can regard the wavefunction as a function on the configuration space, which also admits a representation as fields on spacetime [104]. On the other hand, no measurement can completely determine the quantum state of the entire system made of the observed system and the measurement apparatus (and other relevant parts of the environment). This means that our measurements cannot be used to determine the future outcomes of the measurements of the observed system, but only probabilities. I do not yet have a proof of whether these probabilities are exactly those given by the Born rule or not, but the possibility to recover the Born rule exists. The real, "ontic" wavefunction will never be completely determined, but what we can know is an "epistemic" wavefunction. The notions "ontic" and "epistemic" may be used differently by different authors, but I will stick with the definition that there is a real, "ontic", physical wavefunction, and that the "epistemic" wavefunction is our partial knowledge of the ontic wavefunction of the universe, which translates into probabilities when it comes to learning more about it through measurements.
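For reference, the Born rule itself is a simple computation; the following generic sketch (my own, with arbitrary state and basis) shows how probabilities arise from an epistemic state vector in any measurement basis:

```python
import numpy as np

# Born rule: given a normalized state |psi> and an orthonormal basis {|e_i>},
# the probability of outcome i is p_i = |<e_i|psi>|^2, and the p_i sum to 1.

rng = np.random.default_rng(1)

# a normalized "epistemic" state: our best description of the system
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)

# an orthonormal measurement basis: the unitary Q from a QR decomposition
# of a random complex matrix (its columns are the basis vectors |e_i>)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

probs = np.abs(Q.conj().T @ psi) ** 2
print(probs, probs.sum())  # nonnegative probabilities summing to 1
```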
Some usual terms associated to the wavefunction have a statistical connotation: expectation value, uncertainty, etc. These terms retain their statistical meaning when we are talking about the epistemic wavefunction, which is probabilistic. But when we are talking about the ontic wavefunction, the "expectation value" of an operator Ô simply means the field ⟨ψ|Ô|ψ⟩, and similarly for uncertainty and other terms.
In particular,
(3) ⟨T_ab⟩ = ⟨0|T_ab|0⟩
is not a true expectation value, but, after suitable regularization, a physical field on spacetime. This will be relevant in the next section.
Quantum Theory and General Relativity
Despite the measurement problem and the problem of the emergence of the classical world in Quantum Mechanics, Quantum Field Theory is incredibly successful in describing the microscopic scale. At large scales, General Relativity does the job with equal success. Both theories were extensively tested, with great precision and in diverse situations, and their predictions turned out to be accurate every time. Not everything is understood; for example, in cosmology there are the problems of inflation, dark energy, and dark matter. We do not yet know whether they require changing the Standard Model, Quantum Theory, or General Relativity.
However, since both theories have to be true, we need to understand how they work together. When we try to combine them, they seem to be in conflict. The general trend is to consider that one of the two theories will have to be radically changed, or even replaced, and that this one is GR. The main reasons invoked are the successes of Quantum Theory, the prediction of singularities in GR, the black hole entropy, and the information loss paradox. But mainly it is the resistance of gravity to being quantized in the same way as the other forces.
The predictions of QFT are confirmed to great accuracy, in particular in the case of the anomalous magnetic dipole moment. On the other hand, GR's predictions are confirmed with at least the same accuracy, and even better in the case of the Hulse-Taylor binary pulsar PSR 1913+16 [62]. I think both theories were confirmed incredibly well in all predictions that could be tested. So it would not be fair to hold the success of QFT against GR.
It is true that GR predicts the occurrence of singularities [76,56,57,58,61]. But QFT is plagued with infinities too, in both the UV and IR regimes. It is true that renormalization worked particularly well in the predictions of great precision, and the renormalization group (actually it is a semigroup) idea provides a deeper understanding, but it is not as if the infinities are completely cured. In fact, they are rather an artifact of the perturbative expansion. But the singularities in GR are not worse at all. Indeed, it is more difficult to see this, but it turns out that differential geometry [98], and in particular GR [96], can be formulated in a completely invariant way which is equivalent to the standard formulation outside singularities, but which allows us to write field equations, including an equation equivalent to Einstein's, even at singularities (see [95] and references therein). Moreover, the same solution of the problem of singularities was used as an ingredient in an explanation of the observed values of the expansion rate of the universe [116,115].
As for the problem of the quantization of gravity, do we really need to do it in the same way as the other forces are quantized? They are apparently of different natures: gravity is due to spacetime curvature, while the other forces are gauge fields. There seems to be no a priori reason to do the same for gravity, which is just inertia on curved spacetime. Could the theory of quantum fields on a curved background, where the expectation value of the stress-energy operator is introduced in the Einstein equation (2), be enough? This is usually called semi-classical gravity [70,82], and is considered to be an approximation of the true quantum gravity, which remains unknown. But could it be enough?
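Equation (2) is referenced but not reproduced in this excerpt; the standard form of the semi-classical Einstein equation, in units where G = c = 1 (a convention assumed here), is:

```latex
G_{ab} + \Lambda\, g_{ab} \;=\; 8\pi\, \big\langle \psi \big|\, \hat T_{ab} \,\big| \psi \big\rangle ,
% where G_{ab} is the Einstein tensor, \Lambda the cosmological constant,
% and the source is the (renormalized) expectation value of the
% stress-energy operator in the quantum state |\psi\rangle.
```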
In the case of a single particle which never collapses, it seems at first sight that there is no problem with semi-classical gravity: even if the particle is set in various superpositions, what matters for GR is its stress-energy tensor. If there are more particles, or even an undefined number as it is often the case in QFT, particularly on curved spacetime, the expectation value of the stress-energy operator does the same job. And since there is never a wavefunction collapse even though the macroscopic world seems classical and the measurements have definite outcomes, the stress-energy tensor is conserved and behaves as any classical source for gravity and spacetime curvature.
There are several arguments usually invoked that semi-classical gravity is not enough.
In [46], it is argued that if gravity is classical, one could use it to measure a quantum state in a way which violates the uncertainty principle. Two cases are identified. (1) If the wavefunction of the observed system is collapsed during the classical measurement with a gravitational wave, such a measurement could lead to violations of momentum conservation. (2) If the classical measurement does not collapse the wavefunction of the observed system, then we could use it to send superluminal signals. To prove this argument, one considers a particle in two boxes, one given to Alice and the other to Bob. Then, Bob can "look" inside his box using arbitrarily non-disturbing classical measurements, and determine the state of the wavefunction in his box without collapsing it. By this, he would be able to know whether Alice looked in her box. So, it is concluded, semi-classical gravity may lead to the transmission of signals faster than c.
But if there is no wavefunction collapse, option (1) is easily avoided without quantizing gravity. At first sight it may seem that option (2) is avoided as well, because even if Bob does not collapse the wavefunction, Alice does it.
However, in our approach, measurements can introduce challenges. We have seen that if quantum measurements take place by unitary evolution, the present state of the particle should depend on the future measurement. So, if we can use such weak classical gravitational fields to determine the state of a particle without disturbing it too much, we can extract information about the choice of the future quantum measurement and its outcome. Since this choice is supposed to be free, it can be used to send information back in time. It can be argued that this situation is similar to the weak measurements [108] of the state before undergoing a quantum measurement -both theory and experiments show that there is a correlation between the weak measurements and the future quantum measurements, but it is not enough to signal back in time [3]. However, if by using classical fields one can extract more information about the state while maintaining the disturbance small enough, one could be able to signal back in time. Therefore, at least for this case, the most natural way out of this problem seems to be to quantize somehow the gravitational field, so that any measurement we make using gravity disturbs the observed system like quantum measurements do.
In [75] an experimental result is described, testing whether semi-classical gravity with no wavefunction collapse holds (although the authors assume that unitarity could only be ensured by the Many Worlds Interpretation). The result is interpreted as a falsification of semi-classical gravity because no superposition of macroscopic massive objects at different locations was found. However, there is an alternative explanation favorable to semi-classical gravity without collapse, if we assume unitary evolution for a single world, such that macroscopic massive objects do not end up in a superposition of being in different positions. According to the proposal supported in this article, global consistency may prevent Schrödinger cats, including superpositions of macroscopic massive objects at different locations. Hence, if the hypothesis that global consistency allows for a unitary solution which never collapses discontinuously, and at the same time prevents macroscopic superpositions, is true, no serious theoretical or experimental evidence against semi-classical gravity is found in [46] and [75].
Therefore, the absence of a collapse makes Quantum Theory and GR more compatible. But this does not exclude the possibility that a quantization of gravity is still needed, and in fact some arguments still seem to require it.
It is known that, normally, quantization of gravity cannot be done perturbatively. However, various suggestions of modifications are made, going under the common umbrella of dimensional reduction techniques. They are based on ad-hoc assumptions which have the purpose to reduce the divergences as the perturbative expansion goes to the UV limit [97]. It turns out that there is no need to make these ad-hoc hypotheses, since the treatment of singularities given in [95] ensures automatically the conditions imposed in several of these approaches, assuming that, in the UV limit and in a position basis, particles behave more and more like tiny black holes with singularities [97].
Another often-encountered argument is that GR has to be changed because, if we tried to experimentally probe spacetime at Planck scales, we would not be able to do it, since the high energies involved would lead to the creation of micro black holes. While this will indeed be the case, it only sets a limitation on our experimental possibilities, one due not to our limited technology, but to limitations of principle caused by backreaction. But the universe is under no contractual obligation to allow us to probe spacetime in these regimes. Such a limitation of our experimental possibilities does not require replacing spacetime with something else, especially since anything replacing it would face the same experimental limitations.
Another reason sometimes invoked when semi-classical gravity is said to be only an approximation comes from the black hole entropy. It is said that the event horizon is homogeneous, and it would not be able to store information, so either the horizon or the spacetime inside the black hole has to be discrete, to represent microstates which would give the right entropy. But the entropy calculations are done with the help of QFT on curved spacetime. The black hole entropy is calculated based on the quantum information of the particles falling into the black hole [12,10,114,66]. Hence, the black hole entropy was derived in the strict framework of QFT on a curved background. The same holds for black hole evaporation: it was derived in the same framework [59,60]. This means that both black hole entropy and black hole evaporation are predicted and explained by QFT on spacetime already. Then why would we need another theory to derive the same predictions [105]? Moreover, the information is considered lost only because the black hole singularities seem to lead to Cauchy horizons, but if global hyperbolicity is not necessarily destroyed by singularities, as explained in [95], then this problem can be solved within the framework of GR itself [99,105].
All these arguments concur in showing that the main problems we tend to think to require a Quantum Theory of spacetime itself may in fact be solved within QFT on curved spacetime. There is no necessary reason to think otherwise. Surely, it may still be true that there is a better theory, maybe one of the proposals of quantum gravity or maybe one we do not know yet, but the currently known arguments no longer seem to be so strong.
The post-determined block universe
As humans, very early in life we become aware that events that have already happened cannot be changed, and that future events, although unpredictable, can be influenced by our present actions. This intuition is so deeply hardwired in our world view that it seems unnatural to even question the idea that past and future do not exist, but only the present does. Therefore, it comes as a surprise that thinkers like Parmenides, Boethius, Augustine of Hippo, Anselm of Canterbury, Dōgen, and others either claimed, or seemed to suggest, the opposite view: that past, present, and future are equally real. The first, more common position is called presentism, while the second position is called eternalism. A major advocate of eternalism was McTaggart [72].
With the discovery of classical mechanics and its determinism, we started to realize that, at least mathematically, the past, present, and future states of the universe are encoded in the state of the universe at any time t 0 . As Laplace explained, if the state of the universe as a physical system is known with perfect accuracy at a time t 0 , and the equations of motion are known, one can, in principle, calculate the state of the universe at any other time. This is a property of the differential equations expressing the physical laws of classical mechanics. What appear to us as random events are in fact due to the absence of complete information about the state. So it is surprising that, knowing that past and future are encoded in the present, physicists did not consider eternalism a viable option. We had to await the discovery of Special Relativity, with its relativity of simultaneity, to take this idea seriously. Accordingly, if two observers in relative motion agree that a certain event took place at a certain time, they will in general disagree about which other events happened at the same time as the first one. Spacetime as a block universe (BU) appeared gradually in the works of Lorentz, Poincaré, Einstein, and Minkowski [73]. The idea of a spacetime in which time and space are inseparable seemed too far-fetched to many physicists, who either rejected Special Relativity or considered spacetime a convenient mathematical tool, but eventually the BU and the eternalist position came to be seen as endorsed by Special Relativity.
One of the major objections against the BU was its apparent incompatibility with freewill (a notion which I do not know how to define or explain). A hybrid proposal was made by C.D. Broad in 1923, the growing BU [24]. According to this proposal, both past and present exist, but the future does not yet exist. The past is a block which grows continuously with the passage of time, which is connected to the human experience. In a totally different direction, to reconcile free-will with classical determinism, compatibilists took the position that freedom means acting according to one's nature, and thus determinism is not only consistent with freedom, but it allows it to be expressed. This compatibilist position is consistent, in particular, with the BU. But Hoefer took a step beyond compatibilism, and proposed that we are even free to make choices affecting the initial conditions of the universe [65], within the framework of the BU of classical relativity.
Soon after the discovery of Special Relativity, Quantum Mechanics appeared, with its probabilistic Born rule and Heisenberg's uncertainty principle. This time, it seemed difficult to explain the probabilities as a mere lack of knowledge, and they came to be understood as irreducible. Results like Bell's theorem and the Kochen-Specker theorem [68,13] were taken by many as endorsing this position. This happened despite the fact that at least one deterministic interpretation of QM was known at that time, the pilot-wave theory of de Broglie and Bohm [17], which is consistent with both no-go theorems, being both nonlocal and contextual. The presentist position seemed to win. Because of this, it is usually believed that the BU picture can only hold for classical GR, and that quantum indeterminism is necessarily incompatible with it. This is sometimes taken as evidence that the eternalism of GR does not hold, and that it should be replaced by presentism, which can be expected to be consistent with the wavefunction collapse and with nonlocal interpretations of QM. But the evolving or growing BU model [24] was seen by Ellis as consistent with QM, since the growth can be seen as taking place in a nonlocal and indeterministic way as new quantum measurements are performed [42,43]. Later, in 2009, Ellis and Rothman proposed a crystallizing BU, which captures the "quantum transition from indeterminacy to certainty" [45]. This model also includes retrocausal influences, and the crystallization does not always occur simultaneously, in some cases having to wait for future experiments to be done, as is the case with the delayed choice experiment [120]. Thus, the model provides support in particular to retrocausal interpretations of QM in which there is collapse, the collapse leading to the growth of the past block, and quantum indeterminacy allowing the future to be open.
It is possible to conceive of a splitting or branching BU, which would be the BU version of the Many Worlds Interpretation; see for example [109], and for some criticism [44]. One can imagine a branching BU in which each branch is an evolving or crystallizing BU, but overall they are all part of a tree-like structure. But there is a problem: in MWI, the wavefunction is real, but it is defined on the configuration space. Even within each branch there is entanglement, at least because each branch contains atoms, which contain entangled particles. So the wavefunction has to be replaced somehow with an equivalent structure defined on space or spacetime, rather than on the configuration space. In fact, this applies to the other interpretations which make use of the wavefunction. Fortunately, as I already mentioned, there is a way to replace the wavefunction by a high-dimensional field defined on space, which is mathematically equivalent to the wavefunction on configuration space [104].
Despite the tremendous success of nonrelativistic Quantum Mechanics in describing microscopic physical phenomena, the domain of high energies requires something more. Quantum Field Theory appeared as a relativistic quantum theory. Wigner and Bargmann classified elementary particles in terms of representations of the Poincaré group [121,11]. Thus, as long as there is no collapse, QFT is consistent with relativistic invariance, and it even requires it at the deepest level. But the wavefunction collapse would be in tension with this relativistic invariance, required by the very definition of particles in terms of representations of the Poincaré group. This is another reason to take unitary evolution seriously, and this brings us to the central point of this article.
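To recall the standard fact invoked here (the notation below is mine, though standard in the literature, and is not taken from the article), the Wigner–Bargmann classification labels the unitary irreducible representations of the Poincaré group by the eigenvalues of its two Casimir invariants:

```latex
P^\mu P_\mu = m^2, \qquad W^\mu W_\mu = -m^2\, s(s+1),
```

where $P^\mu$ is the four-momentum, $W^\mu = \tfrac{1}{2}\varepsilon^{\mu\nu\rho\sigma} J_{\nu\rho} P_\sigma$ is the Pauli–Lubanski vector, and $(m, s)$ are the mass and spin labeling a massive particle (in units with $c = \hbar = 1$). A discontinuous collapse would sit uneasily with this structure, since the particle concept itself presupposes the unbroken Poincaré symmetry.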
In this article, I argued that in Quantum Mechanics it is possible for the time evolution to be unitary, without discontinuous collapse, at the level of a single world. I argued that this has some advantages: in particular, it allows us to have a single law which is not broken even occasionally, it ensures the conservation laws, it avoids nonlocality, and it makes QFT more consistent with GR. But it seems to suggest that the initial conditions have to be very specially chosen, conspiring to make measurements have definite outcomes rather than resulting in Schrödinger cats. These apparent conspiracies seem rather natural, however, if we see them as the consequences of global consistency in a BU. I hypothesized that the global consistency is about the consistency of solutions in the presence of the topological constraints of the fiber bundles from spin geometry and gauge theory, and that it manifests itself by reducing the Hilbert space to a subset which does not contain Schrödinger cats and in which all quantum measurements have definite outcomes.
These arguments suggest the following picture of a post-determined block universe. There is a BU, on which QFT is true and the average of the stress-energy operator connects to the spacetime curvature via the Einstein equation (2). Yet not all possible initial conditions are allowed, but only those which lead to globally consistent solutions. So we start with a set of possible BUs consistent with the previous quantum observations, and as we make new observations, we refine that set. So far this looks quite like classical physics, but since the initial conditions are severely constrained by global consistency, the resulting correlations can be expected to violate the Bell inequality and its generalizations in a way that a dynamical system with no global constraints on the initial conditions could not. This allows us to have a picture in which quantum measurements happen as predicted by the projection postulate, without breaking the unitary time evolution. In this picture, the BU is not pre-determined, but it is gradually post-determined as we make new quantum observations. This kind of BU is deterministic, but it is not predetermined in the usual sense. The initial conditions are determined with a delay, by each new measurement and each choice of what to measure. The requirement of global consistency implies a severe restriction on the possible solutions of the Schrödinger equation, but since the observers can choose what to measure, it looks as if they further determine the past initial conditions with each new choice. The solution is still deterministic, but it is determined by future choices and the outcomes of measurements. We can still think of this proposal as including a form of superdeterminism or retrocausality, if we assume that the initial conditions are fixed from the beginning.
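Schematically (the notation below is mine, not the article's), the proposal keeps the unitary evolution intact and only restricts which initial states are admissible:

```latex
|\psi(t)\rangle = \hat{U}(t)\,|\psi(0)\rangle = e^{-i\hat{H}t/\hbar}\,|\psi(0)\rangle,
\qquad |\psi(0)\rangle \in \mathcal{H}_{\mathrm{consistent}} \subset \mathcal{H},
```

where $\mathcal{H}_{\mathrm{consistent}}$ denotes the subset of the Hilbert space picked out by global consistency, on which every measurement has a definite outcome. Each new choice of what to measure, together with its outcome, further narrows the candidate initial states $|\psi(0)\rangle$ within this subset, which is the sense in which the BU is "post-determined".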
But we can also take the position that the quasi-classical limit, which is a coarse graining of the low-level quantum state, evolves by the usual causality in an indeterministic way. As observers, we start with the full set of quantum states consistent with our previous macroscopic observations, and then reduce this set as new measurements provide more information. And since we never know the true quantum state, but only the outcomes of our observations made on subsystems, these observations allow us to predict only probabilities, or an epistemic wavefunction which approximates the ontic wavefunction and has to be readjusted after each new measurement by invoking the wavefunction collapse.
For the reader concerned about the determinism inherent in the post-determined BU, there is freedom in the initial conditions. The proposal is consistent with "free will" as in Hoefer's model [65], but, in addition, it works in the context of QM, to explain measurements by unitary evolution alone. In relation to unitary evolution without collapse, this type of BU was suggested in various places by the author [93,91,92,94,101]. The post-determined BU can be compared with the crystallizing BU proposal [45], since the latter also has retrocausal influences due to delayed choices, but in the post-determined model these apparent retrocausal influences apply to the entire past history of the universe. The post-determined BU can also be compared with the splitting BU of MWI, but there is no splitting at the fundamental level; only at the coarse-grained level can we consider that there is branching. We start with a set of possible block universes consistent with our current observations, and as we add new observations, we refine the set of possible block universes by eliminating those that are inconsistent with the new data. Indeterminism, which can also be interpreted as branching, is manifest only at the macroscopic or coarse-grained level. In the post-determined BU, there is no actual growth or crystallization or branching at the fundamental level, but only at the coarse-grained level accessible to the observers. The post-determined BU accommodates the main advantage of these proposals, which is their consistency with Quantum Mechanics, by solving the problems mentioned in §3, and at the same time restores the full advantages of the BU picture.
The post-determined BU is as deterministic and fixed as the standard one from the bird's-eye view of someone who knows completely the ontic wavefunction of the universe. From the point of view of someone who is part of the universe itself, like us, it may look like a growing BU, with the amendment that the growth is not only towards the future: at the quantum scale, because of global consistency, it also seems to be growing towards the past, giving the impression of retrocausality. But this retrocausality cannot be used by us to send messages into the past or at a distance, this being forbidden by the fact that we only have "clearance" for approximate eigenstates, and not for the full quantum state of the observed systems.
By eliminating the discontinuous collapse, we remove important obstructions that seemed to put Quantum Theory and General Relativity at odds with each other. The so-called semi-classical gravity can now be more than an approximation of a future theory of quantum gravity, or at least it can be a better approximation than we used to think. With an ontic wavefunction, the "expectation value" of the stress-energy operator is not a probability, but a field, and we can plug it into Einstein's equation and get a well-defined classical geometry. This does not imply that there is no need to quantize gravity, but rather that such a quantization has to solve fewer problems, because some of them are already avoided by avoiding the collapse and by allowing matter to be described by an ontic wavefunction.
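The semiclassical coupling described here has the standard Møller–Rosenfeld form; the display below restates the idea in common notation and is not necessarily the exact form of the article's equation (2), which appears in an earlier section:

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
= \frac{8\pi G}{c^4}\, \langle \psi |\, \hat{T}_{\mu\nu} \,| \psi \rangle,
```

with the right-hand side read, for an ontic $|\psi\rangle$, as an actual field rather than a statistical average, so that the left-hand side defines a classical geometry without any collapse-induced discontinuity.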
According to Bell, "No one can understand this theory until he is willing to think of [the wavefunction] as a real objective field rather than just a 'probability amplitude'. Even though it propagates not in 3-space but in 3N-space." ([15], p. 128)
[1] E. Adlam. Spooky action at a temporal distance. Entropy, 20(1), 2018.
[2] Y. Aharonov, P.G. Bergmann, and J.L. Lebowitz. Time symmetry in the quantum process of measurement. Phys. Rev., 134:1410-1416, 1964.
[3] Y. Aharonov, E. Cohen, D. Grossman, and A.C. Elitzur. Can a future choice affect a past measurement's outcome? Ann. Phys., 355:258-268, 2015.
[4] Y. Aharonov and L. Vaidman. Complete description of a quantum system at a given time. J. Phys. A, 24:2315, 1991.
[5] H. Araki and M.M. Yanase. Measurement of quantum mechanical operators. Phys. Rev., 120(2):622, 1960.
[6] N. Argaman. On Bell's Theorem and Causality. Technical Report arXiv:0807.2041, Jul 2008.
[7] A. Aspect. Bell's Inequality Test: More Ideal than Ever, 1999.
[8] A. Aspect, P. Grangier, and G. Roger. Experimental realization of Einstein-Podolsky-Rosen-Bohm gedankenexperiment: A new violation of Bell's inequalities. Phys. Rev. Lett., (49), 1982.
[9] G. Bacciagaluppi. Remarks on space-time and locality in Everett's interpretation. In T. Placek and J. Butterfield, editors, Non-locality and Modality, volume 64, pages 105-122. Springer Science & Business Media, 2002.
[10] J.M. Bardeen, B. Carter, and S.W. Hawking. The four laws of black hole mechanics. Comm. Math. Phys., 31(2):161-170, 1973.
[11] V. Bargmann. On Unitary Ray Representations of Continuous Groups. Ann. of Math., 59:1-46, 1954.
[12] J.D. Bekenstein. Black holes and entropy. Phys. Rev. D, 7(8):2333, 1973.
[13] J.S. Bell. On the Problem of Hidden Variables in Quantum Mechanics. Rev. Mod. Phys., 38(3):447-452, 1966.
[14] J.S. Bell. On the Einstein-Podolsky-Rosen paradox. Physics, 1(3):195-200, 1964.
[15] J.S. Bell. Speakable and unspeakable in quantum mechanics: Collected papers on quantum philosophy. Cambridge University Press, 2004.
[16] C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W.K. Wootters. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett., 70(13):1895, 1993.
[17] D. Bohm. A Suggested Interpretation of Quantum Mechanics in Terms of "Hidden" Variables, I and II. Phys. Rev., 85(2):166-193, 1952.
[18] D. Bohm. Wholeness and the Implicate Order, 1995.
[19] D. Bohm. Causality and chance in modern physics. Routledge, 2004.
[20] N. Bohr. Atomic Physics and Human Knowledge, 1958.
[21] M. Born. Zur Quantenmechanik der Stoßvorgänge, 1926. Reprinted and translated in J.A. Wheeler and W.H. Zurek (eds.), Quantum Theory and Measurement (Princeton University Press, Princeton, NJ, 1963), p. 52.
[22] D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger. Experimental quantum teleportation. Nature, 390(6660):575-579, 1997.
[23] G.E. Bredon. Sheaf theory, volume 170. Springer Verlag, 1997.
[24] C.D. Broad. Scientific thought. Routledge & Kegan Paul, London, 1923.
[25] P. Busch. Translation of "Die Messung quantenmechanischer Operatoren" by E.P. Wigner. Preprint arXiv:quant-ph/1012.4372, December 2010.
[26] C. Chevalley. The algebraic theory of spinors and Clifford algebras (Collected works), volume 2. Springer, 1997.
[27] E. Cohen and Y. Aharonov. Quantum to classical transitions via weak measurements and post-selection. In Quantum Structural Studies: Classical Emergence from the Quantum Level. World Scientific Publishing Co., 2016. arXiv:1602.05083.
[28] E. Cohen, M. Cortês, A.C. Elitzur, and L. Smolin. Realism and causality I: Pilot wave and retrocausal models as possible facilitators. Preprint arXiv:1902.05108, 2019.
[29] R. Colbeck and R. Renner. No extension of quantum theory can have improved predictive power. Nature Communications, 2:411, 2011.
[30] R. Colbeck and R. Renner. Is a system's wave function in one-to-one correspondence with its elements of reality? Phys. Rev. Lett., 108(15):150402, 2012.
[31] J.G. Cramer. The transactional interpretation of quantum mechanics. Rev. Mod. Phys., 58(3):647, 1986.
[32] J.G. Cramer. An overview of the transactional interpretation of quantum mechanics. Int. J. Theor. Phys., 27(2):227-236, 1988.
[33] A. Crumeyrolle. Orthogonal and symplectic Clifford algebras. Spinor structures. Kluwer Academic Publishers, Dordrecht and Boston, 1990.
[34] M. Daumer, D. Dürr, S. Goldstein, and N. Zanghì. Naive realism about operators. Erkenntnis, 45(2-3):379-397, 1996.
[35] O. Costa de Beauregard. Time symmetry and the Einstein paradox. Il Nuovo Cimento B (1971-1996), 42(1):41-64, 1977.
[36] L. de Broglie. La théorie de la double solution. Gauthier-Villars, Paris, 1956.
[37] L. de Broglie. Une tentative d'interprétation causale et non linéaire de la mécanique ondulatoire: La théorie de la double solution. Gauthier-Villars, Paris, 1956.
[38] D. Deutsch. Vindication of quantum locality. Proc. Roy. Soc. London Ser. A, 468(2138):531-544, 2011.
[39] P.A.M. Dirac. The Principles of Quantum Mechanics. Oxford University Press, 1958.
[40] D. Dürr, S. Goldstein, and N. Zanghì. Bohmian mechanics as the foundation of quantum mechanics. In J.T. Cushing, A. Fine, and S. Goldstein, editors, Bohmian mechanics and quantum theory: An appraisal, pages 21-44. Springer, 1996. arXiv:quant-ph/9511016.
[41] A. Einstein, B. Podolsky, and N. Rosen. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev., 47(10):777, 1935.
[42] G.F.R. Ellis. Physics in the real universe: Time and spacetime. Gen. Relat. Grav., 38(12):1797-1824, 2006.
[43] G.F.R. Ellis. On the flow of time. Preprint arXiv:0812.0240, 2008.
[44] G.F.R. Ellis. The evolving block universe and the meshing together of times. Annals of the New York Academy of Sciences, 1326(1):26-41, 2014.
[45] G.F.R. Ellis and T. Rothman. Time and spacetime: the crystallizing block universe. IJTP, 49(5):988-1003, 2010.
[46] K. Eppley and E. Hannah. The necessity of quantizing the gravitational field. Found. Phys., 7(1-2):51-68, 1977.
[47] H. Everett. "Relative State" Formulation of Quantum Mechanics. Rev. Mod. Phys., 29(3):454-462, Jul 1957.
[48] H. Everett. The Theory of the Universal Wave Function. In The Many-Worlds Hypothesis of Quantum Mechanics, pages 3-137. Princeton University Press, 1973.
[49] S. Friederich and P.W. Evans. Retrocausality in quantum mechanics. In E.N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer 2019 edition, 2019.
[50] G. Ghirardi, R. Grassi, and P. Pearle. Relativistic dynamical reduction models: general framework and examples. Found. Phys., 20(11):1271-1316, 1990.
[51] G.C. Ghirardi, A. Rimini, and T. Weber. Unified Dynamics of Microscopic and Macroscopic Systems. Phys. Rev. D, (34):470-491, 1986.
[52] S. Goldstein and N. Zanghì. Reality and the role of the wave function in quantum theory. In The wave function: Essays on the metaphysics of quantum mechanics, pages 91-109. Oxford University Press, Oxford, 2013.
[53] R. Haag. Local quantum physics: Fields, particles, algebras. Springer Science & Business Media, 2012.
[54] L. Hardy. Are quantum states real? Int. J. Mod. Phys. D, 27(01n03):1345012, 2013.
[55] N. Harrigan and R.W. Spekkens. Einstein, incompleteness, and the epistemic view of quantum states. Found. Phys., 40(2):125-157, 2010.
[56] S.W. Hawking. The occurrence of singularities in cosmology. P. Roy. Soc. A-Math. Phy., 294(1439):511-521, 1966.
[57] S.W. Hawking. The occurrence of singularities in cosmology. II. P. Roy. Soc. A-Math. Phy., 295(1443):490-493, 1966.
[58] S.W. Hawking. The occurrence of singularities in cosmology. III. Causality and singularities. P. Roy. Soc. A-Math. Phy., 300(1461):187-201, 1967.
[59] S.W. Hawking. Particle Creation by Black Holes. Comm. Math. Phys., 43(3):199-220, 1975.
[60] S.W. Hawking. Breakdown of Predictability in Gravitational Collapse. Phys. Rev. D, 14(10):2460, 1976.
[61] S.W. Hawking and R.W. Penrose. The Singularities of Gravitational Collapse and Cosmology. Proc. Roy. Soc. London Ser. A, 314(1519):529-548, 1970.
[62] S.W. Hawking and R.W. Penrose. The Nature of Space and Time. Princeton University Press, Princeton and Oxford, 1996.
[63] W. Heisenberg. The Physicist's Conception of Nature, 1958.
[64] D. Hestenes. Space-Time Algebra. Gordon & Breach, New York, 1966.
[65] C. Hoefer. Freedom from the inside out. Royal Institute of Philosophy Supplement, 50:201-222, 2002.
[66] T. Jacobson. Introductory lectures on black hole thermodynamics. Lectures given at the University of Utrecht, The Netherlands, 1996. http://www.physics.umd.edu/grt/taj/776b/lectures.pdf.
[67] R.E. Kastner. The transactional interpretation of quantum mechanics: the reality of possibility. Cambridge University Press, 2012.
[68] S. Kochen and E.P. Specker. The problem of hidden variables in quantum mechanics. J. Math. Mech., 17:59-87, 1967.
[69] S. Kocsis, B. Braverman, S. Ravets, M.J. Stevens, R.P. Mirin, L.K. Shalm, and A.M. Steinberg. Observing the average trajectories of single photons in a two-slit interferometer. Science, 332(6034):1170-1173, 2011.
[70] A. Lichnerowicz and A. Tonnelat. Les théories relativistes de la gravitation. Number 91 in Colloques Internationaux, Centre National de la Recherche Scientifique, Paris. Proceedings of a conference held at Royaumont in June 1959.
[71] S. Lloyd, L. Maccone, R. Garcia-Patron, V. Giovannetti, and Y. Shikano. Quantum mechanics of time travel through post-selected teleportation. Phys. Rev. D, 84(2):025007, 2011.
[72] J.M.E. McTaggart. The unreality of time. Mind, pages 457-474, 1908.
[73] H. Minkowski. The fundamental equations for electromagnetic processes in moving bodies. Math. Ann., 68:472-525, 1910.
[74] W.C. Myrvold. ψ-ontology result without the Cartesian product assumption. Phys. Rev. A, 97(5):052109, 2018.
[75] D.N. Page and C.D. Geilker. Indirect evidence for quantum gravity. Phys. Rev. Lett., 47(14):979, 1981.
[76] R. Penrose. Gravitational Collapse and Space-Time Singularities. Phys. Rev. Lett., 14(3):57-59, 1965.
[77] H. Price. Toy models for retrocausality. Stud. Hist. Philos. Sci. B: Stud. Hist. Philos. M. P., 39(4):752-761, 2008.
[78] H. Price and K. Wharton. Disentangling the quantum world. Entropy, 17(11):7752-7767, 2015.
[79] M.F. Pusey, J. Barrett, and T. Rudolph. On the reality of the quantum state. Nature Phys., 8(6):475-478, 2012.
[80] C.W. Rietdijk. Proof of a retroactive influence. Found. Phys., 8(7-8):615-628, 1978.
[81] M. Ringbauer, B. Duffus, C. Branciard, E.G. Cavalcanti, A.G. White, and A. Fedrizzi. Measurements on the reality of the wavefunction. Nature Physics, 11(3):249, 2015.
[82] L. Rosenfeld. On quantization of fields. Nucl. Phys., 40:353-356, 1963.
[83] E. Schrödinger. An Undulatory Theory of the Mechanics of Atoms and Molecules. Phys. Rev., 28:1049-1070, December 1926.
[84] E. Schrödinger. Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften, 23(49):823-828, 1935.
[85] E. Schrödinger. An Undulatory Theory of the Mechanics of Atoms and Molecules. Phys. Rev., 28(6):1049-1070, 1926.
[86] L.S. Schulman. Definite measurements and deterministic quantum evolution. Phys. Lett. A, 102(9):396-400, 1984.
[87] L.S. Schulman. Time's arrows and quantum measurement. Cambridge University Press, 1997.
[88] L.S. Schulman. Experimental test of the "Special State" theory of quantum measurement. Entropy, 14(4):665-686, 2012.
[89] L.S. Schulman. Special states demand a force for the observer. Found. Phys., 46(11):1471-1494, 2016.
[90] R.W. Spekkens. Contextuality for preparations, transformations, and unsharp measurements. Phys. Rev. A, 71(5):052108, 2005.
[91] O.C. Stoica. Convergence and free-will. PhilSci Archive, 2008. philsci-archive:00004356/.
[92] O.C. Stoica. Flowing with a Frozen River. Foundational Questions Institute, "The Nature of Time" essay contest, 2008. http://fqxi.org/community/forum/topic/322, last accessed August 21, 2019.
[93] O.C. Stoica. Smooth quantum mechanics. PhilSci Archive, 2008. philsci-archive:00004344/.
[94] O.C. Stoica. Global and local aspects of causality in quantum mechanics. In EPJ Web of Conferences, TM 2012 - The Time Machine Factory [unspeakable, speakable] on Time Travel in Turin, volume 58, page 01017. EPJ Web of Conferences, September 2013. Open Access.
[95] O.C. Stoica. Singular General Relativity - Ph.D. Thesis. Minkowski Institute Press, 2013. arXiv:math.DG/1301.2231.
[96] O.C. Stoica. Einstein equation at singularities. Cent. Eur. J. Phys., 12:123-131, 2014.
[97] O.C. Stoica. Metric dimensional reduction at singularities with implications to quantum gravity. Ann. Phys., 347(C):74-91, 2014.
[98] O.C. Stoica. On singular semi-Riemannian manifolds. Int. J. Geom. Methods Mod. Phys., 11(5):1450041, 2014.
[99] O.C. Stoica. The geometry of singularities and the black hole information paradox. J. Phys.: Conf. Ser., 626(012028), 2015.
[100] O.C. Stoica. Quantum measurement and initial conditions. Int. J. Theor. Phys., pages 1-15, 2015. arXiv:quant-ph/1212.2601.
[101] O.C. Stoica. The Tao of It and Bit. In It From Bit or Bit From It?: On Physics and Information, pages 51-64. Springer, 2015. arXiv:1311.0765.
[102] O.C. Stoica. On the wavefunction collapse. Quanta, 5(1):19-33, 2016. http://dx.doi.org/10.12743/quanta.v5i1.40.
[103] O.C. Stoica. The universe remembers no wavefunction collapse. Quantum Stud. Math. Found., 2017. arXiv:1607.02076.
[104] O.C. Stoica. A representation of the wavefunction on the 3d-space. Preprint arXiv:1906.12229, 2019.
[105] O.C. Stoica. Revisiting the black hole entropy and the information paradox. AHEP, 2018.
[106] R.I. Sutherland. Causally symmetric Bohm model. Stud. Hist. Philos. Sci. B: Stud. Hist. Philos. M. P., 39(4):782-805, 2008.
[107] R.I. Sutherland. How retrocausality helps. In AIP Conference Proceedings, volume 1841, page 020001. AIP Publishing, 2017.
[108] B. Tamir and E. Cohen. Introduction to weak measurements and weak values. Quanta, 2(1):7-17, 2013.
[109] M. Tegmark. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Knopf Doubleday Publishing Group, 2014.
[110] L. Vaidman. Teleportation of quantum states. Phys. Rev. A, 49:1473-1476, 1994.
[111] L. Vaidman. Many-worlds interpretation of quantum mechanics. In E.N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Stanford, spring 2015 edition, 2015. http://plato.stanford.edu/archives/spr2015/entries/qm-manyworlds/.
[112] L. Vaidman. All is ψ. In J. Phys. Conf. Ser., volume 701, 2016.
[113] J. von Neumann. Mathematical Foundations of Quantum Mechanics. Princeton University Press, 1955.
[114] R.M. Wald. Quantum Field Theory in Curved Space-Time and Black Hole Thermodynamics. University of Chicago Press, 1994.
[115] Q. Wang and W.G. Unruh. Vacuum fluctuation, micro-cyclic "universes" and the cosmological constant problem. Preprint arXiv:1904.08599, 2019.
[116] Q. Wang, Z. Zhu, and W.G. Unruh. How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the universe. Phys. Rev. D, 95(10):103504, 2017.
[117] V.F. Weisskopf and E.P. Wigner. Calculation of the natural brightness of spectral lines on the basis of Dirac's theory. Z. Phys., 63:54-73, 1930.
[118] K.B. Wharton. Time-symmetric quantum mechanics. Found. Phys., 37(1):159-168, 2007.
Bell's theorem and spacetime-based reformulations of quantum mechanics. N Kb Wharton, Argaman, arXiv:1906.04313PreprintKB Wharton and N Argaman. Bell's theorem and spacetime-based reformulations of quantum mechanics. Preprint arXiv:1906.04313, 2019.
The 'Past' and the 'Delayed-Choice' Experiment. J A Wheeler, Mathematical Foundations of Quantum Theory. A.R. Marlow30J. A. Wheeler. The 'Past' and the 'Delayed-Choice' Experiment. In A.R. Marlow, ed., Mathematical Foundations of Quantum Theory, page 30, 1978.
On Unitary Representations of the Inhomogeneous Lorentz Group. E P Wigner, Annals of Mathematics. 140E. P. Wigner. On Unitary Representations of the Inhomogeneous Lorentz Group. Annals of Math- ematics, 1(40):149-204, 1939.
Die messung quantenmechanischer operatoren. Zeitschrift für Physik A Hadrons and nuclei. E P Wigner, 133E.P. Wigner. Die messung quantenmechanischer operatoren. Zeitschrift für Physik A Hadrons and nuclei, 133(1):101-108, 1952.
| [] |
Subsets and Splits